From patchwork Fri Oct 21 03:24:03 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 6496
From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Dmitry Vyukov,
	Andrey Konovalov, Kees Cook
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, Feng Tang
Subject: [PATCH v7 1/3] mm/slub: only zero requested size of buffer for
	kzalloc when debug enabled
Date: Fri, 21 Oct 2022 11:24:03 +0800
Message-Id: <20221021032405.1825078-2-feng.tang@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221021032405.1825078-1-feng.tang@intel.com>
References: <20221021032405.1825078-1-feng.tang@intel.com>

kzalloc/kmalloc rounds the requested size up to a fixed size (mostly a
power of 2), so the allocated memory can be larger than requested.
Currently the kzalloc family of APIs zeroes all of the allocated memory.

To detect out-of-bounds usage of the extra allocated memory, zero only
the requested part, so that a redzone sanity check can be added to the
extra space later.

kzalloc users who call ksize() later to utilize this extra space should
be aware that the space is no longer zeroed when debug is enabled.
(Thanks to Kees Cook's effort to sanitize all ksize() use cases [1],
this won't be a big issue.)

[1]. https://lore.kernel.org/all/20220922031013.2150682-1-keescook@chromium.org/#r

Signed-off-by: Feng Tang
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Andrey Konovalov
---
 mm/slab.c |  7 ++++---
 mm/slab.h | 18 ++++++++++++++++--
 mm/slub.c | 10 +++++++---
 3 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index a5486ff8362a..4594de0e3d6b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3253,7 +3253,8 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	init = slab_want_init_on_alloc(flags, cachep);
 
 out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init,
+				cachep->object_size);
 	return objp;
 }
 
@@ -3506,13 +3507,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled section.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+			slab_want_init_on_alloc(flags, s), s->object_size);
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
 	return size;
 error:
 	local_irq_enable();
 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
diff --git a/mm/slab.h b/mm/slab.h
index 0202a8c2f0d2..8b4ee02fc14a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -720,12 +720,26 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 
 static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init)
+					size_t size, void **p, bool init,
+					unsigned int orig_size)
 {
+	unsigned int zero_size = s->object_size;
 	size_t i;
 
 	flags &= gfp_allowed_mask;
 
+	/*
+	 * For a kmalloc object, the allocated memory size (object_size) is
+	 * likely larger than the requested size (orig_size). If the redzone
+	 * check is enabled for the extra space, don't zero it, as it will
+	 * be redzoned soon. The redzone operation for this extra space can
+	 * be seen as a replacement for the current poisoning under certain
+	 * debug options, and won't break other sanity checks.
+	 */
+	if (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
+	    (s->flags & SLAB_KMALLOC))
+		zero_size = orig_size;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -736,7 +750,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	for (i = 0; i < size; i++) {
 		p[i] = kasan_slab_alloc(s, p[i], flags, init);
 		if (p[i] && init && !kasan_has_integrated_init())
-			memset(p[i], 0, s->object_size);
+			memset(p[i], 0, zero_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
					 s->flags, flags);
 		kmsan_slab_alloc(s, p[i], flags);
diff --git a/mm/slub.c b/mm/slub.c
index 12354fb8d6e4..17292c2d3eee 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3395,7 +3395,11 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
+	/*
+	 * When init equals 'true', as for the kzalloc() family, only
+	 * @orig_size bytes will be zeroed instead of s->object_size
+	 */
+	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
 	return object;
 }
 
@@ -3852,11 +3856,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-			slab_want_init_on_alloc(flags, s), s->object_size);
+			slab_want_init_on_alloc(flags, s), s->object_size);
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
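
---

Editor's note: below is a minimal user-space C sketch of the zeroing
decision this patch makes. It is not kernel code; kmalloc_bucket_size(),
sim_kzalloc(), and the two bool flags are hypothetical stand-ins for the
kmalloc size buckets and the SLAB_STORE_USER/SLAB_KMALLOC checks. It
illustrates how, with debug enabled, only the requested bytes are zeroed
and the rounded-up tail is left to be redzoned:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for SLAB_STORE_USER debugging and SLAB_KMALLOC. */
static bool debug_store_user = true;
static bool cache_is_kmalloc = true;

/* Round a request up to the next power-of-2 bucket, as kmalloc mostly does. */
static size_t kmalloc_bucket_size(size_t n)
{
	size_t b = 8;

	while (b < n)
		b <<= 1;
	return b;
}

/*
 * Mirrors the patched slab_post_alloc_hook() decision: zero only the
 * requested bytes when the cache is a kmalloc cache with debugging on,
 * leaving the rounded-up tail to be redzoned instead.
 */
static void *sim_kzalloc(size_t orig_size)
{
	size_t object_size = kmalloc_bucket_size(orig_size);
	size_t zero_size = object_size;
	unsigned char *p = malloc(object_size);

	if (!p)
		return NULL;

	memset(p, 0xAA, object_size);	/* pretend the slab holds stale data */

	if (debug_store_user && cache_is_kmalloc)
		zero_size = orig_size;

	memset(p, 0, zero_size);
	printf("requested %zu, bucket %zu, zeroed %zu, last tail byte 0x%02x\n",
	       orig_size, object_size, zero_size, p[object_size - 1]);
	return p;
}

int main(void)
{
	free(sim_kzalloc(20));	/* bucket 32: bytes 20..31 stay unzeroed */
	return 0;
}

Running it for a 20-byte request reports a 32-byte bucket with only 20
bytes zeroed; those last 12 bytes are exactly the window the later
redzone check is meant to watch.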