Message ID | 20230328095807.7014-2-songmuchun@bytedance.com
State | New
Headers |
From: Muchun Song <songmuchun@bytedance.com>
To: glider@google.com, elver@google.com, dvyukov@google.com, akpm@linux-foundation.org, jannh@google.com, sjpark@amazon.de, muchun.song@linux.dev
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH 1/6] mm: kfence: simplify kfence pool initialization
Date: Tue, 28 Mar 2023 17:58:02 +0800
Message-Id: <20230328095807.7014-2-songmuchun@bytedance.com>
In-Reply-To: <20230328095807.7014-1-songmuchun@bytedance.com>
References: <20230328095807.7014-1-songmuchun@bytedance.com>
Series | Simplify kfence code
Commit Message
Muchun Song
March 28, 2023, 9:58 a.m. UTC
There are three similar loops that initialize the kfence pool; we can
merge them into one loop to simplify the code and make it more
efficient.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
mm/kfence/core.c | 47 ++++++-----------------------------------------
1 file changed, 6 insertions(+), 41 deletions(-)
Comments
On Tue, 28 Mar 2023 at 11:58, Muchun Song <songmuchun@bytedance.com> wrote:
>
> There are three similar loops to initialize kfence pool, we could merge
> all of them into one loop to simplify the code and make code more
> efficient.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Reviewed-by: Marco Elver <elver@google.com>

> ---
>  mm/kfence/core.c | 47 ++++++-----------------------------------------
>  1 file changed, 6 insertions(+), 41 deletions(-)
> [...]
On Tue, 28 Mar 2023 at 13:55, Marco Elver <elver@google.com> wrote:
> [...]
> > -       /*
> > -        * Set up object pages: they must have PG_slab set, to avoid freeing
> > -        * these as real pages.
> > -        *
> > -        * We also want to avoid inserting kfence_free() in the kfree()
> > -        * fast-path in SLUB, and therefore need to ensure kfree() correctly
> > -        * enters __slab_free() slow-path.
> > -        */

Actually: can you retain this comment somewhere?

> [...]
> On Mar 28, 2023, at 20:05, Marco Elver <elver@google.com> wrote:
> [...]
> Actually: can you retain this comment somewhere?

Sure, I'll move this to right place. Thanks.
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 7d01a2c76e80..de62a84d4830 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -539,35 +539,10 @@ static void rcu_guarded_free(struct rcu_head *h)
 static unsigned long kfence_init_pool(void)
 {
 	unsigned long addr = (unsigned long)__kfence_pool;
-	struct page *pages;
 	int i;
 
 	if (!arch_kfence_init_pool())
 		return addr;
-
-	pages = virt_to_page(__kfence_pool);
-
-	/*
-	 * Set up object pages: they must have PG_slab set, to avoid freeing
-	 * these as real pages.
-	 *
-	 * We also want to avoid inserting kfence_free() in the kfree()
-	 * fast-path in SLUB, and therefore need to ensure kfree() correctly
-	 * enters __slab_free() slow-path.
-	 */
-	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(nth_page(pages, i));
-
-		if (!i || (i % 2))
-			continue;
-
-		__folio_set_slab(slab_folio(slab));
-#ifdef CONFIG_MEMCG
-		slab->memcg_data = (unsigned long)&kfence_metadata[i / 2 - 1].objcg |
-				   MEMCG_DATA_OBJCGS;
-#endif
-	}
-
 	/*
 	 * Protect the first 2 pages. The first page is mostly unnecessary, and
 	 * merely serves as an extended guard page. However, adding one
@@ -581,8 +556,9 @@ static unsigned long kfence_init_pool(void)
 		addr += PAGE_SIZE;
 	}
 
-	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++, addr += 2 * PAGE_SIZE) {
 		struct kfence_metadata *meta = &kfence_metadata[i];
+		struct slab *slab = page_slab(virt_to_page(addr));
 
 		/* Initialize metadata. */
 		INIT_LIST_HEAD(&meta->list);
@@ -593,26 +569,15 @@ static unsigned long kfence_init_pool(void)
 
 		/* Protect the right redzone. */
 		if (unlikely(!kfence_protect(addr + PAGE_SIZE)))
-			goto reset_slab;
-
-		addr += 2 * PAGE_SIZE;
-	}
-
-	return 0;
-
-reset_slab:
-	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(nth_page(pages, i));
+			return addr;
 
-		if (!i || (i % 2))
-			continue;
+		__folio_set_slab(slab_folio(slab));
 #ifdef CONFIG_MEMCG
-		slab->memcg_data = 0;
+		slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
 #endif
-		__folio_clear_slab(slab_folio(slab));
 	}
 
-	return addr;
+	return 0;
 }
 
 static bool __init kfence_init_pool_early(void)