Message ID | 20230718134120.81199-2-aaron.lu@intel.com |
---|---|
State | New |
Subject | [PATCH 1/4] sched/fair: free allocated memory on error in alloc_fair_sched_group() |
From | Aaron Lu <aaron.lu@intel.com> |
Date | Tue, 18 Jul 2023 21:41:17 +0800 |
Series | Reduce cost of accessing tg->load_avg |
Commit Message
Aaron Lu
July 18, 2023, 1:41 p.m. UTC
There is one struct cfs_rq and one struct se per CPU for each task group.
When the allocation for tg->cfs_rq[X] fails, the already allocated
tg->cfs_rq[0]..tg->cfs_rq[X-1] should be freed. The same applies to tg->se.
Signed-off-by: Aaron Lu <aaron.lu@intel.com>
---
kernel/sched/fair.c | 23 ++++++++++++++++-------
1 file changed, 16 insertions(+), 7 deletions(-)
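The rollback the commit message describes can be sketched in user-space C. This is an illustrative sketch only, not the kernel code: `struct item`, `struct group`, `group_alloc()` and the `fail_at` failure-injection parameter are all made up for the example; `calloc`/`free` stand in for `kcalloc`/`kzalloc_node`/`kfree`. Because the pointer arrays are zero-initialized, entries past the failure point are NULL and `free(NULL)` is a no-op, so one loop over all slots is enough.

```c
#include <stdlib.h>
#include <assert.h>

/* Hypothetical stand-in for struct cfs_rq / struct sched_entity:
 * the point is the allocation/rollback shape, not the kernel types. */
struct item { int cpu; };

struct group {
	struct item **rq;	/* plays the role of tg->cfs_rq */
	struct item **se;	/* plays the role of tg->se */
};

/* Allocate one rq and one se per CPU; on failure, free everything
 * already allocated before returning 0 (the patch's intent).
 * fail_at injects an allocation failure at a given index (-1: never). */
int group_alloc(struct group *g, int nr_cpus, int fail_at)
{
	int i;

	g->rq = calloc(nr_cpus, sizeof(*g->rq));
	if (!g->rq)
		return 0;
	g->se = calloc(nr_cpus, sizeof(*g->se));
	if (!g->se)
		goto err_free_rq_pointer;

	for (i = 0; i < nr_cpus; i++) {
		if (i == fail_at)	/* injected failure */
			goto err_free;
		g->rq[i] = calloc(1, sizeof(struct item));
		if (!g->rq[i])
			goto err_free;
		g->se[i] = calloc(1, sizeof(struct item));
		if (!g->se[i])
			goto err_free;
	}
	return 1;

err_free:
	/* calloc() zeroed the arrays, so entries past the failure
	 * point are NULL and free(NULL) is a no-op. */
	for (i = 0; i < nr_cpus; i++) {
		free(g->rq[i]);
		free(g->se[i]);
	}
	free(g->se);
	g->se = NULL;
err_free_rq_pointer:
	free(g->rq);
	g->rq = NULL;
	return 0;
}
```

The sketch mirrors the patch's label structure: a per-element failure unwinds every already-filled slot before freeing the two pointer arrays themselves.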
Comments
On 2023-07-18 at 21:41:17 +0800, Aaron Lu wrote:
> There is one struct cfs_rq and one struct se on each cpu for a taskgroup
> and when allocation for tg->cfs_rq[X] failed, the already allocated
> tg->cfs_rq[0]..tg->cfs_rq[X-1] should be freed. The same for tg->se.
>
> Signed-off-by: Aaron Lu <aaron.lu@intel.com>
>
> [full patch quoted; it is the same diff shown at the bottom of this page]

Not sure if I overlooked, if alloc_fair_sched_group() fails in
sched_create_group(), would sched_free_group()->free_fair_sched_group()
do the cleanup?

thanks,
Chenyu
On Tue, Jul 18, 2023 at 11:13:12PM +0800, Chen Yu wrote:
> Not sure if I overlooked, if alloc_fair_sched_group() fails in
> sched_create_group(), would sched_free_group()->free_fair_sched_group()
> do the cleanup?

You are right, I overlooked... Thanks for pointing this out.
On Wed, Jul 19, 2023 at 10:13:55AM +0800, Aaron Lu wrote:
> On Tue, Jul 18, 2023 at 11:13:12PM +0800, Chen Yu wrote:
> > Not sure if I overlooked, if alloc_fair_sched_group() fails in
> > sched_create_group(), would sched_free_group()->free_fair_sched_group()
> > do the cleanup?
>
> You are right, I overlooked... Thanks for pointing this out.

While preparing v2, one thing still looks strange in the existing code:

int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
{
	... ...

	for_each_possible_cpu(i) {
		cfs_rq = kzalloc_node(sizeof(struct cfs_rq),
				      GFP_KERNEL, cpu_to_node(i));
		if (!cfs_rq)
			goto err;

		se = kzalloc_node(sizeof(struct sched_entity_stats),
				  GFP_KERNEL, cpu_to_node(i));
		if (!se)
			goto err_free_rq;
			~~~~~~~~~~~~~~~~

Since free_fair_sched_group() will take care freeing these memories on
error path, it looks unnecessary to do this free for a random cfs_rq
here. Or do I miss something again...?

		init_cfs_rq(cfs_rq);
		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
		init_entity_runnable_average(se);
	}

	return 1;

err_free_rq:
	kfree(cfs_rq);
err:
	return 0;
}

I plan to change it to this:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a80a73909dc2..ef2618ad26eb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12461,7 +12461,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		se = kzalloc_node(sizeof(struct sched_entity_stats),
 				  GFP_KERNEL, cpu_to_node(i));
 		if (!se)
-			goto err_free_rq;
+			goto err;
 
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
@@ -12470,8 +12470,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 
 	return 1;
 
-err_free_rq:
-	kfree(cfs_rq);
 err:
 	return 0;
 }
On 2023-08-02 at 15:01:05 +0800, Aaron Lu wrote:
> While preparing v2, one thing still looks strange in the existing code:
> [snip]
> Since free_fair_sched_group() will take care freeing these memories on
> error path, it looks unnecessary to do this free for a random cfs_rq
> here. Or do I miss something again...?
> [snip]
> I plan to change it to this:
> [snip]

It seems that the err_free_rq was introduced in
Commit dfc12eb26a28 ("sched: Fix memory leak in two error corner cases")
The memory leak was detected by the static tool, while that tools is
unaware of free_fair_sched_group() which will take care of this.
Not sure if a self-maintained function is preferred, or let other
function to take care of that. For now it seems to be a duplicated free.
And alloc_rt_sched_group() has the same issue.

thanks,
Chenyu
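The pattern the thread converges on is caller-owned cleanup: the allocator may bail out with 0 at any point and leave the arrays partially filled, because the caller's free path walks the arrays and tolerates NULL entries. A user-space sketch under the same assumptions as before (`struct ent`, `struct tgrp`, `tgrp_alloc()`, `tgrp_free()` and the `fail_at` parameter are illustrative names, not kernel API; `tgrp_free()` plays the role of free_fair_sched_group()):

```c
#include <stdlib.h>
#include <assert.h>

/* Illustrative stand-ins; the real types are struct cfs_rq / sched_entity. */
struct ent { int cpu; };

struct tgrp {
	struct ent **rq;
	struct ent **se;
};

/* Allocation side: on any failure just return 0 and leave the arrays
 * partially filled; no local cleanup (fail_at injects a failure). */
int tgrp_alloc(struct tgrp *g, int nr_cpus, int fail_at)
{
	int i;

	g->rq = calloc(nr_cpus, sizeof(*g->rq));
	if (!g->rq)
		return 0;
	g->se = calloc(nr_cpus, sizeof(*g->se));
	if (!g->se)
		return 0;

	for (i = 0; i < nr_cpus; i++) {
		if (i == fail_at)
			return 0;	/* no local free; caller cleans up */
		g->rq[i] = calloc(1, sizeof(struct ent));
		if (!g->rq[i])
			return 0;
		g->se[i] = calloc(1, sizeof(struct ent));
		if (!g->se[i])
			return 0;
	}
	return 1;
}

/* Free side: tolerant of partial allocation, because calloc() zeroed
 * the arrays and free(NULL) is a no-op. */
void tgrp_free(struct tgrp *g, int nr_cpus)
{
	int i;

	if (g->rq) {
		for (i = 0; i < nr_cpus; i++)
			free(g->rq[i]);
		free(g->rq);
		g->rq = NULL;
	}
	if (g->se) {
		for (i = 0; i < nr_cpus; i++)
			free(g->se[i]);
		free(g->se);
		g->se = NULL;
	}
}
```

With this split, the `err_free_rq`-style local kfree becomes redundant, which is exactly the duplicated-free observation made about the existing code above.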
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a80a73909dc2..0f913487928d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12443,10 +12443,10 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 
 	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
 	if (!tg->cfs_rq)
-		goto err;
+		return 0;
 	tg->se = kcalloc(nr_cpu_ids, sizeof(se), GFP_KERNEL);
 	if (!tg->se)
-		goto err;
+		goto err_free_rq_pointer;
 
 	tg->shares = NICE_0_LOAD;
 
@@ -12456,12 +12456,12 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		cfs_rq = kzalloc_node(sizeof(struct cfs_rq),
 				      GFP_KERNEL, cpu_to_node(i));
 		if (!cfs_rq)
-			goto err;
+			goto err_free;
 
 		se = kzalloc_node(sizeof(struct sched_entity_stats),
 				  GFP_KERNEL, cpu_to_node(i));
 		if (!se)
-			goto err_free_rq;
+			goto err_free;
 
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
@@ -12470,9 +12470,18 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 
 	return 1;
 
-err_free_rq:
-	kfree(cfs_rq);
-err:
+err_free:
+	for_each_possible_cpu(i) {
+		kfree(tg->cfs_rq[i]);
+		kfree(tg->se[i]);
+
+		if (!tg->cfs_rq[i] && !tg->se[i])
+			break;
+	}
+	kfree(tg->se);
+err_free_rq_pointer:
+	kfree(tg->cfs_rq);
+
 	return 0;
 }