Message ID | 168564612103.527584.4866621411469438225.stgit@bmoger-ubuntu
State | New
Headers |
Subject: [PATCH v5 7/8] x86/resctrl: Move default control group creation during mount
From: Babu Moger <babu.moger@amd.com>
To: <corbet@lwn.net>, <reinette.chatre@intel.com>, <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>
Date: Thu, 1 Jun 2023 14:02:01 -0500
Message-ID: <168564612103.527584.4866621411469438225.stgit@bmoger-ubuntu>
In-Reply-To: <168564586603.527584.10518315376465080920.stgit@bmoger-ubuntu>
References: <168564586603.527584.10518315376465080920.stgit@bmoger-ubuntu>
Series | x86/resctrl: Miscellaneous resctrl features
Commit Message
Moger, Babu
June 1, 2023, 7:02 p.m. UTC
Currently, the resctrl default control group is created during kernel
init time and the rest of the files are added during mount. If the new
files are to be added to the default group during the mount then it
has to be done separately again.

This can be avoided if all the files are created during the mount and
destroyed during the umount. Move the default group creation in
rdt_get_tree and removal in rdt_kill_sb.
Suggested-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 59 ++++++++++++++++----------------
1 file changed, 30 insertions(+), 29 deletions(-)
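In outline, the change moves creation of the default group's files from rdtgroup_init() to the mount path and tears the kernfs root down on unmount. Below is a condensed sketch of the resulting flow, paraphrased from the full diff at the bottom of this page with unrelated details elided into comments; it is an orientation aid, not the exact code.

static int rdt_get_tree(struct fs_context *fc)
{
	int ret;

	/* ... locking, rdt_enable_ctx(), schemata setup as before ... */

	/* Default group files are now created at mount time. */
	ret = rdtgroup_add_files(rdtgroup_default.kn, RFTYPE_CTRL_BASE);
	if (ret)
		goto out_schemata_free;
	kernfs_activate(rdtgroup_default.kn);

	/* ... info dir, mon groups, static keys, kernfs_get_tree() ... */
	return 0;

out_schemata_free:
	/* ... existing unwinding, now also kernfs_remove(rdtgroup_default.kn) ... */
	return ret;
}

static void rdt_kill_sb(struct super_block *sb)
{
	/* ... reset resources, rmdir_all_sub(), disable static keys ... */

	/* Remove the default group and clean up the root on unmount. */
	list_del(&rdtgroup_default.rdtgroup_list);
	kernfs_destroy_root(rdt_root);
	kernfs_kill_sb(sb);
	/* ... unlock ... */
}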
Comments
Hi Babu, On 6/1/2023 12:02 PM, Babu Moger wrote: > Currently, the resctrl default control group is created during kernel > init time and rest of the files are added during mount. If the new Please drop the word "Currently" > files are to be added to the default group during the mount then it > has to be done separately again. > > This can avoided if all the files are created during the mount and > destroyed during the umount. Move the default group creation in "creation in" -> "creation to"? > rdt_get_tree and removal in rdt_kill_sb. I think it would be simpler if this patch is moved earlier in series then patch 8 can more easily be squashed where appropriate. > > Suggested-by: Reinette Chatre <reinette.chatre@intel.com> > Signed-off-by: Babu Moger <babu.moger@amd.com> > --- > arch/x86/kernel/cpu/resctrl/rdtgroup.c | 59 ++++++++++++++++---------------- > 1 file changed, 30 insertions(+), 29 deletions(-) > > diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c > index 2f5cdc638607..e03cb01c4742 100644 > --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c > +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c > @@ -57,6 +57,7 @@ static char last_cmd_status_buf[512]; > struct dentry *debugfs_resctrl; > > static bool resctrl_debug; > +static int rdtgroup_setup_root(void); > > void rdt_last_cmd_clear(void) > { > @@ -2515,13 +2516,6 @@ static int rdt_get_tree(struct fs_context *fc) > > cpus_read_lock(); > mutex_lock(&rdtgroup_mutex); > - /* > - * resctrl file system can only be mounted once. > - */ > - if (static_branch_unlikely(&rdt_enable_key)) { > - ret = -EBUSY; > - goto out; > - } > This change is unexpected. > ret = rdt_enable_ctx(ctx); > if (ret < 0) > @@ -2535,9 +2529,15 @@ static int rdt_get_tree(struct fs_context *fc) > > closid_init(); > > + ret = rdtgroup_add_files(rdtgroup_default.kn, RFTYPE_CTRL_BASE); > + if (ret) > + goto out_schemata_free; > + > + kernfs_activate(rdtgroup_default.kn); > + > ret = rdtgroup_create_info_dir(rdtgroup_default.kn); > if (ret < 0) > - goto out_schemata_free; > + goto out_default; > > if (rdt_mon_capable) { > ret = mongroup_create_dir(rdtgroup_default.kn, > @@ -2587,6 +2587,8 @@ static int rdt_get_tree(struct fs_context *fc) > kernfs_remove(kn_mongrp); > out_info: > kernfs_remove(kn_info); > +out_default: > + kernfs_remove(rdtgroup_default.kn); > out_schemata_free: > schemata_list_destroy(); > out_mba: > @@ -2664,10 +2666,23 @@ static const struct fs_context_operations rdt_fs_context_ops = { > static int rdt_init_fs_context(struct fs_context *fc) > { > struct rdt_fs_context *ctx; > + int ret; > + > + /* > + * resctrl file system can only be mounted once. > + */ > + if (static_branch_unlikely(&rdt_enable_key)) > + return -EBUSY; > + > + ret = rdtgroup_setup_root(); > + if (ret) > + return ret; > Why was it necessary to move this code? > ctx = kzalloc(sizeof(struct rdt_fs_context), GFP_KERNEL); > - if (!ctx) > + if (!ctx) { > + kernfs_destroy_root(rdt_root); > return -ENOMEM; > + } > > ctx->kfc.root = rdt_root; > ctx->kfc.magic = RDTGROUP_SUPER_MAGIC; > @@ -2845,6 +2860,9 @@ static void rdt_kill_sb(struct super_block *sb) > static_branch_disable_cpuslocked(&rdt_alloc_enable_key); > static_branch_disable_cpuslocked(&rdt_mon_enable_key); > static_branch_disable_cpuslocked(&rdt_enable_key); > + /* Remove the default group and cleanup the root */ > + list_del(&rdtgroup_default.rdtgroup_list); > + kernfs_destroy_root(rdt_root); Why not just add kernfs_remove(rdtgroup_default.kn) to rmdir_all_sub()? 
> kernfs_kill_sb(sb); > mutex_unlock(&rdtgroup_mutex); > cpus_read_unlock(); > @@ -3598,10 +3616,8 @@ static struct kernfs_syscall_ops rdtgroup_kf_syscall_ops = { > .show_options = rdtgroup_show_options, > }; > > -static int __init rdtgroup_setup_root(void) > +static int rdtgroup_setup_root(void) > { > - int ret; > - > rdt_root = kernfs_create_root(&rdtgroup_kf_syscall_ops, > KERNFS_ROOT_CREATE_DEACTIVATED | > KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK, > @@ -3618,19 +3634,11 @@ static int __init rdtgroup_setup_root(void) > > list_add(&rdtgroup_default.rdtgroup_list, &rdt_all_groups); > > - ret = rdtgroup_add_files(kernfs_root_to_node(rdt_root), RFTYPE_CTRL_BASE); > - if (ret) { > - kernfs_destroy_root(rdt_root); > - goto out; > - } > - > rdtgroup_default.kn = kernfs_root_to_node(rdt_root); > - kernfs_activate(rdtgroup_default.kn); > > -out: > mutex_unlock(&rdtgroup_mutex); > > - return ret; > + return 0; > } > > static void domain_destroy_mon_state(struct rdt_domain *d) > @@ -3752,13 +3760,9 @@ int __init rdtgroup_init(void) > seq_buf_init(&last_cmd_status, last_cmd_status_buf, > sizeof(last_cmd_status_buf)); > > - ret = rdtgroup_setup_root(); > - if (ret) > - return ret; > - > ret = sysfs_create_mount_point(fs_kobj, "resctrl"); > if (ret) > - goto cleanup_root; > + return ret; > It is not clear to me why this change is required, could you please elaborate? It seems that all that is needed is for rdtgroup_add_files() to move to rdt_get_tree() (which you have done) and then an additional call to kernfs_remove() in rmdir_all_sub(). I must be missing something, could you please help me understand? > ret = register_filesystem(&rdt_fs_type); > if (ret) > @@ -3791,8 +3795,6 @@ int __init rdtgroup_init(void) > > cleanup_mountpoint: > sysfs_remove_mount_point(fs_kobj, "resctrl"); > -cleanup_root: > - kernfs_destroy_root(rdt_root); > > return ret; > } > @@ -3802,5 +3804,4 @@ void __exit rdtgroup_exit(void) > debugfs_remove_recursive(debugfs_resctrl); > unregister_filesystem(&rdt_fs_type); > sysfs_remove_mount_point(fs_kobj, "resctrl"); > - kernfs_destroy_root(rdt_root); > } > > Reinette
Hi Reinette, Sorry.. Took a while to respond. I had to recreate the issue to refresh my memory. On 7/7/23 16:46, Reinette Chatre wrote: > Hi Babu, > > On 6/1/2023 12:02 PM, Babu Moger wrote: >> Currently, the resctrl default control group is created during kernel >> init time and rest of the files are added during mount. If the new > > Please drop the word "Currently" Sure > >> files are to be added to the default group during the mount then it >> has to be done separately again. >> >> This can avoided if all the files are created during the mount and >> destroyed during the umount. Move the default group creation in > > "creation in" -> "creation to"? Sure > >> rdt_get_tree and removal in rdt_kill_sb. > > I think it would be simpler if this patch is moved earlier in series > then patch 8 can more easily be squashed where appropriate. Yes, I was thinking about that. > >> >> Suggested-by: Reinette Chatre <reinette.chatre@intel.com> >> Signed-off-by: Babu Moger <babu.moger@amd.com> >> --- >> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 59 ++++++++++++++++---------------- >> 1 file changed, 30 insertions(+), 29 deletions(-) >> >> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c >> index 2f5cdc638607..e03cb01c4742 100644 >> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c >> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c >> @@ -57,6 +57,7 @@ static char last_cmd_status_buf[512]; >> struct dentry *debugfs_resctrl; >> >> static bool resctrl_debug; >> +static int rdtgroup_setup_root(void); >> >> void rdt_last_cmd_clear(void) >> { >> @@ -2515,13 +2516,6 @@ static int rdt_get_tree(struct fs_context *fc) >> >> cpus_read_lock(); >> mutex_lock(&rdtgroup_mutex); >> - /* >> - * resctrl file system can only be mounted once. >> - */ >> - if (static_branch_unlikely(&rdt_enable_key)) { >> - ret = -EBUSY; >> - goto out; >> - } >> > > This change is unexpected. Please see my comments below. > >> ret = rdt_enable_ctx(ctx); >> if (ret < 0) >> @@ -2535,9 +2529,15 @@ static int rdt_get_tree(struct fs_context *fc) >> >> closid_init(); >> >> + ret = rdtgroup_add_files(rdtgroup_default.kn, RFTYPE_CTRL_BASE); >> + if (ret) >> + goto out_schemata_free; >> + >> + kernfs_activate(rdtgroup_default.kn); >> + >> ret = rdtgroup_create_info_dir(rdtgroup_default.kn); >> if (ret < 0) >> - goto out_schemata_free; >> + goto out_default; >> >> if (rdt_mon_capable) { >> ret = mongroup_create_dir(rdtgroup_default.kn, >> @@ -2587,6 +2587,8 @@ static int rdt_get_tree(struct fs_context *fc) >> kernfs_remove(kn_mongrp); >> out_info: >> kernfs_remove(kn_info); >> +out_default: >> + kernfs_remove(rdtgroup_default.kn); >> out_schemata_free: >> schemata_list_destroy(); >> out_mba: >> @@ -2664,10 +2666,23 @@ static const struct fs_context_operations rdt_fs_context_ops = { >> static int rdt_init_fs_context(struct fs_context *fc) >> { >> struct rdt_fs_context *ctx; >> + int ret; >> + >> + /* >> + * resctrl file system can only be mounted once. >> + */ >> + if (static_branch_unlikely(&rdt_enable_key)) >> + return -EBUSY; >> + >> + ret = rdtgroup_setup_root(); >> + if (ret) >> + return ret; >> > > Why was it necessary to move this code? Please see my comments below.. 
> >> ctx = kzalloc(sizeof(struct rdt_fs_context), GFP_KERNEL); >> - if (!ctx) >> + if (!ctx) { >> + kernfs_destroy_root(rdt_root); >> return -ENOMEM; >> + } >> >> ctx->kfc.root = rdt_root; >> ctx->kfc.magic = RDTGROUP_SUPER_MAGIC; >> @@ -2845,6 +2860,9 @@ static void rdt_kill_sb(struct super_block *sb) >> static_branch_disable_cpuslocked(&rdt_alloc_enable_key); >> static_branch_disable_cpuslocked(&rdt_mon_enable_key); >> static_branch_disable_cpuslocked(&rdt_enable_key); >> + /* Remove the default group and cleanup the root */ >> + list_del(&rdtgroup_default.rdtgroup_list); >> + kernfs_destroy_root(rdt_root); > > Why not just add kernfs_remove(rdtgroup_default.kn) to rmdir_all_sub()? List rdtgroup_default.rdtgroup_list is added during the mount and had to be removed during umount and rdt_root is destroyed here. Please see more comments below. > >> kernfs_kill_sb(sb); >> mutex_unlock(&rdtgroup_mutex); >> cpus_read_unlock(); >> @@ -3598,10 +3616,8 @@ static struct kernfs_syscall_ops rdtgroup_kf_syscall_ops = { >> .show_options = rdtgroup_show_options, >> }; >> >> -static int __init rdtgroup_setup_root(void) >> +static int rdtgroup_setup_root(void) >> { >> - int ret; >> - >> rdt_root = kernfs_create_root(&rdtgroup_kf_syscall_ops, >> KERNFS_ROOT_CREATE_DEACTIVATED | >> KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK, >> @@ -3618,19 +3634,11 @@ static int __init rdtgroup_setup_root(void) >> >> list_add(&rdtgroup_default.rdtgroup_list, &rdt_all_groups); >> >> - ret = rdtgroup_add_files(kernfs_root_to_node(rdt_root), RFTYPE_CTRL_BASE); >> - if (ret) { >> - kernfs_destroy_root(rdt_root); >> - goto out; >> - } >> - >> rdtgroup_default.kn = kernfs_root_to_node(rdt_root); >> - kernfs_activate(rdtgroup_default.kn); >> >> -out: >> mutex_unlock(&rdtgroup_mutex); >> >> - return ret; >> + return 0; >> } >> >> static void domain_destroy_mon_state(struct rdt_domain *d) >> @@ -3752,13 +3760,9 @@ int __init rdtgroup_init(void) >> seq_buf_init(&last_cmd_status, last_cmd_status_buf, >> sizeof(last_cmd_status_buf)); >> >> - ret = rdtgroup_setup_root(); >> - if (ret) >> - return ret; >> - >> ret = sysfs_create_mount_point(fs_kobj, "resctrl"); >> if (ret) >> - goto cleanup_root; >> + return ret; >> > > It is not clear to me why this change is required, could you > please elaborate? It seems that all that is needed is for > rdtgroup_add_files() to move to rdt_get_tree() (which you have done) > and then an additional call to kernfs_remove() in rmdir_all_sub(). > I must be missing something, could you please help me understand? > Yes. I started with that approach. But there are issues with that approach. Currently, rdt_root(which is rdtgroup_default.kn) is created during rdtgroup_init. At the same time the root files are created. Also, default group is added to rdt_all_groups. Basically, the root files and rdtgroup_default group is always there even though filesystem is never mounted. Also mbm_over and cqm_limbo workqueues are always running even though filesystem is not mounted. I changed rdtgroup_add_files() to move to rdt_get_tree() and added kernfs_remove() in rmdir_all_sub(). This caused problems. The kernfs_remove(rdtgroup_default.kn) removes all the reference counts and releases the root. When we mount again, we hit this this problem below. [ 404.558461] ------------[ cut here ]------------ [ 404.563631] WARNING: CPU: 35 PID: 7728 at fs/kernfs/dir.c:522 kernfs_new_node+0x63/0x70 404.778793] ? __warn+0x81/0x140 [ 404.782535] ? kernfs_new_node+0x63/0x70 [ 404.787036] ? report_bug+0x102/0x200 [ 404.791247] ? 
handle_bug+0x3f/0x70 [ 404.795269] ? exc_invalid_op+0x13/0x60 [ 404.799671] ? asm_exc_invalid_op+0x16/0x20 [ 404.804461] ? kernfs_new_node+0x63/0x70 [ 404.808954] ? snprintf+0x49/0x70 [ 404.812762] __kernfs_create_file+0x30/0xc0 [ 404.817534] rdtgroup_add_files+0x6c/0x100 Basically kernel says your rdt_root is not initialized. That is the reason I had to move everything to mount time. The rdt_root is created and initialized during the mount and also destroyed during the umount. And I had to move rdt_enable_key check during rdt_root creation.
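For context, here is a rough reconstruction of the sequence described above, pieced together from the quoted trace. The function names come from the existing resctrl code, but the ordering shown is the abandoned experiment (kernfs_remove() of the default group at unmount while rdt_root is still created once at init), not the posted patch.

/* Boot: kernfs root created once, at init time (pre-patch behaviour). */
rdt_root = kernfs_create_root(&rdtgroup_kf_syscall_ops,
			      KERNFS_ROOT_CREATE_DEACTIVATED |
			      KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK,
			      &rdtgroup_default);
rdtgroup_default.kn = kernfs_root_to_node(rdt_root);

/* First mount: works as expected. */
rdtgroup_add_files(rdtgroup_default.kn, RFTYPE_CTRL_BASE);

/* Unmount: experimental cleanup added to rmdir_all_sub(). */
kernfs_remove(rdtgroup_default.kn);	/* releases the root node */

/* Second mount: the root node is gone, so creating files under it ... */
rdtgroup_add_files(rdtgroup_default.kn, RFTYPE_CTRL_BASE);
/* ... ends up in __kernfs_create_file() -> kernfs_new_node(), which hits
 * the WARN at fs/kernfs/dir.c:522 shown in the trace above. */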
Hi Babu, On 7/14/2023 9:26 AM, Moger, Babu wrote: > Hi Reinette, > Sorry.. Took a while to respond. I had to recreate the issue to refresh my > memory. No problem! > On 7/7/23 16:46, Reinette Chatre wrote: >> Hi Babu, >> >> On 6/1/2023 12:02 PM, Babu Moger wrote: >>> ctx = kzalloc(sizeof(struct rdt_fs_context), GFP_KERNEL); >>> - if (!ctx) >>> + if (!ctx) { >>> + kernfs_destroy_root(rdt_root); >>> return -ENOMEM; >>> + } >>> >>> ctx->kfc.root = rdt_root; >>> ctx->kfc.magic = RDTGROUP_SUPER_MAGIC; >>> @@ -2845,6 +2860,9 @@ static void rdt_kill_sb(struct super_block *sb) >>> static_branch_disable_cpuslocked(&rdt_alloc_enable_key); >>> static_branch_disable_cpuslocked(&rdt_mon_enable_key); >>> static_branch_disable_cpuslocked(&rdt_enable_key); >>> + /* Remove the default group and cleanup the root */ >>> + list_del(&rdtgroup_default.rdtgroup_list); >>> + kernfs_destroy_root(rdt_root); >> >> Why not just add kernfs_remove(rdtgroup_default.kn) to rmdir_all_sub()? > > List rdtgroup_default.rdtgroup_list is added during the mount and had to > be removed during umount and rdt_root is destroyed here. I do not think it is required for default resource group management to be tied with the resctrl files associated with default resource group. I think rdtgroup_setup_root can be split in two, one for all the resctrl files that should be done at mount/unmount and one for the default group init done at __init. >>> kernfs_kill_sb(sb); >>> mutex_unlock(&rdtgroup_mutex); >>> cpus_read_unlock(); >>> @@ -3598,10 +3616,8 @@ static struct kernfs_syscall_ops rdtgroup_kf_syscall_ops = { >>> .show_options = rdtgroup_show_options, >>> }; >>> >>> -static int __init rdtgroup_setup_root(void) >>> +static int rdtgroup_setup_root(void) >>> { >>> - int ret; >>> - >>> rdt_root = kernfs_create_root(&rdtgroup_kf_syscall_ops, >>> KERNFS_ROOT_CREATE_DEACTIVATED | >>> KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK, >>> @@ -3618,19 +3634,11 @@ static int __init rdtgroup_setup_root(void) >>> >>> list_add(&rdtgroup_default.rdtgroup_list, &rdt_all_groups); >>> >>> - ret = rdtgroup_add_files(kernfs_root_to_node(rdt_root), RFTYPE_CTRL_BASE); >>> - if (ret) { >>> - kernfs_destroy_root(rdt_root); >>> - goto out; >>> - } >>> - >>> rdtgroup_default.kn = kernfs_root_to_node(rdt_root); >>> - kernfs_activate(rdtgroup_default.kn); >>> >>> -out: >>> mutex_unlock(&rdtgroup_mutex); >>> >>> - return ret; >>> + return 0; >>> } >>> >>> static void domain_destroy_mon_state(struct rdt_domain *d) >>> @@ -3752,13 +3760,9 @@ int __init rdtgroup_init(void) >>> seq_buf_init(&last_cmd_status, last_cmd_status_buf, >>> sizeof(last_cmd_status_buf)); >>> >>> - ret = rdtgroup_setup_root(); >>> - if (ret) >>> - return ret; >>> - >>> ret = sysfs_create_mount_point(fs_kobj, "resctrl"); >>> if (ret) >>> - goto cleanup_root; >>> + return ret; >>> >> >> It is not clear to me why this change is required, could you >> please elaborate? It seems that all that is needed is for >> rdtgroup_add_files() to move to rdt_get_tree() (which you have done) >> and then an additional call to kernfs_remove() in rmdir_all_sub(). >> I must be missing something, could you please help me understand? >> > > Yes. I started with that approach. But there are issues with that approach. > > Currently, rdt_root(which is rdtgroup_default.kn) is created during > rdtgroup_init. At the same time the root files are created. Also, default > group is added to rdt_all_groups. Basically, the root files and > rdtgroup_default group is always there even though filesystem is never > mounted. 
Also mbm_over and cqm_limbo workqueues are always running even > though filesystem is not mounted. > > I changed rdtgroup_add_files() to move to rdt_get_tree() and added > kernfs_remove() in rmdir_all_sub(). This caused problems. The > kernfs_remove(rdtgroup_default.kn) removes all the reference counts and > releases the root. When we mount again, we hit this this problem below. > > [ 404.558461] ------------[ cut here ]------------ > [ 404.563631] WARNING: CPU: 35 PID: 7728 at fs/kernfs/dir.c:522 > kernfs_new_node+0x63/0x70 > > 404.778793] ? __warn+0x81/0x140 > [ 404.782535] ? kernfs_new_node+0x63/0x70 > [ 404.787036] ? report_bug+0x102/0x200 > [ 404.791247] ? handle_bug+0x3f/0x70 > [ 404.795269] ? exc_invalid_op+0x13/0x60 > [ 404.799671] ? asm_exc_invalid_op+0x16/0x20 > [ 404.804461] ? kernfs_new_node+0x63/0x70 > [ 404.808954] ? snprintf+0x49/0x70 > [ 404.812762] __kernfs_create_file+0x30/0xc0 > [ 404.817534] rdtgroup_add_files+0x6c/0x100 > > Basically kernel says your rdt_root is not initialized. That is the reason > I had to move everything to mount time. The rdt_root is created and > initialized during the mount and also destroyed during the umount. > And I had to move rdt_enable_key check during rdt_root creation. > ok, thank you for the additional details. I see now how this patch evolved. I understand how rdt_root needs to be created/destroyed during mount/unmount. If I understand correctly the changes to rdt_init_fs_context() was motivated by this line: ctx->kfc.root = rdt_root; ... that prompted you to move rdt_root creation there in order to have it present for this assignment and that prompted the rdt_enable_key check to follow. Is this correct? I am concerned about the changes to rdt_init_fs_context() since it further separates the resctrl file management, it breaks the symmetry of the key checked and set, and finally these new actions seem unrelated to a function named "init_fs_context". I looked at other examples and from what I can tell it is not required that ctx->kfc.root be initialized within rdt_init_fs_context(). Looks like the value is required by kernfs_get_tree() that is called from rdt_get_tree(). For comparison I found cgroup_do_get_tree(). Note how cgroup_do_get_tree(), within the .get_tree callback, initializes kernfs_fs_context.root and then call kernfs_get_tree()? It thus looks to me as though things can be simplified significantly if the kernfs_fs_context.root assignment is moved from rdt_init_fs_context() to rdt_get_tree(). rdt_get_tree() can then create rdt_root (and add all needed files), assign it to kernfs_fs_context.root and call kernfs_get_tree(). What do you think? Reinette
Hi Reinette, On 7/14/23 16:54, Reinette Chatre wrote: > Hi Babu, > > On 7/14/2023 9:26 AM, Moger, Babu wrote: >> Hi Reinette, >> Sorry.. Took a while to respond. I had to recreate the issue to refresh my >> memory. > > No problem! > >> On 7/7/23 16:46, Reinette Chatre wrote: >>> Hi Babu, >>> >>> On 6/1/2023 12:02 PM, Babu Moger wrote: > > >>>> ctx = kzalloc(sizeof(struct rdt_fs_context), GFP_KERNEL); >>>> - if (!ctx) >>>> + if (!ctx) { >>>> + kernfs_destroy_root(rdt_root); >>>> return -ENOMEM; >>>> + } >>>> >>>> ctx->kfc.root = rdt_root; >>>> ctx->kfc.magic = RDTGROUP_SUPER_MAGIC; >>>> @@ -2845,6 +2860,9 @@ static void rdt_kill_sb(struct super_block *sb) >>>> static_branch_disable_cpuslocked(&rdt_alloc_enable_key); >>>> static_branch_disable_cpuslocked(&rdt_mon_enable_key); >>>> static_branch_disable_cpuslocked(&rdt_enable_key); >>>> + /* Remove the default group and cleanup the root */ >>>> + list_del(&rdtgroup_default.rdtgroup_list); >>>> + kernfs_destroy_root(rdt_root); >>> >>> Why not just add kernfs_remove(rdtgroup_default.kn) to rmdir_all_sub()? >> >> List rdtgroup_default.rdtgroup_list is added during the mount and had to >> be removed during umount and rdt_root is destroyed here. > > I do not think it is required for default resource group management to > be tied with the resctrl files associated with default resource group. > > I think rdtgroup_setup_root can be split in two, one for all the > resctrl files that should be done at mount/unmount and one for the > default group init done at __init. Ok. > >>>> kernfs_kill_sb(sb); >>>> mutex_unlock(&rdtgroup_mutex); >>>> cpus_read_unlock(); >>>> @@ -3598,10 +3616,8 @@ static struct kernfs_syscall_ops rdtgroup_kf_syscall_ops = { >>>> .show_options = rdtgroup_show_options, >>>> }; >>>> >>>> -static int __init rdtgroup_setup_root(void) >>>> +static int rdtgroup_setup_root(void) >>>> { >>>> - int ret; >>>> - >>>> rdt_root = kernfs_create_root(&rdtgroup_kf_syscall_ops, >>>> KERNFS_ROOT_CREATE_DEACTIVATED | >>>> KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK, >>>> @@ -3618,19 +3634,11 @@ static int __init rdtgroup_setup_root(void) >>>> >>>> list_add(&rdtgroup_default.rdtgroup_list, &rdt_all_groups); >>>> >>>> - ret = rdtgroup_add_files(kernfs_root_to_node(rdt_root), RFTYPE_CTRL_BASE); >>>> - if (ret) { >>>> - kernfs_destroy_root(rdt_root); >>>> - goto out; >>>> - } >>>> - >>>> rdtgroup_default.kn = kernfs_root_to_node(rdt_root); >>>> - kernfs_activate(rdtgroup_default.kn); >>>> >>>> -out: >>>> mutex_unlock(&rdtgroup_mutex); >>>> >>>> - return ret; >>>> + return 0; >>>> } >>>> >>>> static void domain_destroy_mon_state(struct rdt_domain *d) >>>> @@ -3752,13 +3760,9 @@ int __init rdtgroup_init(void) >>>> seq_buf_init(&last_cmd_status, last_cmd_status_buf, >>>> sizeof(last_cmd_status_buf)); >>>> >>>> - ret = rdtgroup_setup_root(); >>>> - if (ret) >>>> - return ret; >>>> - >>>> ret = sysfs_create_mount_point(fs_kobj, "resctrl"); >>>> if (ret) >>>> - goto cleanup_root; >>>> + return ret; >>>> >>> >>> It is not clear to me why this change is required, could you >>> please elaborate? It seems that all that is needed is for >>> rdtgroup_add_files() to move to rdt_get_tree() (which you have done) >>> and then an additional call to kernfs_remove() in rmdir_all_sub(). >>> I must be missing something, could you please help me understand? >>> >> >> Yes. I started with that approach. But there are issues with that approach. >> >> Currently, rdt_root(which is rdtgroup_default.kn) is created during >> rdtgroup_init. At the same time the root files are created. 
Also, default >> group is added to rdt_all_groups. Basically, the root files and >> rdtgroup_default group is always there even though filesystem is never >> mounted. Also mbm_over and cqm_limbo workqueues are always running even >> though filesystem is not mounted. >> >> I changed rdtgroup_add_files() to move to rdt_get_tree() and added >> kernfs_remove() in rmdir_all_sub(). This caused problems. The >> kernfs_remove(rdtgroup_default.kn) removes all the reference counts and >> releases the root. When we mount again, we hit this this problem below. >> >> [ 404.558461] ------------[ cut here ]------------ >> [ 404.563631] WARNING: CPU: 35 PID: 7728 at fs/kernfs/dir.c:522 >> kernfs_new_node+0x63/0x70 >> >> 404.778793] ? __warn+0x81/0x140 >> [ 404.782535] ? kernfs_new_node+0x63/0x70 >> [ 404.787036] ? report_bug+0x102/0x200 >> [ 404.791247] ? handle_bug+0x3f/0x70 >> [ 404.795269] ? exc_invalid_op+0x13/0x60 >> [ 404.799671] ? asm_exc_invalid_op+0x16/0x20 >> [ 404.804461] ? kernfs_new_node+0x63/0x70 >> [ 404.808954] ? snprintf+0x49/0x70 >> [ 404.812762] __kernfs_create_file+0x30/0xc0 >> [ 404.817534] rdtgroup_add_files+0x6c/0x100 >> >> Basically kernel says your rdt_root is not initialized. That is the reason >> I had to move everything to mount time. The rdt_root is created and >> initialized during the mount and also destroyed during the umount. >> And I had to move rdt_enable_key check during rdt_root creation. >> > > ok, thank you for the additional details. I see now how this patch evolved. > I understand how rdt_root needs to be created/destroyed > during mount/unmount. If I understand correctly the changes to > rdt_init_fs_context() was motivated by this line: > > ctx->kfc.root = rdt_root; > > ... that prompted you to move rdt_root creation there in order to have > it present for this assignment and that prompted the > rdt_enable_key check to follow. Is this correct? That is correct. > > I am concerned about the changes to rdt_init_fs_context() since it further > separates the resctrl file management, it breaks the symmetry of the > key checked and set, and finally these new actions seem unrelated to a function > named "init_fs_context". I looked at other examples and from what I can tell > it is not required that ctx->kfc.root be initialized within > rdt_init_fs_context(). Looks like the value is required by kernfs_get_tree() > that is called from rdt_get_tree(). For comparison I found cgroup_do_get_tree(). > Note how cgroup_do_get_tree(), within the .get_tree callback, > initializes kernfs_fs_context.root and then call kernfs_get_tree()? Yes. I see that. Thanks for pointing. > > It thus looks to me as though things can be simplified significantly > if the kernfs_fs_context.root assignment is moved from rdt_init_fs_context() > to rdt_get_tree(). rdt_get_tree() can then create rdt_root (and add all needed > files), assign it to kernfs_fs_context.root and call kernfs_get_tree(). > > What do you think? Yes. I think we can do that. Let me try it. Will let you know how it goes. Thanks for the suggestion.
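A minimal sketch of the direction agreed above, assuming a follow-up revision (this is not code from the posted series; the v5 diff follows below): rdt_get_tree() creates rdt_root itself, assigns kernfs_fs_context.root there, and then calls kernfs_get_tree(), the same shape as cgroup_do_get_tree().

static int rdt_get_tree(struct fs_context *fc)
{
	struct rdt_fs_context *ctx = rdt_fc2context(fc);
	int ret;

	/* Create the kernfs root at mount time instead of in rdtgroup_init(). */
	ret = rdtgroup_setup_root();
	if (ret)
		return ret;

	ret = rdtgroup_add_files(rdtgroup_default.kn, RFTYPE_CTRL_BASE);
	if (ret)
		goto out_root;

	/* Assignment moved here from rdt_init_fs_context(). */
	ctx->kfc.root = rdt_root;
	ret = kernfs_get_tree(fc);
	if (ret < 0)
		goto out_root;

	/* ... the rest of the existing mount-time setup ... */
	return 0;

out_root:
	kernfs_destroy_root(rdt_root);
	return ret;
}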
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 2f5cdc638607..e03cb01c4742 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -57,6 +57,7 @@ static char last_cmd_status_buf[512];
 struct dentry *debugfs_resctrl;
 
 static bool resctrl_debug;
+static int rdtgroup_setup_root(void);
 
 void rdt_last_cmd_clear(void)
 {
@@ -2515,13 +2516,6 @@ static int rdt_get_tree(struct fs_context *fc)
 
 	cpus_read_lock();
 	mutex_lock(&rdtgroup_mutex);
-	/*
-	 * resctrl file system can only be mounted once.
-	 */
-	if (static_branch_unlikely(&rdt_enable_key)) {
-		ret = -EBUSY;
-		goto out;
-	}
 
 	ret = rdt_enable_ctx(ctx);
 	if (ret < 0)
@@ -2535,9 +2529,15 @@ static int rdt_get_tree(struct fs_context *fc)
 
 	closid_init();
 
+	ret = rdtgroup_add_files(rdtgroup_default.kn, RFTYPE_CTRL_BASE);
+	if (ret)
+		goto out_schemata_free;
+
+	kernfs_activate(rdtgroup_default.kn);
+
 	ret = rdtgroup_create_info_dir(rdtgroup_default.kn);
 	if (ret < 0)
-		goto out_schemata_free;
+		goto out_default;
 
 	if (rdt_mon_capable) {
 		ret = mongroup_create_dir(rdtgroup_default.kn,
@@ -2587,6 +2587,8 @@ static int rdt_get_tree(struct fs_context *fc)
 	kernfs_remove(kn_mongrp);
 out_info:
 	kernfs_remove(kn_info);
+out_default:
+	kernfs_remove(rdtgroup_default.kn);
 out_schemata_free:
 	schemata_list_destroy();
 out_mba:
@@ -2664,10 +2666,23 @@ static const struct fs_context_operations rdt_fs_context_ops = {
 static int rdt_init_fs_context(struct fs_context *fc)
 {
 	struct rdt_fs_context *ctx;
+	int ret;
+
+	/*
+	 * resctrl file system can only be mounted once.
+	 */
+	if (static_branch_unlikely(&rdt_enable_key))
+		return -EBUSY;
+
+	ret = rdtgroup_setup_root();
+	if (ret)
+		return ret;
 
 	ctx = kzalloc(sizeof(struct rdt_fs_context), GFP_KERNEL);
-	if (!ctx)
+	if (!ctx) {
+		kernfs_destroy_root(rdt_root);
 		return -ENOMEM;
+	}
 
 	ctx->kfc.root = rdt_root;
 	ctx->kfc.magic = RDTGROUP_SUPER_MAGIC;
@@ -2845,6 +2860,9 @@ static void rdt_kill_sb(struct super_block *sb)
 	static_branch_disable_cpuslocked(&rdt_alloc_enable_key);
 	static_branch_disable_cpuslocked(&rdt_mon_enable_key);
 	static_branch_disable_cpuslocked(&rdt_enable_key);
+	/* Remove the default group and cleanup the root */
+	list_del(&rdtgroup_default.rdtgroup_list);
+	kernfs_destroy_root(rdt_root);
 	kernfs_kill_sb(sb);
 	mutex_unlock(&rdtgroup_mutex);
 	cpus_read_unlock();
@@ -3598,10 +3616,8 @@ static struct kernfs_syscall_ops rdtgroup_kf_syscall_ops = {
 	.show_options	= rdtgroup_show_options,
 };
 
-static int __init rdtgroup_setup_root(void)
+static int rdtgroup_setup_root(void)
 {
-	int ret;
-
 	rdt_root = kernfs_create_root(&rdtgroup_kf_syscall_ops,
 				      KERNFS_ROOT_CREATE_DEACTIVATED |
 				      KERNFS_ROOT_EXTRA_OPEN_PERM_CHECK,
@@ -3618,19 +3634,11 @@ static int __init rdtgroup_setup_root(void)
 
 	list_add(&rdtgroup_default.rdtgroup_list, &rdt_all_groups);
 
-	ret = rdtgroup_add_files(kernfs_root_to_node(rdt_root), RFTYPE_CTRL_BASE);
-	if (ret) {
-		kernfs_destroy_root(rdt_root);
-		goto out;
-	}
-
 	rdtgroup_default.kn = kernfs_root_to_node(rdt_root);
-	kernfs_activate(rdtgroup_default.kn);
 
-out:
 	mutex_unlock(&rdtgroup_mutex);
 
-	return ret;
+	return 0;
 }
 
 static void domain_destroy_mon_state(struct rdt_domain *d)
@@ -3752,13 +3760,9 @@ int __init rdtgroup_init(void)
 	seq_buf_init(&last_cmd_status, last_cmd_status_buf,
 		     sizeof(last_cmd_status_buf));
 
-	ret = rdtgroup_setup_root();
-	if (ret)
-		return ret;
-
 	ret = sysfs_create_mount_point(fs_kobj, "resctrl");
 	if (ret)
-		goto cleanup_root;
+		return ret;
 
 	ret = register_filesystem(&rdt_fs_type);
 	if (ret)
@@ -3791,8 +3795,6 @@ int __init rdtgroup_init(void)
 
 cleanup_mountpoint:
 	sysfs_remove_mount_point(fs_kobj, "resctrl");
-cleanup_root:
-	kernfs_destroy_root(rdt_root);
 
 	return ret;
 }
@@ -3802,5 +3804,4 @@ void __exit rdtgroup_exit(void)
 	debugfs_remove_recursive(debugfs_resctrl);
 	unregister_filesystem(&rdt_fs_type);
 	sysfs_remove_mount_point(fs_kobj, "resctrl");
-	kernfs_destroy_root(rdt_root);
 }