[v4,6/7] x86/resctrl: Display CLOSID and RMID for the resctrl groups

Message ID 168177449635.1758847.13040588638888054027.stgit@bmoger-ubuntu
State New
Series x86/resctrl: Miscellaneous resctrl features

Commit Message

Moger, Babu April 17, 2023, 11:34 p.m. UTC
  When a user creates a control or monitor group, the assigned CLOSID
or RMID is not visible to the user. Exposing these IDs can help debug
issues in some cases. They are only available with the "-o debug"
mount option.

Add CLOSID (ctrl_hw_id) and RMID (mon_hw_id) to the control/monitor
group display in the resctrl interface.
$cat /sys/fs/resctrl/clos1/clos_hw_id
1
$cat /sys/fs/resctrl/mon_groups/mon1/mon_hw_id
3
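
For context, a fuller usage sketch (the group name ctrl_grp1 is hypothetical; requires root and a resctrl-capable system, since these files only appear with the debug mount option):

```shell
# Mount resctrl with the debug option so the hardware-ID files are exposed
mount -t resctrl resctrl -o debug /sys/fs/resctrl

# Create a control group; the kernel picks a free CLOSID for it
mkdir /sys/fs/resctrl/ctrl_grp1
cat /sys/fs/resctrl/ctrl_grp1/ctrl_hw_id
```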

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 Documentation/x86/resctrl.rst          |   17 ++++++++++++
 arch/x86/kernel/cpu/resctrl/rdtgroup.c |   44 ++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)
  

Comments

Bagas Sanjaya April 18, 2023, 2:22 a.m. UTC | #1
On 4/18/23 06:34, Babu Moger wrote:
> +"ctrl_hw_id":
> +	Available only with debug option. On x86, reading this file shows
> +	the Class of Service (CLOS) id which acts as a resource control
> +	tag on which the resources can be throttled. Kernel assigns a new
> +	CLOSID a control group is created depending on the available
> +	CLOSIDs. Multiple cores(or threads) or processes can share a
> +	same CLOSID in a resctrl group.
> +
> <snipped>...
> +"mon_hw_id":
> +	Available only with debug option. On x86, reading this file shows
> +	the Resource Monitoring ID (RMID) for monitoring the resource
> +	utilization. Monitoring is performed by tagging each core (or
> +	thread) or process via a RMID. Kernel assigns a new RMID when
> +	a group is created depending on the available RMIDs. Multiple
> +	cores (or threads) or processes can share a same RMID in a
> +	resctrl group.
> +

Is CONFIG_DEBUG=y required?
  
Moger, Babu April 18, 2023, 2:11 p.m. UTC | #2
On 4/17/23 21:22, Bagas Sanjaya wrote:
> On 4/18/23 06:34, Babu Moger wrote:
>> +"ctrl_hw_id":
>> +	Available only with debug option. On x86, reading this file shows
>> +	the Class of Service (CLOS) id which acts as a resource control
>> +	tag on which the resources can be throttled. Kernel assigns a new
>> +	CLOSID a control group is created depending on the available
>> +	CLOSIDs. Multiple cores(or threads) or processes can share a
>> +	same CLOSID in a resctrl group.
>> +
>> <snipped>...
>> +"mon_hw_id":
>> +	Available only with debug option. On x86, reading this file shows
>> +	the Resource Monitoring ID (RMID) for monitoring the resource
>> +	utilization. Monitoring is performed by tagging each core (or
>> +	thread) or process via a RMID. Kernel assigns a new RMID when
>> +	a group is created depending on the available RMIDs. Multiple
>> +	cores (or threads) or processes can share a same RMID in a
>> +	resctrl group.
>> +
> 
> Is CONFIG_DEBUG=y required?
No. Available with the resctrl "-o debug" mount option.
Thanks
Babu Moger
  
Reinette Chatre May 4, 2023, 7:04 p.m. UTC | #3
Hi Babu,

On 4/17/2023 4:34 PM, Babu Moger wrote:
> When a user creates a control or monitor group, the CLOSID or RMID
> are not visible to the user. It can help to debug the issues in some
> cases. There are only available with "-o debug" option.

Please see: Documentation/process/maintainer-tip.rst

"It's also useful to structure the changelog into several paragraphs and not
lump everything together into a single one. A good structure is to explain
the context, the problem and the solution in separate paragraphs and this
order."

> 
> Add CLOSID(ctrl_hw_id) and RMID(mon_hw_id) to the control/monitor groups

Please highlight that CLOSID and RMID are x86 concepts.

> display in resctrl interface.
> $cat /sys/fs/resctrl/clos1/clos_hw_id
> 1

This example does not match what the patch does (clos_hw_id -> ctrl_hw_id).
I also think this change would be more palatable (to non x86 audience) if
the example resource group has a generic (non-x86 concept) name.

> $cat /sys/fs/resctrl/mon_groups/mon1/mon_hw_id
> 3
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  Documentation/x86/resctrl.rst          |   17 ++++++++++++
>  arch/x86/kernel/cpu/resctrl/rdtgroup.c |   44 ++++++++++++++++++++++++++++++++
>  2 files changed, 61 insertions(+)
> 
> diff --git a/Documentation/x86/resctrl.rst b/Documentation/x86/resctrl.rst
> index be443251b484..5aff8c2beb08 100644
> --- a/Documentation/x86/resctrl.rst
> +++ b/Documentation/x86/resctrl.rst
> @@ -345,6 +345,14 @@ When control is enabled all CTRL_MON groups will also contain:
>  	file. On successful pseudo-locked region creation the mode will
>  	automatically change to "pseudo-locked".
>  
> +"ctrl_hw_id":
> +	Available only with debug option. On x86, reading this file shows
> +	the Class of Service (CLOS) id which acts as a resource control
> +	tag on which the resources can be throttled. Kernel assigns a new
> +	CLOSID a control group is created depending on the available
> +	CLOSIDs. Multiple cores(or threads) or processes can share a
> +	same CLOSID in a resctrl group.

Please keep other content from the documentation in mind when making
this change. CLOSID is already documented, including the fact that it
is a limited resource. Please see content under: "Notes on cache occupancy
monitoring and control" where it, for example, states that "The number
of CLOSid and RMID are limited by the hardware."

Considering this the above could just read:
"Available only with debug option. The identifier used by hardware 
 for the control group. On x86 this is the CLOSID."

Similar feedback to the "mon_hw_id" portion.

Reinette
  
Moger, Babu May 5, 2023, 9:45 p.m. UTC | #4
Hi Reinette,

On 5/4/2023 2:04 PM, Reinette Chatre wrote:
> Hi Babu,
>
> On 4/17/2023 4:34 PM, Babu Moger wrote:
>> When a user creates a control or monitor group, the CLOSID or RMID
>> are not visible to the user. It can help to debug the issues in some
>> cases. There are only available with "-o debug" option.
> Please see: Documentation/process/maintainer-tip.rst
>
> "It's also useful to structure the changelog into several paragraphs and not
> lump everything together into a single one. A good structure is to explain
> the context, the problem and the solution in separate paragraphs and this
> order."
ok Sure.
>> Add CLOSID(ctrl_hw_id) and RMID(mon_hw_id) to the control/monitor groups
> Please highlight that CLOSID and RMID are x86 concepts.
ok Sure.
>
>> display in resctrl interface.
>> $cat /sys/fs/resctrl/clos1/clos_hw_id
>> 1
> This example does not match what the patch does (clos_hw_id -> ctrl_hw_id).
My bad. Will fix it.
> I also think this change would be more palatable (to non x86 audience) if
> the example resource group has a generic (non-x86 concept) name.

ok. In this example the clos1 name sounds x86 specific. I can change it 
to ctrl_grp1. Hope this is what you meant.


>
>> $cat /sys/fs/resctrl/mon_groups/mon1/mon_hw_id
>> 3
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>>   Documentation/x86/resctrl.rst          |   17 ++++++++++++
>>   arch/x86/kernel/cpu/resctrl/rdtgroup.c |   44 ++++++++++++++++++++++++++++++++
>>   2 files changed, 61 insertions(+)
>>
>> diff --git a/Documentation/x86/resctrl.rst b/Documentation/x86/resctrl.rst
>> index be443251b484..5aff8c2beb08 100644
>> --- a/Documentation/x86/resctrl.rst
>> +++ b/Documentation/x86/resctrl.rst
>> @@ -345,6 +345,14 @@ When control is enabled all CTRL_MON groups will also contain:
>>   	file. On successful pseudo-locked region creation the mode will
>>   	automatically change to "pseudo-locked".
>>   
>> +"ctrl_hw_id":
>> +	Available only with debug option. On x86, reading this file shows
>> +	the Class of Service (CLOS) id which acts as a resource control
>> +	tag on which the resources can be throttled. Kernel assigns a new
>> +	CLOSID a control group is created depending on the available
>> +	CLOSIDs. Multiple cores(or threads) or processes can share a
>> +	same CLOSID in a resctrl group.
> Please keep other content from the documentation in mind when making
> this change. CLOSID is already documented, including the fact that it
> is a limited resource. Please see content under: "Notes on cache occupancy
> monitoring and control" where it, for example, states that "The number
> of CLOSid and RMID are limited by the hardware."
>
> Considering this the above could just read:
> "Available only with debug option. The identifier used by hardware
>   for the control group. On x86 this is the CLOSID."
ok. Sure.
>
> Similar feedback to the "mon_hw_id" portion.

Sure.

Thanks

Babu
  
Reinette Chatre May 5, 2023, 11:25 p.m. UTC | #5
Hi Babu,

On 5/5/2023 2:45 PM, Moger, Babu wrote:
> On 5/4/2023 2:04 PM, Reinette Chatre wrote:

Thank you for trimming the header in replies.

>> On 4/17/2023 4:34 PM, Babu Moger wrote:
>>> When a user creates a control or monitor group, the CLOSID or RMID
>>> are not visible to the user. It can help to debug the issues in some
>>> cases. There are only available with "-o debug" option.
>> Please see: Documentation/process/maintainer-tip.rst
>>
>> "It's also useful to structure the changelog into several paragraphs and not
>> lump everything together into a single one. A good structure is to explain
>> the context, the problem and the solution in separate paragraphs and this
>> order."
> ok Sure.
>>> Add CLOSID(ctrl_hw_id) and RMID(mon_hw_id) to the control/monitor groups
>> Please highlight that CLOSID and RMID are x86 concepts.
> ok Sure.
>>
>>> display in resctrl interface.
>>> $cat /sys/fs/resctrl/clos1/clos_hw_id
>>> 1
>> This example does not match what the patch does (clos_hw_id -> ctrl_hw_id).
> My bad. Will fix it.
>> I also think this change would be more palatable (to non x86 audience) if
>> the example resource group has a generic (non-x86 concept) name.
> 
> ok. In this example the clos1 name sounds x86 specific. I can change it to ctrl_grp1. Hope this is what you meant.

Yes, that is what I meant. ctrl_grp1 sounds good.

Thank you

Reinette
  

Patch

diff --git a/Documentation/x86/resctrl.rst b/Documentation/x86/resctrl.rst
index be443251b484..5aff8c2beb08 100644
--- a/Documentation/x86/resctrl.rst
+++ b/Documentation/x86/resctrl.rst
@@ -345,6 +345,14 @@ When control is enabled all CTRL_MON groups will also contain:
 	file. On successful pseudo-locked region creation the mode will
 	automatically change to "pseudo-locked".
 
+"ctrl_hw_id":
+	Available only with debug option. On x86, reading this file shows
+	the Class of Service (CLOS) id which acts as a resource control
+	tag on which the resources can be throttled. The kernel assigns
+	a new CLOSID when a control group is created, depending on the
+	available CLOSIDs. Multiple cores (or threads) or processes can
+	share the same CLOSID in a resctrl group.
+
 When monitoring is enabled all MON groups will also contain:
 
 "mon_data":
@@ -358,6 +366,15 @@ When monitoring is enabled all MON groups will also contain:
 	the sum for all tasks in the CTRL_MON group and all tasks in
 	MON groups. Please see example section for more details on usage.
 
+"mon_hw_id":
+	Available only with debug option. On x86, reading this file shows
+	the Resource Monitoring ID (RMID) used to monitor resource
+	utilization. Monitoring is performed by tagging each core (or
+	thread) or process via an RMID. The kernel assigns a new RMID
+	when a group is created, depending on the available RMIDs.
+	Multiple cores (or threads) or processes can share the same
+	RMID in a resctrl group.
+
 Resource allocation rules
 -------------------------
 
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 0169821bc08c..15ded0dd5b09 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -782,6 +782,38 @@ static int rdtgroup_tasks_show(struct kernfs_open_file *of,
 	return ret;
 }
 
+static int rdtgroup_closid_show(struct kernfs_open_file *of,
+				struct seq_file *s, void *v)
+{
+	struct rdtgroup *rdtgrp;
+	int ret = 0;
+
+	rdtgrp = rdtgroup_kn_lock_live(of->kn);
+	if (rdtgrp)
+		seq_printf(s, "%u\n", rdtgrp->closid);
+	else
+		ret = -ENOENT;
+	rdtgroup_kn_unlock(of->kn);
+
+	return ret;
+}
+
+static int rdtgroup_rmid_show(struct kernfs_open_file *of,
+			      struct seq_file *s, void *v)
+{
+	struct rdtgroup *rdtgrp;
+	int ret = 0;
+
+	rdtgrp = rdtgroup_kn_lock_live(of->kn);
+	if (rdtgrp)
+		seq_printf(s, "%u\n", rdtgrp->mon.rmid);
+	else
+		ret = -ENOENT;
+	rdtgroup_kn_unlock(of->kn);
+
+	return ret;
+}
+
 #ifdef CONFIG_PROC_CPU_RESCTRL
 
 /*
@@ -1843,6 +1875,12 @@ static struct rftype res_common_files[] = {
 		.seq_show	= rdtgroup_tasks_show,
 		.fflags		= RFTYPE_BASE,
 	},
+	{
+		.name		= "mon_hw_id",
+		.mode		= 0444,
+		.kf_ops		= &rdtgroup_kf_single_ops,
+		.seq_show	= rdtgroup_rmid_show,
+	},
 	{
 		.name		= "schemata",
 		.mode		= 0644,
@@ -1866,6 +1904,12 @@ static struct rftype res_common_files[] = {
 		.seq_show	= rdtgroup_size_show,
 		.fflags		= RFTYPE_CTRL_BASE,
 	},
+	{
+		.name		= "ctrl_hw_id",
+		.mode		= 0444,
+		.kf_ops		= &rdtgroup_kf_single_ops,
+		.seq_show	= rdtgroup_closid_show,
+	},
 
 };
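
The documentation text above says the kernel assigns a new CLOSID or RMID at group creation "depending on the available" IDs; in rdtgroup.c this is a first-free-bit search over a bitmap of free IDs (closid_free_map). A standalone sketch of that allocation idea follows; NUM_IDS and the function names are illustrative, not kernel symbols:

```c
#include <assert.h>

/*
 * Illustrative sketch only: models the bitmap-based first-free-ID
 * allocation the kernel uses for CLOSIDs (closid_free_map in
 * arch/x86/kernel/cpu/resctrl/rdtgroup.c). NUM_IDS and the function
 * names here are hypothetical stand-ins.
 */
#define NUM_IDS 16                  /* stand-in for the hardware CLOSID limit */

static unsigned int free_map;       /* bit n set => ID n is free */

void id_init(void)
{
	free_map = (1u << NUM_IDS) - 1;         /* all IDs start out free */
}

int id_alloc(void)
{
	for (int i = 0; i < NUM_IDS; i++) {
		if (free_map & (1u << i)) {
			free_map &= ~(1u << i); /* mark ID as in use */
			return i;
		}
	}
	return -1;                  /* exhausted: group creation would fail */
}

void id_free(int id)
{
	free_map |= 1u << id;       /* ID becomes reusable by a new group */
}
```

Because the lowest free bit is always taken first, deleting a group and creating a new one can hand out a previously used hardware ID, which is part of why exposing the raw ID is useful for debugging.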