[06/18] x86/resctrl: Allow the allocator to check if a CLOSID can allocate clean RMID

Message ID 20221021131204.5581-7-james.morse@arm.com
State New
Series x86/resctrl: monitored closid+rmid together, separate arch/fs locking

Commit Message

James Morse Oct. 21, 2022, 1:11 p.m. UTC
  MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
used for different control groups.

This means once a CLOSID is allocated, all its monitoring ids may still be
dirty, and held in limbo.

Add a helper to allow the CLOSID allocator to check if a CLOSID has dirty
RMID values. This behaviour is enabled by a kconfig option selected by
the architecture, which avoids a pointless search for x86.

Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/x86/kernel/cpu/resctrl/internal.h |  1 +
 arch/x86/kernel/cpu/resctrl/monitor.c  | 31 ++++++++++++++++++++++++++
 arch/x86/kernel/cpu/resctrl/rdtgroup.c | 18 +++++++++------
 3 files changed, 43 insertions(+), 7 deletions(-)
  

Comments

Shawn Wang Nov. 8, 2022, 3:57 p.m. UTC | #1
Hi James,

On 10/21/2022 9:11 PM, James Morse wrote:
> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
> used for different control groups.
> 
> This means once a CLOSID is allocated, all its monitoring ids may still be
> dirty, and held in limbo.
> 
> Add a helper to allow the CLOSID allocator to check if a CLOSID has dirty
> RMID values. This behaviour is enabled by a kconfig option selected by
> the architecture, which avoids a pointless search for x86.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
>   arch/x86/kernel/cpu/resctrl/internal.h |  1 +
>   arch/x86/kernel/cpu/resctrl/monitor.c  | 31 ++++++++++++++++++++++++++
>   arch/x86/kernel/cpu/resctrl/rdtgroup.c | 18 +++++++++------
>   3 files changed, 43 insertions(+), 7 deletions(-)

> +/**
> + * resctrl_closid_is_dirty - Determine if clean RMID can be allocate for this
> + *                           CLOSID.
> + * @closid: The CLOSID that is being queried.
> + *
> + * MPAM's equivalent of RMID are per-CLOSID, meaning a freshly allocate CLOSID
> + * may not be able to allocate clean RMID. To avoid this the allocator will
> + * only return clean CLOSID.
> + */
> +bool resctrl_closid_is_dirty(u32 closid)
> +{
> +	struct rmid_entry *entry;
> +	int i;
> +
> +	lockdep_assert_held(&rdtgroup_mutex);
> +
> +	if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> +		return false;

Since a dirty CLOSID only occurs when the Kconfig option for MPAM is enabled, it seems
that the condition of the if statement here should take the opposite value:
	if (!IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
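
With that inversion applied, the whole helper posted in this patch would read
roughly as below (a sketch folding the suggested fix into the posted code; not
taken from a later revision):

	bool resctrl_closid_is_dirty(u32 closid)
	{
		struct rmid_entry *entry;
		int i;

		lockdep_assert_held(&rdtgroup_mutex);

		/*
		 * Only architectures that select the option (MPAM) need the
		 * scan; on x86 this returns early and the search is skipped.
		 */
		if (!IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
			return false;

		/* Dirty if any limbo (busy) RMID still maps to this CLOSID. */
		for (i = 0; i < resctrl_arch_system_num_rmid_idx(); i++) {
			entry = &rmid_ptrs[i];
			if (entry->closid != closid)
				continue;

			if (entry->busy)
				return true;
		}

		return false;
	}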

> +
> +	for (i = 0; i < resctrl_arch_system_num_rmid_idx(); i++) {
> +		entry = &rmid_ptrs[i];
> +		if (entry->closid != closid)
> +			continue;
> +
> +		if (entry->busy)
> +			return true;
> +	}
> +
> +	return false;
> +}
> +

Shawn
  
James Morse Nov. 9, 2022, 5:02 p.m. UTC | #2
Hi Shawn,

On 08/11/2022 15:57, Shawn Wang wrote:
> On 10/21/2022 9:11 PM, James Morse wrote:
>> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
>> used for different control groups.
>>
>> This means once a CLOSID is allocated, all its monitoring ids may still be
>> dirty, and held in limbo.
>>
>> Add a helper to allow the CLOSID allocator to check if a CLOSID has dirty
>> RMID values. This behaviour is enabled by a kconfig option selected by
>> the architecture, which avoids a pointless search for x86.

>> +/**
>> + * resctrl_closid_is_dirty - Determine if clean RMID can be allocate for this
>> + *                           CLOSID.
>> + * @closid: The CLOSID that is being queried.
>> + *
>> + * MPAM's equivalent of RMID are per-CLOSID, meaning a freshly allocate CLOSID
>> + * may not be able to allocate clean RMID. To avoid this the allocator will
>> + * only return clean CLOSID.
>> + */
>> +bool resctrl_closid_is_dirty(u32 closid)
>> +{
>> +    struct rmid_entry *entry;
>> +    int i;
>> +
>> +    lockdep_assert_held(&rdtgroup_mutex);
>> +
>> +    if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
>> +        return false;
> 
> Since a dirty CLOSID only occurs when the Kconfig option for MPAM is enabled, it seems
> that the condition of the if statement here should take the opposite value:
>     if (!IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))

Yup. Bother.

Thanks for spotting that! It was intended to avoid this work on x86 as it's pointless, and
the number of RMIDs could be large.


Thanks!

James
  
Shaopeng Tan (Fujitsu) Nov. 10, 2022, 10:50 a.m. UTC | #3
Hi James

> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can
> be used for different control groups.
> 
> This means once a CLOSID is allocated, all its monitoring ids may still be dirty,
> and held in limbo.
> 
> Add a helper to allow the CLOSID allocator to check if a CLOSID has dirty RMID
> values. This behaviour is enabled by a kconfig option selected by the
> architecture, which avoids a pointless search for x86.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
>  arch/x86/kernel/cpu/resctrl/internal.h |  1 +
>  arch/x86/kernel/cpu/resctrl/monitor.c  | 31 ++++++++++++++++++++++++++
>  arch/x86/kernel/cpu/resctrl/rdtgroup.c | 18 +++++++++------
>  3 files changed, 43 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index c8c46fe088be..faec12025a58 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -519,6 +519,7 @@ int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp);
>  void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp);
>  struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
>  int closids_supported(void);
> +bool resctrl_closid_is_dirty(u32 closid);
>  void closid_free(int closid);
>  int alloc_rmid(u32 closid);
>  void free_rmid(u32 closid, u32 rmid);
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 59da256a77fe..99854ef4dee4 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -320,6 +320,37 @@ static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
>  	return ERR_PTR(-ENOSPC);
>  }
> 
> +/**
> + * resctrl_closid_is_dirty - Determine if clean RMID can be allocate for this
> + *                           CLOSID.
> + * @closid: The CLOSID that is being queried.
> + *
> + * MPAM's equivalent of RMID are per-CLOSID, meaning a freshly allocate CLOSID
> + * may not be able to allocate clean RMID. To avoid this the allocator will
> + * only return clean CLOSID.
> + */
> +bool resctrl_closid_is_dirty(u32 closid)
> +{
> +	struct rmid_entry *entry;
> +	int i;
> +
> +	lockdep_assert_held(&rdtgroup_mutex);
> +
> +	if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> +		return false;
> +
> +	for (i = 0; i < resctrl_arch_system_num_rmid_idx(); i++) {
> +		entry = &rmid_ptrs[i];
> +		if (entry->closid != closid)
> +			continue;
> +
> +		if (entry->busy)
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
>  /*
>   * As of now the RMIDs allocation is the same in each domain.
>   * However we keep track of which packages the RMIDs
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index ac88610a6946..59f33adcf6f8 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -93,7 +93,7 @@ void rdt_last_cmd_printf(const char *fmt, ...)
>   * - Our choices on how to configure each resource become progressively more
>   *   limited as the number of resources grows.
>   */
> -static int closid_free_map;
> +static unsigned long closid_free_map;
>  static int closid_free_map_len;
> 
>  int closids_supported(void)
> @@ -119,14 +119,18 @@ static void closid_init(void)
> 
>  static int closid_alloc(void)
>  {
> -	u32 closid = ffs(closid_free_map);
> +	u32 closid;
> 
> -	if (closid == 0)
> -		return -ENOSPC;
> -	closid--;
> -	closid_free_map &= ~(1 << closid);
> +	for_each_set_bit(closid, &closid_free_map, closid_free_map_len) {
> +		if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID) &&
> +		    resctrl_closid_is_dirty(closid))

IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID) is redundant here,
since the same check is also at the beginning of resctrl_closid_is_dirty().
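
For illustration, with the call-site check dropped, closid_alloc() from this
patch would reduce to something like the sketch below (relying on the
IS_ENABLED() short-circuit inside the helper; not the final code):

	static int closid_alloc(void)
	{
		u32 closid;

		/* Walk the free map, skipping CLOSIDs whose RMIDs are still in limbo. */
		for_each_set_bit(closid, &closid_free_map, closid_free_map_len) {
			if (resctrl_closid_is_dirty(closid))
				continue;

			clear_bit(closid, &closid_free_map);
			return closid;
		}

		return -ENOSPC;
	}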

Best regards,
Shaopeng Tan

> +			continue;
> 
> -	return closid;
> +		clear_bit(closid, &closid_free_map);
> +		return closid;
> +	}
> +
> +	return -ENOSPC;
>  }
> 
>  void closid_free(int closid)
> --
> 2.30.2
  
James Morse Nov. 24, 2022, 2:21 p.m. UTC | #4
Hello,

On 10/11/2022 10:50, Shaopeng Tan (Fujitsu) wrote:
> Hi James
> 
>> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can
>> be used for different control groups.
>>
>> This means once a CLOSID is allocated, all its monitoring ids may still be dirty,
>> and held in limbo.
>>
>> Add a helper to allow the CLOSID allocator to check if a CLOSID has dirty RMID
>> values. This behaviour is enabled by a kconfig option selected by the
>> architecture, which avoids a pointless search for x86.
>>


>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>> index 59da256a77fe..99854ef4dee4 100644
>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>> @@ -320,6 +320,37 @@ static struct rmid_entry *resctrl_find_free_rmid(u32 closid)

>> +bool resctrl_closid_is_dirty(u32 closid)
>> +{
>> +	struct rmid_entry *entry;
>> +	int i;
>> +
>> +	lockdep_assert_held(&rdtgroup_mutex);
>> +
>> +	if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
>> +		return false;


>> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> index ac88610a6946..59f33adcf6f8 100644
>> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> @@ -119,14 +119,18 @@ static void closid_init(void)
>>
>>  static int closid_alloc(void)
>>  {
>> -	u32 closid = ffs(closid_free_map);
>> +	u32 closid;
>>
>> -	if (closid == 0)
>> -		return -ENOSPC;
>> -	closid--;
>> -	closid_free_map &= ~(1 << closid);
>> +	for_each_set_bit(closid, &closid_free_map, closid_free_map_len) {
>> +		if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID) &&
>> +		    resctrl_closid_is_dirty(closid))


> IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID) is redundant here,
> since the same check is also at the beginning of resctrl_closid_is_dirty().

This is true. I included it because resctrl_closid_is_dirty() is in a different
compilation unit, so the compiler can't know it does nothing if that config option isn't
enabled; the check at the call site avoided a pointless call to a function that does
nothing. But you're right, it would be more readable without it, and creating a control
group is hardly a performance-critical path. I'll remove it.
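
A hypothetical middle ground, not part of this series, would be a static inline
wrapper in internal.h in front of an out-of-line scan, keeping the compile-time
short-circuit without cluttering closid_alloc(). The name
__resctrl_closid_is_dirty() below is made up for this sketch only:

	/* Hypothetical: the scan itself stays out of line in monitor.c. */
	bool __resctrl_closid_is_dirty(u32 closid);

	/*
	 * A wrapper visible to rdtgroup.c lets the compiler drop the call
	 * entirely when the option is not selected (i.e. on x86).
	 */
	static inline bool resctrl_closid_is_dirty(u32 closid)
	{
		if (!IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
			return false;

		return __resctrl_closid_is_dirty(closid);
	}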


Thanks,

James
  

Patch

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index c8c46fe088be..faec12025a58 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -519,6 +519,7 @@  int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp);
 void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp);
 struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
 int closids_supported(void);
+bool resctrl_closid_is_dirty(u32 closid);
 void closid_free(int closid);
 int alloc_rmid(u32 closid);
 void free_rmid(u32 closid, u32 rmid);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 59da256a77fe..99854ef4dee4 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -320,6 +320,37 @@  static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
 	return ERR_PTR(-ENOSPC);
 }
 
+/**
+ * resctrl_closid_is_dirty - Determine if a clean RMID can be allocated for
+ *                           this CLOSID.
+ * @closid: The CLOSID that is being queried.
+ *
+ * MPAM's equivalent of RMIDs are per-CLOSID, meaning a freshly allocated
+ * CLOSID may not be able to allocate a clean RMID. To avoid this, the
+ * allocator will only return clean CLOSIDs.
+ */
+bool resctrl_closid_is_dirty(u32 closid)
+{
+	struct rmid_entry *entry;
+	int i;
+
+	lockdep_assert_held(&rdtgroup_mutex);
+
+	if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+		return false;
+
+	for (i = 0; i < resctrl_arch_system_num_rmid_idx(); i++) {
+		entry = &rmid_ptrs[i];
+		if (entry->closid != closid)
+			continue;
+
+		if (entry->busy)
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * As of now the RMIDs allocation is the same in each domain.
  * However we keep track of which packages the RMIDs
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index ac88610a6946..59f33adcf6f8 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -93,7 +93,7 @@  void rdt_last_cmd_printf(const char *fmt, ...)
  * - Our choices on how to configure each resource become progressively more
  *   limited as the number of resources grows.
  */
-static int closid_free_map;
+static unsigned long closid_free_map;
 static int closid_free_map_len;
 
 int closids_supported(void)
@@ -119,14 +119,18 @@  static void closid_init(void)
 
 static int closid_alloc(void)
 {
-	u32 closid = ffs(closid_free_map);
+	u32 closid;
 
-	if (closid == 0)
-		return -ENOSPC;
-	closid--;
-	closid_free_map &= ~(1 << closid);
+	for_each_set_bit(closid, &closid_free_map, closid_free_map_len) {
+		if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID) &&
+		    resctrl_closid_is_dirty(closid))
+			continue;
 
-	return closid;
+		clear_bit(closid, &closid_free_map);
+		return closid;
+	}
+
+	return -ENOSPC;
 }
 
 void closid_free(int closid)