Message ID | 20230421141723.2405942-8-peternewman@google.com |
---|---|
State | New |
Headers |
Date: Fri, 21 Apr 2023 16:17:21 +0200
Subject: [PATCH v1 7/9] x86/resctrl: Assign HW RMIDs to CPUs for soft RMID
From: Peter Newman <peternewman@google.com>
To: Fenghua Yu <fenghua.yu@intel.com>, Reinette Chatre <reinette.chatre@intel.com>
Cc: Babu Moger <babu.moger@amd.com>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>, Stephane Eranian <eranian@google.com>, James Morse <james.morse@arm.com>, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Newman <peternewman@google.com>
Message-ID: <20230421141723.2405942-8-peternewman@google.com>
In-Reply-To: <20230421141723.2405942-1-peternewman@google.com>
Series | x86/resctrl: Use soft RMIDs for reliable MBM on AMD |
Commit Message
Peter Newman
April 21, 2023, 2:17 p.m. UTC
To implement soft RMIDs, each CPU needs a HW RMID that is unique within
its L3 cache domain. This is the minimum number of RMIDs needed to
monitor all CPUs.
This is accomplished by determining the rank of each CPU's mask bit
within its L3 shared_cpu_mask in resctrl_online_cpu().
Signed-off-by: Peter Newman <peternewman@google.com>
---
arch/x86/kernel/cpu/resctrl/core.c | 39 +++++++++++++++++++++++++++++-
1 file changed, 38 insertions(+), 1 deletion(-)
Comments
Hi Peter,

On 4/21/2023 7:17 AM, Peter Newman wrote:
> To implement soft RMIDs, each CPU needs a HW RMID that is unique within
> its L3 cache domain. This is the minimum number of RMIDs needed to
> monitor all CPUs.
>
> This is accomplished by determining the rank of each CPU's mask bit
> within its L3 shared_cpu_mask in resctrl_online_cpu().
>
> Signed-off-by: Peter Newman <peternewman@google.com>
> ---
>  arch/x86/kernel/cpu/resctrl/core.c | 39 +++++++++++++++++++++++++++++-
>  1 file changed, 38 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
> index 47b1c37a81f8..b0d873231b1e 100644
> --- a/arch/x86/kernel/cpu/resctrl/core.c
> +++ b/arch/x86/kernel/cpu/resctrl/core.c
> @@ -596,6 +596,38 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
> 	}
> }
>
> +/* Assign each CPU an RMID that is unique within its cache domain. */
> +static u32 determine_hw_rmid_for_cpu(int cpu)

This code tends to use the verb "get", something like "get_hw_rmid()" could work.

> +{
> +	struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
> +	struct cacheinfo *l3ci = NULL;
> +	u32 rmid;
> +	int i;
> +
> +	/* Locate the cacheinfo for this CPU's L3 cache. */
> +	for (i = 0; i < ci->num_leaves; i++) {
> +		if (ci->info_list[i].level == 3 &&
> +		    (ci->info_list[i].attributes & CACHE_ID)) {
> +			l3ci = &ci->info_list[i];
> +			break;
> +		}
> +	}
> +	WARN_ON(!l3ci);
> +
> +	if (!l3ci)
> +		return 0;

You can use "if (WARN_ON(..))"

> +
> +	/* Use the position of cpu in its shared_cpu_mask as its RMID. */

(please use "CPU" instead of "cpu" in comments and changelogs)

> +	rmid = 0;
> +	for_each_cpu(i, &l3ci->shared_cpu_map) {
> +		if (i == cpu)
> +			break;
> +		rmid++;
> +	}
> +
> +	return rmid;
> +}

I do not see any impact to the (soft) RMIDs that can be assigned to monitor groups, yet from what I understand a generic "RMID" is used as index to MBM state. Is this correct? A hardware RMID and software RMID would thus share the same MBM state. If this is correct I think we need to work on making the boundaries between hard and soft RMID more clear.

> +
> static void clear_closid_rmid(int cpu)
> {
> 	struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
> @@ -604,7 +636,12 @@ static void clear_closid_rmid(int cpu)
> 	state->default_rmid = 0;
> 	state->cur_closid = 0;
> 	state->cur_rmid = 0;
> -	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
> +	state->hw_rmid = 0;
> +
> +	if (static_branch_likely(&rdt_soft_rmid_enable_key))
> +		state->hw_rmid = determine_hw_rmid_for_cpu(cpu);
> +
> +	wrmsr(MSR_IA32_PQR_ASSOC, state->hw_rmid, 0);
> }
>
> static int resctrl_online_cpu(unsigned int cpu)

Reinette
Hi Reinette,

On Thu, May 11, 2023 at 11:40 PM Reinette Chatre <reinette.chatre@intel.com> wrote:
> On 4/21/2023 7:17 AM, Peter Newman wrote:
> > +	/* Locate the cacheinfo for this CPU's L3 cache. */
> > +	for (i = 0; i < ci->num_leaves; i++) {
> > +		if (ci->info_list[i].level == 3 &&
> > +		    (ci->info_list[i].attributes & CACHE_ID)) {
> > +			l3ci = &ci->info_list[i];
> > +			break;
> > +		}
> > +	}
> > +	WARN_ON(!l3ci);
> > +
> > +	if (!l3ci)
> > +		return 0;
>
> You can use "if (WARN_ON(..))"

Thanks, I'll look for the other changes in the series which would benefit from this.

> > +	rmid = 0;
> > +	for_each_cpu(i, &l3ci->shared_cpu_map) {
> > +		if (i == cpu)
> > +			break;
> > +		rmid++;
> > +	}
> > +
> > +	return rmid;
> > +}
>
> I do not see any impact to the (soft) RMIDs that can be assigned to monitor
> groups, yet from what I understand a generic "RMID" is used as index to MBM state.
> Is this correct? A hardware RMID and software RMID would thus share the
> same MBM state. If this is correct I think we need to work on making
> the boundaries between hard and soft RMID more clear.

The only RMID-indexed state used by soft RMIDs right now is mbm_state::soft_rmid_bytes. The other aspect of the boundary is ensuring that nothing will access the hard RMID-specific state for a soft RMID.

The remainder of the mbm_state is only accessed by the software controller, which you suggested that I disable.

The arch_mbm_state is accessed only through resctrl_arch_rmid_read() and resctrl_arch_reset_rmid(), which are called by __mon_event_count() or the limbo handler.

__mon_event_count() is aware of soft RMIDs, so I would just need to ensure the software controller is disabled and never put any RMIDs on the limbo list. To be safe, I can also add WARN_ON_ONCE(rdt_mon_soft_rmid) to the rmid-indexing of the mbm_state arrays in the software controller and before the resctrl_arch_rmid_read() call in the limbo handler to catch if they're ever using soft RMIDs.

-Peter

> > +
> > static void clear_closid_rmid(int cpu)
> > {
> > 	struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
> > @@ -604,7 +636,12 @@ static void clear_closid_rmid(int cpu)
> > 	state->default_rmid = 0;
> > 	state->cur_closid = 0;
> > 	state->cur_rmid = 0;
> > -	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
> > +	state->hw_rmid = 0;
> > +
> > +	if (static_branch_likely(&rdt_soft_rmid_enable_key))
> > +		state->hw_rmid = determine_hw_rmid_for_cpu(cpu);
> > +
> > +	wrmsr(MSR_IA32_PQR_ASSOC, state->hw_rmid, 0);
> > }
> >
> > static int resctrl_online_cpu(unsigned int cpu)
>
> Reinette
Hi Peter,

On 5/16/2023 7:49 AM, Peter Newman wrote:
> On Thu, May 11, 2023 at 11:40 PM Reinette Chatre
> <reinette.chatre@intel.com> wrote:
>> On 4/21/2023 7:17 AM, Peter Newman wrote:
>>> +	rmid = 0;
>>> +	for_each_cpu(i, &l3ci->shared_cpu_map) {
>>> +		if (i == cpu)
>>> +			break;
>>> +		rmid++;
>>> +	}
>>> +
>>> +	return rmid;
>>> +}
>>
>> I do not see any impact to the (soft) RMIDs that can be assigned to monitor
>> groups, yet from what I understand a generic "RMID" is used as index to MBM state.
>> Is this correct? A hardware RMID and software RMID would thus share the
>> same MBM state. If this is correct I think we need to work on making
>> the boundaries between hard and soft RMID more clear.
>
> The only RMID-indexed state used by soft RMIDs right now is
> mbm_state::soft_rmid_bytes. The other aspect of the boundary is
> ensuring that nothing will access the hard RMID-specific state for a
> soft RMID.
>
> The remainder of the mbm_state is only accessed by the software
> controller, which you suggested that I disable.
>
> The arch_mbm_state is accessed only through resctrl_arch_rmid_read()
> and resctrl_arch_reset_rmid(), which are called by __mon_event_count()
> or the limbo handler.
>
> __mon_event_count() is aware of soft RMIDs, so I would just need to
> ensure the software controller is disabled and never put any RMIDs on
> the limbo list. To be safe, I can also add
> WARN_ON_ONCE(rdt_mon_soft_rmid) to the rmid-indexing of the mbm_state
> arrays in the software controller and before the
> resctrl_arch_rmid_read() call in the limbo handler to catch if they're
> ever using soft RMIDs.

I understand and trust that you can ensure that this implementation is done safely. Please also consider how future changes to resctrl may stumble if there are not clear boundaries. You may be able to "ensure the software controller is disabled and never put any RMIDs on the limbo list", but consider if these rules will be clear to somebody who comes along in a year or more.

Documenting the data structures with these unique usages will help. Specific accessors can sometimes be useful to make it obvious in which state the data is being accessed and what data can be accessed. Using WARN as you suggest is a useful tool.

Reinette
Hi Reinette,

On Wed, May 17, 2023 at 2:06 AM Reinette Chatre <reinette.chatre@intel.com> wrote:
> On 5/16/2023 7:49 AM, Peter Newman wrote:
> > On Thu, May 11, 2023 at 11:40 PM Reinette Chatre
> > <reinette.chatre@intel.com> wrote:
> >> I do not see any impact to the (soft) RMIDs that can be assigned to monitor
> >> groups, yet from what I understand a generic "RMID" is used as index to MBM state.
> >> Is this correct? A hardware RMID and software RMID would thus share the
> >> same MBM state. If this is correct I think we need to work on making
> >> the boundaries between hard and soft RMID more clear.
> >
> > The only RMID-indexed state used by soft RMIDs right now is
> > mbm_state::soft_rmid_bytes. The other aspect of the boundary is
> > ensuring that nothing will access the hard RMID-specific state for a
> > soft RMID.
> >
> > The remainder of the mbm_state is only accessed by the software
> > controller, which you suggested that I disable.
> >
> > The arch_mbm_state is accessed only through resctrl_arch_rmid_read()
> > and resctrl_arch_reset_rmid(), which are called by __mon_event_count()
> > or the limbo handler.
> >
> > __mon_event_count() is aware of soft RMIDs, so I would just need to
> > ensure the software controller is disabled and never put any RMIDs on
> > the limbo list. To be safe, I can also add
> > WARN_ON_ONCE(rdt_mon_soft_rmid) to the rmid-indexing of the mbm_state
> > arrays in the software controller and before the
> > resctrl_arch_rmid_read() call in the limbo handler to catch if they're
> > ever using soft RMIDs.
>
> I understand and trust that you can ensure that this implementation is
> done safely. Please also consider how future changes to resctrl may stumble
> if there are not clear boundaries. You may be able to "ensure the software
> controller is disabled and never put any RMIDs on the limbo list", but
> consider if these rules will be clear to somebody who comes along in a year
> or more.
>
> Documenting the data structures with these unique usages will help.
> Specific accessors can sometimes be useful to make it obvious in which state
> the data is being accessed and what data can be accessed. Using WARN
> as you suggest is a useful tool.

After studying the present usage of RMID values some more, I've concluded that I can cleanly move all knowledge of the soft RMID implementation to be within resctrl_arch_rmid_read() and that none of the FS-layer code should need to be aware of them.

However, doing this would require James's patch to allow resctrl_arch_rmid_read() to block[1], since resctrl_arch_rmid_read() would be the first opportunity architecture-dependent code has to IPI the other CPUs in the domain.

The alternative to blocking in resctrl_arch_rmid_read() would be introducing an arch hook to mon_event_read(), where blocking can be done today without James's patches, so that architecture-dependent code can IPI all CPUs in the target domain to flush their event counts to memory before calling mon_event_count() to total their MBM event counts.

The remaining special case for soft RMIDs would be knowing that they should never go on the limbo list. Right now I've hard-coded the soft RMID read to always return 0 bytes for occupancy events, but this answer is only correct in the context of deciding whether RMIDs are dirty, so I have to prevent the events from being presented to the user. If returning an error wasn't considered "dirty", maybe that would work too.

Maybe the cleanest approach would be to cause enabling soft RMIDs to somehow cause is_llc_occupancy_enabled() to return false, but this is difficult as long as soft RMIDs are configured at mount time and rdt_mon_features is set at boot time.

If soft RMIDs move completely into the arch layer, is it preferable to configure them with an rdt boot option instead of adding an architecture-dependent mount option? I recall James being opposed to adding a boot option for this.

Thanks!

-Peter

[1] https://lore.kernel.org/lkml/20230525180209.19497-15-james.morse@arm.com/
On Fri, Apr 21, 2023 at 4:18 PM Peter Newman <peternewman@google.com> wrote:
> static void clear_closid_rmid(int cpu)
> {
> 	struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
> @@ -604,7 +636,12 @@ static void clear_closid_rmid(int cpu)
> 	state->default_rmid = 0;
> 	state->cur_closid = 0;
> 	state->cur_rmid = 0;
> -	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
> +	state->hw_rmid = 0;
> +
> +	if (static_branch_likely(&rdt_soft_rmid_enable_key))
> +		state->hw_rmid = determine_hw_rmid_for_cpu(cpu);

clear_closid_rmid() isn't run at mount time, so hw_rmid will be uninitialized on any CPUs which were already enabled. The static key was originally set at boot.

(The consequence was that domain bandwidth was the amount recorded on the first CPU in the domain multiplied by the number of CPUs in the domain.)
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 47b1c37a81f8..b0d873231b1e 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -596,6 +596,38 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 	}
 }
 
+/* Assign each CPU an RMID that is unique within its cache domain. */
+static u32 determine_hw_rmid_for_cpu(int cpu)
+{
+	struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *l3ci = NULL;
+	u32 rmid;
+	int i;
+
+	/* Locate the cacheinfo for this CPU's L3 cache. */
+	for (i = 0; i < ci->num_leaves; i++) {
+		if (ci->info_list[i].level == 3 &&
+		    (ci->info_list[i].attributes & CACHE_ID)) {
+			l3ci = &ci->info_list[i];
+			break;
+		}
+	}
+	WARN_ON(!l3ci);
+
+	if (!l3ci)
+		return 0;
+
+	/* Use the position of cpu in its shared_cpu_mask as its RMID. */
+	rmid = 0;
+	for_each_cpu(i, &l3ci->shared_cpu_map) {
+		if (i == cpu)
+			break;
+		rmid++;
+	}
+
+	return rmid;
+}
+
 static void clear_closid_rmid(int cpu)
 {
 	struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
@@ -604,7 +636,12 @@ static void clear_closid_rmid(int cpu)
 	state->default_rmid = 0;
 	state->cur_closid = 0;
 	state->cur_rmid = 0;
-	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
+	state->hw_rmid = 0;
+
+	if (static_branch_likely(&rdt_soft_rmid_enable_key))
+		state->hw_rmid = determine_hw_rmid_for_cpu(cpu);
+
+	wrmsr(MSR_IA32_PQR_ASSOC, state->hw_rmid, 0);
 }
 
 static int resctrl_online_cpu(unsigned int cpu)