Message ID | 20230320172620.18254-10-james.morse@arm.com |
---|---|
State | New |
Headers |
From: James Morse <james.morse@arm.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Fenghua Yu <fenghua.yu@intel.com>, Reinette Chatre <reinette.chatre@intel.com>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, H Peter Anvin <hpa@zytor.com>, Babu Moger <Babu.Moger@amd.com>, shameerali.kolothum.thodi@huawei.com, D Scott Phillips OS <scott@os.amperecomputing.com>, carl@os.amperecomputing.com, lcherian@marvell.com, bobo.shaobowang@huawei.com, tan.shaopeng@fujitsu.com, xingxin.hx@openanolis.org, baolin.wang@linux.alibaba.com, Jamie Iles <quic_jiles@quicinc.com>, Xin Hao <xhao@linux.alibaba.com>, peternewman@google.com
Subject: [PATCH v3 09/19] x86/resctrl: Queue mon_event_read() instead of sending an IPI
Date: Mon, 20 Mar 2023 17:26:10 +0000
Message-Id: <20230320172620.18254-10-james.morse@arm.com>
In-Reply-To: <20230320172620.18254-1-james.morse@arm.com>
References: <20230320172620.18254-1-james.morse@arm.com>
Series |
x86/resctrl: monitored closid+rmid together, separate arch/fs locking
Commit Message
James Morse
March 20, 2023, 5:26 p.m. UTC
x86 is blessed with an abundance of monitors, one per RMID, that can be read from any CPU in the domain. MPAM's monitors reside in the MMIO MSC, and the number implemented is up to the manufacturer. This means that when there are fewer monitors than needed, they need to be allocated and freed.

Worse, the domain may be broken up into slices, and the MMIO accesses for each slice may need performing from different CPUs.

These two details mean MPAM's monitor code needs to be able to sleep, and to IPI another CPU in the domain to read from a resource that has been sliced.

mon_event_read() already invokes mon_event_count() via IPI, which means this isn't possible. On systems using nohz-full, some CPUs need to be interrupted to run kernel work as they otherwise stay in user-space running realtime workloads. Interrupting these CPUs should be avoided, and scheduling work on them may never complete.

Change mon_event_read() to pick a housekeeping CPU (one that is not using nohz_full), schedule mon_event_count() there, and wait. If all the CPUs in a domain are using nohz-full, then an IPI is used as the fallback. This function is only used in response to a user-space filesystem request (not the timing-sensitive overflow code).

This allows MPAM to hide the slice behaviour from resctrl, and to keep the monitor allocation in monitor.c. When the IPI fallback is used on machines where MPAM needs to make an access on multiple CPUs, the counter read will always fail.

Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v2:
 * Use cpumask_any_housekeeping() and fallback to an IPI if needed
---
 arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 19 +++++++++++++++++--
 arch/x86/kernel/cpu/resctrl/internal.h    |  2 +-
 arch/x86/kernel/cpu/resctrl/monitor.c     |  6 ++++--
 3 files changed, 22 insertions(+), 5 deletions(-)
Comments
Hi James,

On Mon, Mar 20, 2023 at 6:27 PM James Morse <james.morse@arm.com> wrote:
>
> x86 is blessed with an abundance of monitors, one per RMID, that can be

As I explained earlier, this is not the case on AMD.

> read from any CPU in the domain. MPAMs monitors reside in the MMIO MSC,
> the number implemented is up to the manufacturer. This means when there are
> fewer monitors than needed, they need to be allocated and freed.
>
> Worse, the domain may be broken up into slices, and the MMIO accesses
> for each slice may need performing from different CPUs.
>
> These two details mean MPAMs monitor code needs to be able to sleep, and
> IPI another CPU in the domain to read from a resource that has been sliced.

This doesn't sound very convincing. Could mon_event_read() IPI all the
CPUs in the domain? (after waiting to allocate and install monitors
when necessary?)

> mon_event_read() already invokes mon_event_count() via IPI, which means
> this isn't possible. On systems using nohz-full, some CPUs need to be
> interrupted to run kernel work as they otherwise stay in user-space
> running realtime workloads. Interrupting these CPUs should be avoided,
> and scheduling work on them may never complete.
>
> Change mon_event_read() to pick a housekeeping CPU, (one that is not using
> nohz_full) and schedule mon_event_count() and wait. If all the CPUs
> in a domain are using nohz-full, then an IPI is used as the fallback.
>
> This function is only used in response to a user-space filesystem request
> (not the timing sensitive overflow code).
>
> This allows MPAM to hide the slice behaviour from resctrl, and to keep
> the monitor-allocation in monitor.c.

This goal sounds more likely. If it makes the initial enablement
smoother, then I'm all for it.

Reviewed-By: Peter Newman <peternewman@google.com>

These changes worked fine for me on tip/master, though there were merge
conflicts to resolve.

Tested-By: Peter Newman <peternewman@google.com>

Thanks!
-Peter
On Wed, Mar 22, 2023 at 3:07 PM Peter Newman <peternewman@google.com> wrote:
> On Mon, Mar 20, 2023 at 6:27 PM James Morse <james.morse@arm.com> wrote:
> >
> > x86 is blessed with an abundance of monitors, one per RMID, that can be
>
> As I explained earlier, this is not the case on AMD.
>
> > read from any CPU in the domain. MPAMs monitors reside in the MMIO MSC,
> > the number implemented is up to the manufacturer. This means when there are
> > fewer monitors than needed, they need to be allocated and freed.
> >
> > Worse, the domain may be broken up into slices, and the MMIO accesses
> > for each slice may need performing from different CPUs.
> >
> > These two details mean MPAMs monitor code needs to be able to sleep, and
> > IPI another CPU in the domain to read from a resource that has been sliced.
>
> This doesn't sound very convincing. Could mon_event_read() IPI all the
> CPUs in the domain? (after waiting to allocate and install monitors
> when necessary?)

No wait, I know that isn't correct.

As you explained it, the remote CPU needs to sleep because it may need
to atomically acquire, install, and read a CSU monitor.

It still seems possible for the mon_event_read() thread to do all the
waiting (tell remote CPU to program CSU monitor, wait, tell same remote
CPU to read monitor), but that sounds like more work that I don't see a
lot of benefit to doing today.

Can you update the changelog to just say the remote CPU needs to block
when installing a CSU monitor?

Thanks!
-Peter
Hi James,

On 3/20/2023 10:26 AM, James Morse wrote:
> x86 is blessed with an abundance of monitors, one per RMID, that can be
> read from any CPU in the domain. MPAMs monitors reside in the MMIO MSC,
> the number implemented is up to the manufacturer. This means when there are
> fewer monitors than needed, they need to be allocated and freed.
>
> Worse, the domain may be broken up into slices, and the MMIO accesses
> for each slice may need performing from different CPUs.
>
> These two details mean MPAMs monitor code needs to be able to sleep, and
> IPI another CPU in the domain to read from a resource that has been sliced.
>
> mon_event_read() already invokes mon_event_count() via IPI, which means
> this isn't possible. On systems using nohz-full, some CPUs need to be
> interrupted to run kernel work as they otherwise stay in user-space
> running realtime workloads. Interrupting these CPUs should be avoided,
> and scheduling work on them may never complete.
>
> Change mon_event_read() to pick a housekeeping CPU, (one that is not using
> nohz_full) and schedule mon_event_count() and wait. If all the CPUs
> in a domain are using nohz-full, then an IPI is used as the fallback.

It is not clear to me where in this solution an IPI is used as fallback ...
(see below)

> +	int cpu;
> +
> +	/* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
> +	lockdep_assert_held(&rdtgroup_mutex);
> +
> 	/*
> -	 * setup the parameters to send to the IPI to read the data.
> +	 * setup the parameters to pass to mon_event_count() to read the data.
> 	 */
> 	rr->rgrp = rdtgrp;
> 	rr->evtid = evtid;
> @@ -537,7 +543,16 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
> 	rr->val = 0;
> 	rr->first = first;
>
> -	smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
> +	cpu = get_cpu();
> +	if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
> +		mon_event_count(rr);
> +		put_cpu();
> +	} else {
> +		put_cpu();
> +
> +		cpu = cpumask_any_housekeeping(&d->cpu_mask);
> +		smp_call_on_cpu(cpu, mon_event_count, rr, false);
> +	}
> }
>

... from what I can tell there is no IPI fallback here. As per previous
patch I understand cpumask_any_housekeeping() could still return
a nohz_full CPU and calling smp_call_on_cpu() on it would not send
an IPI but instead queue the work to it. What did I miss?

Reinette
Hi Peter,

On 22/03/2023 14:07, Peter Newman wrote:
> On Mon, Mar 20, 2023 at 6:27 PM James Morse <james.morse@arm.com> wrote:
>>
>> x86 is blessed with an abundance of monitors, one per RMID, that can be
>
> As I explained earlier, this is not the case on AMD.

I'll change it to say Intel.

>> read from any CPU in the domain. MPAMs monitors reside in the MMIO MSC,
>> the number implemented is up to the manufacturer. This means when there are
>> fewer monitors than needed, they need to be allocated and freed.
>>
>> Worse, the domain may be broken up into slices, and the MMIO accesses
>> for each slice may need performing from different CPUs.
>>
>> These two details mean MPAMs monitor code needs to be able to sleep, and
>> IPI another CPU in the domain to read from a resource that has been sliced.
>
> This doesn't sound very convincing. Could mon_event_read() IPI all the
> CPUs in the domain? (after waiting to allocate and install monitors
> when necessary?)

On the majority of platforms this would be a waste of time as the IPI
only needs sending to one. I'd like to keep the cost of being strange
limited to the strange platforms. I don't think exposing a 'sub domain'
cpumask to resctrl is helpful: this needs to be hidden in the
architecture specific code.

The IPI is because of SoC components being implemented as slices which
are private to that slice.

The sleeping is because the CSU counters are allowed to be 'not ready'
immediately after programming. The time is short, and to allow platforms
that have too few CSU monitors to support the same user-interface as
x86^W Intel, the MPAM driver needs to be able to multiplex a single CSU
monitor between multiple control/monitor groups. Allowing it to sleep
for the advertised not-ready period is the simplest way of doing this.

>> mon_event_read() already invokes mon_event_count() via IPI, which means
>> this isn't possible. On systems using nohz-full, some CPUs need to be
>> interrupted to run kernel work as they otherwise stay in user-space
>> running realtime workloads. Interrupting these CPUs should be avoided,
>> and scheduling work on them may never complete.
>>
>> Change mon_event_read() to pick a housekeeping CPU, (one that is not using
>> nohz_full) and schedule mon_event_count() and wait. If all the CPUs
>> in a domain are using nohz-full, then an IPI is used as the fallback.
>>
>> This function is only used in response to a user-space filesystem request
>> (not the timing sensitive overflow code).
>>
>> This allows MPAM to hide the slice behaviour from resctrl, and to keep
>> the monitor-allocation in monitor.c.
>
> This goal sounds more likely.
>
> If it makes the initial enablement smoother, then I'm all for it.
> Reviewed-By: Peter Newman <peternewman@google.com>
>
> These changes worked fine for me on tip/master, though there were merge
> conflicts to resolve.
>
> Tested-By: Peter Newman <peternewman@google.com>

Thanks!

James
Hi Peter,

On 23/03/2023 09:09, Peter Newman wrote:
> On Wed, Mar 22, 2023 at 3:07 PM Peter Newman <peternewman@google.com> wrote:
>> On Mon, Mar 20, 2023 at 6:27 PM James Morse <james.morse@arm.com> wrote:
>>>
>>> x86 is blessed with an abundance of monitors, one per RMID, that can be
>>
>> As I explained earlier, this is not the case on AMD.
>>
>>> read from any CPU in the domain. MPAMs monitors reside in the MMIO MSC,
>>> the number implemented is up to the manufacturer. This means when there are
>>> fewer monitors than needed, they need to be allocated and freed.
>>>
>>> Worse, the domain may be broken up into slices, and the MMIO accesses
>>> for each slice may need performing from different CPUs.
>>>
>>> These two details mean MPAMs monitor code needs to be able to sleep, and
>>> IPI another CPU in the domain to read from a resource that has been sliced.
>>
>> This doesn't sound very convincing. Could mon_event_read() IPI all the
>> CPUs in the domain? (after waiting to allocate and install monitors
>> when necessary?)
>
> No wait, I know that isn't correct.
>
> As you explained it, the remote CPU needs to sleep because it may need
> to atomically acquire, install, and read a CSU monitor.
>
> It still seems possible for the mon_event_read() thread to do all the
> waiting (tell remote CPU to program CSU monitor, wait, tell same remote
> CPU to read monitor), but that sounds like more work that I don't see a
> lot of benefit to doing today.
>
> Can you update the changelog to just say the remote CPU needs to block
> when installing a CSU monitor?

Sure, I've added this after the first paragraph:

-------%<-------
MPAM's CSU monitors are used to back the 'llc_occupancy' monitor file.
The CSU counter is allowed to return 'not ready' for a small number of
micro-seconds after programming. To allow one CSU hardware monitor to
be used for multiple control or monitor groups, the CPU accessing the
monitor needs to be able to block when configuring and reading the
counter.
-------%<-------

Thanks,

James
Hi Reinette,

On 01/04/2023 00:25, Reinette Chatre wrote:
> On 3/20/2023 10:26 AM, James Morse wrote:
>> x86 is blessed with an abundance of monitors, one per RMID, that can be
>> read from any CPU in the domain. MPAMs monitors reside in the MMIO MSC,
>> the number implemented is up to the manufacturer. This means when there are
>> fewer monitors than needed, they need to be allocated and freed.
>>
>> Worse, the domain may be broken up into slices, and the MMIO accesses
>> for each slice may need performing from different CPUs.
>>
>> These two details mean MPAMs monitor code needs to be able to sleep, and
>> IPI another CPU in the domain to read from a resource that has been sliced.
>>
>> mon_event_read() already invokes mon_event_count() via IPI, which means
>> this isn't possible. On systems using nohz-full, some CPUs need to be
>> interrupted to run kernel work as they otherwise stay in user-space
>> running realtime workloads. Interrupting these CPUs should be avoided,
>> and scheduling work on them may never complete.
>>
>> Change mon_event_read() to pick a housekeeping CPU, (one that is not using
>> nohz_full) and schedule mon_event_count() and wait. If all the CPUs
>> in a domain are using nohz-full, then an IPI is used as the fallback.
>
> It is not clear to me where in this solution an IPI is used as fallback ...
> (see below)

>> @@ -537,7 +543,16 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
>> 	rr->val = 0;
>> 	rr->first = first;
>>
>> -	smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
>> +	cpu = get_cpu();
>> +	if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
>> +		mon_event_count(rr);
>> +		put_cpu();
>> +	} else {
>> +		put_cpu();
>> +
>> +		cpu = cpumask_any_housekeeping(&d->cpu_mask);
>> +		smp_call_on_cpu(cpu, mon_event_count, rr, false);
>> +	}
>> }
>>
>
> ... from what I can tell there is no IPI fallback here. As per previous
> patch I understand cpumask_any_housekeeping() could still return
> a nohz_full CPU and calling smp_call_on_cpu() on it would not send
> an IPI but instead queue the work to it. What did I miss?

Huh, looks like it's still in my git-stash. Sorry about that.

The combined hunk looks like this:

----------------------%<----------------------
@@ -537,7 +550,26 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
 	rr->val = 0;
 	rr->first = first;
 
-	smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
+	cpu = get_cpu();
+	if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
+		mon_event_count(rr);
+		put_cpu();
+	} else {
+		put_cpu();
+
+		cpu = cpumask_any_housekeeping(&d->cpu_mask);
+
+		/*
+		 * cpumask_any_housekeeping() prefers housekeeping CPUs, but
+		 * are all the CPUs nohz_full? If yes, pick a CPU to IPI.
+		 * MPAM's resctrl_arch_rmid_read() is unable to read the
+		 * counters on some platforms if its called in irq context.
+		 */
+		if (tick_nohz_full_cpu(cpu))
+			smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
+		else
+			smp_call_on_cpu(cpu, smp_mon_event_count, rr, false);
+	}
 }
----------------------%<----------------------

Where smp_mon_event_count() is a static wrapper to make the types work.

Thanks,

James
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index eb07d4435391..b06e86839d00 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -19,6 +19,7 @@
 #include <linux/kernfs.h>
 #include <linux/seq_file.h>
 #include <linux/slab.h>
+#include <linux/tick.h>
 #include "internal.h"
 
 /*
@@ -527,8 +528,13 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
 		    struct rdt_domain *d, struct rdtgroup *rdtgrp,
 		    int evtid, int first)
 {
+	int cpu;
+
+	/* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
+	lockdep_assert_held(&rdtgroup_mutex);
+
 	/*
-	 * setup the parameters to send to the IPI to read the data.
+	 * setup the parameters to pass to mon_event_count() to read the data.
 	 */
 	rr->rgrp = rdtgrp;
 	rr->evtid = evtid;
@@ -537,7 +543,16 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
 	rr->val = 0;
 	rr->first = first;
 
-	smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
+	cpu = get_cpu();
+	if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
+		mon_event_count(rr);
+		put_cpu();
+	} else {
+		put_cpu();
+
+		cpu = cpumask_any_housekeeping(&d->cpu_mask);
+		smp_call_on_cpu(cpu, mon_event_count, rr, false);
+	}
 }
 
 int rdtgroup_mondata_show(struct seq_file *m, void *arg)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 0b5fd5a0cda2..a07557390895 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -563,7 +563,7 @@ int alloc_rmid(u32 closid);
 void free_rmid(u32 closid, u32 rmid);
 int rdt_get_mon_l3_config(struct rdt_resource *r);
 bool __init rdt_cpu_has(int flag);
-void mon_event_count(void *info);
+int mon_event_count(void *info);
 int rdtgroup_mondata_show(struct seq_file *m, void *arg);
 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
 		    struct rdt_domain *d, struct rdtgroup *rdtgrp,
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 3bec5c59ca0e..5e9e876c3409 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -550,10 +550,10 @@ static void mbm_bw_count(u32 closid, u32 rmid, struct rmid_read *rr)
 }
 
 /*
- * This is called via IPI to read the CQM/MBM counters
+ * This is scheduled by mon_event_read() to read the CQM/MBM counters
  * on a domain.
  */
-void mon_event_count(void *info)
+int mon_event_count(void *info)
 {
 	struct rdtgroup *rdtgrp, *entry;
 	struct rmid_read *rr = info;
@@ -586,6 +586,8 @@ void mon_event_count(void *info)
 	 */
 	if (ret == 0)
 		rr->err = 0;
+
+	return 0;
 }
 
 /*