From patchwork Fri Oct 21 13:12:02 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Morse <james.morse@arm.com>
X-Patchwork-Id: 6711
From: James Morse <james.morse@arm.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
	shameerali.kolothum.thodi@huawei.com, D Scott Phillips OS,
	carl@os.amperecomputing.com, lcherian@marvell.com,
	bobo.shaobowang@huawei.com, tan.shaopeng@fujitsu.com, Jamie Iles,
	Xin Hao, xingxin.hx@openanolis.org, baolin.wang@linux.alibaba.com,
	peternewman@google.com
Subject: [PATCH 16/18] x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but cpu
Date: Fri, 21 Oct 2022 13:12:02 +0000
Message-Id: <20221021131204.5581-17-james.morse@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20221021131204.5581-1-james.morse@arm.com>
References: <20221021131204.5581-1-james.morse@arm.com>

When a CPU is taken offline, resctrl may need to move the overflow or
limbo handlers to run on a different CPU.

Once the offline callbacks have been split, cqm_setup_limbo_handler()
will be called while the CPU that is going offline is still present
in the cpu_mask.

Pass the CPU to exclude to cqm_setup_limbo_handler() and
mbm_setup_overflow_handler(). These functions can use cpumask_any_but()
when selecting the CPU. -1 is used to indicate no CPUs need excluding.

Signed-off-by: James Morse <james.morse@arm.com>
---
Both cpumask_any() and cpumask_any_but() return a value >= nr_cpu_ids
on error. schedule_delayed_work_on() doesn't appear to check.
Add the error handling to be robust. It doesn't look like it's possible
to hit this.
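For anyone unfamiliar with the cpumask helpers, the toy userspace model
below shows the selection pattern the patch adds: pick a CPU from the
domain mask, optionally skipping one, and skip scheduling when nothing
is eligible. The names pick_cpu() and NR_CPUS_MODEL are invented for
this sketch; they only stand in for cpumask_any_but() and nr_cpu_ids
and are not part of the patch or the kernel API.

#include <stdio.h>

#define NR_CPUS_MODEL 8	/* stands in for nr_cpu_ids */

/* Return the first set bit that is not 'exclude', or NR_CPUS_MODEL if none. */
static int pick_cpu(unsigned int mask, int exclude)
{
	for (int cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
		if (cpu == exclude)
			continue;
		if (mask & (1u << cpu))
			return cpu;
	}
	return NR_CPUS_MODEL;	/* mimics cpumask_any_but() returning >= nr_cpu_ids */
}

int main(void)
{
	unsigned int cpu_mask = 0x1;	/* only CPU0 in this domain */
	int cpu;

	/* exclude == -1: behave like cpumask_any(), no CPU is skipped */
	cpu = pick_cpu(cpu_mask, -1);
	printf("exclude=-1 -> cpu=%d\n", cpu);	/* prints 0 */

	/* exclude CPU0: nothing left, so the caller must not schedule work */
	cpu = pick_cpu(cpu_mask, 0);
	if (cpu < NR_CPUS_MODEL)
		printf("exclude=0  -> cpu=%d\n", cpu);
	else
		printf("exclude=0  -> no eligible CPU, skip scheduling\n");

	return 0;
}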
---
 arch/x86/kernel/cpu/resctrl/core.c     |  6 ++--
 arch/x86/kernel/cpu/resctrl/internal.h |  6 ++--
 arch/x86/kernel/cpu/resctrl/monitor.c  | 39 +++++++++++++++++++++-----
 arch/x86/kernel/cpu/resctrl/rdtgroup.c |  4 +--
 4 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 511ced743a79..e25d7a581b0d 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -563,12 +563,14 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 	if (r == &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl) {
 		if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
 			cancel_delayed_work(&d->mbm_over);
-			mbm_setup_overflow_handler(d, 0);
+			/* exclude_cpu=-1 as we already cpumask_clear_cpu()d */
+			mbm_setup_overflow_handler(d, 0, -1);
 		}
 		if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
 		    has_busy_rmid(r, d)) {
 			cancel_delayed_work(&d->cqm_limbo);
-			cqm_setup_limbo_handler(d, 0);
+			/* exclude_cpu=-1 as we already cpumask_clear_cpu()d */
+			cqm_setup_limbo_handler(d, 0, -1);
 		}
 	}
 }
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index adbbfaabf70b..96535e359382 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -525,11 +525,13 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
 		    struct rdt_domain *d, struct rdtgroup *rdtgrp,
 		    int evtid, int first);
 void mbm_setup_overflow_handler(struct rdt_domain *dom,
-				unsigned long delay_ms);
+				unsigned long delay_ms,
+				int exclude_cpu);
 void mbm_handle_overflow(struct work_struct *work);
 void __init intel_rdt_mbm_apply_quirk(void);
 bool is_mba_sc(struct rdt_resource *r);
-void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms);
+void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms,
+			     int exclude_cpu);
 void cqm_handle_limbo(struct work_struct *work);
 bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d);
 void __check_limbo(struct rdt_domain *d, bool force_free);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index a9af7c56a04a..42a7fd6d56c9 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -433,7 +433,7 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
 		 * setup up the limbo worker.
 		 */
 		if (!has_busy_rmid(r, d))
-			cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL);
+			cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL, -1);
 		set_bit(idx, d->rmid_busy_llc);
 		entry->busy++;
 	}
@@ -766,15 +766,27 @@ void cqm_handle_limbo(struct work_struct *work)
 	mutex_unlock(&rdtgroup_mutex);
 }
 
-void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms)
+/**
+ * cqm_setup_limbo_handler() - Schedule the limbo handler to run for this
+ *                             domain.
+ * @delay_ms:      How far in the future the handler should run.
+ * @exclude_cpu:   Which CPU the handler should not run on, -1 to pick any CPU.
+ */
+void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms,
+			     int exclude_cpu)
 {
 	unsigned long delay = msecs_to_jiffies(delay_ms);
 	int cpu;
 
-	cpu = cpumask_any(&dom->cpu_mask);
+	if (exclude_cpu == -1)
+		cpu = cpumask_any(&dom->cpu_mask);
+	else
+		cpu = cpumask_any_but(&dom->cpu_mask, exclude_cpu);
+
 	dom->cqm_work_cpu = cpu;
 
-	schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
+	if (cpu < nr_cpu_ids)
+		schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
 }
 
 void mbm_handle_overflow(struct work_struct *work)
@@ -811,7 +823,14 @@ void mbm_handle_overflow(struct work_struct *work)
 	mutex_unlock(&rdtgroup_mutex);
 }
 
-void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
+/**
+ * mbm_setup_overflow_handler() - Schedule the overflow handler to run for this
+ *                                domain.
+ * @delay_ms:      How far in the future the handler should run.
+ * @exclude_cpu:   Which CPU the handler should not run on, -1 to pick any CPU.
+ */
+void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms,
+				int exclude_cpu)
 {
 	unsigned long delay = msecs_to_jiffies(delay_ms);
 	int cpu;
 
@@ -819,9 +838,15 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
 	if (!resctrl_mounted || !resctrl_arch_mon_capable())
 		return;
 
-	cpu = cpumask_any(&dom->cpu_mask);
+	if (exclude_cpu == -1)
+		cpu = cpumask_any(&dom->cpu_mask);
+	else
+		cpu = cpumask_any_but(&dom->cpu_mask, exclude_cpu);
+
 	dom->mbm_work_cpu = cpu;
-	schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
+
+	if (cpu < nr_cpu_ids)
+		schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
 }
 
 static int dom_data_init(struct rdt_resource *r)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index ef444a5690e9..3d68a9cb1942 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -2254,7 +2254,7 @@ static int rdt_get_tree(struct fs_context *fc)
 	if (is_mbm_enabled()) {
 		r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
 		list_for_each_entry(dom, &r->domains, list)
-			mbm_setup_overflow_handler(dom, MBM_OVERFLOW_INTERVAL);
+			mbm_setup_overflow_handler(dom, MBM_OVERFLOW_INTERVAL, -1);
 	}
 
 	goto out;
@@ -3413,7 +3413,7 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
 
 	if (is_mbm_enabled()) {
 		INIT_DELAYED_WORK(&d->mbm_over, mbm_handle_overflow);
-		mbm_setup_overflow_handler(d, MBM_OVERFLOW_INTERVAL);
+		mbm_setup_overflow_handler(d, MBM_OVERFLOW_INTERVAL, -1);
 	}
 
 	if (is_llc_occupancy_enabled())