From patchwork Mon Mar 20 17:26:19 2023
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 72338
From: James Morse <james.morse@arm.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
    shameerali.kolothum.thodi@huawei.com, D Scott Phillips OS,
    carl@os.amperecomputing.com, lcherian@marvell.com,
    bobo.shaobowang@huawei.com, tan.shaopeng@fujitsu.com,
    xingxin.hx@openanolis.org, baolin.wang@linux.alibaba.com,
    Jamie Iles, Xin Hao, peternewman@google.com
Subject: [PATCH v3 18/19] x86/resctrl: Add cpu offline callback for resctrl work
Date: Mon, 20 Mar 2023 17:26:19 +0000
Message-Id: <20230320172620.18254-19-james.morse@arm.com>
In-Reply-To: <20230320172620.18254-1-james.morse@arm.com>
References: <20230320172620.18254-1-james.morse@arm.com>

The resctrl architecture-specific code may need to free a domain when a CPU
goes offline; it also needs to reset that CPU's PQR_ASSOC register. The
resctrl filesystem code needs to move the overflow and limbo work to run on
a different CPU, and to clear this CPU from the cpu_mask of the control and
monitor groups.

Currently this is all done in core.c and called from resctrl_offline_cpu(),
making the split between architecture and filesystem code unclear.

Move the filesystem work into a filesystem helper called
resctrl_offline_cpu(), and rename the one in core.c
resctrl_arch_offline_cpu(). The rdtgroup_mutex is unlocked and locked
again in the call in preparation for changing the locking rules for the
architecture code.

resctrl_offline_cpu() is called before any of the resources/domains are
updated, and makes use of the exclude_cpu feature that was previously
added.
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/x86/kernel/cpu/resctrl/core.c     | 41 ++++----------------------
 arch/x86/kernel/cpu/resctrl/rdtgroup.c | 39 ++++++++++++++++++++++++
 include/linux/resctrl.h                |  1 +
 3 files changed, 45 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index aafe4b74587c..4e5fc89dab6d 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -578,22 +578,6 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 		return;
 	}
-
-	if (r == &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl) {
-		if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
-			cancel_delayed_work(&d->mbm_over);
-			/*
-			 * exclude_cpu=-1 as this CPU has already been removed
-			 * by cpumask_clear_cpu()
-			 */
-			mbm_setup_overflow_handler(d, 0, RESCTRL_PICK_ANY_CPU);
-		}
-		if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
-		    has_busy_rmid(r, d)) {
-			cancel_delayed_work(&d->cqm_limbo);
-			cqm_setup_limbo_handler(d, 0, RESCTRL_PICK_ANY_CPU);
-		}
-	}
 }
 
 static void clear_closid_rmid(int cpu)
@@ -623,31 +607,15 @@ static int resctrl_arch_online_cpu(unsigned int cpu)
 	return err;
 }
 
-static void clear_childcpus(struct rdtgroup *r, unsigned int cpu)
+static int resctrl_arch_offline_cpu(unsigned int cpu)
 {
-	struct rdtgroup *cr;
-
-	list_for_each_entry(cr, &r->mon.crdtgrp_list, mon.crdtgrp_list) {
-		if (cpumask_test_and_clear_cpu(cpu, &cr->cpu_mask)) {
-			break;
-		}
-	}
-}
-
-static int resctrl_offline_cpu(unsigned int cpu)
-{
-	struct rdtgroup *rdtgrp;
 	struct rdt_resource *r;
 
 	mutex_lock(&rdtgroup_mutex);
+	resctrl_offline_cpu(cpu);
+
 	for_each_capable_rdt_resource(r)
 		domain_remove_cpu(cpu, r);
-	list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) {
-		if (cpumask_test_and_clear_cpu(cpu, &rdtgrp->cpu_mask)) {
-			clear_childcpus(rdtgrp, cpu);
-			break;
-		}
-	}
 	clear_closid_rmid(cpu);
 	mutex_unlock(&rdtgroup_mutex);
 
@@ -970,7 +938,8 @@ static int __init resctrl_late_init(void)
 
 	state = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
 				  "x86/resctrl/cat:online:",
-				  resctrl_arch_online_cpu, resctrl_offline_cpu);
+				  resctrl_arch_online_cpu,
+				  resctrl_arch_offline_cpu);
 	if (state < 0)
 		return state;
 
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index bf206bdb21ee..c27ec56c6c60 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3710,6 +3710,45 @@ int resctrl_online_cpu(unsigned int cpu)
 	return 0;
 }
 
+static void clear_childcpus(struct rdtgroup *r, unsigned int cpu)
+{
+	struct rdtgroup *cr;
+
+	list_for_each_entry(cr, &r->mon.crdtgrp_list, mon.crdtgrp_list) {
+		if (cpumask_test_and_clear_cpu(cpu, &cr->cpu_mask))
+			break;
+	}
+}
+
+void resctrl_offline_cpu(unsigned int cpu)
+{
+	struct rdt_domain *d;
+	struct rdtgroup *rdtgrp;
+	struct rdt_resource *l3 = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+
+	lockdep_assert_held(&rdtgroup_mutex);
+
+	list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) {
+		if (cpumask_test_and_clear_cpu(cpu, &rdtgrp->cpu_mask)) {
+			clear_childcpus(rdtgrp, cpu);
+			break;
+		}
+	}
+
+	d = get_domain_from_cpu(cpu, l3);
+	if (d) {
+		if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
+			cancel_delayed_work(&d->mbm_over);
+			mbm_setup_overflow_handler(d, 0, cpu);
+		}
+		if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
+		    has_busy_rmid(l3, d)) {
+			cancel_delayed_work(&d->cqm_limbo);
+			cqm_setup_limbo_handler(d, 0, cpu);
+		}
+	}
+}
+
 /*
  * rdtgroup_init - rdtgroup initialization
  *
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 3ea7d618f33f..f053527aaa5b 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -226,6 +226,7 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
 int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d);
 void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
 int resctrl_online_cpu(unsigned int cpu);
+void resctrl_offline_cpu(unsigned int cpu);
 
 /**
  * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rmid