From patchwork Tue Feb 13 18:44:22 2024
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 200568
From: James Morse <james.morse@arm.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
    shameerali.kolothum.thodi@huawei.com, D Scott Phillips OS,
    carl@os.amperecomputing.com, lcherian@marvell.com,
    bobo.shaobowang@huawei.com, tan.shaopeng@fujitsu.com,
    baolin.wang@linux.alibaba.com, Jamie Iles, Xin Hao,
    peternewman@google.com, dfustini@baylibre.com, amitsinght@marvell.com,
    David Hildenbrand, Babu Moger
Subject: [PATCH v9 08/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
Date: Tue, 13 Feb 2024 18:44:22 +0000
Message-Id: <20240213184438.16675-9-james.morse@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20240213184438.16675-1-james.morse@arm.com>
References: <20240213184438.16675-1-james.morse@arm.com>

MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
used for different control groups.

This means once a CLOSID is allocated, all its monitoring ids may still be
dirty, and held in limbo.

Keep track of the number of RMID held in limbo each CLOSID has. This will
allow a future helper to find the 'cleanest' CLOSID when allocating.

The array is only needed when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is
defined. This will never be the case on x86.

Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: Shaopeng Tan
Tested-by: Peter Newman
Tested-by: Babu Moger
Tested-by: Carl Worth # arm64
Reviewed-by: Shaopeng Tan
Reviewed-by: Reinette Chatre
Reviewed-by: Babu Moger
---
Changes since v4:
 * Moved closid_num_dirty_rmid[] update under entry->busy check
 * Take the mutex in dom_data_init() as the caller doesn't.

Changes since v5:
 * Added braces after an else.
 * Made closid_num_dirty_rmid an unsigned int.
 * Moved mutex_lock() in dom_data_init() to cover the whole function.

Changes since v6:
 * Made closid_num_dirty_rmid[] and associated tmp variables u32.

Changes since v7:
 * Clobber kfree()d variable with NULL.
 * Guard the use of closid_num_dirty_rmid with IS_ENABLED() so it can be
   optimised out on x86.

Changes since v8:
 * Added a comment about over-allocation if the arch doesn't pre-sanitise
   values. MPAM does, and is the only user of this code.
 * Made comments over closid_num_dirty_rmid() not a kdoc comment.
---
 arch/x86/kernel/cpu/resctrl/monitor.c | 75 +++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index c49f2e89ef29..13b0c8d14f3d 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -50,6 +50,13 @@ struct rmid_entry {
  */
 static LIST_HEAD(rmid_free_lru);
 
+/*
+ * @closid_num_dirty_rmid    The number of dirty RMID each CLOSID has.
+ *     Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
+ *     Indexed by CLOSID. Protected by rdtgroup_mutex.
+ */
+static u32 *closid_num_dirty_rmid;
+
 /*
  * @rmid_limbo_count - count of currently unused but (potentially)
  *     dirty RMIDs.
@@ -292,6 +299,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
         return 0;
 }
 
+static void limbo_release_entry(struct rmid_entry *entry)
+{
+        lockdep_assert_held(&rdtgroup_mutex);
+
+        rmid_limbo_count--;
+        list_add_tail(&entry->list, &rmid_free_lru);
+
+        if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+                closid_num_dirty_rmid[entry->closid]--;
+}
+
 /*
  * Check the RMIDs that are marked as busy for this domain. If the
  * reported LLC occupancy is below the threshold clear the busy bit and
@@ -328,10 +346,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
 
                 if (force_free || !rmid_dirty) {
                         clear_bit(idx, d->rmid_busy_llc);
-                        if (!--entry->busy) {
-                                rmid_limbo_count--;
-                                list_add_tail(&entry->list, &rmid_free_lru);
-                        }
+                        if (!--entry->busy)
+                                limbo_release_entry(entry);
                 }
                 cur_idx = idx + 1;
         }
@@ -398,6 +414,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
         u64 val = 0;
         u32 idx;
 
+        lockdep_assert_held(&rdtgroup_mutex);
+
         idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
 
         entry->busy = 0;
@@ -423,10 +441,13 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
         }
         put_cpu();
 
-        if (entry->busy)
+        if (entry->busy) {
                 rmid_limbo_count++;
-        else
+                if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+                        closid_num_dirty_rmid[entry->closid]++;
+        } else {
                 list_add_tail(&entry->list, &rmid_free_lru);
+        }
 }
 
 void free_rmid(u32 closid, u32 rmid)
@@ -770,13 +791,39 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
 static int dom_data_init(struct rdt_resource *r)
 {
         u32 idx_limit = resctrl_arch_system_num_rmid_idx();
+        u32 num_closid = resctrl_arch_get_num_closid(r);
         struct rmid_entry *entry = NULL;
+        int err = 0, i;
         u32 idx;
-        int i;
+
+        mutex_lock(&rdtgroup_mutex);
+        if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
+                u32 *tmp;
+
+                /*
+                 * If the architecture hasn't provided a sanitised value here,
+                 * this may result in larger arrays than necessary. Resctrl will
+                 * use a smaller system wide value based on the resources in
+                 * use.
+                 */
+                tmp = kcalloc(num_closid, sizeof(*tmp), GFP_KERNEL);
+                if (!tmp) {
+                        err = -ENOMEM;
+                        goto out_unlock;
+                }
+
+                closid_num_dirty_rmid = tmp;
+        }
 
         rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
-        if (!rmid_ptrs)
-                return -ENOMEM;
+        if (!rmid_ptrs) {
+                if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
+                        kfree(closid_num_dirty_rmid);
+                        closid_num_dirty_rmid = NULL;
+                }
+                err = -ENOMEM;
+                goto out_unlock;
+        }
 
         for (i = 0; i < idx_limit; i++) {
                 entry = &rmid_ptrs[i];
@@ -796,13 +843,21 @@ static int dom_data_init(struct rdt_resource *r)
         entry = __rmid_entry(idx);
         list_del(&entry->list);
 
-        return 0;
+out_unlock:
+        mutex_unlock(&rdtgroup_mutex);
+
+        return err;
 }
 
 static void __exit dom_data_exit(void)
 {
         mutex_lock(&rdtgroup_mutex);
 
+        if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
+                kfree(closid_num_dirty_rmid);
+                closid_num_dirty_rmid = NULL;
+        }
+
         kfree(rmid_ptrs);
         rmid_ptrs = NULL;
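
For readers following the series: the commit message says a future helper will
use closid_num_dirty_rmid[] to find the 'cleanest' CLOSID when allocating.
Below is a minimal, userspace-style sketch of that idea (it is not part of
this patch and not kernel code); NUM_CLOSID, closid_free[] and
find_cleanest_closid() are hypothetical stand-ins used only to illustrate
how a per-CLOSID dirty count can drive the choice.

/*
 * Illustrative sketch only: given a per-CLOSID count of RMIDs still in
 * limbo (mirroring the patch's closid_num_dirty_rmid[]) and a set of
 * otherwise-free CLOSIDs, pick the candidate with the fewest dirty RMIDs.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CLOSID 16                               /* hypothetical limit */

static uint32_t closid_num_dirty_rmid[NUM_CLOSID];  /* dirty RMIDs per CLOSID */
static bool closid_free[NUM_CLOSID];                /* CLOSIDs available to allocate */

static int find_cleanest_closid(void)
{
        uint32_t best_dirty = UINT32_MAX;
        int cleanest = -1;

        for (int i = 0; i < NUM_CLOSID; i++) {
                if (!closid_free[i])
                        continue;

                /* Prefer the free CLOSID with the fewest RMIDs in limbo. */
                if (closid_num_dirty_rmid[i] < best_dirty) {
                        best_dirty = closid_num_dirty_rmid[i];
                        cleanest = i;
                }
        }

        return cleanest;        /* -1 if no CLOSID is free */
}

int main(void)
{
        closid_free[2] = true;
        closid_free[5] = true;
        closid_num_dirty_rmid[2] = 4;
        closid_num_dirty_rmid[5] = 0;

        printf("cleanest free CLOSID: %d\n", find_cleanest_closid());
        return 0;
}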