Message ID | 20230920061856.257597-3-ying.huang@intel.com |
---|---|
State | New |
Headers |
From: Huang Ying <ying.huang@intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Arjan Van De Ven <arjan@linux.intel.com>, Huang Ying <ying.huang@intel.com>, Sudeep Holla <sudeep.holla@arm.com>, Andrew Morton <akpm@linux-foundation.org>, Mel Gorman <mgorman@techsingularity.net>, Vlastimil Babka <vbabka@suse.cz>, David Hildenbrand <david@redhat.com>, Johannes Weiner <jweiner@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>, Michal Hocko <mhocko@suse.com>, Pavel Tatashin <pasha.tatashin@soleen.com>, Matthew Wilcox <willy@infradead.org>, Christoph Lameter <cl@linux.com>
Subject: [PATCH 02/10] cacheinfo: calculate per-CPU data cache size
Date: Wed, 20 Sep 2023 14:18:48 +0800
Message-Id: <20230920061856.257597-3-ying.huang@intel.com>
In-Reply-To: <20230920061856.257597-1-ying.huang@intel.com>
References: <20230920061856.257597-1-ying.huang@intel.com> |
Series | mm: PCP high auto-tuning |
Commit Message
Huang, Ying
Sept. 20, 2023, 6:18 a.m. UTC
Per-CPU data cache size is useful information. For example, it can be
used to estimate the slice of the data cache available to each CPU.
So, in this patch, the per-CPU data cache size is calculated as
data_cache_size / shared_cpu_weight, summed over the cache leaves.

A brute-force algorithm that iterates over all online CPUs is used to
avoid allocating an extra cpumask, especially in the offline callback.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
drivers/base/cacheinfo.c | 42 ++++++++++++++++++++++++++++++++++++++-
include/linux/cacheinfo.h | 1 +
2 files changed, 42 insertions(+), 1 deletion(-)
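
[Editor's note] To make the calculation concrete outside the kernel, here is a minimal userspace sketch (an illustration, not part of the patch) that mirrors the patch's loop using the standard cacheinfo sysfs ABI. It assumes cache sizes are reported in KiB with a trailing "K", as they are on common x86 systems:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Count set bits in a shared_cpu_map hex string such as "0000ff00". */
static int hweight_hexmap(const char *map)
{
	int w = 0;

	for (; *map; map++) {
		if (*map >= '0' && *map <= '9')
			w += __builtin_popcount(*map - '0');
		else if (*map >= 'a' && *map <= 'f')
			w += __builtin_popcount(*map - 'a' + 10);
	}
	return w;
}

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 0;
	unsigned long size_data = 0;
	int idx;

	for (idx = 0; ; idx++) {
		char path[128], type[32], map[256];
		unsigned long size_kb = 0;
		int nr_shared;
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cache/index%d/type",
			 cpu, idx);
		f = fopen(path, "r");
		if (!f)
			break;		/* no more cache leaves */
		if (fscanf(f, "%31s", type) != 1)
			type[0] = '\0';
		fclose(f);

		/* Mirror the patch: only data and unified leaves count. */
		if (strcmp(type, "Data") && strcmp(type, "Unified"))
			continue;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cache/index%d/size",
			 cpu, idx);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fscanf(f, "%luK", &size_kb) != 1)
			size_kb = 0;
		fclose(f);

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cache/index%d/shared_cpu_map",
			 cpu, idx);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fscanf(f, "%255s", map) != 1)
			map[0] = '\0';
		fclose(f);

		nr_shared = hweight_hexmap(map);
		if (nr_shared)
			size_data += size_kb / nr_shared;
	}
	printf("cpu%d data cache share: %lu KiB\n", cpu, size_data);
	return 0;
}

Run as "./a.out 0" to print, for CPU 0, the same value the kernel would store in ci->size_data (in KiB rather than bytes).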
Comments
On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote:
> Per-CPU data cache size is useful information. For example, it can be
> used to estimate the slice of the data cache available to each CPU.
> So, in this patch, the per-CPU data cache size is calculated as
> data_cache_size / shared_cpu_weight, summed over the cache leaves.
>
> A brute-force algorithm that iterates over all online CPUs is used to
> avoid allocating an extra cpumask, especially in the offline callback.
>

You have not mentioned who will use this information? Looking at the
change, it is not exposed to user-space. Also, I see this is actually
part of the series [1]. Is this info used in any of those patches? Can
you point me to the same?

Not all architectures use cacheinfo yet. How will the mm changes affect
those platforms?

--
Regards,
Sudeep

[1] https://lore.kernel.org/all/20230920061856.257597-1-ying.huang@intel.com/
Sudeep Holla <sudeep.holla@arm.com> writes:

> On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote:
> [snip commit message]
>
> You have not mentioned who will use this information? Looking at the
> change, it is not exposed to user-space. Also, I see this is actually
> part of the series [1]. Is this info used in any of those patches? Can
> you point me to the same?

Yes. It is used by [PATCH 03/10] of the series. If the per-CPU data
cache size is large enough, we will cache more pages in the per-CPU
pageset to reduce zone lock contention.

> Not all architectures use cacheinfo yet. How will the mm changes affect
> those platforms?

If cacheinfo isn't available, we will fall back to the original
behavior. That is, we will drain the per-CPU pageset more often (that
is, cache less to improve the sharing of cache-hot pages between CPUs).

> --
> Regards,
> Sudeep
>
> [1] https://lore.kernel.org/all/20230920061856.257597-1-ying.huang@intel.com/

--
Best Regards,
Huang, Ying
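
[Editor's note] To make the intended consumer concrete, below is a hypothetical sketch of how an allocator path could gate PCP caching on the new field. This is an illustration, not the code of [PATCH 03/10]: the function name is invented, and the factor of 4 is taken from the threshold discussed later in this thread. get_cpu_cacheinfo(), struct per_cpu_pages, and PAGE_SIZE are existing kernel symbols.

#include <linux/cacheinfo.h>
#include <linux/mmzone.h>

/*
 * Hypothetical sketch: decide whether freed pages may stay cached on
 * this CPU's PCP list rather than being drained back to the zone.
 */
static bool pcp_may_cache_pages(unsigned int cpu, struct per_cpu_pages *pcp)
{
	struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);

	/* No cacheinfo (architecture not converted yet): old behavior. */
	if (!ci || !ci->size_data)
		return false;

	/*
	 * Keep freed pages on the PCP list only if this CPU's data
	 * cache slice is large enough that the pages are likely still
	 * cache-hot when reallocated.  size_data is in bytes,
	 * pcp->batch in pages.
	 */
	return ci->size_data > 4UL * pcp->batch * PAGE_SIZE;
}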
On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote:
> Per-CPU data cache size is useful information. For example, it can be
> used to estimate the slice of the data cache available to each CPU.
> So, in this patch, the per-CPU data cache size is calculated as
> data_cache_size / shared_cpu_weight, summed over the cache leaves.
>
> A brute-force algorithm that iterates over all online CPUs is used to
> avoid allocating an extra cpumask, especially in the offline callback.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>

It's not necessarily relevant to the patch, but at least the scheduler
also stores some per-cpu topology information such as sd_llc_size -- the
number of CPUs sharing the same last-level-cache as this CPU. It may be
worth unifying this at some point if it's common that per-cpu
information is too fine and per-zone or per-node information is too
coarse. This would be particularly true when considering locking
granularity.

> [snip Cc list and diffstat]
>
> +static void update_data_cache_size_cpu(unsigned int cpu)
> +{
> +	struct cpu_cacheinfo *ci;
> +	struct cacheinfo *leaf;
> +	unsigned int i, nr_shared;
> +	unsigned int size_data = 0;
> +
> +	if (!per_cpu_cacheinfo(cpu))
> +		return;
> +
> +	ci = ci_cacheinfo(cpu);
> +	for (i = 0; i < cache_leaves(cpu); i++) {
> +		leaf = per_cpu_cacheinfo_idx(cpu, i);
> +		if (leaf->type != CACHE_TYPE_DATA &&
> +		    leaf->type != CACHE_TYPE_UNIFIED)
> +			continue;
> +		nr_shared = cpumask_weight(&leaf->shared_cpu_map);
> +		if (!nr_shared)
> +			continue;
> +		size_data += leaf->size / nr_shared;
> +	}
> +	ci->size_data = size_data;
> +}

This needs comments.

It would be nice to add a comment on top describing the limitation of
CACHE_TYPE_UNIFIED here in the context of update_data_cache_size_cpu().

The L2 cache could be unified but much smaller than an L3 or other
last-level cache. It's not clear from the code what level of cache is
being used, due to a lack of familiarity with the cpu_cacheinfo code,
but size_data is not the size of a cache; it appears to be the share of
a cache a CPU would have under ideal circumstances. However, as it
appears to also be iterating the hierarchy, this may not be accurate.
Caches may or may not allow data to be duplicated between levels, so
the value may be inaccurate.

A respin of the patch is not necessary, but a follow-on patch adding
clarifying comments would be very welcome, covering

o What levels of cache are being used
o What size_data actually is; preferably rename the field to be more
  explicit, as "size" could be the total cache capacity, the cache
  slice under ideal circumstances, or even the number of CPUs sharing
  that cache.

The cache details *may* need a follow-on patch if the size_data value
is misleading. If it is a hierarchy and the value does not always
represent the slice of cache a CPU could have under ideal
circumstances, then the value should be based on the LLC only so that
it is predictable across architectures.
Mel Gorman <mgorman@techsingularity.net> writes:

> On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote:
> [snip commit message and quoted patch]
>
> This needs comments.
>
> It would be nice to add a comment on top describing the limitation of
> CACHE_TYPE_UNIFIED here in the context of
> update_data_cache_size_cpu().

Sure. Will do that.

> The L2 cache could be unified but much smaller than an L3 or other
> last-level cache. It's not clear from the code what level of cache is
> being used, due to a lack of familiarity with the cpu_cacheinfo code,
> but size_data is not the size of a cache; it appears to be the share
> of a cache a CPU would have under ideal circumstances.

Yes. And it isn't for one specific level of cache. It's the sum of the
per-CPU shares of all levels of cache. But the calculation is
inaccurate. More details are in the reply below.

> However, as it appears to also be iterating the hierarchy, this may
> not be accurate. Caches may or may not allow data to be duplicated
> between levels, so the value may be inaccurate.

Thank you very much for pointing this out! The cache can be inclusive
or not. So, we cannot calculate the per-CPU slice of all-level caches
by adding them together blindly. I will change this in a follow-on
patch.

> A respin of the patch is not necessary, but a follow-on patch adding
> clarifying comments would be very welcome, covering
>
> o What levels of cache are being used
> o What size_data actually is; preferably rename the field to be more
>   explicit [snip]

Sure.

> The cache details *may* need a follow-on patch if the size_data value
> is misleading. [snip]

Sure.

--
Best Regards,
Huang, Ying
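
[Editor's note] For reference, the clarifying comment Huang promises could take roughly this shape (a sketch; the wording is not from any posted patch):

/**
 * update_data_cache_size_cpu() - estimate a CPU's slice of the data cache
 * @cpu: the CPU to update
 *
 * Sum, over every data/unified leaf in @cpu's cache hierarchy,
 * leaf->size / nr_sharing_cpus.  The result is not the capacity of any
 * single cache: it is the share of all cache levels @cpu would own if
 * cache space were divided evenly among the CPUs sharing each leaf.
 * CACHE_TYPE_UNIFIED leaves also hold instructions, and inclusive
 * hierarchies duplicate data between levels, so treat the value as an
 * estimate only.
 */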
On Thu, Oct 12, 2023 at 08:08:32PM +0800, Huang, Ying wrote:
> Mel Gorman <mgorman@techsingularity.net> writes:
> [snip]
>> It would be nice to add a comment on top describing the limitation of
>> CACHE_TYPE_UNIFIED here in the context of
>> update_data_cache_size_cpu().
>
> Sure. Will do that.

Thanks.

> [snip]
> Thank you very much for pointing this out! The cache can be inclusive
> or not. So, we cannot calculate the per-CPU slice of all-level caches
> by adding them together blindly. I will change this in a follow-on
> patch.

Please do. I would strongly suggest basing this on LLC only, because
it's the only value you can be sure of. This change is the only change
that may warrant a respin of the series, as the history will be
somewhat confusing otherwise.
Mel Gorman <mgorman@techsingularity.net> writes:

> On Thu, Oct 12, 2023 at 08:08:32PM +0800, Huang, Ying wrote:
> [snip]
>> Thank you very much for pointing this out! The cache can be inclusive
>> or not. So, we cannot calculate the per-CPU slice of all-level caches
>> by adding them together blindly. I will change this in a follow-on
>> patch.
>
> Please do. I would strongly suggest basing this on LLC only, because
> it's the only value you can be sure of. This change is the only change
> that may warrant a respin of the series, as the history will be
> somewhat confusing otherwise.

I am still checking whether it's possible to get cache inclusivity
information via CPUID.

If there's no reliable way to do that, we can use the maximum of the
per-CPU shares of each level of cache. For an inclusive cache
hierarchy, that will be the value for the LLC. For a non-inclusive
hierarchy, the value will be more accurate. For example, on Intel
Sapphire Rapids, the L2 cache is 2 MB per core, while the LLC is
1.875 MB per core according to [1].

[1] https://www.intel.com/content/www/us/en/developer/articles/technical/fourth-generation-xeon-scalable-family-overview.html

I will respin the series. Thanks a lot for the review!

--
Best Regards,
Huang, Ying
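
[Editor's note] The "maximum per-level share" fallback described above amounts to a one-line change to the loop body of the posted function. A sketch, assuming inclusivity genuinely cannot be detected (max() comes from linux/minmax.h; like the posted code, this would live in drivers/base/cacheinfo.c):

static void update_data_cache_size_cpu(unsigned int cpu)
{
	struct cpu_cacheinfo *ci;
	struct cacheinfo *leaf;
	unsigned int i, nr_shared;
	unsigned int size_data = 0;

	if (!per_cpu_cacheinfo(cpu))
		return;

	ci = ci_cacheinfo(cpu);
	for (i = 0; i < cache_leaves(cpu); i++) {
		leaf = per_cpu_cacheinfo_idx(cpu, i);
		if (leaf->type != CACHE_TYPE_DATA &&
		    leaf->type != CACHE_TYPE_UNIFIED)
			continue;
		nr_shared = cpumask_weight(&leaf->shared_cpu_map);
		if (!nr_shared)
			continue;
		/*
		 * Take the max instead of the sum: exact for inclusive
		 * hierarchies, a modest underestimate for non-inclusive
		 * ones (e.g. Sapphire Rapids, where L2 > LLC per core).
		 */
		size_data = max(size_data, leaf->size / nr_shared);
	}
	ci->size_data = size_data;
}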
On Thu, Oct 12, 2023 at 09:12:00PM +0800, Huang, Ying wrote:
> Mel Gorman <mgorman@techsingularity.net> writes:
> [snip]
>
> I am still checking whether it's possible to get cache inclusivity
> information via CPUID.

cpuid may be x86-specific, so that potentially leads to different
behaviours on different architectures.

> If there's no reliable way to do that, we can use the maximum of the
> per-CPU shares of each level of cache. For an inclusive cache
> hierarchy, that will be the value for the LLC. For a non-inclusive
> hierarchy, the value will be more accurate. For example, on Intel
> Sapphire Rapids, the L2 cache is 2 MB per core, while the LLC is
> 1.875 MB per core according to [1].

Be that as it may, it still opens the possibility of significantly
different behaviour depending on the CPU family. I would strongly
recommend that you start with LLC only, because LLC is also the
topology level of interest used by the scheduler and it's information
that is generally available. Trying to get accurate information on
every level, and the complexity of dealing with inclusive vs exclusive
caches or write-back vs write-through, should be a separate patch, with
separate justification and notes on how it can lead to behaviour
specific to the CPU family or architecture.
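
[Editor's note] For comparison, an LLC-only variant along the lines Mel recommends might look like the sketch below (an illustration; it assumes cacheinfo leaves are ordered from L1 upward, so the last leaf is the LLC):

static void update_data_cache_size_cpu(unsigned int cpu)
{
	struct cpu_cacheinfo *ci;
	struct cacheinfo *llc;
	unsigned int nr_shared;

	if (!per_cpu_cacheinfo(cpu) || !cache_leaves(cpu))
		return;

	ci = ci_cacheinfo(cpu);
	/* Leaves are ordered by level, so the last one is the LLC. */
	llc = per_cpu_cacheinfo_idx(cpu, cache_leaves(cpu) - 1);

	if (llc->type != CACHE_TYPE_DATA && llc->type != CACHE_TYPE_UNIFIED)
		return;

	nr_shared = cpumask_weight(&llc->shared_cpu_map);
	if (nr_shared)
		ci->size_data = llc->size / nr_shared;
}

Basing the value on the LLC alone trades some accuracy on non-inclusive hierarchies for predictable behaviour across architectures, which is the design point Mel argues for above.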
Mel Gorman <mgorman@techsingularity.net> writes:

> On Thu, Oct 12, 2023 at 09:12:00PM +0800, Huang, Ying wrote:
> [snip]
>
> Be that as it may, it still opens the possibility of significantly
> different behaviour depending on the CPU family. I would strongly
> recommend that you start with LLC only, because LLC is also the
> topology level of interest used by the scheduler and it's information
> that is generally available. Trying to get accurate information on
> every level, and the complexity of dealing with inclusive vs exclusive
> caches or write-back vs write-through, should be a separate patch,
> with separate justification and notes on how it can lead to behaviour
> specific to the CPU family or architecture.

IMHO, we should try to optimize for as many CPUs as possible. The size
of the per-CPU (HW thread for SMT) slice of the LLC of the latest Intel
server CPUs is as follows:

Icelake: 0.75 MB
Sapphire Rapids: 0.9375 MB

while pcp->batch is 63 * 4 / 1024 = 0.2461 MB.

In [03/10], only if "per_cpu_cache_slice > 4 * pcp->batch" do we cache
pcp->batch pages before draining the PCP. This makes the optimization
unavailable for a significant portion of the server CPUs.

In theory, if "per_cpu_cache_slice > 2 * pcp->batch", we can reuse
cache-hot pages between CPUs. So, if we change the condition to
"per_cpu_cache_slice > 3 * pcp->batch", I think that we are still safe.

As for other CPUs, according to [2], AMD CPUs have a larger per-CPU
LLC, so it's OK for them. ARM CPUs have a much smaller per-CPU LLC, so
some further optimization is needed.

[2] https://www.anandtech.com/show/16594/intel-3rd-gen-xeon-scalable-review/2

So, I suggest using "per_cpu_cache_slice > 3 * pcp->batch" in [03/10],
and using the LLC in this patch [02/10]. Then, we can optimize the
calculation of the per-CPU slice of the cache in follow-up patches.

--
Best Regards,
Huang, Ying
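
[Editor's note] The arithmetic behind this proposal can be checked with a small standalone program (an illustration, assuming 4 KiB pages and the default pcp->batch of 63 pages):

#include <stdio.h>

int main(void)
{
	/* Default pcp->batch of 63 pages of 4 KiB each, in MB. */
	double batch = 63 * 4.0 / 1024;		/* 0.2461 MB */
	/* Per-CPU LLC slices quoted in the thread. */
	double icelake = 0.75, spr = 0.9375;

	printf("pcp->batch     = %.4f MB\n", batch);
	printf("4 * pcp->batch = %.4f MB\n", 4 * batch);	/* 0.9844 */
	printf("3 * pcp->batch = %.4f MB\n", 3 * batch);	/* 0.7383 */

	printf("Icelake (%.4f MB) passes 4x: %d, 3x: %d\n",
	       icelake, icelake > 4 * batch, icelake > 3 * batch);
	printf("SPR     (%.4f MB) passes 4x: %d, 3x: %d\n",
	       spr, spr > 4 * batch, spr > 3 * batch);
	return 0;
}

With the 4x threshold neither Icelake nor Sapphire Rapids qualifies (both slices are below 0.9844 MB); with 3x both do (both exceed 0.7383 MB), which is the basis for relaxing the condition.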
On Fri, Oct 13, 2023 at 11:06:51AM +0800, Huang, Ying wrote:
> Mel Gorman <mgorman@techsingularity.net> writes:
> [snip]
>
> IMHO, we should try to optimize for as many CPUs as possible. The size
> of the per-CPU (HW thread for SMT) slice of the LLC of the latest
> Intel server CPUs is as follows:
>
> Icelake: 0.75 MB
> Sapphire Rapids: 0.9375 MB
>
> while pcp->batch is 63 * 4 / 1024 = 0.2461 MB.
>
> [snip]
>
> So, I suggest using "per_cpu_cache_slice > 3 * pcp->batch" in [03/10],
> and using the LLC in this patch [02/10]. Then, we can optimize the
> calculation of the per-CPU slice of the cache in follow-up patches.

I'm OK with adjusting the thresholds to adapt to using LLC only,
because at least it'll be consistent across CPU architectures and
families. Dealing with the potentially different cache characteristics
at each level, or even being able to discover them, is just
unnecessarily complicated. It gets even worse if the mapping changes.
For example, if L1 were direct-mapped, L2 index-mapped, and L3 fully
associative, then it's not even meaningful to say that a CPU has a
meaningful slice size, as cache-coloring side effects mess everything
up.
diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
index cbae8be1fe52..3e8951a3fbab 100644
--- a/drivers/base/cacheinfo.c
+++ b/drivers/base/cacheinfo.c
@@ -898,6 +898,41 @@ static int cache_add_dev(unsigned int cpu)
 	return rc;
 }
 
+static void update_data_cache_size_cpu(unsigned int cpu)
+{
+	struct cpu_cacheinfo *ci;
+	struct cacheinfo *leaf;
+	unsigned int i, nr_shared;
+	unsigned int size_data = 0;
+
+	if (!per_cpu_cacheinfo(cpu))
+		return;
+
+	ci = ci_cacheinfo(cpu);
+	for (i = 0; i < cache_leaves(cpu); i++) {
+		leaf = per_cpu_cacheinfo_idx(cpu, i);
+		if (leaf->type != CACHE_TYPE_DATA &&
+		    leaf->type != CACHE_TYPE_UNIFIED)
+			continue;
+		nr_shared = cpumask_weight(&leaf->shared_cpu_map);
+		if (!nr_shared)
+			continue;
+		size_data += leaf->size / nr_shared;
+	}
+	ci->size_data = size_data;
+}
+
+static void update_data_cache_size(bool cpu_online, unsigned int cpu)
+{
+	unsigned int icpu;
+
+	for_each_online_cpu(icpu) {
+		if (!cpu_online && icpu == cpu)
+			continue;
+		update_data_cache_size_cpu(icpu);
+	}
+}
+
 static int cacheinfo_cpu_online(unsigned int cpu)
 {
 	int rc = detect_cache_attributes(cpu);
@@ -906,7 +941,11 @@ static int cacheinfo_cpu_online(unsigned int cpu)
 		return rc;
 	rc = cache_add_dev(cpu);
 	if (rc)
-		free_cache_attributes(cpu);
+		goto err;
+	update_data_cache_size(true, cpu);
+	return 0;
+err:
+	free_cache_attributes(cpu);
 	return rc;
 }
 
@@ -916,6 +955,7 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu)
 		cpu_cache_sysfs_exit(cpu);
 
 	free_cache_attributes(cpu);
+	update_data_cache_size(false, cpu);
 	return 0;
 }
 
diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
index a5cfd44fab45..4e7ccfa0c36d 100644
--- a/include/linux/cacheinfo.h
+++ b/include/linux/cacheinfo.h
@@ -73,6 +73,7 @@ struct cacheinfo {
 
 struct cpu_cacheinfo {
 	struct cacheinfo *info_list;
+	unsigned int size_data;
 	unsigned int num_levels;
 	unsigned int num_leaves;
 	bool cpu_map_populated;