Message ID: 20230710200342.358255-1-void@manifault.com
Series: sched: Implement shared runqueue in CFS
Message
David Vernet
July 10, 2023, 8:03 p.m. UTC
Changes
-------
This is v2 of the shared wakequeue (now called shared runqueue)
patchset. The following are changes from the RFC v1 patchset
(https://lore.kernel.org/lkml/20230613052004.2836135-1-void@manifault.com/).
v1 -> v2 changes:
- Change name from swqueue to shared_runq (Peter)
- Sharded per-LLC shared runqueues to avoid contention on
scheduler-heavy workloads (Peter)
- Pull tasks from the shared_runq in newidle_balance() rather than in
pick_next_task_fair() (Peter and Vincent)
- Rename a few functions to reflect their actual purpose. For example,
shared_runq_dequeue_task() instead of swqueue_remove_task() (Peter)
- Expose move_queued_task() from core.c rather than migrate_task_to()
(Peter)
- Properly check is_cpu_allowed() when pulling a task from a shared_runq
to ensure it can actually be migrated (Peter and Gautham)
- Dropped RFC tag
This patch set is based off of commit ebb83d84e49b ("sched/core: Avoid
multiple calling update_rq_clock() in __cfsb_csd_unthrottle()") on the
sched/core branch of tip.git.
Overview
========
The scheduler must constantly strike a balance between work
conservation and avoiding costly migrations which harm performance due
to e.g. decreased cache locality. The matter is further complicated by
the topology of the system. Migrating a task between cores on the same
LLC may be more optimal than keeping a task local to the CPU, whereas
migrating a task between LLCs or NUMA nodes may tip the balance in the
other direction.
With that in mind, while CFS is by and large a work-conserving
scheduler, there are certain instances where the scheduler will choose
to keep a task local to a CPU when it would have been more optimal to
migrate it to an idle core.
An example of such a workload is the HHVM / web workload at Meta. HHVM
is a VM that JITs Hack and PHP code in service of web requests. Like
other JIT / compilation workloads, it tends to be heavily CPU bound, and
exhibit generally poor cache locality. To try and address this, we set
several debugfs (/sys/kernel/debug/sched) knobs on our HHVM workloads:
- migration_cost_ns -> 0
- latency_ns -> 20000000
- min_granularity_ns -> 10000000
- wakeup_granularity_ns -> 12000000
These knobs are intended both to encourage the scheduler to be as work
conserving as possible (migration_cost_ns -> 0), and to keep tasks
running for relatively long time slices so as to avoid the overhead of
context switching (the other knobs). Collectively, these knobs provide a
substantial performance win, resulting in roughly a 20% improvement in
throughput. Worth noting, however, is that this improvement was measured
when the machine was _not_ at full saturation.
That said, even with these knobs, we noticed that CPUs were still going
idle even when the host was overcommitted. In response, we wrote the
"shared runqueue" (shared_runq) feature proposed in this patch set. The
idea behind shared_runq is simple: it enables the scheduler to be more
aggressively work conserving by placing a waking task into a sharded
per-LLC FIFO queue, which another core in the LLC can then pull from
before it goes idle.
With this simple change, we were able to achieve a 1 - 1.6% improvement
in throughput, as well as a small, consistent improvement in p95 and p99
latencies, in HHVM. These performance improvements were in addition to
the wins from the debugfs knobs mentioned above; improvements were also
observed on other benchmarks, as outlined below in the Results section.
Design
======
The design of shared_runq is quite simple. A shared_runq is simply an
array of struct shared_runq_shard entries:
struct shared_runq_shard {
	struct list_head list;
	spinlock_t lock;
} ____cacheline_aligned;

struct shared_runq {
	u32 num_shards;
	struct shared_runq_shard shards[];
} ____cacheline_aligned;
We create a struct shared_runq per LLC, ensuring they're in their own
cachelines to avoid false sharing between CPUs on different LLCs. We
also create some number of shards per struct shared_runq, into which
runnable tasks are inserted and from which they are pulled.
When a task becomes runnable, it enqueues itself in the
shared_runq_shard of its current core. Enqueues only happen if the task
is not pinned to a specific CPU.
A core will pull a task from one of the shards in its LLC's shared_runq
at the beginning of newidle_balance().
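
As an illustration of those two paths, here is a minimal sketch of what
the enqueue and pull operations could look like. This is not the patch
code itself: the shared_runq_node list field on struct task_struct and
the helper names are assumptions made for this example.

static void shared_runq_enqueue_task(struct shared_runq_shard *shard,
				     struct task_struct *p)
{
	spin_lock(&shard->lock);
	list_add_tail(&p->shared_runq_node, &shard->list);
	spin_unlock(&shard->lock);
}

/* Called on the newidle path, before the CPU goes idle. */
static struct task_struct *
shared_runq_pick_task(struct shared_runq_shard *shard, int cpu)
{
	struct task_struct *p;

	spin_lock(&shard->lock);
	p = list_first_entry_or_null(&shard->list, struct task_struct,
				     shared_runq_node);
	/* Only pull a task that is actually allowed to run on this CPU. */
	if (p && !is_cpu_allowed(p, cpu))
		p = NULL;
	if (p)
		list_del_init(&p->shared_runq_node);
	spin_unlock(&shard->lock);
	return p;
}

Note that the FIFO ordering falls out of list_add_tail() on enqueue and
pulling from the head of the list on the newidle path.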
Difference between shared_runq and SIS_NODE
===========================================
In [0] Peter proposed a patch that addresses Tejun's observations that
when workqueues are targeted towards a specific LLC on his Zen2 machine
with small CCXs, there would be significant idle time due to
select_idle_sibling() not considering anything outside of the current
LLC.
This patch (SIS_NODE) is essentially the complement to the proposal
here. SIS_NODE causes waking tasks to look for idle cores in neighboring
LLCs on the same die, whereas shared_runq causes cores about to go idle
to look for enqueued tasks. That said, in its current form, the two
features are at a different scope, as SIS_NODE searches for idle cores
between LLCs, while shared_runq enqueues tasks within a single LLC.
The patch was since removed in [1], and we compared the results to
shared_runq (previously called "swqueue") in [2]. SIS_NODE did not
outperform shared_runq on any of the benchmarks, so we elect to not
compare against it again for this v2 patch set.
[0]: https://lore.kernel.org/all/20230530113249.GA156198@hirez.programming.kicks-ass.net/
[1]: https://lore.kernel.org/all/20230605175636.GA4253@hirez.programming.kicks-ass.net/
[2]: https://lore.kernel.org/lkml/20230613052004.2836135-1-void@manifault.com/
Results
=======
Note that the shared runqueue feature was originally motivated by
experiments in the sched_ext framework that's currently being proposed
upstream. The ~1 - 1.6% improvement in HHVM throughput
is similarly visible using work-conserving sched_ext schedulers (even
very simple ones like global FIFO).
On both single- and multi-socket / CCX hosts, this can measurably
improve performance. In addition to the performance gains observed on
our internal web workloads, we also observed an improvement in common
workloads such as kernel compile and hackbench when running with the
shared runqueue.
On the other hand, some workloads suffer from shared_runq: workloads
that hammer the runqueue hard, such as netperf UDP_RR, or ./schbench -L
-m 52 -p 512 -r 10 -t 1. This can be mitigated somewhat by sharding the
shared data structures within a CCX, but sharding doesn't seem to
eliminate all contention in every scenario. On the positive side, it
seems that sharding does not materially harm the benchmarks run for this
patch series; in fact it seems to improve some workloads such as kernel
compile.
Note that for the kernel compile workloads below, the compilation was
done by running make -j$(nproc) built-in.a on several different types of
hosts configured with make allyesconfig on commit a27648c74210 ("afs:
Fix setting of mtime when creating a file/dir/symlink") on Linus' tree
(boost and turbo were disabled on all of these hosts when the
experiments were performed). Additionally, NO_SHARED_RUNQ refers to
SHARED_RUNQ being completely disabled, SHARED_RUNQ_WAKEUPS refers to
sharded SHARED_RUNQ where tasks are enqueued in the shared runqueue at
wakeup time, and SHARED_RUNQ_ALL refers to sharded SHARED_RUNQ where
tasks are enqueued in the shared runqueue on every enqueue. Results are
not included for the unsharded shared runqueue, as the sharded results
here exceed the unsharded results already outlined in [2] as linked
above.
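
To make the difference between the two variants concrete, the following
is a rough sketch of how the enqueue path could be gated. The
shared_runq_enqueue() helper and the shared_runq_all toggle are
illustrative assumptions, not the literal patch code; ENQUEUE_WAKEUP is
the scheduler's real enqueue flag.

/* Illustrative toggle: true for SHARED_RUNQ_ALL, false for _WAKEUPS. */
static bool shared_runq_all;

static void shared_runq_maybe_enqueue(struct task_struct *p, int enq_flags)
{
	/* Pinned tasks cannot be migrated, so never shared-enqueue them. */
	if (p->nr_cpus_allowed == 1)
		return;

	/*
	 * SHARED_RUNQ_WAKEUPS only inserts into the shared runqueue on
	 * wakeups; SHARED_RUNQ_ALL does so on every enqueue.
	 */
	if (shared_runq_all || (enq_flags & ENQUEUE_WAKEUP))
		shared_runq_enqueue(p);
}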
=== Single-socket | 16 core / 32 thread | 2-CCX | AMD 7950X Zen4 ===
CPU max MHz: 5879.8818
CPU min MHz: 3000.0000
Command: make -j$(nproc) built-in.a
                     o____________o_______o
                     |    mean    |  CPU  |
                     o------------o-------o
NO_SHARED_RUNQ:      |  582.46s   | 3101% |
SHARED_RUNQ_WAKEUPS: |  581.22s   | 3117% |
SHARED_RUNQ_ALL:     |  578.41s   | 3141% |
                     o------------o-------o
Takeaway: SHARED_RUNQ_WAKEUPS performs roughly the same as
NO_SHARED_RUNQ, but SHARED_RUNQ_ALL results in a statistically
significant ~0.7% improvement over NO_SHARED_RUNQ. This suggests that
enqueuing tasks in the shared runqueue on every enqueue improves work
conservation and, thanks to sharding, does not result in contention.
Note that I didn't collect data for kernel compile with SHARED_RUNQ_ALL
_without_ sharding. The reason for this is that we know that CPUs with
sufficiently large LLCs will contend, so if we've decided to accommodate
those CPUs with sharding, there's not much point in measuring the
results of not sharding on CPUs that we know won't contend.
Command: hackbench --loops 10000
                    o____________o_______o
                    |    mean    |  CPU  |
                    o------------o-------o
NO_SHARED_RUNQ:     |  2.1912s   | 3117% |
SHARED_RUNQ_WAKEUP: |  2.1080s   | 3155% |
SHARED_RUNQ_ALL:    |  1.9830s   | 3144% |
                    o------------o-------o
Takeaway: SHARED_RUNQ in both forms performs exceptionally well compared
to NO_SHARED_RUNQ here, with SHARED_RUNQ_ALL beating NO_SHARED_RUNQ by
almost 10%. This was a surprising result given that it seems
advantageous to err on the side of avoiding migration in hackbench, as
tasks are short-lived and send only 10k bytes worth of messages; but the
results of the benchmark suggest that minimizing runqueue delays is
preferable.
Command:
for i in `seq 128`; do
    netperf -6 -t UDP_RR -c -C -l $runtime &
done
                    o_______________________o
                    |   mean (throughput)   |
                    o-----------------------o
NO_SHARED_RUNQ:     |       25064.12        |
SHARED_RUNQ_WAKEUP: |       24862.16        |
SHARED_RUNQ_ALL:    |       25287.73        |
                    o-----------------------o
Takeaway: No statistical significance, though it is worth noting that
there is no regression for shared runqueue on the 7950X, while there is
a small regression on the Skylake and Milan hosts for SHARED_RUNQ_WAKEUP
as described below.
=== Single-socket | 18 core / 36 thread | 1-CCX | Intel Skylake ===
CPU max MHz: 1601.0000
CPU min MHz: 800.0000
Command: make -j$(nproc) built-in.a
                    o____________o_______o
                    |    mean    |  CPU  |
                    o------------o-------o
NO_SHARED_RUNQ:     |  1535.46s  | 3417% |
SHARED_RUNQ_WAKEUP: |  1534.56s  | 3428% |
SHARED_RUNQ_ALL:    |  1531.95s  | 3429% |
                    o------------o-------o
Takeaway: SHARED_RUNQ_ALL results in a ~0.23% improvement over
NO_SHARED_RUNQ. Not a huge improvement, but consistently measurable.
The cause of this gain is presumably the same as the 7950X: improved
work conservation, with sharding preventing excessive contention on the
shard lock.
Command: hackbench --loops 10000
                    o____________o_______o
                    |    mean    |  CPU  |
                    o------------o-------o
NO_SHARED_RUNQ:     |  5.5750s   | 3369% |
SHARED_RUNQ_WAKEUP: |  5.5764s   | 3495% |
SHARED_RUNQ_ALL:    |  5.4760s   | 3481% |
                    o------------o-------o
Takeaway: SHARED_RUNQ_ALL results in a ~1.6% improvement over
NO_SHARED_RUNQ. Also statistically significant, but smaller than the
almost 10% improvement observed on the 7950X.
Command: netperf -n $(nproc) -l 60 -t TCP_RR
for i in `seq 128`; do
    netperf -6 -t UDP_RR -c -C -l $runtime &
done
                    o_______________________o
                    |   mean (throughput)   |
                    o-----------------------o
NO_SHARED_RUNQ:     |       11963.08        |
SHARED_RUNQ_WAKEUP: |       11943.60        |
SHARED_RUNQ_ALL:    |       11554.32        |
                    o-----------------------o
Takeaway: NO_SHARED_RUNQ performs the same as SHARED_RUNQ_WAKEUP, but
beats SHARED_RUNQ_ALL by ~3.4%. This result makes sense -- the workload
is very heavy on the runqueue, so enqueuing tasks in the shared runqueue
in __enqueue_entity() would intuitively result in increased contention
on the shard lock. The fact that we're at parity with
SHARED_RUNQ_WAKEUP suggests that sharding the shared runqueue has
significantly improved the contention that was observed in v1, but that
__enqueue_entity() puts it over the edge.
NOTE: Parity for SHARED_RUNQ_WAKEUP relies on choosing the correct shard
size. If we chose, for example, a shard size of 16, there would still be
a regression between NO_SHARED_RUNQ and SHARED_RUNQ_WAKEUP. As described
below, this suggests that we may want to add a debugfs tunable for the
shard size.
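
For reference, one plausible way such a tunable could feed into the
shard count is sketched below. The exact computation is an assumption
made for illustration; only the idea of deriving the number of shards
from the LLC size and a shard-size parameter is taken from the
discussion here.

/* Sketch: assumed derivation of the shard count from the LLC size. */
static u32 shared_runq_nr_shards(u32 llc_nr_cpus, u32 shard_size)
{
	/* e.g. a 36-CPU LLC with shard_size 12 would get 3 shards. */
	return max(1U, DIV_ROUND_UP(llc_nr_cpus, shard_size));
}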
=== Single-socket | 72-core | 6-CCX | AMD Milan Zen3 ===
CPU max MHz: 700.0000
CPU min MHz: 700.0000
Command: make -j$(nproc) built-in.a
                    o____________o_______o
                    |    mean    |  CPU  |
                    o------------o-------o
NO_SHARED_RUNQ:     |  1601.81s  | 6476% |
SHARED_RUNQ_WAKEUP: |  1602.55s  | 6472% |
SHARED_RUNQ_ALL:    |  1602.49s  | 6475% |
                    o------------o-------o
Takeaway: No statistically significant variance. It might be worth
experimenting with work stealing in a follow-on patch set.
Command: hackbench --loops 10000
                    o____________o_______o
                    |    mean    |  CPU  |
                    o------------o-------o
NO_SHARED_RUNQ:     |  5.2672s   | 6463% |
SHARED_RUNQ_WAKEUP: |  5.1476s   | 6583% |
SHARED_RUNQ_ALL:    |  5.1003s   | 6598% |
                    o------------o-------o
Takeaway: SHARED_RUNQ_ALL again wins, by about 3% over NO_SHARED_RUNQ in
this case.
Command: netperf -n $(nproc) -l 60 -t TCP_RR
for i in `seq 128`; do
    netperf -6 -t UDP_RR -c -C -l $runtime &
done
                    o_______________________o
                    |   mean (throughput)   |
                    o-----------------------o
NO_SHARED_RUNQ:     |       13819.08        |
SHARED_RUNQ_WAKEUP: |       13907.74        |
SHARED_RUNQ_ALL:    |       13569.69        |
                    o-----------------------o
Takeaway: Similar to the Skylake runs, NO_SHARED_RUNQ still beats
SHARED_RUNQ_ALL, though by a slightly lower margin of ~1.8%.
Finally, let's look at how sharding affects the following schbench
incantation suggested by Chris in [3]:
schbench -L -m 52 -p 512 -r 10 -t 1
[3]: https://lore.kernel.org/lkml/c8419d9b-2b31-2190-3058-3625bdbcb13d@meta.com/
The TL;DR is that sharding improves things a lot, but doesn't completely
fix the problem. Here are the results from running the schbench command
on the 18 core / 36 thread single CCX, single-socket Skylake:
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
class name con-bounces contentions waittime-min waittime-max waittime-total waittime-avg acq-bounces acquisitions holdtime-min holdtime-max holdtime-total holdtime-avg
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
&shard->lock: 31510503 31510711 0.08 19.98 168932319.64 5.36 31700383 31843851 0.03 17.50 10273968.33 0.32
------------
&shard->lock 15731657 [<0000000068c0fd75>] pick_next_task_fair+0x4dd/0x510
&shard->lock 15756516 [<000000001faf84f9>] enqueue_task_fair+0x459/0x530
&shard->lock 21766 [<00000000126ec6ab>] newidle_balance+0x45a/0x650
&shard->lock 772 [<000000002886c365>] dequeue_task_fair+0x4c9/0x540
------------
&shard->lock 23458 [<00000000126ec6ab>] newidle_balance+0x45a/0x650
&shard->lock 16505108 [<000000001faf84f9>] enqueue_task_fair+0x459/0x530
&shard->lock 14981310 [<0000000068c0fd75>] pick_next_task_fair+0x4dd/0x510
&shard->lock 835 [<000000002886c365>] dequeue_task_fair+0x4c9/0x540
These results are when we create only 3 shards (16 logical cores per
shard), so the contention may be a result of overly-coarse sharding. If
we run the schbench incantation with no sharding whatsoever, we see the
following significantly worse lock contention stats:
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
class name con-bounces contentions waittime-min waittime-max waittime-total waittime-avg acq-bounces acquisitions holdtime-min holdtime-max holdtime-total holdtime-avg
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
&shard->lock: 117868635 118361486 0.09 393.01 1250954097.25 10.57 119345882 119780601 0.05 343.35 38313419.51 0.32
------------
&shard->lock 59169196 [<0000000060507011>] __enqueue_entity+0xdc/0x110
&shard->lock 59084239 [<00000000f1c67316>] __dequeue_entity+0x78/0xa0
&shard->lock 108051 [<00000000084a6193>] newidle_balance+0x45a/0x650
------------
&shard->lock 60028355 [<0000000060507011>] __enqueue_entity+0xdc/0x110
&shard->lock 119882 [<00000000084a6193>] newidle_balance+0x45a/0x650
&shard->lock 58213249 [<00000000f1c67316>] __dequeue_entity+0x78/0xa0
The contention is ~3-4x worse if we don't shard at all. This roughly
matches the fact that we had 3 shards on the first workload run above.
If we make the shards even smaller, the contention is comparatively much
lower:
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
class name con-bounces contentions waittime-min waittime-max waittime-total waittime-avg acq-bounces acquisitions holdtime-min holdtime-max holdtime-total holdtime-avg
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
&shard->lock: 13839849 13877596 0.08 13.23 5389564.95 0.39 46910241 48069307 0.06 16.40 16534469.35 0.34
------------
&shard->lock 3559 [<00000000ea455dcc>] newidle_balance+0x45a/0x650
&shard->lock 6992418 [<000000002266f400>] __dequeue_entity+0x78/0xa0
&shard->lock 6881619 [<000000002a62f2e0>] __enqueue_entity+0xdc/0x110
------------
&shard->lock 6640140 [<000000002266f400>] __dequeue_entity+0x78/0xa0
&shard->lock 3523 [<00000000ea455dcc>] newidle_balance+0x45a/0x650
&shard->lock 7233933 [<000000002a62f2e0>] __enqueue_entity+0xdc/0x110
Interestingly, SHARED_RUNQ performs worse than NO_SHARED_RUNQ on the schbench
benchmark on Milan as well, but we contend more on the rq lock than the
shard lock:
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
class name con-bounces contentions waittime-min waittime-max waittime-total waittime-avg acq-bounces acquisitions holdtime-min holdtime-max holdtime-total holdtime-avg
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
&rq->__lock: 9617614 9656091 0.10 79.64 69665812.00 7.21 18092700 67652829 0.11 82.38 344524858.87 5.09
-----------
&rq->__lock 6301611 [<000000003e63bf26>] task_rq_lock+0x43/0xe0
&rq->__lock 2530807 [<00000000516703f0>] __schedule+0x72/0xaa0
&rq->__lock 109360 [<0000000011be1562>] raw_spin_rq_lock_nested+0xa/0x10
&rq->__lock 178218 [<00000000c38a30f9>] sched_ttwu_pending+0x3d/0x170
-----------
&rq->__lock 3245506 [<00000000516703f0>] __schedule+0x72/0xaa0
&rq->__lock 1294355 [<00000000c38a30f9>] sched_ttwu_pending+0x3d/0x170
&rq->__lock 2837804 [<000000003e63bf26>] task_rq_lock+0x43/0xe0
&rq->__lock 1627866 [<0000000011be1562>] raw_spin_rq_lock_nested+0xa/0x10
..................................................................................................................................................................................................
&shard->lock: 7338558 7343244 0.10 35.97 7173949.14 0.98 30200858 32679623 0.08 35.59 16270584.52 0.50
------------
&shard->lock 2004142 [<00000000f8aa2c91>] __dequeue_entity+0x78/0xa0
&shard->lock 2611264 [<00000000473978cc>] newidle_balance+0x45a/0x650
&shard->lock 2727838 [<0000000028f55bb5>] __enqueue_entity+0xdc/0x110
------------
&shard->lock 2737232 [<00000000473978cc>] newidle_balance+0x45a/0x650
&shard->lock 1693341 [<00000000f8aa2c91>] __dequeue_entity+0x78/0xa0
&shard->lock 2912671 [<0000000028f55bb5>] __enqueue_entity+0xdc/0x110
...................................................................................................................................................................................................
If we look at the lock stats with SHARED_RUNQ disabled, the rq lock still
contends the most, but it's significantly less than with it enabled:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
class name con-bounces contentions waittime-min waittime-max waittime-total waittime-avg acq-bounces acquisitions holdtime-min holdtime-max holdtime-total holdtime-avg
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
&rq->__lock: 791277 791690 0.12 110.54 4889787.63 6.18 1575996 62390275 0.13 112.66 316262440.56 5.07
-----------
&rq->__lock 263343 [<00000000516703f0>] __schedule+0x72/0xaa0
&rq->__lock 19394 [<0000000011be1562>] raw_spin_rq_lock_nested+0xa/0x10
&rq->__lock 4143 [<000000003b542e83>] __task_rq_lock+0x51/0xf0
&rq->__lock 51094 [<00000000c38a30f9>] sched_ttwu_pending+0x3d/0x170
-----------
&rq->__lock 23756 [<0000000011be1562>] raw_spin_rq_lock_nested+0xa/0x10
&rq->__lock 379048 [<00000000516703f0>] __schedule+0x72/0xaa0
&rq->__lock 677 [<000000003b542e83>] __task_rq_lock+0x51/0xf0
Worth noting is that increasing the granularity of the shards in general
improves very runqueue-heavy workloads such as netperf UDP_RR and this
schbench command, but it doesn't necessarily make a big difference for
every workload, or for sufficiently small CCXs such as the 7950X. It may
make sense to eventually allow users to control this with a debugfs
knob, but for now we'll elect to choose a default that resulted in good
performance for the benchmarks run for this patch series.
Conclusion
==========
shared_runq in this form provides statistically significant wins for
several types of workloads, and across various CPU topologies. The
reason for this is roughly the same for all workloads: shared_runq
encourages work conservation inside of a CCX by having a CPU do an
O(# per-LLC shards) iteration over the shared_runq shards in an LLC. We
could similarly do an O(n) iteration over all of the runqueues in the
current LLC when a core is going idle, but that's quite costly
(especially for larger LLCs), and the sharded shared_runq seems to
provide a performant middle ground: an O(# shards) walk rather than a
full O(n) walk.
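
Concretely, the newidle pull could look something like the sketch below,
starting at the calling CPU's own shard so that different CPUs don't all
contend on the same shard first. As with the earlier sketches, the names
are illustrative assumptions rather than the literal patch code.

static struct task_struct *shared_runq_pull(struct shared_runq *srq, int cpu)
{
	u32 i, start = cpu % srq->num_shards;
	struct task_struct *p;

	/* O(# shards) walk over the LLC's shards, our own shard first. */
	for (i = 0; i < srq->num_shards; i++) {
		u32 idx = (start + i) % srq->num_shards;

		p = shared_runq_pick_task(&srq->shards[idx], cpu);
		if (p)
			return p;
	}
	return NULL;
}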
For the workloads above, kernel compile and hackbench were clear winners
for shared_runq (especially in __enqueue_entity()). The reason for the
improvement in kernel compile is of course that we have a heavily
CPU-bound workload where cache locality doesn't mean much; getting a CPU
is the #1 goal. As mentioned above, while I didn't expect to see an
improvement in hackbench, the results of the benchmark suggest that
minimizing runqueue delays is preferable to optimizing for L1/L2
locality.
Not all workloads benefit from shared_runq, however. Workloads that
hammer the runqueue hard, such as netperf UDP_RR, or schbench -L -m 52
-p 512 -r 10 -t 1, tend to run into contention on the shard locks,
especially when enqueuing tasks in __enqueue_entity(). This can be
mitigated significantly by sharding the shared data structures within a
CCX, but it doesn't eliminate all contention, as described above.
Worth noting as well is that Gautham Shenoy ran some interesting
experiments on a few more ideas in [4], such as walking the shared_runq
on the pop path until a task is found that can be migrated to the
calling CPU. I didn't run those experiments in this patch set, but it
might be worth doing so.
[4]: https://lore.kernel.org/lkml/ZJkqeXkPJMTl49GB@BLR-5CG11610CF.amd.com/
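
For clarity, that experiment differs from the head-only pull sketched in
the Design section in that it keeps scanning past tasks which cannot run
on the calling CPU. A hedged sketch of that variant, using the same
hypothetical names as before:

static struct task_struct *
shared_runq_pick_task_walk(struct shared_runq_shard *shard, int cpu)
{
	struct task_struct *p, *ret = NULL;

	spin_lock(&shard->lock);
	/* Walk the shard until we find a task we are allowed to pull. */
	list_for_each_entry(p, &shard->list, shared_runq_node) {
		if (is_cpu_allowed(p, cpu)) {
			list_del_init(&p->shared_runq_node);
			ret = p;
			break;
		}
	}
	spin_unlock(&shard->lock);
	return ret;
}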
Finally, while shared_runq in this form encourages work conservation, it
of course does not guarantee it given that we don't implement any kind
of work stealing between shared_runqs. In the future, we could
potentially push CPU utilization even higher by enabling work stealing
between shared_runqs, likely between CCXs on the same NUMA node.
Originally-by: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: David Vernet <void@manifault.com>
David Vernet (7):
sched: Expose move_queued_task() from core.c
sched: Move is_cpu_allowed() into sched.h
sched: Check cpu_active() earlier in newidle_balance()
sched/fair: Add SHARED_RUNQ sched feature and skeleton calls
sched: Implement shared runqueue in CFS
sched: Shard per-LLC shared runqueues
sched: Move shared_runq to __{enqueue,dequeue}_entity()
include/linux/sched.h | 2 +
kernel/sched/core.c | 37 +-----
kernel/sched/fair.c | 254 +++++++++++++++++++++++++++++++++++++++-
kernel/sched/features.h | 1 +
kernel/sched/sched.h | 37 ++++++
5 files changed, 292 insertions(+), 39 deletions(-)
Comments
On Mon, Jul 10, 2023 at 03:03:35PM -0500, David Vernet wrote:
> Difference between shared_runq and SIS_NODE
> ===========================================
>
> In [0] Peter proposed a patch that addresses Tejun's observations that
> when workqueues are targeted towards a specific LLC on his Zen2 machine
> with small CCXs, there would be significant idle time due to
> select_idle_sibling() not considering anything outside of the current
> LLC.
>
> This patch (SIS_NODE) is essentially the complement to the proposal
> here. SIS_NODE causes waking tasks to look for idle cores in neighboring
> LLCs on the same die, whereas shared_runq causes cores about to go idle
> to look for enqueued tasks. That said, in its current form, the two
> features are at a different scope, as SIS_NODE searches for idle cores
> between LLCs, while shared_runq enqueues tasks within a single LLC.
>
> The patch was since removed in [1], and we compared the results to
> shared_runq (previously called "swqueue") in [2]. SIS_NODE did not
> outperform shared_runq on any of the benchmarks, so we elect to not
> compare against it again for this v2 patch set.

Right, so SIS is search-idle-on-wakeup, while you do
search-task-on-newidle, and they are indeed complementary actions.

As to SIS_NODE, I really want that to happen, but we need a little more
work for the Epyc things, they have a few too many CCXs per node :-)

Anyway, the same thing that motivated SIS_NODE should also be relevant
here, those Zen2 things have only 3/4 cores per LLC, would it not also
make sense to include multiple of them into the shared runqueue thing?

(My brain is still processing the shared_runq name...)
On Tue, Jul 11, 2023 at 01:42:07PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 10, 2023 at 03:03:35PM -0500, David Vernet wrote:
> > Difference between shared_runq and SIS_NODE
> > ===========================================
> >
> > In [0] Peter proposed a patch that addresses Tejun's observations that
> > when workqueues are targeted towards a specific LLC on his Zen2 machine
> > with small CCXs, there would be significant idle time due to
> > select_idle_sibling() not considering anything outside of the current
> > LLC.
> >
> > This patch (SIS_NODE) is essentially the complement to the proposal
> > here. SIS_NODE causes waking tasks to look for idle cores in neighboring
> > LLCs on the same die, whereas shared_runq causes cores about to go idle
> > to look for enqueued tasks. That said, in its current form, the two
> > features are at a different scope, as SIS_NODE searches for idle cores
> > between LLCs, while shared_runq enqueues tasks within a single LLC.
> >
> > The patch was since removed in [1], and we compared the results to
> > shared_runq (previously called "swqueue") in [2]. SIS_NODE did not
> > outperform shared_runq on any of the benchmarks, so we elect to not
> > compare against it again for this v2 patch set.
>
> Right, so SIS is search-idle-on-wakeup, while you do
> search-task-on-newidle, and they are indeed complementary actions.
>
> As to SIS_NODE, I really want that to happen, but we need a little more
> work for the Epyc things, they have a few too many CCXs per node :-)
>
> Anyway, the same thing that motivated SIS_NODE should also be relevant
> here, those Zen2 things have only 3/4 cores per LLC, would it not also
> make sense to include multiple of them into the shared runqueue thing?

It's probably worth experimenting with this, but it would be workload
dependent on whether this would help or hurt. I would imagine there are
workloads where having a single shared runq for the whole socket is
advantageous even for larger LLCs like on Milan. But for many use cases
(including e.g. HHVM), the cache-line bouncing makes it untenable.

But yes, if we deem SIS_NODE to be useful for small CCXs like Zen2, I
don't see any reason to not apply that to shared_runq as well. I don't
have a Zen2 but I'll prototype this idea and hopefully can get some help
from Tejun or someone else to run some benchmarks on it.

> (My brain is still processing the shared_runq name...)

Figured this would be the most contentious part of v2.
Hello David,

On Mon, Jul 10, 2023 at 03:03:35PM -0500, David Vernet wrote:
> Changes
> -------
>
> This is v2 of the shared wakequeue (now called shared runqueue)
> patchset. The following are changes from the RFC v1 patchset
> (https://lore.kernel.org/lkml/20230613052004.2836135-1-void@manifault.com/).
>
> v1 -> v2 changes:
> - Change name from swqueue to shared_runq (Peter)
>
> - Sharded per-LLC shared runqueues to avoid contention on
>   scheduler-heavy workloads (Peter)
>
> - Pull tasks from the shared_runq in newidle_balance() rather than in
>   pick_next_task_fair() (Peter and Vincent)
>
> - Rename a few functions to reflect their actual purpose. For example,
>   shared_runq_dequeue_task() instead of swqueue_remove_task() (Peter)
>
> - Expose move_queued_task() from core.c rather than migrate_task_to()
>   (Peter)
>
> - Properly check is_cpu_allowed() when pulling a task from a shared_runq
>   to ensure it can actually be migrated (Peter and Gautham)
>
> - Dropped RFC tag
>
> This patch set is based off of commit ebb83d84e49b ("sched/core: Avoid
> multiple calling update_rq_clock() in __cfsb_csd_unthrottle()") on the
> sched/core branch of tip.git.

I have evaluated this v2 patchset on AMD Zen3 and Zen4 servers.

tldr:

* We see non-trivial improvements on hackbench on both Zen3 and Zen4
  until the system is super-overloaded, at which point we see
  regressions.

* tbench shows regressions on Zen3 with each client configuration.
  tbench on Zen4 shows some improvements when the system is overloaded.

* netperf shows minor improvements on Zen3 when the system is under low
  to moderate load. netperf regresses on Zen3 at high load, and at all
  load-points on Zen4.

* Stream, SPECjbb2015 and Mongodb show no significant difference
  compared to the current tip.

* With netperf and tbench, using the shared-runqueue during
  enqueue_entity performs badly.

Server configurations used:

AMD Zen3 Server:
* 2 sockets,
* 64 cores per socket,
* SMT2 enabled
* Total of 256 threads.
* Configured in Nodes-Per-Socket(NPS)=1

AMD Zen4 Server:
* 2 sockets,
* 128 cores per socket,
* SMT2 enabled
* Total of 512 threads.
* Configured in Nodes-Per-Socket(NPS)=1

The trends on NPS=2 and NPS=4 are similar. So I am not posting those.

Legend:
tip          : Tip kernel with top commit ebb83d84e49b ("sched/core:
               Avoid multiple calling update_rq_clock() in
               __cfsb_csd_unthrottle()")
swqueue_v1   : Your v1 patches applied on top of the aforementioned tip
               commit.
noshard      : shared-runqueue v2 patches 1-5. This uses a
               shared-runqueue during wakeup. No sharding.
shard_wakeup : shared-runqueue v2 patches 1-6. This uses a
               shared-runqueue during wakeup and has shards with
               shard size = 6 (default)
shard_all    : v2 patches 1-7. This uses a sharded shared-runqueue
               during enqueue_entity

==================================================================
Test : hackbench
Units : Normalized time in seconds
Interpretation: Lower is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Case: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1-groups 1.00 [ 0.00]( 4.39) 0.89 [ 11.39]( 6.86) 0.90 [ 9.87](12.51) 0.92 [ 7.85]( 6.29) 0.91 [ 8.86]( 4.67)
2-groups 1.00 [ 0.00]( 1.64) 0.81 [ 18.90]( 2.67) 0.84 [ 15.71]( 5.91) 0.87 [ 12.74]( 2.26) 0.87 [ 12.53]( 2.40)
4-groups 1.00 [ 0.00]( 1.27) 0.89 [ 10.85]( 2.38) 0.94 [ 6.20]( 2.99) 0.97 [ 2.71]( 2.38) 0.96 [ 4.46]( 1.06)
8-groups 1.00 [ 0.00]( 0.72) 0.94 [ 6.37]( 2.25) 0.96 [ 3.61]( 3.39) 1.09 [ -8.78]( 1.14) 1.06 [ -5.68]( 1.55)
16-groups 1.00 [ 0.00]( 4.72) 0.96 [ 3.64]( 1.11) 1.01 [ -1.26]( 1.68) 1.07 [ -7.41]( 2.06) 1.10 [ -9.55]( 1.48)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Case: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1-groups 1.00 [ 0.00](13.13) 0.90 [ 10.34](10.27) 0.94 [ 6.44](10.99) 0.94 [ 6.44](11.45) 0.91 [ 9.20](13.31)
2-groups 1.00 [ 0.00](10.27) 0.97 [ 2.86]( 6.51) 0.94 [ 5.71]( 4.65) 1.00 [ -0.48](10.25) 0.95 [ 4.52]( 6.60)
4-groups 1.00 [ 0.00]( 2.73) 0.89 [ 10.80]( 7.75) 0.82 [ 17.61]( 9.57) 0.83 [ 16.67](10.64) 0.81 [ 18.56]( 9.58)
8-groups 1.00 [ 0.00]( 1.75) 0.85 [ 15.22]( 5.16) 0.83 [ 17.01]( 4.28) 0.90 [ 10.45](10.05) 0.79 [ 21.04]( 2.84)
16-groups 1.00 [ 0.00]( 0.54) 1.16 [-15.84]( 2.17) 1.16 [-16.09]( 3.59) 1.24 [-24.26]( 4.22) 1.13 [-12.87]( 3.76)

==================================================================
Test : tbench
Units : Normalized throughput
Interpretation: Higher is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Clients: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00]( 0.46) 0.99 [ -1.33]( 0.94) 0.98 [ -1.65]( 0.37) 1.00 [ -0.23]( 0.15) 0.95 [ -4.77]( 0.93)
2 1.00 [ 0.00]( 0.35) 0.99 [ -1.00]( 0.41) 1.00 [ -0.12]( 0.45) 0.99 [ -0.62]( 1.43) 0.92 [ -8.48]( 5.19)
4 1.00 [ 0.00]( 2.37) 0.99 [ -0.70]( 0.39) 0.98 [ -2.49]( 0.79) 0.98 [ -1.92]( 0.85) 0.91 [ -8.54]( 0.56)
8 1.00 [ 0.00]( 0.35) 1.01 [ 0.88]( 0.21) 0.97 [ -3.13]( 0.96) 0.99 [ -1.44]( 0.96) 0.89 [-11.48]( 1.31)
16 1.00 [ 0.00]( 2.50) 1.00 [ 0.34]( 1.34) 0.97 [ -2.96]( 1.17) 0.97 [ -2.88]( 1.85) 0.84 [-16.41]( 1.31)
32 1.00 [ 0.00]( 1.79) 0.98 [ -1.97]( 3.81) 0.99 [ -1.17]( 1.89) 0.95 [ -4.83]( 2.08) 0.52 [-48.23]( 1.11)
64 1.00 [ 0.00]( 5.75) 1.17 [ 17.41]( 0.79) 0.97 [ -3.14](10.67) 1.07 [ 6.94]( 3.08) 0.41 [-59.17]( 1.88)
128 1.00 [ 0.00]( 3.16) 0.87 [-13.37]( 7.98) 0.73 [-26.87]( 1.07) 0.74 [-25.81]( 0.97) 0.35 [-64.93]( 0.68)
256 1.00 [ 0.00]( 0.21) 0.93 [ -6.86]( 2.81) 0.85 [-15.26]( 3.17) 0.91 [ -9.39]( 1.05) 0.90 [ -9.94]( 0.97)
512 1.00 [ 0.00]( 0.23) 0.88 [-11.79](15.25) 0.83 [-17.35]( 1.21) 0.87 [-12.96]( 2.63) 0.99 [ -1.18]( 0.80)
1024 1.00 [ 0.00]( 0.44) 0.99 [ -0.98]( 0.43) 0.77 [-23.18]( 5.24) 0.82 [-17.83]( 0.70) 0.96 [ -3.82]( 1.61)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Clients: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00]( 0.19) 0.98 [ -1.72]( 0.19) 0.99 [ -1.15]( 0.28) 0.98 [ -1.79]( 0.28) 0.99 [ -1.49]( 0.10)
2 1.00 [ 0.00]( 0.63) 0.98 [ -2.28]( 0.63) 0.98 [ -1.91]( 0.26) 0.97 [ -3.14]( 0.25) 0.98 [ -1.77]( 0.32)
4 1.00 [ 0.00]( 0.22) 1.00 [ 0.00]( 1.13) 0.99 [ -0.69]( 0.57) 0.98 [ -1.59]( 0.35) 0.99 [ -0.64]( 0.18)
8 1.00 [ 0.00]( 1.14) 0.99 [ -0.73]( 0.61) 0.98 [ -2.28]( 2.61) 0.97 [ -2.56]( 0.34) 0.98 [ -1.77]( 0.70)
16 1.00 [ 0.00]( 0.98) 0.97 [ -2.54]( 1.24) 0.98 [ -1.71]( 1.86) 0.98 [ -1.53]( 0.62) 0.96 [ -3.56]( 0.93)
32 1.00 [ 0.00]( 0.76) 0.98 [ -2.31]( 1.35) 0.98 [ -2.06]( 0.77) 0.96 [ -3.53]( 1.63) 0.88 [-11.72]( 2.77)
64 1.00 [ 0.00]( 0.96) 0.96 [ -4.45]( 3.53) 0.97 [ -3.44]( 1.53) 0.96 [ -3.52]( 0.89) 0.31 [-69.03]( 0.64)
128 1.00 [ 0.00]( 3.03) 0.95 [ -4.78]( 0.56) 0.98 [ -2.48]( 0.47) 0.92 [ -7.73]( 0.16) 0.20 [-79.75]( 0.24)
256 1.00 [ 0.00]( 0.04) 0.93 [ -7.21]( 1.00) 0.94 [ -5.90]( 0.63) 0.59 [-41.29]( 1.76) 0.16 [-83.71]( 0.07)
512 1.00 [ 0.00]( 3.08) 1.07 [ 7.07](17.78) 1.15 [ 15.49]( 2.65) 0.82 [-17.53](29.11) 0.93 [ -7.18](32.23)
1024 1.00 [ 0.00]( 0.60) 1.16 [ 15.61]( 0.07) 1.16 [ 15.92]( 0.06) 1.12 [ 11.57]( 1.86) 1.12 [ 11.97]( 0.21)
2048 1.00 [ 0.00]( 0.16) 1.15 [ 14.62]( 0.90) 1.15 [ 15.20]( 0.29) 1.08 [ 7.64]( 1.44) 1.15 [ 14.57]( 0.23)

==================================================================
Test : stream (10 runs)
Units : Normalized Bandwidth, MB/s
Interpretation: Higher is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Test: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
Copy 1.00 [ 0.00]( 5.30) 1.00 [ -0.09]( 5.26) 1.03 [ 2.79]( 2.06) 1.02 [ 1.73]( 5.29) 1.03 [ 2.92]( 1.82)
Scale 1.00 [ 0.00]( 0.27) 0.98 [ -1.98]( 1.82) 0.99 [ -0.62]( 1.24) 1.00 [ -0.26]( 1.42) 1.00 [ 0.10]( 0.17)
Add 1.00 [ 0.00]( 0.48) 0.99 [ -0.90]( 1.71) 1.00 [ 0.12]( 0.95) 1.00 [ 0.18]( 1.45) 1.01 [ 0.56]( 0.14)
Triad 1.00 [ 0.00]( 1.02) 1.03 [ 2.80]( 0.60) 1.02 [ 2.18]( 1.96) 1.03 [ 2.56]( 2.25) 1.03 [ 2.58]( 0.56)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Test: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
Copy 1.00 [ 0.00]( 0.79) 0.99 [ -0.65]( 0.83) 1.00 [ 0.39]( 0.47) 0.99 [ -1.41]( 1.16) 0.98 [ -1.73]( 1.17)
Scale 1.00 [ 0.00]( 0.25) 1.00 [ -0.21]( 0.23) 1.01 [ 0.63]( 0.64) 0.99 [ -0.72]( 0.25) 1.00 [ -0.40]( 0.70)
Add 1.00 [ 0.00]( 0.25) 1.00 [ -0.15]( 0.27) 1.00 [ 0.44]( 0.73) 0.99 [ -0.82]( 0.40) 0.99 [ -0.73]( 0.78)
Triad 1.00 [ 0.00]( 0.23) 1.00 [ -0.28]( 0.28) 1.00 [ 0.34]( 0.74) 0.99 [ -0.91]( 0.48) 0.99 [ -0.86]( 0.98)

==================================================================
Test : stream (100 runs)
Units : Normalized Bandwidth, MB/s
Interpretation: Higher is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Test: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
Copy 1.00 [ 0.00]( 0.58) 0.98 [ -2.05]( 6.08) 1.00 [ 0.46]( 0.19) 0.99 [ -0.58]( 1.96) 1.01 [ 0.50]( 0.20)
Scale 1.00 [ 0.00]( 0.47) 0.97 [ -2.71]( 4.94) 1.00 [ -0.01]( 0.23) 1.00 [ -0.35]( 1.39) 1.00 [ 0.10]( 0.20)
Add 1.00 [ 0.00]( 0.18) 0.97 [ -2.77]( 7.30) 1.00 [ 0.27]( 0.13) 1.00 [ 0.05]( 0.59) 1.00 [ 0.19]( 0.15)
Triad 1.00 [ 0.00]( 0.77) 0.99 [ -0.59]( 7.73) 1.03 [ 2.80]( 0.36) 1.03 [ 2.92]( 0.43) 1.02 [ 2.23]( 0.38)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Test: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
Copy 1.00 [ 0.00]( 1.45) 0.97 [ -2.65]( 2.02) 1.01 [ 0.85]( 0.96) 1.00 [ -0.24]( 0.67) 0.97 [ -2.77]( 1.09)
Scale 1.00 [ 0.00]( 0.67) 0.99 [ -1.27]( 1.01) 1.01 [ 0.93]( 0.95) 1.00 [ -0.08]( 0.46) 0.99 [ -1.23]( 0.35)
Add 1.00 [ 0.00]( 0.46) 0.99 [ -1.28]( 0.54) 1.01 [ 0.88]( 0.61) 1.00 [ 0.00]( 0.31) 0.98 [ -1.89]( 0.33)
Triad 1.00 [ 0.00]( 0.46) 0.98 [ -1.93]( 0.55) 1.01 [ 0.80]( 0.59) 1.00 [ 0.00]( 0.37) 0.98 [ -2.22]( 0.09)

==================================================================
Test : netperf
Units : Normalized Throughput
Interpretation: Higher is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Clients: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1-clients 1.00 [ 0.00]( 0.25) 0.99 [ -1.22]( 0.88) 0.99 [ -1.34]( 0.26) 1.01 [ 0.65]( 0.44) 0.95 [ -4.71]( 1.14)
2-clients 1.00 [ 0.00]( 0.37) 1.02 [ 1.59]( 1.02) 1.03 [ 2.62]( 1.77) 1.02 [ 2.39]( 1.59) 0.96 [ -4.47]( 0.77)
4-clients 1.00 [ 0.00]( 0.57) 1.03 [ 2.79]( 1.21) 1.03 [ 2.59]( 1.72) 1.01 [ 1.49]( 2.27) 0.92 [ -8.30]( 2.95)
8-clients 1.00 [ 0.00]( 1.09) 1.05 [ 4.84]( 0.99) 1.03 [ 3.13]( 1.70) 1.02 [ 1.53]( 2.25) 0.92 [ -8.23]( 1.64)
16-clients 1.00 [ 0.00]( 1.34) 1.06 [ 5.88]( 0.96) 1.04 [ 3.52]( 1.99) 0.99 [ -1.10]( 2.28) 0.77 [-22.58]( 0.96)
32-clients 1.00 [ 0.00]( 5.77) 1.08 [ 8.30]( 2.26) 1.00 [ 0.06]( 2.18) 1.02 [ 2.12]( 2.31) 0.44 [-56.08]( 1.50)
64-clients 1.00 [ 0.00]( 3.14) 0.94 [ -5.93]( 3.19) 0.76 [-24.05]( 1.65) 0.85 [-15.44]( 3.05) 0.33 [-66.71]( 7.28)
128-clients 1.00 [ 0.00]( 1.74) 0.73 [-26.70]( 3.10) 0.64 [-35.94]( 3.64) 0.73 [-26.93]( 5.07) 0.36 [-63.97]( 7.73)
256-clients 1.00 [ 0.00]( 1.50) 0.61 [-38.62]( 1.21) 0.56 [-43.88]( 5.72) 0.59 [-40.60]( 2.26) 0.83 [-17.18]( 5.95)
512-clients 1.00 [ 0.00](50.23) 0.66 [-33.66](51.96) 0.47 [-53.21](47.91) 0.50 [-50.22](42.87) 0.89 [-10.89](48.44)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Clients: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1-clients 1.00 [ 0.00]( 0.60) 0.99 [ -0.65]( 0.14) 0.99 [ -1.26]( 0.26) 0.99 [ -1.37]( 0.40) 0.99 [ -0.80]( 0.53)
2-clients 1.00 [ 0.00]( 0.57) 0.99 [ -1.07]( 0.40) 0.99 [ -1.46]( 0.22) 0.98 [ -2.07]( 0.15) 0.99 [ -1.38]( 0.25)
4-clients 1.00 [ 0.00]( 0.32) 0.99 [ -0.81]( 0.31) 0.99 [ -1.32]( 0.49) 0.98 [ -1.87]( 0.31) 0.99 [ -1.40]( 0.62)
8-clients 1.00 [ 0.00]( 0.45) 0.99 [ -1.26]( 0.88) 0.98 [ -2.01]( 0.90) 0.98 [ -2.23]( 0.42) 0.98 [ -2.04]( 1.03)
16-clients 1.00 [ 0.00]( 0.49) 0.98 [ -1.91]( 1.54) 0.97 [ -2.86]( 2.41) 0.97 [ -2.70]( 1.39) 0.97 [ -3.30]( 0.78)
32-clients 1.00 [ 0.00]( 1.95) 0.98 [ -2.10]( 2.09) 0.98 [ -2.17]( 1.24) 0.97 [ -2.73]( 1.58) 0.44 [-56.47](12.66)
64-clients 1.00 [ 0.00]( 3.05) 0.96 [ -4.00]( 2.43) 0.95 [ -4.84]( 2.82) 0.97 [ -3.43]( 2.06) 0.24 [-76.24]( 1.15)
128-clients 1.00 [ 0.00]( 2.63) 0.86 [-14.02]( 2.49) 0.80 [-19.86]( 2.16) 0.75 [-25.02]( 3.24) 0.14 [-85.91]( 3.76)
256-clients 1.00 [ 0.00]( 2.02) 0.75 [-25.02]( 2.59) 0.52 [-47.93]( 2.60) 0.42 [-58.38]( 2.18) 0.11 [-88.59]( 9.61)
512-clients 1.00 [ 0.00]( 5.67) 1.20 [ 19.91]( 4.99) 1.22 [ 21.57]( 3.89) 0.92 [ -7.65]( 4.84) 1.07 [ 7.22](14.77)

==================================================================
Test : schbench
Units : Normalized 99th percentile latency in us
Interpretation: Lower is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#workers: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00]( 0.00) 1.79 [-78.57]( 4.00) 1.93 [-92.86](13.86) 1.93 [-92.86](17.44) 1.14 [-14.29]( 9.75)
2 1.00 [ 0.00]( 7.37) 1.87 [-86.67]( 5.52) 1.87 [-86.67]( 9.10) 1.87 [-86.67]( 7.14) 1.07 [ -6.67]( 3.53)
4 1.00 [ 0.00]( 4.00) 1.28 [-28.00]( 8.57) 1.40 [-40.00]( 0.00) 1.48 [-48.00]( 6.47) 1.08 [ -8.00]( 7.41)
8 1.00 [ 0.00]( 3.33) 1.21 [-20.59]( 1.42) 1.18 [-17.65]( 3.79) 1.15 [-14.71]( 3.88) 1.09 [ -8.82]( 4.81)
16 1.00 [ 0.00]( 2.11) 1.00 [ 0.00]( 2.81) 1.00 [ 0.00]( 4.69) 1.04 [ -3.70]( 4.22) 1.04 [ -3.70]( 6.57)
32 1.00 [ 0.00]( 4.51) 0.92 [ 7.61]( 1.37) 0.92 [ 7.61]( 5.90) 0.93 [ 6.52]( 2.91) 0.96 [ 4.35]( 1.74)
64 1.00 [ 0.00]( 0.92) 0.95 [ 5.45]( 0.37) 0.95 [ 5.45]( 0.98) 0.96 [ 3.64]( 3.14) 1.00 [ 0.00]( 9.05)
128 1.00 [ 0.00]( 1.35) 0.93 [ 7.16]( 3.88) 0.92 [ 7.90]( 2.01) 0.94 [ 5.68]( 1.14) 0.94 [ 6.42]( 0.95)
256 1.00 [ 0.00]( 0.59) 1.00 [ 0.00]( 1.53) 1.01 [ -1.16]( 6.79) 0.96 [ 4.42]( 5.96) 0.91 [ 9.31]( 2.22)
512 1.00 [ 0.00]( 4.53) 1.03 [ -3.44]( 2.14) 1.08 [ -7.56]( 0.74) 1.04 [ -3.89]( 2.91) 0.90 [ 10.31](11.51)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#workers: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00](13.36) 0.90 [ 10.00](21.95) 0.87 [ 13.33]( 2.25) 0.83 [ 16.67](25.24) 0.97 [ 3.33](20.80)
2 1.00 [ 0.00](15.57) 1.12 [-11.54]( 2.01) 1.04 [ -3.85](11.04) 1.08 [ -7.69](10.66) 1.08 [ -7.69]( 2.09)
4 1.00 [ 0.00]( 3.33) 1.03 [ -3.33]( 9.94) 1.17 [-16.67](16.38) 0.97 [ 3.33]( 2.01) 0.97 [ 3.33]( 2.01)
8 1.00 [ 0.00]( 3.18) 1.08 [ -8.11]( 5.29) 0.97 [ 2.70]( 2.78) 1.00 [ 0.00]( 9.79) 1.08 [ -8.11]( 4.22)
16 1.00 [ 0.00](14.63) 0.96 [ 3.85]( 7.33) 1.15 [-15.38](13.09) 0.98 [ 1.92]( 1.96) 0.96 [ 3.85]( 3.40)
32 1.00 [ 0.00]( 1.49) 1.03 [ -2.60]( 3.80) 1.01 [ -1.30]( 1.95) 0.99 [ 1.30]( 1.32) 0.99 [ 1.30]( 2.25)
64 1.00 [ 0.00]( 0.00) 1.01 [ -0.80]( 1.64) 1.02 [ -1.60]( 1.63) 1.02 [ -2.40]( 0.78) 1.02 [ -1.60]( 1.21)
128 1.00 [ 0.00]( 1.10) 0.99 [ 0.87]( 0.88) 1.00 [ 0.44]( 0.00) 0.98 [ 2.18]( 0.77) 1.02 [ -1.75]( 1.96)
256 1.00 [ 0.00]( 0.96) 0.99 [ 0.76]( 0.22) 0.99 [ 1.14]( 1.17) 0.97 [ 3.43]( 0.20) 1.18 [-17.90](10.57)
512 1.00 [ 0.00]( 0.73) 1.16 [-16.40]( 0.46) 1.17 [-17.45]( 1.38) 1.16 [-16.40]( 0.87) 1.08 [ -8.03]( 1.98)

==================================================================
Test : new-schbench-requests-per-second
Units : Normalized Requests per second
Interpretation: Higher is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#workers: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.22) 1.00 [ 0.00]( 0.25) 1.00 [ 0.00]( 0.22) 1.00 [ 0.00]( 0.12)
2 1.00 [ 0.00]( 0.21) 1.00 [ 0.24]( 0.50) 1.00 [ 0.00]( 0.12) 1.00 [ 0.00]( 0.12) 1.00 [ 0.24]( 0.12)
4 1.00 [ 0.00]( 0.12) 1.00 [ 0.00]( 0.12) 1.00 [ 0.24]( 0.12) 1.00 [ 0.24]( 0.12) 1.00 [ 0.00]( 0.12)
8 1.00 [ 0.00]( 0.12) 1.00 [ 0.00]( 0.12) 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.12) 1.00 [ 0.00]( 0.12)
16 1.00 [ 0.00]( 0.12) 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.12) 1.00 [ -0.24]( 0.12) 1.00 [ -0.48]( 0.12)
32 1.00 [ 0.00]( 3.00) 0.99 [ -1.38]( 0.25) 0.98 [ -1.93]( 0.14) 0.98 [ -2.20]( 0.15) 0.96 [ -4.13]( 0.39)
64 1.00 [ 0.00]( 3.74) 0.97 [ -3.11]( 0.49) 0.94 [ -5.87]( 1.53) 0.93 [ -7.25]( 2.01) 0.91 [ -8.64]( 0.39)
128 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.00) 0.99 [ -1.11]( 0.19)
256 1.00 [ 0.00]( 0.26) 1.13 [ 12.58]( 0.34) 1.12 [ 11.84]( 0.20) 1.08 [ 7.89]( 0.82) 1.02 [ 1.73]( 1.31)
512 1.00 [ 0.00]( 0.11) 1.00 [ 0.43]( 0.33) 1.00 [ 0.21]( 0.19) 1.00 [ 0.21]( 0.29) 1.00 [ -0.43]( 0.11)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#workers: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00]( 0.23) 1.00 [ 0.00]( 0.00) 0.99 [ -1.32]( 0.23) 0.98 [ -1.76]( 0.23) 1.00 [ 0.00]( 0.00)
2 1.00 [ 0.00]( 0.11) 1.00 [ 0.44]( 0.11) 0.99 [ -0.66]( 0.23) 0.98 [ -1.54]( 0.12) 1.00 [ 0.22]( 0.11)
4 1.00 [ 0.00]( 0.00) 1.00 [ 0.22]( 0.00) 0.99 [ -0.88]( 0.20) 0.99 [ -1.32]( 0.11) 1.00 [ 0.22]( 0.00)
8 1.00 [ 0.00]( 0.11) 1.00 [ 0.22]( 0.00) 0.99 [ -0.88]( 0.00) 0.98 [ -1.54]( 0.12) 1.00 [ 0.22]( 0.00)
16 1.00 [ 0.00]( 0.11) 1.00 [ 0.22]( 0.11) 0.99 [ -0.66]( 0.11) 0.99 [ -1.10]( 0.23) 1.00 [ 0.22]( 0.00)
32 1.00 [ 0.00]( 0.00) 1.00 [ 0.22]( 0.11) 1.00 [ -0.22]( 0.00) 0.99 [ -1.32]( 0.11) 1.00 [ 0.44]( 0.11)
64 1.00 [ 0.00]( 0.00) 1.00 [ 0.22]( 0.00) 1.00 [ -0.22]( 0.00) 0.99 [ -1.32]( 0.11) 1.00 [ 0.22]( 0.00)
128 1.00 [ 0.00]( 4.48) 1.03 [ 3.03]( 0.12) 1.01 [ 1.17]( 0.24) 1.00 [ -0.47]( 0.32) 0.99 [ -0.70]( 0.21)
256 1.00 [ 0.00]( 0.00) 1.01 [ 1.06]( 0.00) 1.01 [ 0.84]( 0.11) 0.99 [ -1.48]( 0.29) 1.00 [ 0.42]( 0.00)
512 1.00 [ 0.00]( 0.36) 1.08 [ 8.48]( 0.13) 1.08 [ 7.68]( 0.13) 1.03 [ 2.65]( 0.40) 1.03 [ 2.65]( 1.16)

==================================================================
Test : new-schbench-wakeup-latency
Units : Normalized 99th percentile latency in us
Interpretation: Lower is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#workers: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00](22.13) 1.00 [ 0.00](15.49) 1.00 [ 0.00]( 8.15) 1.17 [-16.67]( 0.00) 1.50 [-50.00](19.36)
2 1.00 [ 0.00]( 0.00) 0.75 [ 25.00](15.49) 1.00 [ 0.00](14.08) 1.00 [ 0.00]( 6.74) 1.00 [ 0.00]( 6.20)
4 1.00 [ 0.00](14.08) 1.00 [ 0.00](14.08) 1.00 [ 0.00](14.08) 1.00 [ 0.00](14.08) 1.00 [ 0.00]( 0.00)
8 1.00 [ 0.00]( 6.74) 0.75 [ 25.00](15.49) 0.88 [ 12.50](12.78) 0.75 [ 25.00](15.49) 1.12 [-12.50]( 5.96)
16 1.00 [ 0.00]( 0.00) 0.78 [ 22.22]( 0.00) 0.78 [ 22.22](13.47) 1.00 [ 0.00](12.39) 1.00 [ 0.00]( 0.00)
32 1.00 [ 0.00]( 9.94) 1.11 [-11.11](11.07) 1.11 [-11.11]( 5.34) 1.11 [-11.11](11.07) 1.11 [-11.11]( 5.34)
64 1.00 [ 0.00]( 8.37) 1.00 [ 0.00]( 4.08) 1.00 [ 0.00]( 4.08) 1.00 [ 0.00]( 4.08) 1.08 [ -7.69]( 0.00)
128 1.00 [ 0.00]( 1.27) 1.05 [ -4.88]( 1.19) 1.05 [ -4.88]( 2.08) 1.05 [ -4.88]( 0.00) 3.95 [-295.12]( 4.51)
256 1.00 [ 0.00]( 0.22) 0.56 [ 43.91]( 1.05) 0.58 [ 42.42]( 1.24) 0.60 [ 40.06]( 9.74) 0.80 [ 20.12]( 1.06)
512 1.00 [ 0.00](11.19) 1.14 [-14.13](18.63) 1.30 [-30.50](59.87) 0.98 [ 2.25]( 3.40) 1.67 [-66.61](37.87)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#workers: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00](25.98) 0.85 [ 15.38](38.73) 0.85 [ 15.38](16.26) 0.69 [ 30.77](47.67) 0.85 [ 15.38]( 4.84)
2 1.00 [ 0.00]( 4.43) 1.00 [ 0.00]( 4.19) 0.92 [ 8.33]( 0.00) 1.00 [ 0.00]( 4.43) 1.00 [ 0.00]( 4.43)
4 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 4.19) 0.92 [ 8.33]( 0.00) 0.92 [ 8.33]( 0.00) 1.00 [ 0.00]( 0.00)
8 1.00 [ 0.00]( 0.00) 1.08 [ -8.33]( 0.00) 0.92 [ 8.33]( 4.84) 0.92 [ 8.33]( 0.00) 1.00 [ 0.00]( 0.00)
16 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.00) 0.92 [ 8.33]( 0.00) 0.92 [ 8.33]( 0.00) 1.00 [ 0.00]( 0.00)
32 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.00) 0.91 [ 9.09]( 0.00) 1.00 [ 0.00]( 4.84) 1.00 [ 0.00]( 0.00)
64 1.00 [ 0.00]( 0.00) 1.10 [-10.00]( 0.00) 1.10 [-10.00]( 4.84) 1.10 [-10.00]( 0.00) 1.20 [-20.00]( 0.00)
128 1.00 [ 0.00]( 7.75) 1.64 [-64.29]( 0.00) 1.50 [-50.00]( 2.42) 1.57 [-57.14]( 4.07) 1.71 [-71.43]( 2.18)
256 1.00 [ 0.00]( 1.52) 1.05 [ -5.08]( 2.97) 1.03 [ -3.39](19.10) 1.10 [-10.17](16.79) 3.20 [-220.34](22.22)
512 1.00 [ 0.00]( 0.26) 0.63 [ 37.01](48.00) 0.63 [ 36.51]( 4.25) 0.72 [ 28.50]( 4.44) 0.90 [ 9.58]( 7.28)

==================================================================
Test : new-schbench-request-latency
Units : Normalized 99th percentile latency in us
Interpretation: Lower is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#workers: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00]( 0.17) 1.00 [ -0.33]( 0.29) 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.17) 1.00 [ 0.00]( 0.17)
2 1.00 [ 0.00]( 0.17) 1.00 [ 0.00]( 0.17) 1.00 [ 0.00]( 0.17) 1.00 [ 0.33]( 0.17) 1.00 [ 0.00]( 0.00)
4 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.29) 1.00 [ 0.00]( 0.17) 1.00 [ 0.33]( 0.17) 1.00 [ 0.00]( 0.17)
8 1.00 [ 0.00]( 0.17) 1.00 [ 0.00]( 0.00) 1.00 [ 0.33]( 0.17) 1.00 [ 0.33]( 0.17) 1.00 [ 0.00]( 0.17)
16 1.00 [ 0.00]( 3.93) 0.95 [ 4.71]( 0.17) 0.95 [ 4.71](11.00) 1.04 [ -3.77]( 4.58) 1.18 [-18.21]( 7.53)
32 1.00 [ 0.00]( 9.53) 1.47 [-46.67](12.05) 1.37 [-36.84](14.04) 1.34 [-34.04]( 5.00) 1.63 [-62.57]( 4.20)
64 1.00 [ 0.00]( 6.01) 1.09 [ -9.29]( 0.22) 1.10 [ -9.76]( 0.11) 1.10 [ -9.99]( 0.11) 1.10 [-10.22]( 0.11)
128 1.00 [ 0.00]( 0.28) 1.00 [ 0.21]( 0.32) 1.00 [ 0.00]( 0.11) 1.00 [ 0.21]( 0.32) 1.85 [-84.79]( 0.11)
256 1.00 [ 0.00]( 2.80) 1.02 [ -2.26]( 1.76) 1.07 [ -6.54]( 1.12) 1.19 [-19.37]( 7.04) 1.06 [ -5.79]( 1.55)
512 1.00 [ 0.00]( 0.36) 1.02 [ -1.79]( 0.90) 1.00 [ 0.20]( 1.63) 1.01 [ -1.39]( 0.37) 0.99 [ 1.00]( 1.40)

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#workers: tip[pct imp](CV) swqueue_v1[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
1 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.18) 1.01 [ -0.71]( 0.18) 1.01 [ -0.71]( 0.18) 1.00 [ 0.35]( 0.32)
2 1.00 [ 0.00]( 0.18) 0.99 [ 0.71]( 0.18) 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.18) 1.00 [ 0.35]( 0.00)
4 1.00 [ 0.00]( 0.18) 1.00 [ 0.35]( 0.00) 1.00 [ -0.35]( 0.00) 1.00 [ -0.35]( 0.00) 1.00 [ 0.35]( 0.18)
8 1.00 [ 0.00]( 0.18) 0.99 [ 0.71]( 0.18) 1.00 [ 0.00]( 0.00) 1.00 [ 0.00]( 0.00) 1.00 [ 0.35]( 0.00)
16 1.00 [ 0.00]( 0.00) 0.99 [ 0.71]( 0.18) 1.00 [ 0.00]( 0.18) 1.00 [ 0.00]( 0.18) 1.00 [ 0.35]( 0.00)
32 1.00 [ 0.00]( 0.00) 0.99 [ 0.71]( 0.18) 1.00 [ 0.00]( 0.00) 1.00 [ -0.35]( 0.00) 1.00 [ 0.35]( 0.00)
64 1.00 [ 0.00]( 0.00) 1.00 [ 0.35](31.00) 1.00 [ -0.35](29.77) 1.00 [ -0.35]( 0.18) 1.05 [ -5.11](30.08)
128 1.00 [ 0.00]( 2.35) 0.98 [ 1.52]( 0.35) 1.00 [ 0.38]( 0.34) 1.03 [ -3.05]( 0.19) 1.01 [ -0.76]( 0.59)
256 1.00 [ 0.00]( 0.18) 0.98 [ 2.14]( 0.19) 0.99 [ 1.43]( 0.19) 1.01 [ -1.07]( 0.18) 1.45 [-44.56](18.01)
512 1.00 [ 0.00]( 0.55) 1.09 [ -9.45]( 1.79) 1.12 [-12.35]( 2.39) 1.15 [-14.93]( 1.68) 1.21 [-21.37]( 8.03)

==================================================================
Test : SPECjbb2015
Units : maxJOPs and critJOPs
Interpretation: Higher is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Metric: tip[pct imp] swqueue_v1[pct imp] noshard[pct imp] shard_wakeup[pct imp] shard_all[pct imp]
maxJOPS 1.00 [ 0.00] 1.00 [ 0.00] 1.00 [ 0.08] 1.01 [ 1.42] 1.00 [ 0.08]
critJOPS 1.00 [ 0.00] 0.97 [ -3.43] 0.98 [ -2.38] 0.98 [ -1.82] 0.99 [ -0.96]

==================================================================
Test : YCSB + Mongodb
Units : Throughput
Interpretation: Higher is better
==================================================================

Zen3, 2 Sockets, 64 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Metric: tip[pct imp](CV) swqueue[pct imp](CV) noshard[pct imp](CV) shard_wakeup[pct imp](CV) shard_all[pct imp](CV)
Throughput 1.00 [ 0.00]( 0.33) 0.99 [ -0.77]( 0.81) 0.99 [ -1.04](2.14) 0.99 [ -1.04]( 0.36) 0.99 [ -1.46]( 1.29)

Please let me know if you need any other data.

--
Thanks and Regards
gautham.
On Fri, Jul 21, 2023 at 02:42:57PM +0530, Gautham R. Shenoy wrote:
> Hello David,

Hello Gautham,

Thank you for taking the time to run these benchmarks. Apologies for
the delayed response -- I've been traveling this week.

> On Mon, Jul 10, 2023 at 03:03:35PM -0500, David Vernet wrote:
> > Changes
> > -------
> >
> > This is v2 of the shared wakequeue (now called shared runqueue)
> > patchset. The following are changes from the RFC v1 patchset
> > (https://lore.kernel.org/lkml/20230613052004.2836135-1-void@manifault.com/).
> >
> > v1 -> v2 changes:
> > - Change name from swqueue to shared_runq (Peter)
> >
> > - Sharded per-LLC shared runqueues to avoid contention on
> >   scheduler-heavy workloads (Peter)
> >
> > - Pull tasks from the shared_runq in newidle_balance() rather than in
> >   pick_next_task_fair() (Peter and Vincent)
> >
> > - Rename a few functions to reflect their actual purpose. For example,
> >   shared_runq_dequeue_task() instead of swqueue_remove_task() (Peter)
> >
> > - Expose move_queued_task() from core.c rather than migrate_task_to()
> >   (Peter)
> >
> > - Properly check is_cpu_allowed() when pulling a task from a shared_runq
> >   to ensure it can actually be migrated (Peter and Gautham)
> >
> > - Dropped RFC tag
> >
> > This patch set is based off of commit ebb83d84e49b ("sched/core: Avoid
> > multiple calling update_rq_clock() in __cfsb_csd_unthrottle()") on the
> > sched/core branch of tip.git.
>
> I have evaluated this v2 patchset on AMD Zen3 and Zen4 servers.
>
> tldr:
>
> * We see non-trivial improvements on hackbench on both Zen3 and Zen4
>   until the system is super-overloaded, at which point we see
>   regressions.

This makes sense to me. SHARED_RUNQ is more likely to help performance
when the system is not over-utilized, as it has more of a chance to
actually increase work conservation. If the system is over-utilized,
it's likely that a core will be able to find a task regardless of
whether it looks at the shared runq.

That said, I wasn't able to reproduce the regressions (with --groups
16) on my 7950X, presumably because it only has 8 cores / CCX.

> * tbench shows regressions on Zen3 with each client
>   configuration. tbench on Zen4 shows some improvements when the
>   system is overloaded.

Hmm, I also observed tbench not performing well with SHARED_RUNQ on my
Zen4 / 7950X, but only with heavy load. It also seems that sharding
helps a lot for tbench on Zen3, whereas Zen4 performs fine without it.
I'm having trouble reasoning about why Zen4 wouldn't require sharding
whereas Zen3 would, given that Zen4 has more cores per CCX.

Just to verify -- these benchmarks were run with boost disabled,
correct? Otherwise, there could be a lot of run-to-run variance
depending on thermal throttling.

> * netperf shows minor improvements on Zen3 when the system is under
>   low to moderate load. netperf regresses on Zen3 at high load, and at
>   all load-points on Zen4.

netperf in general seems to regress as the size of the LLC increases
due to it relentlessly hammering the runqueue, though it's still
surprising to me that your Zen4 test showed regressions under low /
moderate load as well. Was this with -t TCP_RR, or -t UDP_RR? I
observed SHARED_RUNQ improving performance on my 7950X for -t TCP_RR
as described on [0], so I'd be curious to better understand where the
slowdowns are coming from (presumably it's just contending on the
shard lock due to having a larger CCX?)

[0]: https://lore.kernel.org/all/20230615000103.GC2883716@maniforge/
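[Editorial note: to make the sharding being discussed above concrete,
a "shard" here is a small lock-protected FIFO, and sharding splits the
single per-LLC queue into several of them so that scheduler-heavy
workloads don't all serialize on one lock. The sketch below is
illustrative only -- the type names, layout, and the fixed-shard-size
mapping are assumptions for exposition, not the literal v2 patches.]

#include <linux/list.h>
#include <linux/spinlock.h>

/* One shard: a lock-protected FIFO of migratable tasks (illustrative). */
struct shared_runq_shard {
	struct list_head list;
	raw_spinlock_t lock;	/* the contended point under heavy wakeups */
} ____cacheline_aligned;

/* Per-LLC shared runqueue, split into num_shards shards. */
struct shared_runq {
	unsigned int num_shards;
	struct shared_runq_shard shards[];
} ____cacheline_aligned;

/*
 * Map an LLC-local CPU index to its shard. With a shard size of 6 (the
 * default mentioned in the legend below), CPUs 0-5 of the LLC share
 * one lock, CPUs 6-11 the next, and so on.
 */
static struct shared_runq_shard *
shared_runq_shard_of(struct shared_runq *runq, int llc_local_cpu,
		     int shard_size)
{
	unsigned int idx = (llc_local_cpu / shard_size) % runq->num_shards;

	return &runq->shards[idx];
}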
> * Stream, SPECjbb2015 and Mongodb show no significant difference compared
>   to the current tip.
>
> * With netperf and tbench, using the shared-runqueue during
>   enqueue_entity performs badly.

My reading of your Zen4 numbers on tbench seems to imply that it
actually performs well under heavy load. Copying here for convenience:

Zen4, 2 Sockets, 128 cores per socket, SMT2:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Clients: tip[pct imp](CV)      swqueue[pct imp](CV)     noshard[pct imp](CV)     shard_wakeup[pct imp](CV)  shard_all[pct imp](CV)
    1     1.00 [  0.00]( 0.19)     0.98 [ -1.72]( 0.19)     0.99 [ -1.15]( 0.28)     0.98 [ -1.79]( 0.28)     0.99 [ -1.49]( 0.10)
    2     1.00 [  0.00]( 0.63)     0.98 [ -2.28]( 0.63)     0.98 [ -1.91]( 0.26)     0.97 [ -3.14]( 0.25)     0.98 [ -1.77]( 0.32)
    4     1.00 [  0.00]( 0.22)     1.00 [  0.00]( 1.13)     0.99 [ -0.69]( 0.57)     0.98 [ -1.59]( 0.35)     0.99 [ -0.64]( 0.18)
    8     1.00 [  0.00]( 1.14)     0.99 [ -0.73]( 0.61)     0.98 [ -2.28]( 2.61)     0.97 [ -2.56]( 0.34)     0.98 [ -1.77]( 0.70)
   16     1.00 [  0.00]( 0.98)     0.97 [ -2.54]( 1.24)     0.98 [ -1.71]( 1.86)     0.98 [ -1.53]( 0.62)     0.96 [ -3.56]( 0.93)
   32     1.00 [  0.00]( 0.76)     0.98 [ -2.31]( 1.35)     0.98 [ -2.06]( 0.77)     0.96 [ -3.53]( 1.63)     0.88 [-11.72]( 2.77)
   64     1.00 [  0.00]( 0.96)     0.96 [ -4.45]( 3.53)     0.97 [ -3.44]( 1.53)     0.96 [ -3.52]( 0.89)     0.31 [-69.03]( 0.64)
  128     1.00 [  0.00]( 3.03)     0.95 [ -4.78]( 0.56)     0.98 [ -2.48]( 0.47)     0.92 [ -7.73]( 0.16)     0.20 [-79.75]( 0.24)
  256     1.00 [  0.00]( 0.04)     0.93 [ -7.21]( 1.00)     0.94 [ -5.90]( 0.63)     0.59 [-41.29]( 1.76)     0.16 [-83.71]( 0.07)
  512     1.00 [  0.00]( 3.08)     1.07 [  7.07](17.78)     1.15 [ 15.49]( 2.65)     0.82 [-17.53](29.11)     0.93 [ -7.18](32.23)
 1024     1.00 [  0.00]( 0.60)     1.16 [ 15.61]( 0.07)     1.16 [ 15.92]( 0.06)     1.12 [ 11.57]( 1.86)     1.12 [ 11.97]( 0.21)
 2048     1.00 [  0.00]( 0.16)     1.15 [ 14.62]( 0.90)     1.15 [ 15.20]( 0.29)     1.08 [  7.64]( 1.44)     1.15 [ 14.57]( 0.23)

I'm also struggling to come up with an explanation for why Zen4 would
operate well with SHARED_RUNQ under heavy load. Do you have a theory?

> Server configurations used:
>
> AMD Zen3 Server:
> * 2 sockets,
> * 64 cores per socket,
> * SMT2 enabled
> * Total of 256 threads.
> * Configured in Nodes-Per-Socket(NPS)=1
>
> AMD Zen4 Server:
> * 2 sockets,
> * 128 cores per socket,
> * SMT2 enabled
> * Total of 512 threads.
> * Configured in Nodes-Per-Socket(NPS)=1
>
> The trends on NPS=2 and NPS=4 are similar. So I am not posting those.
>
> Legend:
> tip          : Tip kernel with top commit ebb83d84e49b
>                ("sched/core: Avoid multiple calling update_rq_clock() in __cfsb_csd_unthrottle()")
>
> swqueue_v1   : Your v1 patches applied on top of the aforementioned tip commit.
>
> noshard      : shared-runqueue v2 patches 1-5. This uses a shared-runqueue
>                during wakeup. No sharding.
>
> shard_wakeup : shared-runqueue v2 patches 1-6. This uses a
>                shared-runqueue during wakeup and has shards with
>                shard size = 6 (default)
>
> shard_all    : v2 patches 1-7. This uses a sharded shared-runqueue during
>                enqueue_entity

So, what's your overall impression from these numbers? My general
impression so far is the following:

- SHARED_RUNQ works best when the system would otherwise be
  under-utilized. If the system is going to be overloaded, it's
  unlikely to provide a significant benefit over CFS, and may even just
  add overhead with no benefit (or just cause worse cache locality).

- SHARED_RUNQ isn't well-suited to workloads such as netperf which
  pummel the scheduler.
  Sharding helps a lot here, but doesn't completely fix the problem
  depending on how aggressively tasks are hammering the runqueue.

- To the point above, using SHARED_RUNQ in __enqueue_entity() /
  __dequeue_entity(), rather than just on the wakeup path, is a net
  positive. Workloads which hammer the runq, such as netperf or
  schbench -L -m 52 -p 512 -r 10 -t 1, will do poorly in both
  scenarios, so we may as well get the better work conservation from
  __enqueue_entity() / __dequeue_entity(). hackbench is one example of
  a workload that benefits from this, another is kernel compile, and I
  strongly suspect that HHVM would similarly benefit.

- Sharding in general doesn't seem to regress performance by much when
  it wouldn't have otherwise been necessary to avoid contention.
  hackbench is better without sharding on Zen3, but it's also better
  with shard_all on Zen4. In general, if our goal is to better support
  hosts with large CCXs, I think we'll just need to support sharding.

Thoughts?

I have the v3 version of the patch set which properly supports domain
recreation and hotplug, but I still need to get updated benchmark
numbers on it, as well as benchmark spreading a shared_runq over
multiple CCXs, per Peter's comment in [1] about the initial motivation
behind SIS_NODE also applying to SHARED_RUNQ.

[1]: https://lore.kernel.org/all/20230711114207.GK3062772@hirez.programming.kicks-ass.net/

Given the points above, I would ideally like to just run the shard_all
variant and compare that to the numbers I collected on v2 and shared
in [2]. What do you think? There will be tradeoffs no matter what we
choose to do, but enqueuing / dequeuing in __enqueue_entity() /
__dequeue_entity() seems to perform the best for workloads that don't
hammer the runqueue, and sharding seems like a given if we do decide
to do that.

[2]: https://lore.kernel.org/all/20230710200342.358255-1-void@manifault.com/

Thanks,
David
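[Editorial note: for concreteness, the shard_all variant discussed
above amounts to hooking the shared runqueue into the enqueue/dequeue
path and popping from it in newidle_balance(). The sketch below builds
on the illustrative structures shown earlier in the thread; the helper
names and the task_struct list field are assumptions rather than the
literal v2 patches, and the cpumask test stands in for the full
is_cpu_allowed() check named in the changelog.]

#include <linux/cpumask.h>
#include <linux/sched.h>

/* Illustrative enqueue hook, called alongside __enqueue_entity(). */
static void shared_runq_enqueue_task(struct shared_runq_shard *shard,
				     struct task_struct *p)
{
	raw_spin_lock(&shard->lock);
	/* shared_runq_node is an assumed task_struct field. */
	list_add_tail(&p->shared_runq_node, &shard->list);
	raw_spin_unlock(&shard->lock);
}

/*
 * Illustrative pull, called from newidle_balance() before a full load
 * balance pass: pop the first task that dst_cpu is allowed to run.
 */
static struct task_struct *
shared_runq_pull_task(struct shared_runq_shard *shard, int dst_cpu)
{
	struct task_struct *p, *ret = NULL;

	raw_spin_lock(&shard->lock);
	list_for_each_entry(p, &shard->list, shared_runq_node) {
		if (cpumask_test_cpu(dst_cpu, p->cpus_ptr)) {
			list_del_init(&p->shared_runq_node);
			ret = p;
			break;
		}
	}
	raw_spin_unlock(&shard->lock);

	/* The caller would then migrate ret via move_queued_task(). */
	return ret;
}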
Hello David,

On Tue, Jul 25, 2023 at 03:22:55PM -0500, David Vernet wrote:
> On Fri, Jul 21, 2023 at 02:42:57PM +0530, Gautham R. Shenoy wrote:
> > Hello David,
>
> Hello Gautham,
>
> Thank you for taking the time to run these benchmarks. Apologies for
> the delayed response -- I've been traveling this week.

No issues. As you can see, there has been a delay from my end as well.

> > On Mon, Jul 10, 2023 at 03:03:35PM -0500, David Vernet wrote:
> > > Changes
> > > -------
> > >
> > > This is v2 of the shared wakequeue (now called shared runqueue)
> > > patchset. The following are changes from the RFC v1 patchset
> > > (https://lore.kernel.org/lkml/20230613052004.2836135-1-void@manifault.com/).
> > >
> > > v1 -> v2 changes:
> > > - Change name from swqueue to shared_runq (Peter)
> > >
> > > - Sharded per-LLC shared runqueues to avoid contention on
> > >   scheduler-heavy workloads (Peter)
> > >
> > > - Pull tasks from the shared_runq in newidle_balance() rather than in
> > >   pick_next_task_fair() (Peter and Vincent)
> > >
> > > - Rename a few functions to reflect their actual purpose. For example,
> > >   shared_runq_dequeue_task() instead of swqueue_remove_task() (Peter)
> > >
> > > - Expose move_queued_task() from core.c rather than migrate_task_to()
> > >   (Peter)
> > >
> > > - Properly check is_cpu_allowed() when pulling a task from a shared_runq
> > >   to ensure it can actually be migrated (Peter and Gautham)
> > >
> > > - Dropped RFC tag
> > >
> > > This patch set is based off of commit ebb83d84e49b ("sched/core: Avoid
> > > multiple calling update_rq_clock() in __cfsb_csd_unthrottle()") on the
> > > sched/core branch of tip.git.
> >
> > I have evaluated this v2 patchset on AMD Zen3 and Zen4 servers.
> >
> > tldr:
> >
> > * We see non-trivial improvements on hackbench on both Zen3 and Zen4
> >   until the system is super-overloaded, at which point we see
> >   regressions.
>
> This makes sense to me. SHARED_RUNQ is more likely to help performance
> when the system is not over-utilized, as it has more of a chance to
> actually increase work conservation. If the system is over-utilized,
> it's likely that a core will be able to find a task regardless of
> whether it looks at the shared runq.
>
> That said, I wasn't able to reproduce the regressions (with --groups
> 16) on my 7950X, presumably because it only has 8 cores / CCX.
>
> > * tbench shows regressions on Zen3 with each client
> >   configuration. tbench on Zen4 shows some improvements when the
> >   system is overloaded.
>
> Hmm, I also observed tbench not performing well with SHARED_RUNQ on my
> Zen4 / 7950X, but only with heavy load. It also seems that sharding
> helps a lot for tbench on Zen3, whereas Zen4 performs fine without it.
> I'm having trouble reasoning about why Zen4 wouldn't require sharding
> whereas Zen3 would, given that Zen4 has more cores per CCX.

Yes, I have been thinking about it as well. Both the Zen3 (Milan) and
Zen4 (Bergamo) servers that I ran these tests on have 8 cores per
CCX. Bergamo has 2 CCXes per CCD, while Milan has 1 CCX per CCD. We
don't model the CCD in the sched-domain hierarchy currently, so from
the point of view of the LLC domain (which is the CCX), the number of
cores per LLC is identical on the two systems.

> Just to verify -- these benchmarks were run with boost disabled,
> correct? Otherwise, there could be a lot of run-to-run variance
> depending on thermal throttling.
Checking my scripts, these benchmarks were run with C2 disabled, the
performance governor enabled, and acpi-cpufreq as the scaling driver.
Boost was enabled, so yes, there could be run-to-run variance. I can
rerun them this weekend with boost disabled.

I also need to understand the overloaded cases of tbench and netperf
where the shared-runq is performing better.

> > * netperf shows minor improvements on Zen3 when the system is under
> >   low to moderate load. netperf regresses on Zen3 at high load, and at
> >   all load-points on Zen4.
>
> netperf in general seems to regress as the size of the LLC increases
> due to it relentlessly hammering the runqueue, though it's still
> surprising to me that your Zen4 test showed regressions under low /
> moderate load as well. Was this with -t TCP_RR, or -t UDP_RR? I
> observed SHARED_RUNQ improving performance on my 7950X for -t TCP_RR
> as described on [0], so I'd be curious to better understand where the
> slowdowns are coming from (presumably it's just contending on the
> shard lock due to having a larger CCX?)

I ran netperf with TCP_RR with the server running on localhost. The
exact command is:

netperf -H 127.0.0.1 -t TCP_RR -l 100 -- -r 100 \
  -k REQUEST_SIZE,RESPONSE_SIZE,ELAPSED_TIME,THROUGHPUT,THROUGHPUT_UNITS,MIN_LATENCY,MEAN_LATENCY,P50_LATENCY,P90_LATENCY,P99_LATENCY,MAX_LATENCY,STDDEV_LATENCY

I am yet to debug why we are seeing a performance drop in the
low-utilization cases.

> [0]: https://lore.kernel.org/all/20230615000103.GC2883716@maniforge/
>
> > * Stream, SPECjbb2015 and Mongodb show no significant difference compared
> >   to the current tip.
> >
> > * With netperf and tbench, using the shared-runqueue during
> >   enqueue_entity performs badly.
>
> My reading of your Zen4 numbers on tbench seems to imply that it
> actually performs well under heavy load. Copying here for convenience:
>
> Zen4, 2 Sockets, 128 cores per socket, SMT2:
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Clients: tip[pct imp](CV)      swqueue[pct imp](CV)     noshard[pct imp](CV)     shard_wakeup[pct imp](CV)  shard_all[pct imp](CV)
>     1     1.00 [  0.00]( 0.19)     0.98 [ -1.72]( 0.19)     0.99 [ -1.15]( 0.28)     0.98 [ -1.79]( 0.28)     0.99 [ -1.49]( 0.10)
>     2     1.00 [  0.00]( 0.63)     0.98 [ -2.28]( 0.63)     0.98 [ -1.91]( 0.26)     0.97 [ -3.14]( 0.25)     0.98 [ -1.77]( 0.32)
>     4     1.00 [  0.00]( 0.22)     1.00 [  0.00]( 1.13)     0.99 [ -0.69]( 0.57)     0.98 [ -1.59]( 0.35)     0.99 [ -0.64]( 0.18)
>     8     1.00 [  0.00]( 1.14)     0.99 [ -0.73]( 0.61)     0.98 [ -2.28]( 2.61)     0.97 [ -2.56]( 0.34)     0.98 [ -1.77]( 0.70)
>    16     1.00 [  0.00]( 0.98)     0.97 [ -2.54]( 1.24)     0.98 [ -1.71]( 1.86)     0.98 [ -1.53]( 0.62)     0.96 [ -3.56]( 0.93)
>    32     1.00 [  0.00]( 0.76)     0.98 [ -2.31]( 1.35)     0.98 [ -2.06]( 0.77)     0.96 [ -3.53]( 1.63)     0.88 [-11.72]( 2.77)
>    64     1.00 [  0.00]( 0.96)     0.96 [ -4.45]( 3.53)     0.97 [ -3.44]( 1.53)     0.96 [ -3.52]( 0.89)     0.31 [-69.03]( 0.64)
>   128     1.00 [  0.00]( 3.03)     0.95 [ -4.78]( 0.56)     0.98 [ -2.48]( 0.47)     0.92 [ -7.73]( 0.16)     0.20 [-79.75]( 0.24)
>   256     1.00 [  0.00]( 0.04)     0.93 [ -7.21]( 1.00)     0.94 [ -5.90]( 0.63)     0.59 [-41.29]( 1.76)     0.16 [-83.71]( 0.07)
>   512     1.00 [  0.00]( 3.08)     1.07 [  7.07](17.78)     1.15 [ 15.49]( 2.65)     0.82 [-17.53](29.11)     0.93 [ -7.18](32.23)
>  1024     1.00 [  0.00]( 0.60)     1.16 [ 15.61]( 0.07)     1.16 [ 15.92]( 0.06)     1.12 [ 11.57]( 1.86)     1.12 [ 11.97]( 0.21)
>  2048     1.00 [  0.00]( 0.16)     1.15 [ 14.62]( 0.90)     1.15 [ 15.20]( 0.29)     1.08 [  7.64]( 1.44)     1.15 [ 14.57]( 0.23)
>
> I'm also struggling to come up with an explanation for why Zen4 would
> operate well with SHARED_RUNQ under heavy load. Do you have a theory?

Yes.
So, my theory is that with SHARED_RUNQ, we delay entering the idle
state because we check the shared runqueue, acquiring a lock in the
process. So perhaps this is helping in an unintended manner. I want to
rerun those parts while collecting the idle statistics.

> > Server configurations used:
> >
> > AMD Zen3 Server:
> > * 2 sockets,
> > * 64 cores per socket,
> > * SMT2 enabled
> > * Total of 256 threads.
> > * Configured in Nodes-Per-Socket(NPS)=1
> >
> > AMD Zen4 Server:
> > * 2 sockets,
> > * 128 cores per socket,
> > * SMT2 enabled
> > * Total of 512 threads.
> > * Configured in Nodes-Per-Socket(NPS)=1
> >
> > The trends on NPS=2 and NPS=4 are similar. So I am not posting those.
> >
> > Legend:
> > tip          : Tip kernel with top commit ebb83d84e49b
> >                ("sched/core: Avoid multiple calling update_rq_clock() in __cfsb_csd_unthrottle()")
> >
> > swqueue_v1   : Your v1 patches applied on top of the aforementioned tip commit.
> >
> > noshard      : shared-runqueue v2 patches 1-5. This uses a shared-runqueue
> >                during wakeup. No sharding.
> >
> > shard_wakeup : shared-runqueue v2 patches 1-6. This uses a
> >                shared-runqueue during wakeup and has shards with
> >                shard size = 6 (default)
> >
> > shard_all    : v2 patches 1-7. This uses a sharded shared-runqueue during
> >                enqueue_entity
>
> So, what's your overall impression from these numbers? My general
> impression so far is the following:
>
> - SHARED_RUNQ works best when the system would otherwise be
>   under-utilized. If the system is going to be overloaded, it's
>   unlikely to provide a significant benefit over CFS, and may even just
>   add overhead with no benefit (or just cause worse cache locality).

I agree with you here. The only thing that saw a consistent benefit
was hackbench under moderate load.

> - SHARED_RUNQ isn't well-suited to workloads such as netperf which
>   pummel the scheduler. Sharding helps a lot here, but doesn't
>   completely fix the problem depending on how aggressively tasks are
>   hammering the runqueue.

Yeah. Even with sharding (which I would assume would definitely help
on platforms with a larger LLC domain), each idle entry results in a
search of the shared runqueue, and the probability of finding
something there is very low if the workload runs for a very short
duration.

> - To the point above, using SHARED_RUNQ in __enqueue_entity() /
>   __dequeue_entity(), rather than just on the wakeup path, is a net
>   positive. Workloads which hammer the runq, such as netperf or
>   schbench -L -m 52 -p 512 -r 10 -t 1, will do poorly in both
>   scenarios, so we may as well get the better work conservation from
>   __enqueue_entity() / __dequeue_entity(). hackbench is one example of
>   a workload that benefits from this, another is kernel compile, and I
>   strongly suspect that HHVM would similarly benefit.

Well, the magnitude of the performance degradation is much higher for
tbench and netperf when shared_runq is used in the
__enqueue_entity()/__dequeue_entity() path. So it is very workload
dependent. I would like to try out a variant that uses shared_runq in
the __enqueue_entity()/__dequeue_entity() path, but without sharding,
just to see if it makes any difference.

> - Sharding in general doesn't seem to regress performance by much when
>   it wouldn't have otherwise been necessary to avoid contention.
>   hackbench is better without sharding on Zen3, but it's also better
>   with shard_all on Zen4. In general, if our goal is to better support
>   hosts with large CCXs, I think we'll just need to support sharding.
I think the shard size needs to be determined as a function of the LLC
size, or the arch-specific code should pick a size that suits a
particular generation. At least on the Zen3 and Zen4 servers with 8
cores per LLC domain, creating shards was not providing any additional
benefit. (A rough sketch of sizing the shards by LLC span follows
below, after this mail.)

> Thoughts?
>
> I have the v3 version of the patch set which properly supports domain
> recreation and hotplug, but I still need to get updated benchmark
> numbers on it, as well as benchmark spreading a shared_runq over
> multiple CCXs, per Peter's comment in [1] about the initial motivation
> behind SIS_NODE also applying to SHARED_RUNQ.
>
> [1]: https://lore.kernel.org/all/20230711114207.GK3062772@hirez.programming.kicks-ass.net/

Based on the various flavors of SIS_NODE that we have experimented
with on the EPYC servers, it seems to work very well when the
probability of finding an idle core/cpu in the wider sched-domain is
higher. In that case, the extra time spent searching for that idle
core/cpu is justified by the fact that the task gets to run
immediately. However, as the utilization on the system increases, we
are less likely to find an idle core/cpu, and the additional time
spent searching does show up as a regression. What we need is a way
to limit the downside in the latter case without losing the upside
that we see in the low-to-moderate utilization cases.

> Given the points above, I would ideally like to just run the shard_all
> variant and compare that to the numbers I collected on v2 and shared
> in [2]. What do you think?

Would that be a fair comparison? SIS_NODE only does a wider search
during wakeups, while shard_all would add a task to the shared_runq
even during a regular enqueue.

> There will be tradeoffs no matter what we choose
> to do, but enqueuing / dequeuing in __enqueue_entity() /
> __dequeue_entity() seems to perform the best for workloads that don't
> hammer the runqueue, and sharding seems like a given if we do decide
> to do that.

Or we could see if we can avoid using shared_runq/SIS_NODE when the
probability of reducing the scheduling latency and improving
utilization is low. In such cases, the default scheduling strategy
should just work fine. However, I don't know of any clean way to
detect such a situation. That quest is still on :-)

> [2]: https://lore.kernel.org/all/20230710200342.358255-1-void@manifault.com/
>
> Thanks,
> David

--
Thanks and Regards
gautham.
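[Editorial note: the sketch referenced above. This is one hedged way
to realize the shard-size suggestion -- deriving the shard count from
the LLC span instead of hardcoding a shard size of 6. The helper name
and the cutoff are illustrative assumptions, not part of the posted
patches.]

#include <linux/kernel.h>	/* DIV_ROUND_UP() */

/*
 * Illustrative only: pick the number of shards for an LLC based on how
 * many CPUs it spans. Small LLCs (e.g. the 8-core CCXes of the Zen3 /
 * Zen4 servers above, i.e. 16 CPUs with SMT2) get a single queue,
 * since sharding showed no benefit there; larger LLCs get roughly one
 * shard per 6 CPUs, matching the v2 default shard size.
 */
static unsigned int shared_runq_nr_shards(unsigned int llc_nr_cpus)
{
	if (llc_nr_cpus <= 16)
		return 1;

	return DIV_ROUND_UP(llc_nr_cpus, 6);
}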