[OpenMP,nvptx] Use bar.sync/arrive for barriers when tasking is not used
Commit Message
Hi,
our work on the SPEChpc2021 benchmarks shows that, after the fix for PR99555 was committed:
[libgomp, nvptx] Fix hang in gomp_team_barrier_wait_end
https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=5ed77fb3ed1ee0289a0ec9499ef52b99b39421f1
while that patch fixed the hang, the new barrier code caused quite severe performance regressions.
Under OpenMP target offload mode, Minisweep regressed by about 350%, while HPGMG-FV was
about 2x slower.
The problem is presumably the new barriers, which replaced erroneous but fast bar.sync
instructions with correct but really heavy-weight futex_wait/wake operations on the GPU.
This is probably required to preserve correct task vs. barrier behavior.
However, the observation is that when task-related functionality is not used at all by
the team inside an OpenMP target region, a barrier is just a place to wait for all
threads to rejoin (there are no waiting tasks to restart), and such a barrier can be
implemented with simple bar.sync and bar.arrive PTX instructions. That should recover
most of the performance in the cases that usually matter, e.g. 'omp parallel for' inside
'omp target'.
The plan is therefore to mark the cases where tasks are never used. This patch adds a
'task_never_used' flag inside struct gomp_team, initialized to true and set to false when
tasks are added to the team. The nvptx-specific gomp_team_barrier_wait_end routines can
then use a simple barrier when team->task_never_used is still true at the barrier.
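As a rough sketch of the intended fast path (illustrative only; the real code is in patch
2/2, which is not shown below, and the helper name and exact PTX sequence here are just
assumptions, with the generation bookkeeping omitted for brevity):

/* Sketch only, not the patch 2/2 code.  When no task has ever been created
   in the team, the team barrier only needs every thread to wait for the
   others, which a single bar.sync across the team provides.  Uses the
   libgomp types/macros from bar.h and libgomp.h.  */
static inline bool
nvptx_team_barrier_fast_path (gomp_barrier_t *bar, gomp_barrier_state_t state,
                              struct gomp_team *team)
{
  if (!team->task_never_used)
    return false;                 /* Tasks exist: use the futex-based path.  */

  if (__builtin_expect (state & BAR_WAS_LAST, 0))
    bar->awaited = bar->total;    /* Reset the count for the next barrier.  */

  /* Synchronize all threads of the team on logical barrier 1
     (32 threads per warp).  */
  asm volatile ("bar.sync 1, %0;" : : "r" (32 * bar->total) : "memory");
  return true;
}

gomp_team_barrier_wait_end would try this first and only fall through to the futex-based
code when it returns false; bar.arrive would presumably cover the places where a thread
only needs to signal arrival without waiting.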
Some other cases, like the master/masked and single constructs, also need task_never_used
set to false: these constructs inherently create asymmetric loads where only a subset of
threads runs through the region (which may or may not use tasking), so different threads
could otherwise arrive at the end assuming different task_never_used states. For
correctness, these constructs must conservatively mark team->task_never_used false at the
start of the construct.
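A small made-up example of the situation this guards against (do_work () is just a
placeholder):

/* Only the thread executing the single region sees the task being created;
   the other threads may already be waiting at the implicit barrier at the
   end of 'single', having observed task_never_used == true.  Without
   conservatively clearing the flag at the start of the construct, some
   threads could take the bar.sync fast path while others expect the
   task-aware futex barrier.  */
extern void do_work (void);

void
example (void)
{
  #pragma omp parallel
  {
    #pragma omp single
    {
      #pragma omp task
      do_work ();
    }
    /* Implicit barrier here: all threads must agree on which barrier
       flavour is in use.  */
  }
}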
This patch has been divided into two parts: the first inlines the contents of
config/linux/bar.c into config/nvptx/bar.c (instead of using an #include), which is needed
because some parts of gomp_team_barrier_wait_[cancel_]end now need nvptx-specific
adjustments. The second contains the changes described above.
Tested on powerpc64le-linux and x86_64-linux with nvptx offloading, seeking approval for trunk.
Thanks,
Chung-Lin
From c2fdc31880d2d040822e8abece015c29a6d7b472 Mon Sep 17 00:00:00 2001
From: Chung-Lin Tang <cltang@codesourcery.com>
Date: Thu, 1 Sep 2022 05:53:49 -0700
Subject: [PATCH 1/2] libgomp: inline config/linux/bar.c into
config/nvptx/bar.c
In preparation for adding nvptx-specific modifications to gomp_team_barrier_wait_end
et al., change config/nvptx/bar.c from #including config/linux/bar.c to carrying a full
copy of the implementation.
2022-09-01 Chung-Lin Tang <cltang@codesourcery.com>
libgomp/ChangeLog:
	* config/nvptx/bar.c: Replace include of "../linux/bar.c" with an
	inlined copy of its contents.
---
libgomp/config/nvptx/bar.c | 183 ++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 180 insertions(+), 3 deletions(-)
Comments
On Thu, Sep 01, 2022 at 11:39:42PM +0800, Chung-Lin Tang wrote:
> our work on the SPEChpc2021 benchmarks shows that, after the fix for PR99555 was committed:
> [libgomp, nvptx] Fix hang in gomp_team_barrier_wait_end
> https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=5ed77fb3ed1ee0289a0ec9499ef52b99b39421f1
>
> while that patch fixed the hang, the new barrier code caused quite severe performance
> regressions. Under OpenMP target offload mode, Minisweep regressed by about 350%, while
> HPGMG-FV was about 2x slower.
>
> The problem is presumably the new barriers, which replaced erroneous but fast bar.sync
> instructions with correct but really heavy-weight futex_wait/wake operations on the GPU.
> This is probably required to preserve correct task vs. barrier behavior.
>
> However, the observation is that when task-related functionality is not used at all by
> the team inside an OpenMP target region, a barrier is just a place to wait for all
> threads to rejoin (there are no waiting tasks to restart), and such a barrier can be
> implemented with simple bar.sync and bar.arrive PTX instructions. That should recover
> most of the performance in the cases that usually matter, e.g. 'omp parallel for' inside
> 'omp target'.
>
> The plan is therefore to mark the cases where tasks are never used. This patch adds a
> 'task_never_used' flag inside struct gomp_team, initialized to true and set to false
> when tasks are added to the team. The nvptx-specific gomp_team_barrier_wait_end routines
> can then use a simple barrier when team->task_never_used is still true at the barrier.
I'll defer the nvptx-specific changes to Tom because I'm not familiar enough
with NVPTX. But I'll certainly object to any changes for this outside
of nvptx. We don't need or want the task_never_used field and its
maintenance, nor the GOMP_task_set_used entrypoint, in host libgomp.so nor for
NVPTX.
As you use it for many other constructs (master/masked/critical/single -
does omp_set_lock etc. count too? only one thread acquires the lock, others
don't), it looks very much misnamed; perhaps it would be better to talk about
thread divergence, or whatever the PTX term for it is.
Anyway, there is no point in tracking all of this on the host or for amdgcn or
Xeon Phi offloading; nothing will ever use that info, so it is just wasted
memory and CPU cycles.
I don't understand how it can safely work, because if it needs to fall back
to the fixed behavior for master or single, why isn't a user just writing
if (omp_get_thread_num () == 0)
  {
    // whatever
  }
etc. problematic too?
If it can for some reason work safely, then instead of adding
GOMP_task_set_used calls, add some ifn call that is after IPA folded to
nothing everywhere but on NVPTX and only have those calls on NVPTX; in the
library, add some macros for the team->task_never_used tweaks, defined to
nothing except for config/nvptx/*.h, and limit the changes to the PTX libgomp.a
then.
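E.g. something along these lines (the macro name gomp_note_task_used is made up):

/* In config/nvptx/*.h: record that tasking has been used in this team.  */
#define gomp_note_task_used(team) \
  do { (team)->task_never_used = false; } while (0)

/* Generic fallback (e.g. in libgomp.h): expands to nothing, so the host and
   other offload targets track nothing at all.  */
#ifndef gomp_note_task_used
# define gomp_note_task_used(team) do {} while (0)
#endif

/* GOMP_task etc. would then just call gomp_note_task_used (team) instead of
   a new GOMP_task_set_used entry point exported from libgomp.so, and the
   compiler would emit an internal fn call that is folded away after IPA
   everywhere except for nvptx.  */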
But I'm afraid a lot of code creates some asymmetric loads, even just a
work-sharing loop: if the number of iterations isn't divisible by the number of
threads, some threads do less work than others, or with dynamic etc. schedules,
...
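E.g. even a plain static schedule already gives threads unequal amounts of work
(work () below is just a placeholder for the loop body):

extern void work (int);

void
foo (void)
{
  /* With 4 threads and the static schedule, two threads get 3 iterations
     and the other two get only 2, so the threads already diverge before
     the implicit barrier at the end of the loop.  */
  #pragma omp parallel for num_threads (4) schedule (static)
  for (int i = 0; i < 10; i++)
    work (i);
}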
Jakub
@@ -161,6 +161,183 @@ static inline void do_wait (int *addr, int val)
     futex_wait (addr, val);
 }
-/* Reuse the linux implementation. */
-#define GOMP_WAIT_H 1
-#include "../linux/bar.c"
+/* Below is based on the linux implementation. */
+
+void
+gomp_barrier_wait_end (gomp_barrier_t *bar, gomp_barrier_state_t state)
+{
+  if (__builtin_expect (state & BAR_WAS_LAST, 0))
+    {
+      /* Next time we'll be awaiting TOTAL threads again. */
+      bar->awaited = bar->total;
+      __atomic_store_n (&bar->generation, bar->generation + BAR_INCR,
+                        MEMMODEL_RELEASE);
+      futex_wake ((int *) &bar->generation, INT_MAX);
+    }
+  else
+    {
+      do
+        do_wait ((int *) &bar->generation, state);
+      while (__atomic_load_n (&bar->generation, MEMMODEL_ACQUIRE) == state);
+    }
+}
+
+void
+gomp_barrier_wait (gomp_barrier_t *bar)
+{
+  gomp_barrier_wait_end (bar, gomp_barrier_wait_start (bar));
+}
+
+/* Like gomp_barrier_wait, except that if the encountering thread
+   is not the last one to hit the barrier, it returns immediately.
+   The intended usage is that a thread which intends to gomp_barrier_destroy
+   this barrier calls gomp_barrier_wait, while all other threads
+   call gomp_barrier_wait_last. When gomp_barrier_wait returns,
+   the barrier can be safely destroyed. */
+
+void
+gomp_barrier_wait_last (gomp_barrier_t *bar)
+{
+  gomp_barrier_state_t state = gomp_barrier_wait_start (bar);
+  if (state & BAR_WAS_LAST)
+    gomp_barrier_wait_end (bar, state);
+}
+
+void
+gomp_team_barrier_wake (gomp_barrier_t *bar, int count)
+{
+  futex_wake ((int *) &bar->generation, count == 0 ? INT_MAX : count);
+}
+
+void
+gomp_team_barrier_wait_end (gomp_barrier_t *bar, gomp_barrier_state_t state)
+{
+  unsigned int generation, gen;
+
+  if (__builtin_expect (state & BAR_WAS_LAST, 0))
+    {
+      /* Next time we'll be awaiting TOTAL threads again. */
+      struct gomp_thread *thr = gomp_thread ();
+      struct gomp_team *team = thr->ts.team;
+
+      bar->awaited = bar->total;
+      team->work_share_cancelled = 0;
+      if (__builtin_expect (team->task_count, 0))
+        {
+          gomp_barrier_handle_tasks (state);
+          state &= ~BAR_WAS_LAST;
+        }
+      else
+        {
+          state &= ~BAR_CANCELLED;
+          state += BAR_INCR - BAR_WAS_LAST;
+          __atomic_store_n (&bar->generation, state, MEMMODEL_RELEASE);
+          futex_wake ((int *) &bar->generation, INT_MAX);
+          return;
+        }
+    }
+
+  generation = state;
+  state &= ~BAR_CANCELLED;
+  do
+    {
+      do_wait ((int *) &bar->generation, generation);
+      gen = __atomic_load_n (&bar->generation, MEMMODEL_ACQUIRE);
+      if (__builtin_expect (gen & BAR_TASK_PENDING, 0))
+        {
+          gomp_barrier_handle_tasks (state);
+          gen = __atomic_load_n (&bar->generation, MEMMODEL_ACQUIRE);
+        }
+      generation |= gen & BAR_WAITING_FOR_TASK;
+    }
+  while (gen != state + BAR_INCR);
+}
+
+void
+gomp_team_barrier_wait (gomp_barrier_t *bar)
+{
+  gomp_team_barrier_wait_end (bar, gomp_barrier_wait_start (bar));
+}
+
+void
+gomp_team_barrier_wait_final (gomp_barrier_t *bar)
+{
+  gomp_barrier_state_t state = gomp_barrier_wait_final_start (bar);
+  if (__builtin_expect (state & BAR_WAS_LAST, 0))
+    bar->awaited_final = bar->total;
+  gomp_team_barrier_wait_end (bar, state);
+}
+
+bool
+gomp_team_barrier_wait_cancel_end (gomp_barrier_t *bar,
+                                   gomp_barrier_state_t state)
+{
+  unsigned int generation, gen;
+
+  if (__builtin_expect (state & BAR_WAS_LAST, 0))
+    {
+      /* Next time we'll be awaiting TOTAL threads again. */
+      /* BAR_CANCELLED should never be set in state here, because
+         cancellation means that at least one of the threads has been
+         cancelled, thus on a cancellable barrier we should never see
+         all threads to arrive. */
+      struct gomp_thread *thr = gomp_thread ();
+      struct gomp_team *team = thr->ts.team;
+
+      bar->awaited = bar->total;
+      team->work_share_cancelled = 0;
+      if (__builtin_expect (team->task_count, 0))
+        {
+          gomp_barrier_handle_tasks (state);
+          state &= ~BAR_WAS_LAST;
+        }
+      else
+        {
+          state += BAR_INCR - BAR_WAS_LAST;
+          __atomic_store_n (&bar->generation, state, MEMMODEL_RELEASE);
+          futex_wake ((int *) &bar->generation, INT_MAX);
+          return false;
+        }
+    }
+
+  if (__builtin_expect (state & BAR_CANCELLED, 0))
+    return true;
+
+  generation = state;
+  do
+    {
+      do_wait ((int *) &bar->generation, generation);
+      gen = __atomic_load_n (&bar->generation, MEMMODEL_ACQUIRE);
+      if (__builtin_expect (gen & BAR_CANCELLED, 0))
+        return true;
+      if (__builtin_expect (gen & BAR_TASK_PENDING, 0))
+        {
+          gomp_barrier_handle_tasks (state);
+          gen = __atomic_load_n (&bar->generation, MEMMODEL_ACQUIRE);
+        }
+      generation |= gen & BAR_WAITING_FOR_TASK;
+    }
+  while (gen != state + BAR_INCR);
+
+  return false;
+}
+
+bool
+gomp_team_barrier_wait_cancel (gomp_barrier_t *bar)
+{
+  return gomp_team_barrier_wait_cancel_end (bar, gomp_barrier_wait_start (bar));
+}
+
+void
+gomp_team_barrier_cancel (struct gomp_team *team)
+{
+  gomp_mutex_lock (&team->task_lock);
+  if (team->barrier.generation & BAR_CANCELLED)
+    {
+      gomp_mutex_unlock (&team->task_lock);
+      return;
+    }
+  team->barrier.generation |= BAR_CANCELLED;
+  gomp_mutex_unlock (&team->task_lock);
+  futex_wake ((int *) &team->barrier.generation, INT_MAX);
+}