RISC-V: Optimize a special case of VLA SLP
Commit Message
While working on bug fixes for zvl1024b, I noticed a special VLA SLP case
that can be optimized better:
v = vec_perm (op1, op2, { nunits - 1, nunits, nunits + 1, ... })
Before this patch, we use the generic approach (vrgather):
vid
vadd.vx
vrgather
vmsgeu
vrgather
With this patch, we use vec_extract + slide1up:
scalar = vec_extract (last element of op1)
v = slide1up (op2, scalar)
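For reference (my own illustration, not part of the patch), a minimal scalar
model, assuming a fixed nunits of 4, of why the permute and the
vec_extract + slide1up sequence produce the same vector:

/* Illustration only: scalar model with a fixed nunits of 4.
   vec_perm (op1, op2, { 3, 4, 5, 6 }) selects op1[3] followed by
   op2[0..2], which is exactly op2 slid up by one position with op1's
   last element inserted at element 0.  */
#include <assert.h>

#define N 4 /* stands in for nunits */

static void
perm_reference (const int *op1, const int *op2, int *out)
{
  /* vec_perm with selector { N-1, N, N+1, ... }: an index < N picks
     from op1, an index >= N picks op2[index - N].  */
  for (int i = 0; i < N; i++)
    {
      int sel = N - 1 + i;
      out[i] = sel < N ? op1[sel] : op2[sel - N];
    }
}

static void
extract_and_slide1up (const int *op1, const int *op2, int *out)
{
  int scalar = op1[N - 1];    /* vec_extract (op1, nunits - 1) */
  out[0] = scalar;            /* slide1up inserts the scalar at element 0 */
  for (int i = 1; i < N; i++)
    out[i] = op2[i - 1];      /* and shifts op2 up by one element */
}

int
main (void)
{
  int op1[N] = { 1, 2, 3, 4 }, op2[N] = { 5, 6, 7, 8 };
  int a[N], b[N];
  perm_reference (op1, op2, a);
  extract_and_slide1up (op1, op2, b);
  for (int i = 0; i < N; i++)
    assert (a[i] == b[i]);
  return 0;
}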
I am going to run testing on zvl128b/zvl256b/zvl512b/zvl1024b for both RV32 and RV64.
OK for trunk if there are no regressions in the testing above?
PR target/112599
gcc/ChangeLog:
* config/riscv/riscv-v.cc (shuffle_extract_and_slide1up_patterns):
(expand_vec_perm_const_1):
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr112599-2.c: New test.
---
gcc/config/riscv/riscv-v.cc | 33 ++++++++++++
.../gcc.target/riscv/rvv/autovec/pr112599-2.c | 51 +++++++++++++++++++
2 files changed, 84 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/autovec/pr112599-2.c
Comments
Thanks Robin.
Sent V2:
https://gcc.gnu.org/pipermail/gcc-patches/2023-November/638033.html
with the ChangeLog added, since I realized there was a ChangeLog issue in V1:
gcc/ChangeLog:
* config/riscv/riscv-v.cc (shuffle_extract_and_slide1up_patterns):
(expand_vec_perm_const_1):
Tested zvl128b/zvl256b/zvl512b/zvl1024b on both RV32 and RV64 with no regressions.
Hope we can land it in GCC 14.
juzhe.zhong@rivai.ai
From: Robin Dapp
Date: 2023-11-23 22:58
To: Juzhe-Zhong; gcc-patches
CC: rdapp.gcc; kito.cheng; kito.cheng; jeffreyalaw
Subject: Re: [PATCH] RISC-V: Optimize a special case of VLA SLP
LGTM (and harmless enough) but I'd rather wait for a second look or a
maintainer's OK as we're past stage 1 and it's not a real bugfix.
(On top, it's Thanksgiving so not many people will even notice).
On a related note, this should probably be a middle-end optimization
but before a variable-index vec extract most likely nobody bothered.
Regards
Robin
I don't think the loop vectorizer can do more optimization here.
GCC passes vec_perm <op1, op2, (nunits - 1, nunits, nunits + 1, ...)> to the
vec_perm_const target hook to handle this. It is very target-dependent, so we
can't do more about it there.
For RVV, it's better to transform this case into vec_extract + vec_shl_insert.
For ARM SVE, however, it's not: ARM SVE has a dedicated instruction (trn) to
handle this, so it's better to pass vec_perm_const with these permute indices
unchanged for ARM SVE.
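To make the index check concrete, here is a small standalone model (again my
own illustration, not the patch code) of what the two interleaved series_p
checks in shuffle_extract_and_slide1up_patterns verify: even positions must
form { nunits-1, nunits+1, ... } and odd positions { nunits, nunits+2, ... },
which together require index i to equal nunits - 1 + i for every i.

/* Simplified model of the recognition; the real vec_perm_indices::series_p
   takes separate position and value steps, but both are 2 here.  */
#include <stdbool.h>
#include <stdio.h>

static bool
series_p (const unsigned *perm, unsigned len, unsigned pos_base,
          unsigned step, unsigned val_base)
{
  /* Elements at positions pos_base, pos_base+step, ... must form the
     series val_base, val_base+step, ...  */
  for (unsigned i = pos_base, v = val_base; i < len; i += step, v += step)
    if (perm[i] != v)
      return false;
  return true;
}

int
main (void)
{
  enum { NUNITS = 8 };
  unsigned perm[NUNITS];
  for (unsigned i = 0; i < NUNITS; i++)
    perm[i] = NUNITS - 1 + i;   /* { nunits-1, nunits, nunits+1, ... } */

  /* Mirrors: d->perm.series_p (0, 2, nunits - 1, 2)
         and  d->perm.series_p (1, 2, nunits, 2).  */
  bool ok = series_p (perm, NUNITS, 0, 2, NUNITS - 1)
            && series_p (perm, NUNITS, 1, 2, NUNITS);
  printf ("pattern %srecognized\n", ok ? "" : "not ");
  return 0;
}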
juzhe.zhong@rivai.ai
@@ -3232,6 +3232,37 @@ shuffle_bswap_pattern (struct expand_vec_perm_d *d)
return true;
}
+/* Recognize the pattern that can be shuffled by vec_extract and slide1up
+ approach. */
+
+static bool
+shuffle_extract_and_slide1up_patterns (struct expand_vec_perm_d *d)
+{
+ poly_uint64 nunits = GET_MODE_NUNITS (d->vmode);
+
+ /* Recognize { nunits - 1, nunits, nunits + 1, ... }. */
+ if (!d->perm.series_p (0, 2, nunits - 1, 2)
+ || !d->perm.series_p (1, 2, nunits, 2))
+ return false;
+
+ /* Success! */
+ if (d->testing_p)
+ return true;
+
+ /* Extract the last element of the first vector. */
+ scalar_mode smode = GET_MODE_INNER (d->vmode);
+ rtx tmp = gen_reg_rtx (smode);
+ emit_vec_extract (tmp, d->op0, nunits - 1);
+
+ /* Insert the scalar into element 0. */
+ unsigned int unspec
+ = FLOAT_MODE_P (d->vmode) ? UNSPEC_VFSLIDE1UP : UNSPEC_VSLIDE1UP;
+ insn_code icode = code_for_pred_slide (unspec, d->vmode);
+ rtx ops[] = {d->target, d->op1, tmp};
+ emit_vlmax_insn (icode, BINARY_OP, ops);
+ return true;
+}
+
/* Recognize the pattern that can be shuffled by generic approach. */
static bool
@@ -3310,6 +3341,8 @@ expand_vec_perm_const_1 (struct expand_vec_perm_d *d)
return true;
if (shuffle_bswap_pattern (d))
return true;
+ if (shuffle_extract_and_slide1up_patterns (d))
+ return true;
if (shuffle_generic_patterns (d))
return true;
return false;
new file mode 100644
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv_zvl1024b -mabi=lp64d -O3" } */
+
+struct s { struct s *n; } *p;
+struct s ss;
+#define MAX 10
+struct s sss[MAX];
+int count = 0;
+
+int look( struct s *p, struct s **pp )
+{
+ for ( ; p; p = p->n )
+ ;
+ *pp = p;
+ count++;
+ return( 1 );
+}
+
+void sub( struct s *p, struct s **pp )
+{
+ for ( ; look( p, pp ); ) {
+ if ( p )
+ p = p->n;
+ else
+ break;
+ }
+}
+
+int
+foo(void)
+{
+ struct s *pp;
+ struct s *next;
+ int i;
+
+ p = &ss;
+ next = p;
+ for ( i = 0; i < MAX; i++ ) {
+ next->n = &sss[i];
+ next = next->n;
+ }
+ next->n = 0;
+
+ sub( p, &pp );
+ if (count != MAX+2)
+ __builtin_abort ();
+ return 0;
+}
+
+/* { dg-final { scan-assembler-not {vrgather} } } */
+/* { dg-final { scan-assembler-times {vslide1up\.vx} 1 } } */