[x86 PATCH] peephole2 to resolve failure of gcc.target/i386/pr43644-2.c
Commit Message
This patch resolves the failure of pr43644-2.c in the testsuite, a code
quality test I added back in July, which started failing as the code GCC
generates for 128-bit values (and their parameter passing) has been in
flux.  After a few attempts at tweaking pattern constraints in the hope
of convincing reload to produce a more aggressive (but potentially
unsafe) register allocation, I think the best solution is to use a
peephole2 to catch and clean up this specific case.
Specifically, the function:
unsigned __int128 foo(unsigned __int128 x, unsigned long long y) {
  return x+y;
}
currently generates:
foo:    movq    %rdx, %rcx
        movq    %rdi, %rax
        movq    %rsi, %rdx
        addq    %rcx, %rax
        adcq    $0, %rdx
        ret
and with this patch/peephole2 now generates:
foo:    movq    %rdx, %rax
        movq    %rsi, %rdx
        addq    %rdi, %rax
        adcq    $0, %rdx
        ret
which I believe is optimal.
This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32}
with no new failures. Ok for mainline?
2023-12-21  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
        PR target/43644
        * config/i386/i386.md (define_peephole2): Tweak register allocation
        of *add<dwi>3_doubleword_concat_zext.

gcc/testsuite/ChangeLog
        PR target/43644
        * gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.
Thanks in advance, and for your patience with this testsuite noise.
Roger
--
Comments
On Fri, Dec 22, 2023 at 11:14 AM Roger Sayle <roger@nextmovesoftware.com> wrote:
> which I believe is optimal.
How about simply moving the assignment to the MSB in the split pattern
after the LSB calculation:
   [(set (match_dup 0) (match_dup 4))
-   (set (match_dup 5) (match_dup 2))
    (parallel [(set (reg:CCC FLAGS_REG)
                    (compare:CCC
                      (plus:DWIH (match_dup 0) (match_dup 1))
                      (match_dup 0)))
               (set (match_dup 0)
                    (plus:DWIH (match_dup 0) (match_dup 1)))])
+   (set (match_dup 5) (match_dup 2))
    (parallel [(set (match_dup 5)
                    (plus:DWIH
                      (plus:DWIH
There is an earlyclobber on the output operand, so we are sure that
assignments to (op 0) and (op 5) won't clobber anything.
cprop_hardreg pass will then do the cleanup for us, resulting in:
foo:    movq    %rdi, %rax
        addq    %rdx, %rax
        movq    %rsi, %rdx
        adcq    $0, %rdx
Uros.
diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index 4c6368bf3b7..9f97d407975 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -6411,13 +6411,13 @@ (define_insn_and_split "*add<dwi>3_doubleword_concat_zext"
   "#"
   "&& reload_completed"
   [(set (match_dup 0) (match_dup 4))
-   (set (match_dup 5) (match_dup 2))
    (parallel [(set (reg:CCC FLAGS_REG)
                    (compare:CCC
                      (plus:DWIH (match_dup 0) (match_dup 1))
                      (match_dup 0)))
               (set (match_dup 0)
                    (plus:DWIH (match_dup 0) (match_dup 1)))])
+   (set (match_dup 5) (match_dup 2))
    (parallel [(set (match_dup 5)
                    (plus:DWIH
                      (plus:DWIH
Hi Uros,
> From: Uros Bizjak <ubizjak@gmail.com>
> Sent: 28 December 2023 10:33
> On Fri, Dec 22, 2023 at 11:14 AM Roger Sayle <roger@nextmovesoftware.com>
> wrote:
> > which I believe is optimal.
>
> How about simply moving the assignment to the MSB in the split pattern after the
> LSB calculation:
>
>    [(set (match_dup 0) (match_dup 4))
> -   (set (match_dup 5) (match_dup 2))
>     (parallel [(set (reg:CCC FLAGS_REG)
>                     (compare:CCC
>                       (plus:DWIH (match_dup 0) (match_dup 1))
>                       (match_dup 0)))
>                (set (match_dup 0)
>                     (plus:DWIH (match_dup 0) (match_dup 1)))])
> +   (set (match_dup 5) (match_dup 2))
>     (parallel [(set (match_dup 5)
>                     (plus:DWIH
>                       (plus:DWIH
>
> There is an earlyclobber on the output operand, so we are sure that assignments
> to (op 0) and (op 5) won't clobber anything.
> cprop_hardreg pass will then do the cleanup for us, resulting in:
>
> foo:    movq    %rdi, %rax
>         addq    %rdx, %rax
>         movq    %rsi, %rdx
>         adcq    $0, %rdx
>
> Uros.
I agree. This is a much better fix.
This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32}
with no new failures. Ok for mainline?
2023-12-31  Uros Bizjak  <ubizjak@gmail.com>
            Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
        PR target/43644
        * config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
        order of instructions after split, to minimize number of moves.

gcc/testsuite/ChangeLog
        PR target/43644
        * gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.
Thanks again (and Happy New Year).
Roger
--
diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index e862368..6671274 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -6412,13 +6412,13 @@
   "#"
   "&& reload_completed"
   [(set (match_dup 0) (match_dup 4))
-   (set (match_dup 5) (match_dup 2))
    (parallel [(set (reg:CCC FLAGS_REG)
                    (compare:CCC
                      (plus:DWIH (match_dup 0) (match_dup 1))
                      (match_dup 0)))
               (set (match_dup 0)
                    (plus:DWIH (match_dup 0) (match_dup 1)))])
+   (set (match_dup 5) (match_dup 2))
    (parallel [(set (match_dup 5)
                    (plus:DWIH
                      (plus:DWIH
diff --git a/gcc/testsuite/gcc.target/i386/pr43644-2.c b/gcc/testsuite/gcc.target/i386/pr43644-2.c
index d470b0a..3316ac6 100644
--- a/gcc/testsuite/gcc.target/i386/pr43644-2.c
+++ b/gcc/testsuite/gcc.target/i386/pr43644-2.c
@@ -6,4 +6,4 @@ unsigned __int128 foo(unsigned __int128 x, unsigned long long y)
return x+y;
}
-/* { dg-final { scan-assembler-times "movq" 1 } } */
+/* { dg-final { scan-assembler-times "movq" 2 } } */
On Sun, Dec 31, 2023 at 4:56 PM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
>
> Hi Uros,
>
> I agree. This is a much better fix.
>
> This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> and make -k check, both with and without --target_board=unix{-m32}
> with no new failures. Ok for mainline?
>
> 2023-12-31 Uros Bizjak <ubizjak@gmail.com>
> Roger Sayle <roger@nextmovesoftware.com>
>
> gcc/ChangeLog
> PR target/43644
> * config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
> order of instructions after split, to minimize number of moves.
>
> gcc/testsuite/ChangeLog
> PR target/43644
> * gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.
OK.
Thanks, and Happy New Year to you, too!
Uros.
@@ -6428,6 +6428,38 @@
     (clobber (reg:CC FLAGS_REG))])]
   "split_double_mode (<DWI>mode, &operands[0], 1, &operands[0], &operands[5]);")
+(define_peephole2
+  [(set (match_operand:SWI48 0 "general_reg_operand")
+        (match_operand:SWI48 1 "general_reg_operand"))
+   (set (match_operand:SWI48 2 "general_reg_operand")
+        (match_operand:SWI48 3 "general_reg_operand"))
+   (set (match_dup 1) (match_operand:SWI48 4 "general_reg_operand"))
+   (parallel [(set (reg:CCC FLAGS_REG)
+                   (compare:CCC
+                     (plus:SWI48 (match_dup 2) (match_dup 0))
+                     (match_dup 2)))
+              (set (match_dup 2)
+                   (plus:SWI48 (match_dup 2) (match_dup 0)))])]
+  "REGNO (operands[0]) != REGNO (operands[1])
+   && REGNO (operands[0]) != REGNO (operands[2])
+   && REGNO (operands[0]) != REGNO (operands[3])
+   && REGNO (operands[0]) != REGNO (operands[4])
+   && REGNO (operands[1]) != REGNO (operands[2])
+   && REGNO (operands[1]) != REGNO (operands[3])
+   && REGNO (operands[1]) != REGNO (operands[4])
+   && REGNO (operands[2]) != REGNO (operands[3])
+   && REGNO (operands[2]) != REGNO (operands[4])
+   && REGNO (operands[3]) != REGNO (operands[4])
+   && peep2_reg_dead_p (4, operands[0])"
+  [(set (match_dup 2) (match_dup 1))
+   (set (match_dup 1) (match_dup 4))
+   (parallel [(set (reg:CCC FLAGS_REG)
+                   (compare:CCC
+                     (plus:SWI48 (match_dup 2) (match_dup 3))
+                     (match_dup 2)))
+              (set (match_dup 2)
+                   (plus:SWI48 (match_dup 2) (match_dup 3)))])])
+
 (define_insn "*add<mode>_1"
   [(set (match_operand:SWI48 0 "nonimmediate_operand" "=rm,r,r,r,r,r,r,r")
        (plus:SWI48
@@ -6,4 +6,4 @@ unsigned __int128 foo(unsigned __int128 x, unsigned long long y)
return x+y;
}
-/* { dg-final { scan-assembler-times "movq" 1 } } */
+/* { dg-final { scan-assembler-times "movq" 2 } } */