tree-ssa-ccp, wide-int: Fix up handling of [LR]ROTATE_EXPR in bitwise ccp [PR109778]
Commit Message
Hi!
The following testcase is miscompiled because bitwise ccp2 handles
a rotate with a signed type incorrectly.
It seems tree-ssa-ccp.cc has the only callers of wi::[lr]rotate with 3
arguments; all other callers just rotate in the right precision and
I think work correctly.  ccp works with widest_ints, so rotations
in the excessive precision certainly don't match what it wants
when it sees a rotate in some specific bitsize.  Still, if it is
an unsigned rotate and the widest_int is zero extended from width,
the functions perform a left shift and a logical right shift on the value,
then at the end zero extend the result of the left shift (and uselessly
also the result of the logical right shift) and return the | of the two.
On the testcase, the signed char rrotate by 4 argument is
CONSTANT -75, i.e. 0xffffffff....fffffb5, with mask 2.
The mask is correctly rotated to 0x20, but because the 8-bit constant
is sign extended to a 192-bit one, the logical right shift by 4 doesn't
yield the expected 0xb, but gives 0xfffffffffff....ffffb; then
return wi::zext (left, width) | wi::zext (right, width); is evaluated
with left being 0xfffffff....fb50, so we return 0xfb instead of the
expected 0x5b.
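
To make that concrete, here is a minimal standalone sketch of the
pre-patch logic (not the wide-int API: plain uint64_t stands in for the
192-bit widest_int, and zext8/rrotate8_old are made-up stand-ins for
wi::zext and the old wi::rrotate):

#include <cstdint>
#include <cassert>

/* Models wi::zext (v, 8): keep only the low 8 bits.  */
static uint64_t zext8 (uint64_t v) { return v & 0xff; }

/* Pre-patch wi::rrotate logic for width 8 inside a wider precision.  */
static uint64_t
rrotate8_old (uint64_t x, unsigned y)
{
  unsigned ymod = y % 8;             /* umod_trunc (y, width)  */
  uint64_t right = x >> ymod;        /* wi::lrshift (x, ymod)  */
  uint64_t left = x << (8 - ymod);   /* wi::lshift (x, width - ymod)  */
  return zext8 (left) | zext8 (right);
}

int
main ()
{
  /* A value zero extended from width 8 rotates correctly; the zext8
     on right only masks bits that are already zero.  */
  assert (rrotate8_old (0xb5, 4) == 0x5b);
  /* But signed char -75 sign extended into the wider precision fails:
     the logical right shift pulls copies of the sign bit into the low
     byte, so zext8 (right) is 0xfb and the | comes out as 0xfb.  */
  assert (rrotate8_old (0xffffffffffffffb5, 4) == 0xfb);
}
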
The following patch fixes that by doing the zero extension before the
wi::lrshift that computes the right variable, rather than after it.
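
In terms of the sketch above (again hypothetical stand-ins, not the
actual wide-int API), the fix corresponds to:

#include <cstdint>
#include <cassert>

/* Models wi::zext (v, 8).  */
static uint64_t zext8 (uint64_t v) { return v & 0xff; }

/* Post-patch wi::rrotate logic: zero extend X before the logical right
   shift, so sign bits can never reach the low byte, and drop the now
   redundant zero extension of the shift result.  */
static uint64_t
rrotate8_new (uint64_t x, unsigned y)
{
  unsigned ymod = y % 8;
  uint64_t right = zext8 (x) >> ymod; /* wi::lrshift (wi::zext (x, width), ymod)  */
  uint64_t left = x << (8 - ymod);
  return zext8 (left) | right;
}

int
main ()
{
  /* Both the zero and the sign extended form of -75 now rotate to 0x5b.  */
  assert (rrotate8_new (0xb5, 4) == 0x5b);
  assert (rrotate8_new (0xffffffffffffffb5, 4) == 0x5b);
}
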
Also, wi::[lr]rotate with width < precision always zero extends
the result.  I'm afraid it can't do better, because it doesn't know
whether it is done for an unsigned or signed type, but the caller in this
case knows that very well, so I've done the extension based on sgn
in the caller.  E.g. 0x5b rotated right (or left) by 4 with width 8
previously gave 0xb5, but with sgn == SIGNED the widest_int should be
0xffffffff....fffb5 instead.
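
As a sketch of that caller-side extension (sext8 is a stand-in for
wi::ext (v, 8, SIGNED); rrotate8_new is the fixed helper from the
previous sketch):

#include <cstdint>
#include <cassert>

static uint64_t zext8 (uint64_t v) { return v & 0xff; }

/* Models wi::ext (v, 8, SIGNED): sign extend from bit 7 into the whole
   wider value, matching how a signed type's lattice value is kept.  */
static uint64_t sext8 (uint64_t v) { return (uint64_t) (int64_t) (int8_t) v; }

static uint64_t
rrotate8_new (uint64_t x, unsigned y)
{
  unsigned ymod = y % 8;
  uint64_t right = zext8 (x) >> ymod;
  uint64_t left = x << (8 - ymod);
  return zext8 (left) | right;
}

int
main ()
{
  /* The rotate itself always produces the zero extended 0xb5 ...  */
  assert (rrotate8_new (0x5b, 4) == 0xb5);
  /* ... but for a signed 8-bit type the widest_int lattice value must
     be the sign extended form, hence the wi::ext in bit_value_binop.  */
  assert (sext8 (rrotate8_new (0x5b, 4)) == 0xffffffffffffffb5ULL);
}
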
Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk
and release branches?

2023-05-09  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/109778
	* wide-int.h (wi::lrotate, wi::rrotate): Call wi::lrshift on
	wi::zext (x, width) rather than x if width != precision, rather
	than using wi::zext (right, width) after the shift.
	* tree-ssa-ccp.cc (bit_value_binop): Call wi::ext on the results
	of wi::lrotate or wi::rrotate.

	* gcc.c-torture/execute/pr109778.c: New test.
Jakub
Comments
On Tue, 9 May 2023, Jakub Jelinek wrote:
> [...]
>
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk
> and release branches?
OK.
Thanks,
Richard.

--- gcc/wide-int.h.jj	2023-04-18 11:00:39.926725744 +0200
+++ gcc/wide-int.h	2023-05-08 23:36:41.104412818 +0200
@@ -3187,9 +3187,11 @@ wi::lrotate (const T1 &x, const T2 &y, u
width = precision;
WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
WI_UNARY_RESULT (T1) left = wi::lshift (x, ymod);
- WI_UNARY_RESULT (T1) right = wi::lrshift (x, wi::sub (width, ymod));
+ WI_UNARY_RESULT (T1) right
+ = wi::lrshift (width != precision ? wi::zext (x, width) : x,
+ wi::sub (width, ymod));
if (width != precision)
- return wi::zext (left, width) | wi::zext (right, width);
+ return wi::zext (left, width) | right;
return left | right;
}
@@ -3204,10 +3206,11 @@ wi::rrotate (const T1 &x, const T2 &y, u
if (width == 0)
width = precision;
WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
- WI_UNARY_RESULT (T1) right = wi::lrshift (x, ymod);
+ WI_UNARY_RESULT (T1) right
+ = wi::lrshift (width != precision ? wi::zext (x, width) : x, ymod);
WI_UNARY_RESULT (T1) left = wi::lshift (x, wi::sub (width, ymod));
if (width != precision)
- return wi::zext (left, width) | wi::zext (right, width);
+ return wi::zext (left, width) | right;
return left | right;
}
--- gcc/tree-ssa-ccp.cc.jj	2023-01-02 09:32:39.990030918 +0100
+++ gcc/tree-ssa-ccp.cc	2023-05-09 00:03:02.692915316 +0200
@@ -1552,6 +1552,8 @@ bit_value_binop (enum tree_code code, si
*mask = wi::lrotate (r1mask, shift, width);
*val = wi::lrotate (r1val, shift, width);
}
+ *mask = wi::ext (*mask, width, sgn);
+ *val = wi::ext (*val, width, sgn);
}
}
else if (wi::ltu_p (r2val | r2mask, width)
@@ -1593,8 +1595,8 @@ bit_value_binop (enum tree_code code, si
/* Accumulate the result. */
res_mask |= tmp_mask | (res_val ^ tmp_val);
}
- *val = wi::bit_and_not (res_val, res_mask);
- *mask = res_mask;
+ *val = wi::ext (wi::bit_and_not (res_val, res_mask), width, sgn);
+ *mask = wi::ext (res_mask, width, sgn);
}
break;
--- gcc/testsuite/gcc.c-torture/execute/pr109778.c.jj	2023-05-09 00:05:20.249959226 +0200
+++ gcc/testsuite/gcc.c-torture/execute/pr109778.c	2023-05-09 00:04:58.870263249 +0200
@@ -0,0 +1,26 @@
+/* PR tree-optimization/109778 */
+
+int a, b, c, d, *e = &c;
+
+static inline unsigned
+foo (unsigned char x)
+{
+ x = 1 | x << 1;
+ x = x >> 4 | x << 4;
+ return x;
+}
+
+static inline void
+bar (unsigned x)
+{
+ *e = 8 > foo (x + 86) - 86;
+}
+
+int
+main ()
+{
+ d = a && b;
+ bar (d + 4);
+ if (c != 1)
+ __builtin_abort ();
+}