lower-bitint: Fix up lower_addsub_overflow [PR112807]
Commit Message
Hi!
lower_addsub_overflow uses handle_cast or handle_operand to extract current
limb from the operands. Both of those functions heavily assume that they
return a large or huge BITINT_TYPE. The problem in the testcase is that
this is violated.  Normally, lower_addsub_overflow isn't even called if
neither the return type's element type nor any of the operands is a
large/huge BITINT_TYPE (on x86_64, 129+ bits); for middle BITINT_TYPE (on
x86_64, 65-128 bits) some other code casts such operands to
{,unsigned }__int128.
In the testcase the result is complex unsigned, so small, but one of the
arguments is _BitInt(256), so lower_addsub_overflow is called.  But
range_for_prec asks the ranger for ranges of the operands, and in this
case the first argument has range [0, 0xffffffff] and the second [-2, 1],
i.e. unsigned 32-bit and signed 2-bit.  In such a case the code would, for
handle_operand/handle_cast purposes, use the _BitInt(256) type for the
first operand (ok), but because prec3 (the maximum of the result precision
and the precisions into which the VRP-computed ranges of the arguments
fit) is 32, it would use a cast to a 32-bit BITINT_TYPE, which is why it
didn't work correctly.
The following patch ensures that in such cases we use handle_cast to the
type of the other argument.
Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
Perhaps incrementally, we could try to optimize this in an earlier phase:
notice that while the .{ADD,SUB}_OVERFLOW call has a large/huge _BitInt
argument, ranger says it fits into a smaller type, and add a cast of that
larger argument to the smaller precision type in which it fits.  This
could be done either in gimple_lower_bitint or in match.pd.  An argument
for the latter is that e.g.
complex unsigned .ADD_OVERFLOW (unsigned_long_long_arg, unsigned_arg)
where ranger says unsigned_long_long_arg fits into unsigned 32-bit could
also be more efficient as
.ADD_OVERFLOW ((unsigned) unsigned_long_long_arg, unsigned_arg)
2023-12-02 Jakub Jelinek <jakub@redhat.com>
PR middle-end/112807
* gimple-lower-bitint.cc (bitint_large_huge::lower_addsub_overflow):
When choosing type0 and type1 types, if prec3 has small/middle bitint
kind, use maximum of type0 and type1's precision instead of prec3.
* gcc.dg/bitint-46.c: New test.
Jakub
Comments
> On 02.12.2023 at 12:05, Jakub Jelinek <jakub@redhat.com> wrote:
>
> Hi!
>
> lower_addsub_overflow uses handle_cast or handle_operand to extract current
> limb from the operands. Both of those functions heavily assume that they
> return a large or huge BITINT_TYPE. The problem in the testcase is that
> this is violated.  Normally, lower_addsub_overflow isn't even called if
> neither the return type's element type nor any of the operands is a
> large/huge BITINT_TYPE (on x86_64, 129+ bits); for middle BITINT_TYPE (on
> x86_64, 65-128 bits) some other code casts such operands to
> {,unsigned }__int128.
> In the testcase the result is complex unsigned, so small, but one of the
> arguments is _BitInt(256), so lower_addsub_overflow is called.  But
> range_for_prec asks the ranger for ranges of the operands, and in this
> case the first argument has range [0, 0xffffffff] and the second [-2, 1],
> i.e. unsigned 32-bit and signed 2-bit.  In such a case the code would, for
> handle_operand/handle_cast purposes, use the _BitInt(256) type for the
> first operand (ok), but because prec3 (the maximum of the result precision
> and the precisions into which the VRP-computed ranges of the arguments
> fit) is 32, it would use a cast to a 32-bit BITINT_TYPE, which is why it
> didn't work correctly.
> The following patch ensures that in such cases we use handle_cast to the
> type of the other argument.
>
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
Ok
> Perhaps incrementally, we could try to optimize this in an earlier phase:
> notice that while the .{ADD,SUB}_OVERFLOW call has a large/huge _BitInt
> argument, ranger says it fits into a smaller type, and add a cast of that
> larger argument to the smaller precision type in which it fits.  This
> could be done either in gimple_lower_bitint or in match.pd.  An argument
> for the latter is that e.g.
> complex unsigned .ADD_OVERFLOW (unsigned_long_long_arg, unsigned_arg)
> where ranger says unsigned_long_long_arg fits into unsigned 32-bit could
> also be more efficient as
> .ADD_OVERFLOW ((unsigned) unsigned_long_long_arg, unsigned_arg)
Sounds reasonable.
Richard
> 2023-12-02 Jakub Jelinek <jakub@redhat.com>
>
> PR middle-end/112807
> * gimple-lower-bitint.cc (bitint_large_huge::lower_addsub_overflow):
> When choosing type0 and type1 types, if prec3 has small/middle bitint
> kind, use maximum of type0 and type1's precision instead of prec3.
>
> * gcc.dg/bitint-46.c: New test.
>
> --- gcc/gimple-lower-bitint.cc.jj 2023-12-01 10:56:45.535228688 +0100
> +++ gcc/gimple-lower-bitint.cc 2023-12-01 18:38:24.633663667 +0100
> @@ -3911,15 +3911,18 @@ bitint_large_huge::lower_addsub_overflow
>
>    tree type0 = TREE_TYPE (arg0);
>    tree type1 = TREE_TYPE (arg1);
> -  if (TYPE_PRECISION (type0) < prec3)
> +  int prec5 = prec3;
> +  if (bitint_precision_kind (prec5) < bitint_prec_large)
> +    prec5 = MAX (TYPE_PRECISION (type0), TYPE_PRECISION (type1));
> +  if (TYPE_PRECISION (type0) < prec5)
>      {
> -      type0 = build_bitint_type (prec3, TYPE_UNSIGNED (type0));
> +      type0 = build_bitint_type (prec5, TYPE_UNSIGNED (type0));
>        if (TREE_CODE (arg0) == INTEGER_CST)
>          arg0 = fold_convert (type0, arg0);
>      }
> -  if (TYPE_PRECISION (type1) < prec3)
> +  if (TYPE_PRECISION (type1) < prec5)
>      {
> -      type1 = build_bitint_type (prec3, TYPE_UNSIGNED (type1));
> +      type1 = build_bitint_type (prec5, TYPE_UNSIGNED (type1));
>        if (TREE_CODE (arg1) == INTEGER_CST)
>          arg1 = fold_convert (type1, arg1);
>      }
> --- gcc/testsuite/gcc.dg/bitint-46.c.jj 2023-12-01 18:47:12.460245617 +0100
> +++ gcc/testsuite/gcc.dg/bitint-46.c 2023-12-01 18:46:41.297683578 +0100
> @@ -0,0 +1,32 @@
> +/* PR middle-end/112807 */
> +/* { dg-do compile { target bitint } } */
> +/* { dg-options "-std=gnu23 -O2" } */
> +
> +#if __BITINT_MAXWIDTH__ >= 256
> +__attribute__((noipa)) int
> +foo (_BitInt (256) a, _BitInt (2) b)
> +{
> +  if (a < 0 || a > ~0U)
> +    return -1;
> +  return __builtin_sub_overflow_p (a, b, 0);
> +}
> +#endif
> +
> +int
> +main ()
> +{
> +#if __BITINT_MAXWIDTH__ >= 256
> +  if (foo (-5wb, 1wb) != -1
> +      || foo (1 + (_BitInt (256)) ~0U, -2) != -1
> +      || foo (0, 0) != 0
> +      || foo (0, 1) != 0
> +      || foo (0, -1) != 0
> +      || foo (~0U, 0) != 1
> +      || foo (__INT_MAX__, 0) != 0
> +      || foo (__INT_MAX__, -1) != 1
> +      || foo (1 + (_BitInt (256)) __INT_MAX__, 0) != 1
> +      || foo (1 + (_BitInt (256)) __INT_MAX__, 1) != 0
> +      || foo (1 + (_BitInt (256)) __INT_MAX__, -2) != 1)
> +    __builtin_abort ();
> +#endif
> +}
>
> Jakub
>