lower-bitint: Fix up handle_operand_addr INTEGER_CST handling [PR113361]

Message ID ZaJbBs0d2PeCzubq@tucnak
State Unresolved
Series lower-bitint: Fix up handle_operand_addr INTEGER_CST handling [PR113361]

Checks

Context: snail/gcc-patch-check | Check: warning | Description: Git am fail log

Commit Message

Jakub Jelinek Jan. 13, 2024, 9:42 a.m. UTC
  Hi!

As the testcase shows, the INTEGER_CST handling in handle_operand_addr
(i.e. what is used when passing address of an integer to a bitint library
routine) wasn't correct.  If the minimum precision needed to represent an
INTEGER_CST is smaller than or equal to limb_prec, the code correctly uses
m_limb_type; if the minimum precision of a _BitInt INTEGER_CST is large
enough such that the bitint is middle, large or huge, everything is fine
too.  But the code wasn't handling correctly e.g. __int128 constants which
need more than limb_prec bits or _BitInt constants which on the architecture
are considered small (say have DImode limb_mode, TImode abi_limb_mode and
for [65, 128] bits use TImode scalar like the proposed aarch64 patch).
Best would be to use an array of 2/3/4 limbs in that case, but we'd need to
convert the INTEGER_CST to a CONSTRUCTOR in the right endianity etc.,
so the code was using mid_min_prec to enforce a middle _BitInt precision.
Except that mid_min_prec can be 0 and not computed yet, or it doesn't have
to be the smallest middle _BitInt precision, just the smallest so far
encountered.  So, on the testcase one possibility was that it used precision
65 from mid_min_prec, even when the INTEGER_CST actually needed larger
minimum precision (96 bits at least), or crashed when mid_min_prec was 0.

The patch fixes this in two hunks: the first makes sure we actually try to
create a BITINT_TYPE for the > limb_prec cases like __int128, and the second,
instead of using mid_min_prec, attempts to increase the mp precision until it
is no longer small.

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2024-01-13  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/113361
	* gimple-lower-bitint.cc (bitint_large_huge::handle_operand_addr):
	Fix up determination of the type for > limb_prec constants.

	* gcc.dg/torture/bitint-47.c: New test.


	Jakub
  

Comments

Richard Biener Jan. 13, 2024, 10:24 a.m. UTC | #1
> Am 13.01.2024 um 10:44 schrieb Jakub Jelinek <jakub@redhat.com>:
> 
> [...]
> 
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

Ok

Richard 

Richard Biener Jan. 15, 2024, 8:33 a.m. UTC | #2
On Sat, 13 Jan 2024, Jakub Jelinek wrote:

> [...]
> 
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

OK.

  

Patch

--- gcc/gimple-lower-bitint.cc.jj	2024-01-12 11:23:12.000000000 +0100
+++ gcc/gimple-lower-bitint.cc	2024-01-13 00:18:19.255889866 +0100
@@ -2227,7 +2227,9 @@  bitint_large_huge::handle_operand_addr (
       mp = CEIL (min_prec, limb_prec) * limb_prec;
       if (mp == 0)
 	mp = 1;
-      if (mp >= (unsigned) TYPE_PRECISION (TREE_TYPE (op)))
+      if (mp >= (unsigned) TYPE_PRECISION (TREE_TYPE (op))
+	  && (TREE_CODE (TREE_TYPE (op)) == BITINT_TYPE
+	      || TYPE_PRECISION (TREE_TYPE (op)) <= limb_prec))
 	type = TREE_TYPE (op);
       else
 	type = build_bitint_type (mp, 1);
@@ -2237,11 +2239,15 @@  bitint_large_huge::handle_operand_addr (
 	  if (TYPE_PRECISION (type) <= limb_prec)
 	    type = m_limb_type;
 	  else
-	    /* This case is for targets which e.g. have 64-bit
-	       limb but categorize up to 128-bits _BitInts as
-	       small.  We could use type of m_limb_type[2] and
-	       similar instead to save space.  */
-	    type = build_bitint_type (mid_min_prec, 1);
+	    {
+	      while (bitint_precision_kind (mp) == bitint_prec_small)
+		mp += limb_prec;
+	      /* This case is for targets which e.g. have 64-bit
+		 limb but categorize up to 128-bits _BitInts as
+		 small.  We could use type of m_limb_type[2] and
+		 similar instead to save space.  */
+	      type = build_bitint_type (mp, 1);
+	    }
 	}
       if (prec_stored)
 	{
--- gcc/testsuite/gcc.dg/torture/bitint-47.c.jj	2024-01-13 00:23:40.627562314 +0100
+++ gcc/testsuite/gcc.dg/torture/bitint-47.c	2024-01-13 00:25:35.571025508 +0100
@@ -0,0 +1,31 @@ 
+/* PR tree-optimization/113361 */
+/* { dg-do run { target { bitint && int128 } } } */
+/* { dg-options "-std=gnu23" } */
+/* { dg-skip-if "" { ! run_expensive_tests }  { "*" } { "-O0" "-O2" } } */
+/* { dg-skip-if "" { ! run_expensive_tests } { "-flto" } { "" } } */
+
+#if __BITINT_MAXWIDTH__ >= 129
+int
+foo (_BitInt(65) x)
+{
+  return __builtin_mul_overflow_p ((__int128) 0xffffffff << 64, x, (_BitInt(129)) 0);
+}
+
+int
+bar (_BitInt(63) x)
+{
+  return __builtin_mul_overflow_p ((__int128) 0xffffffff << 64, x, (_BitInt(129)) 0);
+}
+#endif
+
+int
+main ()
+{
+#if __BITINT_MAXWIDTH__ >= 129
+  if (!foo (5167856845))
+    __builtin_abort ();
+  if (!bar (5167856845))
+    __builtin_abort ();
+#endif
+  return 0;
+}