bitint: Fix handling of VIEW_CONVERT_EXPRs to minimally supported huge INTEGER_TYPEs [PR113783]
Commit Message
Hi!
On the following testcases, memcpy lowering folds the calls into
reads and writes of MEM_REFs with huge INTEGER_TYPEs - uint256_t
with OImode or uint512_t with XImode. Further optimizations turn
the load from the MEM_REF on the large/huge _BitInt var into a
VIEW_CONVERT_EXPR from it to uint256_t/uint512_t. The backend
doesn't really support those types except for the "movoi"/"movxi"
insns, so it isn't possible to handle them like casts to supportable
INTEGER_TYPEs, where we can construct the result from individual
limbs - there are no OImode/XImode shifts and the like we could use.
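To make this concrete, here is a rough sketch of what happens on the
testcase below (the GIMPLE and SSA names are illustrative, not exact
dumps):

  void
  foo (void *p, _BitInt(246) x)
  {
    __builtin_memcpy (p, &x, sizeof x);
  }

is folded into approximately

  /* Load the whole _BitInt(246) as one uint256_t (OImode) value.  */
  _1 = VIEW_CONVERT_EXPR<uint256_t>(x_2(D));
  MEM <uint256_t> [(char * {ref-all})p_3(D)] = _1;

i.e. the entire _BitInt is read as a single OImode value, which the
backend can only move around, not compute with.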
So, the following patch makes sure that for such VCEs the SSA_NAME
operand of the VCE lives in memory, and then rewrites the VCE to read
from that memory, so that we actually load the OImode/XImode integer
directly from memory (i.e. a single mov). The gimple_lower_bitint
hunks make sure such VCEs aren't merged with other operations.
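A sketch of the rewrite (decl and SSA names illustrative): once the
lowering has forced x to live in memory in its limb representation,
say in a VAR_DECL D.2741 with ARRAY_TYPE, the statement

  _1 = VIEW_CONVERT_EXPR<uint256_t>(x_2(D));

becomes

  /* Read the limb array back as one OImode integer.  */
  _1 = VIEW_CONVERT_EXPR<uint256_t>(D.2741);

so expansion emits a single OImode load from that memory instead of
attempting to build the uint256_t value from individual limbs.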
For SSA_NAMEs which have underlying VAR_DECLs that is all we need;
those VAR_DECLs have ARRAY_TYPEs.
For SSA_NAMEs which have underlying PARM_DECLs or RESULT_DECLs, the
decls have BITINT_TYPE, and I had to tweak expand_expr_real_1 so that
it doesn't try convert_modes on those when one of the modes is
BLKmode - we want to fall through into the adjust_address on the MEM.
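For illustration, the guarded condition after this patch reads
(copied from the expr.cc hunk below, context abbreviated):

  /* If both types are integral, convert from one mode to the other.  */
  else if (INTEGRAL_TYPE_P (type)
           && INTEGRAL_TYPE_P (TREE_TYPE (treeop0))
           && mode != BLKmode
           && GET_MODE (op0) != BLKmode)
    op0 = convert_modes (mode, GET_MODE (op0), op0,
                         TYPE_UNSIGNED (TREE_TYPE (treeop0)));

For bar/qux below, the PARM_DECL has BITINT_TYPE whose mode is
BLKmode, so op0 is a BLKmode MEM; convert_modes cannot handle
BLKmode, and with the guard expansion instead falls through to
adjust_address on the MEM and re-reads it in OImode/XImode.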
Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
2024-02-09 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113783
* gimple-lower-bitint.cc (bitint_large_huge::lower_stmt): Look
through VIEW_CONVERT_EXPR for final cast checks. Handle
VIEW_CONVERT_EXPRs from large/huge _BitInt to > MAX_FIXED_MODE_SIZE
INTEGER_TYPEs.
(gimple_lower_bitint): Don't merge mergeable operations or other
casts with VIEW_CONVERT_EXPRs to > MAX_FIXED_MODE_SIZE INTEGER_TYPEs.
* expr.cc (expand_expr_real_1): Don't use convert_modes if either
mode is BLKmode.
* gcc.dg/bitint-88.c: New test.
Jakub
Comments
On Fri, 9 Feb 2024, Jakub Jelinek wrote:
> Hi!
>
> On the following testcases, memcpy lowering folds the calls into
> reads and writes of MEM_REFs with huge INTEGER_TYPEs - uint256_t
> with OImode or uint512_t with XImode. Further optimizations turn
> the load from the MEM_REF on the large/huge _BitInt var into a
> VIEW_CONVERT_EXPR from it to uint256_t/uint512_t. The backend
> doesn't really support those types except for the "movoi"/"movxi"
> insns, so it isn't possible to handle them like casts to supportable
> INTEGER_TYPEs, where we can construct the result from individual
> limbs - there are no OImode/XImode shifts and the like we could use.
> So, the following patch makes sure that for such VCEs the SSA_NAME
> operand of the VCE lives in memory, and then rewrites the VCE to read
> from that memory, so that we actually load the OImode/XImode integer
> directly from memory (i.e. a single mov). The gimple_lower_bitint
> hunks make sure such VCEs aren't merged with other operations.
> For SSA_NAMEs which have underlying VAR_DECLs that is all we need;
> those VAR_DECLs have ARRAY_TYPEs.
> For SSA_NAMEs which have underlying PARM_DECLs or RESULT_DECLs, the
> decls have BITINT_TYPE, and I had to tweak expand_expr_real_1 so that
> it doesn't try convert_modes on those when one of the modes is
> BLKmode - we want to fall through into the adjust_address on the MEM.
>
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
OK.
Thanks,
Richard.
> 2024-02-09 Jakub Jelinek <jakub@redhat.com>
>
> PR tree-optimization/113783
> * gimple-lower-bitint.cc (bitint_large_huge::lower_stmt): Look
> through VIEW_CONVERT_EXPR for final cast checks. Handle
> VIEW_CONVERT_EXPRs from large/huge _BitInt to > MAX_FIXED_MODE_SIZE
> INTEGER_TYPEs.
> (gimple_lower_bitint): Don't merge mergeable operations or other
> casts with VIEW_CONVERT_EXPRs to > MAX_FIXED_MODE_SIZE INTEGER_TYPEs.
> * expr.cc (expand_expr_real_1): Don't use convert_modes if either
> mode is BLKmode.
>
> * gcc.dg/bitint-88.c: New test.
>
> --- gcc/gimple-lower-bitint.cc.jj 2024-02-06 12:58:48.296021497 +0100
> +++ gcc/gimple-lower-bitint.cc 2024-02-08 12:49:40.435313811 +0100
> @@ -5263,6 +5263,8 @@ bitint_large_huge::lower_stmt (gimple *s
> {
> lhs = gimple_assign_lhs (stmt);
> tree rhs1 = gimple_assign_rhs1 (stmt);
> + if (TREE_CODE (rhs1) == VIEW_CONVERT_EXPR)
> + rhs1 = TREE_OPERAND (rhs1, 0);
> if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
> && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
> @@ -5273,6 +5275,44 @@ bitint_large_huge::lower_stmt (gimple *s
> || POINTER_TYPE_P (TREE_TYPE (lhs))))
> {
> final_cast_p = true;
> + if (TREE_CODE (TREE_TYPE (lhs)) == INTEGER_TYPE
> + && TYPE_PRECISION (TREE_TYPE (lhs)) > MAX_FIXED_MODE_SIZE
> + && gimple_assign_rhs_code (stmt) == VIEW_CONVERT_EXPR)
> + {
> + /* Handle VIEW_CONVERT_EXPRs to not generally supported
> + huge INTEGER_TYPEs like uint256_t or uint512_t. These
> + are usually emitted from memcpy folding and backends
> + support moves with them but that is usually it. */
> + if (TREE_CODE (rhs1) == INTEGER_CST)
> + {
> + rhs1 = fold_unary (VIEW_CONVERT_EXPR, TREE_TYPE (lhs),
> + rhs1);
> + gcc_assert (rhs1 && TREE_CODE (rhs1) == INTEGER_CST);
> + gimple_assign_set_rhs1 (stmt, rhs1);
> + gimple_assign_set_rhs_code (stmt, INTEGER_CST);
> + update_stmt (stmt);
> + return;
> + }
> + gcc_assert (TREE_CODE (rhs1) == SSA_NAME);
> + if (SSA_NAME_IS_DEFAULT_DEF (rhs1)
> + && (!SSA_NAME_VAR (rhs1) || VAR_P (SSA_NAME_VAR (rhs1))))
> + {
> + tree var = create_tmp_reg (TREE_TYPE (lhs));
> + rhs1 = get_or_create_ssa_default_def (cfun, var);
> + gimple_assign_set_rhs1 (stmt, rhs1);
> + gimple_assign_set_rhs_code (stmt, SSA_NAME);
> + }
> + else
> + {
> + int part = var_to_partition (m_map, rhs1);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + rhs1 = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (lhs),
> + m_vars[part]);
> + gimple_assign_set_rhs1 (stmt, rhs1);
> + }
> + update_stmt (stmt);
> + return;
> + }
> if (TREE_CODE (rhs1) == SSA_NAME
> && (m_names == NULL
> || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> @@ -6103,7 +6143,13 @@ gimple_lower_bitint (void)
> if (gimple_assign_cast_p (use_stmt))
> {
> tree lhs = gimple_assign_lhs (use_stmt);
> - if (INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
> + if (INTEGRAL_TYPE_P (TREE_TYPE (lhs))
> + /* Don't merge with VIEW_CONVERT_EXPRs to
> + huge INTEGER_TYPEs used sometimes in memcpy
> + expansion. */
> + && (TREE_CODE (TREE_TYPE (lhs)) != INTEGER_TYPE
> + || (TYPE_PRECISION (TREE_TYPE (lhs))
> + <= MAX_FIXED_MODE_SIZE)))
> continue;
> }
> else if (gimple_store_p (use_stmt)
> @@ -6158,6 +6204,18 @@ gimple_lower_bitint (void)
> == gimple_bb (SSA_NAME_DEF_STMT (s))))
> goto force_name;
> break;
> + case VIEW_CONVERT_EXPR:
> + /* Don't merge with VIEW_CONVERT_EXPRs to
> + huge INTEGER_TYPEs used sometimes in memcpy
> + expansion. */
> + {
> + tree lhs = gimple_assign_lhs (use_stmt);
> + if (TREE_CODE (TREE_TYPE (lhs)) == INTEGER_TYPE
> + && (TYPE_PRECISION (TREE_TYPE (lhs))
> + > MAX_FIXED_MODE_SIZE))
> + goto force_name;
> + }
> + break;
> default:
> break;
> }
> --- gcc/expr.cc.jj 2024-01-30 08:45:06.773844050 +0100
> +++ gcc/expr.cc 2024-02-08 13:05:09.228313857 +0100
> @@ -12445,7 +12445,10 @@ expand_expr_real_1 (tree exp, rtx target
> }
> }
> /* If both types are integral, convert from one mode to the other. */
> - else if (INTEGRAL_TYPE_P (type) && INTEGRAL_TYPE_P (TREE_TYPE (treeop0)))
> + else if (INTEGRAL_TYPE_P (type)
> + && INTEGRAL_TYPE_P (TREE_TYPE (treeop0))
> + && mode != BLKmode
> + && GET_MODE (op0) != BLKmode)
> op0 = convert_modes (mode, GET_MODE (op0), op0,
> TYPE_UNSIGNED (TREE_TYPE (treeop0)));
> /* If the output type is a bit-field type, do an extraction. */
> --- gcc/testsuite/gcc.dg/bitint-88.c.jj 2024-02-08 13:12:03.131520889 +0100
> +++ gcc/testsuite/gcc.dg/bitint-88.c 2024-02-08 13:09:16.018859902 +0100
> @@ -0,0 +1,38 @@
> +/* PR tree-optimization/113783 */
> +/* { dg-do compile { target bitint } } */
> +/* { dg-options "-O2" } */
> +/* { dg-additional-options "-mavx512f" { target i?86-*-* x86_64-*-* } } */
> +
> +int i;
> +
> +#if __BITINT_MAXWIDTH__ >= 246
> +void
> +foo (void *p, _BitInt(246) x)
> +{
> + __builtin_memcpy (p, &x, sizeof x);
> +}
> +
> +_BitInt(246)
> +bar (void *p, _BitInt(246) x)
> +{
> + _BitInt(246) y = x + 1;
> + __builtin_memcpy (p, &y, sizeof y);
> + return x;
> +}
> +#endif
> +
> +#if __BITINT_MAXWIDTH__ >= 502
> +void
> +baz (void *p, _BitInt(502) x)
> +{
> + __builtin_memcpy (p, &x, sizeof x);
> +}
> +
> +_BitInt(502)
> +qux (void *p, _BitInt(502) x)
> +{
> + _BitInt(502) y = x + 1;
> + __builtin_memcpy (p, &y, sizeof y);
> + return x;
> +}
> +#endif
>
> Jakub
>
>