[1/5] Middle-end _BitInt support [PR102989]
Hi!
The following patch introduces the middle-end part of the _BitInt
support, a new BITINT_TYPE, handling it where needed and most importantly
the new bitintlower lowering pass which lowers most operations on
medium _BitInt into operations on corresponding integer types,
large _BitInt into straight line code operating on 2 or more limbs and
finally huge _BitInt into a loop plus optional straight line code.
As the only supported architecture is little-endian, the lowering only
supports little-endian for now, because it would be impossible to test it
all for big-endian. The rest is written with support for any endianness in
mind, but of course only little-endian has actually been tested.
I hope it is ok to add big-endian support to the lowering pass incrementally
later, when the first big-endian target with backend support appears.
There are two possibilities for adding such support. The minimal one would be
to tweak the limb_access function and perhaps one or two other spots, and
transform the indexes there from little-endian (index 0 is least significant)
to big-endian just for the memory access. The advantage is, I think,
maintenance cost; the disadvantage is that the loops will still iterate from
0 to some number of limbs, and we'd rely on IVOPTs or something similar
changing that later if needed. Or we could make those indexes
endian-dependent everywhere, though I'm afraid that would mean several
hundred changes.
For switches indexed by large/huge _BitInt the patch invokes what the switch
lowering pass does (but only on those specific switches, not all of them);
the switch lowering breaks the switches into clusters, none of which can
have a range that doesn't fit into a 64-bit UHWI, and everything else is
turned into a tree of comparisons. For the clusters normally emitted as
smaller switches, because we already have a guarantee that the low .. high
range is at most 64 bits, the patch forces subtraction of the low bound and
turns the cluster into a 64-bit switch. This is done before the actual pass
starts.
Similarly, we cancel lowering of certain constructs like ABS_EXPR, ABSU_EXPR,
MIN_EXPR, MAX_EXPR and COND_EXPR and turn those back into simpler comparisons
etc., so that fewer operations need to be lowered later.
There is some -fsanitize=undefined support for _BitInt, but some of the
diagnostics are limited by the lack of proper support in the library.
I've filed https://github.com/llvm/llvm-project/issues/64100 to request
proper support; for now some of the diagnostics might have more or less
confusing or inaccurate wording, but UB should still be diagnosed when it
happens.
2023-07-27 Jakub Jelinek <jakub@redhat.com>
PR c/102989
gcc/
* tree.def (BITINT_TYPE): New type.
* tree.h (TREE_CHECK6, TREE_NOT_CHECK6): Define.
(NUMERICAL_TYPE_CHECK, INTEGRAL_TYPE_P): Include
BITINT_TYPE.
(BITINT_TYPE_P): Define.
(tree_check6, tree_not_check6): New inline functions.
(any_integral_type_check): Include BITINT_TYPE.
(build_bitint_type): Declare.
* tree.cc (tree_code_size, wide_int_to_tree_1, cache_integer_cst,
build_zero_cst, type_hash_canon_hash, type_cache_hasher::equal,
type_hash_canon): Handle BITINT_TYPE.
(bitint_type_cache): New variable.
(build_bitint_type): New function.
(signed_or_unsigned_type_for, verify_type_variant, verify_type):
Handle BITINT_TYPE.
(tree_cc_finalize): Free bitint_type_cache.
* builtins.cc (type_to_class): Handle BITINT_TYPE.
(fold_builtin_unordered_cmp): Handle BITINT_TYPE like INTEGER_TYPE.
* calls.cc (store_one_arg): Handle large/huge BITINT_TYPE INTEGER_CSTs
as call arguments.
* cfgexpand.cc (expand_asm_stmt): Handle large/huge BITINT_TYPE
INTEGER_CSTs as inline asm inputs.
(expand_debug_expr): Punt on BLKmode BITINT_TYPE INTEGER_CSTs.
* config/i386/i386.cc (classify_argument): Handle BITINT_TYPE.
(ix86_bitint_type_info): New function.
(TARGET_C_BITINT_TYPE_INFO): Redefine.
* convert.cc (convert_to_pointer_1, convert_to_real_1,
convert_to_complex_1): Handle BITINT_TYPE like INTEGER_TYPE.
(convert_to_integer_1): Likewise. For BITINT_TYPE don't check
GET_MODE_PRECISION (TYPE_MODE (type)).
* doc/tm.texi.in (TARGET_C_BITINT_TYPE_INFO): New.
* doc/tm.texi: Regenerated.
* dwarf2out.cc (base_type_die, is_base_type, modified_type_die,
gen_type_die_with_usage): Handle BITINT_TYPE.
(rtl_for_decl_init): Punt on BLKmode BITINT_TYPE INTEGER_CSTs or
handle those which fit into shwi.
* expr.cc (expand_expr_real_1): Reduce to bitfield precision reads
from BITINT_TYPE vars, parameters or memory locations.
* fold-const.cc (fold_convert_loc, make_range_step): Handle
BITINT_TYPE.
(extract_muldiv_1): For BITINT_TYPE use TYPE_PRECISION rather than
GET_MODE_SIZE (SCALAR_INT_TYPE_MODE).
(native_encode_int, native_interpret_int, native_interpret_expr):
Handle BITINT_TYPE.
* gimple-expr.cc (useless_type_conversion_p): Make BITINT_TYPE
to some other integral type or vice versa conversions non-useless.
* gimple-fold.cc (gimple_fold_builtin_memset): Punt for BITINT_TYPE.
* gimple-lower-bitint.cc: New file.
* gimple-lower-bitint.h: New file.
* internal-fn.cc (expand_ubsan_result_store): Add LHS, MODE and
DO_ERROR arguments. For non-mode precision BITINT_TYPE results
check if all padding bits up to mode precision are zeros or sign
bit copies and if not, jump to DO_ERROR.
(expand_addsub_overflow, expand_neg_overflow): Adjust
expand_ubsan_result_store callers.
(expand_mul_overflow): Likewise. For unsigned non-mode precision
operands force pos_neg? to 1.
(expand_MULBITINT, expand_DIVMODBITINT, expand_FLOATTOBITINT,
expand_BITINTTOFLOAT): New functions.
* internal-fn.def (MULBITINT, DIVMODBITINT, FLOATTOBITINT,
BITINTTOFLOAT): New internal functions.
* internal-fn.h (expand_MULBITINT, expand_DIVMODBITINT,
expand_FLOATTOBITINT, expand_BITINTTOFLOAT): Declare.
* lto-streamer-in.cc (lto_input_tree_1): Assert TYPE_PRECISION
is up to WIDE_INT_MAX_PRECISION rather than MAX_BITSIZE_MODE_ANY_INT.
* Makefile.in (OBJS): Add gimple-lower-bitint.o.
* match.pd (non-equality compare simplifications from fold_binary):
Punt if TYPE_MODE (arg1_type) is BLKmode.
* passes.def: Add pass_lower_bitint after pass_lower_complex and
pass_lower_bitint_O0 after pass_lower_complex_O0.
* pretty-print.h (pp_wide_int): Handle printing of large precision
wide_ints which would buffer overflow digit_buffer.
* stor-layout.cc (layout_type): Handle BITINT_TYPE. Handle
COMPLEX_TYPE with BLKmode element type and assert it is BITINT_TYPE.
* target.def (bitint_type_info): New C target hook.
* target.h (struct bitint_info): New type.
* targhooks.cc (default_bitint_type_info): New function.
* targhooks.h (default_bitint_type_info): Declare.
* tree-pass.h (PROP_gimple_lbitint): Define.
(make_pass_lower_bitint_O0, make_pass_lower_bitint): Declare.
* tree-pretty-print.cc (dump_generic_node): Handle BITINT_TYPE.
Handle printing large wide_ints which would buffer overflow
digit_buffer.
* tree-ssa-coalesce.cc: Include gimple-lower-bitint.h.
(build_ssa_conflict_graph): Call build_bitint_stmt_ssa_conflicts if
map->bitint.
(create_coalesce_list_for_region): For map->bitint ignore SSA_NAMEs
not in that bitmap, and allow res without default def.
(compute_optimized_partition_bases): In map->bitint mode try hard to
coalesce any SSA_NAMEs with the same size.
(coalesce_bitint): New function.
(coalesce_ssa_name): In map->bitint mode, or map->bitmap into
used_in_copies and call coalesce_bitint.
* tree-ssa-live.cc (init_var_map): Add BITINT argument, initialize
map->bitint and set map->outofssa_p to false if it is non-NULL.
* tree-ssa-live.h (struct _var_map): Add bitint member.
(init_var_map): Adjust declaration.
(region_contains_p): Handle map->bitint like map->outofssa_p.
* tree-ssa-sccvn.cc: Include target.h.
(eliminate_dom_walker::eliminate_stmt): Punt for large/huge
BITINT_TYPE.
* tree-switch-conversion.cc (jump_table_cluster::emit): For more than
64-bit BITINT_TYPE subtract low bound from expression and cast to
64-bit integer type both the controlling expression and case labels.
* typeclass.h (enum type_class): Add bitint_type_class enumerator.
* ubsan.cc: Include target.h and langhooks.h.
(ubsan_encode_value): Pass BITINT_TYPE values which fit into pointer
size converted to pointer sized integer, pass BITINT_TYPE values
which fit into TImode (if supported) or DImode as those integer types
or otherwise for now punt (pass 0).
(ubsan_type_descriptor): Handle BITINT_TYPE. For pstyle of
UBSAN_PRINT_FORCE_INT use TK_Integer (0x0000) mode with a
TImode/DImode precision rather than TK_Unknown used otherwise for
large/huge BITINT_TYPEs.
(instrument_si_overflow): Instrument BITINT_TYPE operations even when
they don't have mode precision.
* ubsan.h (enum ubsan_print_style): New enumerator.
* varasm.cc (output_constant): Handle BITINT_TYPE INTEGER_CSTs.
* vr-values.cc (check_for_binary_op_overflow): Use widest2_int rather
than widest_int.
(simplify_using_ranges::simplify_internal_call_using_ranges): Use
unsigned_type_for rather than build_nonstandard_integer_type.
Jakub
On Thu, 27 Jul 2023, Jakub Jelinek wrote:
> [... patch description and ChangeLog snipped, quoted in full above ...]
> --- gcc/tree.def.jj 2023-07-17 09:07:42.154282849 +0200
> +++ gcc/tree.def 2023-07-27 15:03:24.223234605 +0200
> @@ -113,7 +113,7 @@ DEFTREECODE (BLOCK, "block", tcc_excepti
> /* The ordering of the following codes is optimized for the checking
> macros in tree.h. Changing the order will degrade the speed of the
> compiler. OFFSET_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, INTEGER_TYPE,
> - REAL_TYPE, POINTER_TYPE. */
> + BITINT_TYPE, REAL_TYPE, POINTER_TYPE. */
>
> /* An offset is a pointer relative to an object.
> The TREE_TYPE field is the type of the object at the offset.
> @@ -144,6 +144,9 @@ DEFTREECODE (BOOLEAN_TYPE, "boolean_type
> and TYPE_PRECISION (number of bits used by this type). */
> DEFTREECODE (INTEGER_TYPE, "integer_type", tcc_type, 0)
>
> +/* Bit-precise integer type. */
> +DEFTREECODE (BITINT_TYPE, "bitint_type", tcc_type, 0)
> +
So what was the main reason to not make BITINT_TYPE equal to INTEGER_TYPE?
Maybe note that in the comment as
"While bit-precise integer types share the same properties as
INTEGER_TYPE ..."
?
Note INTEGER_TYPE is documented in generic.texi, but unless I missed
it the changelog above doesn't mention documentation for BITINT_TYPE
being added there.
> /* C's float and double. Different floating types are distinguished
> by machine mode and by the TYPE_SIZE and the TYPE_PRECISION. */
> DEFTREECODE (REAL_TYPE, "real_type", tcc_type, 0)
> --- gcc/tree.h.jj 2023-07-17 09:07:42.155282836 +0200
> +++ gcc/tree.h 2023-07-27 15:03:24.256234145 +0200
> @@ -363,6 +363,14 @@ code_helper::is_builtin_fn () const
> (tree_not_check5 ((T), __FILE__, __LINE__, __FUNCTION__, \
> (CODE1), (CODE2), (CODE3), (CODE4), (CODE5)))
>
> +#define TREE_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) \
> +(tree_check6 ((T), __FILE__, __LINE__, __FUNCTION__, \
> + (CODE1), (CODE2), (CODE3), (CODE4), (CODE5), (CODE6)))
> +
> +#define TREE_NOT_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) \
> +(tree_not_check6 ((T), __FILE__, __LINE__, __FUNCTION__, \
> + (CODE1), (CODE2), (CODE3), (CODE4), (CODE5), (CODE6)))
> +
> #define CONTAINS_STRUCT_CHECK(T, STRUCT) \
> (contains_struct_check ((T), (STRUCT), __FILE__, __LINE__, __FUNCTION__))
>
> @@ -485,6 +493,8 @@ extern void omp_clause_range_check_faile
> #define TREE_NOT_CHECK4(T, CODE1, CODE2, CODE3, CODE4) (T)
> #define TREE_CHECK5(T, CODE1, CODE2, CODE3, CODE4, CODE5) (T)
> #define TREE_NOT_CHECK5(T, CODE1, CODE2, CODE3, CODE4, CODE5) (T)
> +#define TREE_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) (T)
> +#define TREE_NOT_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) (T)
> #define TREE_CLASS_CHECK(T, CODE) (T)
> #define TREE_RANGE_CHECK(T, CODE1, CODE2) (T)
> #define EXPR_CHECK(T) (T)
> @@ -528,8 +538,8 @@ extern void omp_clause_range_check_faile
> TREE_CHECK2 (T, ARRAY_TYPE, INTEGER_TYPE)
>
> #define NUMERICAL_TYPE_CHECK(T) \
> - TREE_CHECK5 (T, INTEGER_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, REAL_TYPE, \
> - FIXED_POINT_TYPE)
> + TREE_CHECK6 (T, INTEGER_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, REAL_TYPE, \
> + FIXED_POINT_TYPE, BITINT_TYPE)
>
> /* Here is how primitive or already-canonicalized types' hash codes
> are made. */
> @@ -603,7 +613,8 @@ extern void omp_clause_range_check_faile
> #define INTEGRAL_TYPE_P(TYPE) \
> (TREE_CODE (TYPE) == ENUMERAL_TYPE \
> || TREE_CODE (TYPE) == BOOLEAN_TYPE \
> - || TREE_CODE (TYPE) == INTEGER_TYPE)
> + || TREE_CODE (TYPE) == INTEGER_TYPE \
> + || TREE_CODE (TYPE) == BITINT_TYPE)
>
> /* Nonzero if TYPE represents an integral type, including complex
> and vector integer types. */
> @@ -614,6 +625,10 @@ extern void omp_clause_range_check_faile
> || VECTOR_TYPE_P (TYPE)) \
> && INTEGRAL_TYPE_P (TREE_TYPE (TYPE))))
>
> +/* Nonzero if TYPE is bit-precise integer type. */
> +
> +#define BITINT_TYPE_P(TYPE) (TREE_CODE (TYPE) == BITINT_TYPE)
> +
> /* Nonzero if TYPE represents a non-saturating fixed-point type. */
>
> #define NON_SAT_FIXED_POINT_TYPE_P(TYPE) \
> @@ -3684,6 +3699,38 @@ tree_not_check5 (tree __t, const char *_
> }
>
> inline tree
> +tree_check6 (tree __t, const char *__f, int __l, const char *__g,
> + enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
> + enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
> +{
> + if (TREE_CODE (__t) != __c1
> + && TREE_CODE (__t) != __c2
> + && TREE_CODE (__t) != __c3
> + && TREE_CODE (__t) != __c4
> + && TREE_CODE (__t) != __c5
> + && TREE_CODE (__t) != __c6)
> + tree_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5, __c6,
> + 0);
> + return __t;
> +}
> +
> +inline tree
> +tree_not_check6 (tree __t, const char *__f, int __l, const char *__g,
> + enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
> + enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
> +{
> + if (TREE_CODE (__t) == __c1
> + || TREE_CODE (__t) == __c2
> + || TREE_CODE (__t) == __c3
> + || TREE_CODE (__t) == __c4
> + || TREE_CODE (__t) == __c5
> + || TREE_CODE (__t) == __c6)
> + tree_not_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5,
> + __c6, 0);
> + return __t;
> +}
> +
> +inline tree
> contains_struct_check (tree __t, const enum tree_node_structure_enum __s,
> const char *__f, int __l, const char *__g)
> {
> @@ -3821,7 +3868,7 @@ any_integral_type_check (tree __t, const
> {
> if (!ANY_INTEGRAL_TYPE_P (__t))
> tree_check_failed (__t, __f, __l, __g, BOOLEAN_TYPE, ENUMERAL_TYPE,
> - INTEGER_TYPE, 0);
> + INTEGER_TYPE, BITINT_TYPE, 0);
> return __t;
> }
>
> @@ -3940,6 +3987,38 @@ tree_not_check5 (const_tree __t, const c
> }
>
> inline const_tree
> +tree_check6 (const_tree __t, const char *__f, int __l, const char *__g,
> + enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
> + enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
> +{
> + if (TREE_CODE (__t) != __c1
> + && TREE_CODE (__t) != __c2
> + && TREE_CODE (__t) != __c3
> + && TREE_CODE (__t) != __c4
> + && TREE_CODE (__t) != __c5
> + && TREE_CODE (__t) != __c6)
> + tree_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5, __c6,
> + 0);
> + return __t;
> +}
> +
> +inline const_tree
> +tree_not_check6 (const_tree __t, const char *__f, int __l, const char *__g,
> + enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
> + enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
> +{
> + if (TREE_CODE (__t) == __c1
> + || TREE_CODE (__t) == __c2
> + || TREE_CODE (__t) == __c3
> + || TREE_CODE (__t) == __c4
> + || TREE_CODE (__t) == __c5
> + || TREE_CODE (__t) == __c6)
> + tree_not_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5,
> + __c6, 0);
> + return __t;
> +}
> +
> +inline const_tree
> contains_struct_check (const_tree __t, const enum tree_node_structure_enum __s,
> const char *__f, int __l, const char *__g)
> {
> @@ -4047,7 +4126,7 @@ any_integral_type_check (const_tree __t,
> {
> if (!ANY_INTEGRAL_TYPE_P (__t))
> tree_check_failed (__t, __f, __l, __g, BOOLEAN_TYPE, ENUMERAL_TYPE,
> - INTEGER_TYPE, 0);
> + INTEGER_TYPE, BITINT_TYPE, 0);
> return __t;
> }
>
> @@ -5579,6 +5658,7 @@ extern void build_common_builtin_nodes (
> extern void tree_cc_finalize (void);
> extern tree build_nonstandard_integer_type (unsigned HOST_WIDE_INT, int);
> extern tree build_nonstandard_boolean_type (unsigned HOST_WIDE_INT);
> +extern tree build_bitint_type (unsigned HOST_WIDE_INT, int);
> extern tree build_range_type (tree, tree, tree);
> extern tree build_nonshared_range_type (tree, tree, tree);
> extern bool subrange_type_for_debug_p (const_tree, tree *, tree *);
> --- gcc/tree.cc.jj 2023-07-17 09:07:42.154282849 +0200
> +++ gcc/tree.cc 2023-07-27 15:03:24.217234689 +0200
> @@ -991,6 +991,7 @@ tree_code_size (enum tree_code code)
> case VOID_TYPE:
> case FUNCTION_TYPE:
> case METHOD_TYPE:
> + case BITINT_TYPE:
> case LANG_TYPE: return sizeof (tree_type_non_common);
> default:
> gcc_checking_assert (code >= NUM_TREE_CODES);
> @@ -1732,6 +1733,7 @@ wide_int_to_tree_1 (tree type, const wid
>
> case INTEGER_TYPE:
> case OFFSET_TYPE:
> + case BITINT_TYPE:
> if (TYPE_SIGN (type) == UNSIGNED)
> {
> /* Cache [0, N). */
> @@ -1915,6 +1917,7 @@ cache_integer_cst (tree t, bool might_du
>
> case INTEGER_TYPE:
> case OFFSET_TYPE:
> + case BITINT_TYPE:
> if (TYPE_UNSIGNED (type))
> {
> /* Cache 0..N */
> @@ -2637,7 +2640,7 @@ build_zero_cst (tree type)
> {
> case INTEGER_TYPE: case ENUMERAL_TYPE: case BOOLEAN_TYPE:
> case POINTER_TYPE: case REFERENCE_TYPE:
> - case OFFSET_TYPE: case NULLPTR_TYPE:
> + case OFFSET_TYPE: case NULLPTR_TYPE: case BITINT_TYPE:
> return build_int_cst (type, 0);
> case REAL_TYPE:
> @@ -6053,7 +6056,16 @@ type_hash_canon_hash (tree type)
> hstate.add_object (TREE_INT_CST_ELT (t, i));
> break;
> }
> -
> +
> + case BITINT_TYPE:
> + {
> + unsigned prec = TYPE_PRECISION (type);
> + unsigned uns = TYPE_UNSIGNED (type);
> + hstate.add_object (prec);
> + hstate.add_int (uns);
> + break;
> + }
> +
> case REAL_TYPE:
> case FIXED_POINT_TYPE:
> {
> @@ -6136,6 +6148,11 @@ type_cache_hasher::equal (type_hash *a,
> || tree_int_cst_equal (TYPE_MIN_VALUE (a->type),
> TYPE_MIN_VALUE (b->type))));
>
> + case BITINT_TYPE:
> + if (TYPE_PRECISION (a->type) != TYPE_PRECISION (b->type))
> + return false;
> + return TYPE_UNSIGNED (a->type) == TYPE_UNSIGNED (b->type);
> +
> case FIXED_POINT_TYPE:
> return TYPE_SATURATING (a->type) == TYPE_SATURATING (b->type);
>
> @@ -6236,7 +6253,7 @@ type_hash_canon (unsigned int hashcode,
> /* Free also min/max values and the cache for integer
> types. This can't be done in free_node, as LTO frees
> those on its own. */
> - if (TREE_CODE (type) == INTEGER_TYPE)
> + if (TREE_CODE (type) == INTEGER_TYPE || TREE_CODE (type) == BITINT_TYPE)
> {
> if (TYPE_MIN_VALUE (type)
> && TREE_TYPE (TYPE_MIN_VALUE (type)) == type)
> @@ -7154,6 +7171,44 @@ build_nonstandard_boolean_type (unsigned
> return type;
> }
>
> +static GTY(()) vec<tree, va_gc> *bitint_type_cache;
> +
> +/* Builds a signed or unsigned _BitInt(PRECISION) type. */
> +tree
> +build_bitint_type (unsigned HOST_WIDE_INT precision, int unsignedp)
> +{
> + tree itype, ret;
> +
> + if (unsignedp)
> + unsignedp = MAX_INT_CACHED_PREC + 1;
> +
> + if (bitint_type_cache == NULL)
> + vec_safe_grow_cleared (bitint_type_cache, 2 * MAX_INT_CACHED_PREC + 2);
> +
> + if (precision <= MAX_INT_CACHED_PREC)
> + {
> + itype = (*bitint_type_cache)[precision + unsignedp];
> + if (itype)
> + return itype;
I think we added this kind of cache for standard INTEGER_TYPE because
the middle-end builds those all over the place and going through
the type_hash is expensive. Is that true for _BitInt as well? If
not, it doesn't seem worth the extra caching.
In fact, I wonder whether the middle-end does/should treat
_BitInt<N> and an INTEGER_TYPE with precision N any differently?
Aka, should we build an INTEGER_TYPE whenever N is, say, less than
the number of bits in word_mode?
> + }
> +
> + itype = make_node (BITINT_TYPE);
> + TYPE_PRECISION (itype) = precision;
> +
> + if (unsignedp)
> + fixup_unsigned_type (itype);
> + else
> + fixup_signed_type (itype);
> +
> + inchash::hash hstate;
> + inchash::add_expr (TYPE_MAX_VALUE (itype), hstate);
> + ret = type_hash_canon (hstate.end (), itype);
> + if (precision <= MAX_INT_CACHED_PREC)
> + (*bitint_type_cache)[precision + unsignedp] = ret;
> +
> + return ret;
> +}
> +
> /* Create a range of some discrete type TYPE (an INTEGER_TYPE, ENUMERAL_TYPE
> or BOOLEAN_TYPE) with low bound LOWVAL and high bound HIGHVAL. If SHARED
> is true, reuse such a type that has already been constructed. */
> @@ -11041,6 +11096,8 @@ signed_or_unsigned_type_for (int unsigne
> else
> return NULL_TREE;
>
> + if (TREE_CODE (type) == BITINT_TYPE)
> + return build_bitint_type (bits, unsignedp);
> return build_nonstandard_integer_type (bits, unsignedp);
> }
>
> @@ -13462,6 +13519,7 @@ verify_type_variant (const_tree t, tree
> if ((TREE_CODE (t) == ENUMERAL_TYPE && COMPLETE_TYPE_P (t))
> || TREE_CODE (t) == INTEGER_TYPE
> || TREE_CODE (t) == BOOLEAN_TYPE
> + || TREE_CODE (t) == BITINT_TYPE
> || SCALAR_FLOAT_TYPE_P (t)
> || FIXED_POINT_TYPE_P (t))
> {
> @@ -14201,6 +14259,7 @@ verify_type (const_tree t)
> }
> else if (TREE_CODE (t) == INTEGER_TYPE
> || TREE_CODE (t) == BOOLEAN_TYPE
> + || TREE_CODE (t) == BITINT_TYPE
> || TREE_CODE (t) == OFFSET_TYPE
> || TREE_CODE (t) == REFERENCE_TYPE
> || TREE_CODE (t) == NULLPTR_TYPE
> @@ -14260,6 +14319,7 @@ verify_type (const_tree t)
> }
> if (TREE_CODE (t) != INTEGER_TYPE
> && TREE_CODE (t) != BOOLEAN_TYPE
> + && TREE_CODE (t) != BITINT_TYPE
> && TREE_CODE (t) != OFFSET_TYPE
> && TREE_CODE (t) != REFERENCE_TYPE
> && TREE_CODE (t) != NULLPTR_TYPE
> @@ -15035,6 +15095,7 @@ void
> tree_cc_finalize (void)
> {
> clear_nonstandard_integer_type_cache ();
> + vec_free (bitint_type_cache);
> }
>
> #if CHECKING_P
> --- gcc/builtins.cc.jj 2023-07-24 17:48:26.432041329 +0200
> +++ gcc/builtins.cc 2023-07-27 15:03:24.222234619 +0200
> @@ -1876,6 +1876,7 @@ type_to_class (tree type)
> ? string_type_class : array_type_class);
> case LANG_TYPE: return lang_type_class;
> case OPAQUE_TYPE: return opaque_type_class;
> + case BITINT_TYPE: return bitint_type_class;
> default: return no_type_class;
> }
> }
> @@ -9423,9 +9424,11 @@ fold_builtin_unordered_cmp (location_t l
> /* Choose the wider of two real types. */
> cmp_type = TYPE_PRECISION (type0) >= TYPE_PRECISION (type1)
> ? type0 : type1;
> - else if (code0 == REAL_TYPE && code1 == INTEGER_TYPE)
> + else if (code0 == REAL_TYPE
> + && (code1 == INTEGER_TYPE || code1 == BITINT_TYPE))
> cmp_type = type0;
> - else if (code0 == INTEGER_TYPE && code1 == REAL_TYPE)
> + else if ((code0 == INTEGER_TYPE || code0 == BITINT_TYPE)
> + && code1 == REAL_TYPE)
> cmp_type = type1;
>
> arg0 = fold_convert_loc (loc, cmp_type, arg0);
> --- gcc/calls.cc.jj 2023-06-20 20:17:01.706613302 +0200
> +++ gcc/calls.cc 2023-07-27 15:03:24.240234368 +0200
> @@ -5016,6 +5016,24 @@ store_one_arg (struct arg_data *arg, rtx
> if (arg->pass_on_stack)
> stack_arg_under_construction++;
>
> + if (TREE_CODE (pval) == INTEGER_CST
> + && TREE_CODE (TREE_TYPE (pval)) == BITINT_TYPE)
> + {
> + unsigned int prec = TYPE_PRECISION (TREE_TYPE (pval));
> + struct bitint_info info;
> + gcc_assert (targetm.c.bitint_type_info (prec, &info));
> + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> + unsigned int limb_prec = GET_MODE_PRECISION (limb_mode);
> + if (prec > limb_prec)
> + {
> + scalar_int_mode arith_mode
> + = (targetm.scalar_mode_supported_p (TImode)
> + ? TImode : DImode);
> + if (prec > GET_MODE_PRECISION (arith_mode))
> + pval = tree_output_constant_def (pval);
> + }
A comment would be helpful here to explain why _BitInt constants wider than the arithmetic mode are forced out to memory.
> + }
> +
> arg->value = expand_expr (pval,
> (partial
> || TYPE_MODE (TREE_TYPE (pval)) != arg->mode)
> --- gcc/cfgexpand.cc.jj 2023-06-06 20:02:35.588211832 +0200
> +++ gcc/cfgexpand.cc 2023-07-27 15:03:24.262234061 +0200
> @@ -3096,6 +3096,15 @@ expand_asm_stmt (gasm *stmt)
> {
> tree t = gimple_asm_input_op (stmt, i);
> input_tvec[i] = TREE_VALUE (t);
> + if (TREE_CODE (input_tvec[i]) == INTEGER_CST
> + && TREE_CODE (TREE_TYPE (input_tvec[i])) == BITINT_TYPE)
> + {
> + scalar_int_mode arith_mode
> + = (targetm.scalar_mode_supported_p (TImode) ? TImode : DImode);
> + if (TYPE_PRECISION (TREE_TYPE (input_tvec[i]))
> + > GET_MODE_PRECISION (arith_mode))
> + input_tvec[i] = tree_output_constant_def (input_tvec[i]);
> + }
> constraints[i + noutputs]
> = TREE_STRING_POINTER (TREE_VALUE (TREE_PURPOSE (t)));
> }
> @@ -4524,6 +4533,10 @@ expand_debug_expr (tree exp)
> /* Fall through. */
>
> case INTEGER_CST:
> + if (TREE_CODE (TREE_TYPE (exp)) == BITINT_TYPE
> + && TYPE_MODE (TREE_TYPE (exp)) == BLKmode)
> + return NULL;
> + /* FALLTHRU */
> case REAL_CST:
> case FIXED_CST:
> op0 = expand_expr (exp, NULL_RTX, mode, EXPAND_INITIALIZER);
> --- gcc/config/i386/i386.cc.jj 2023-07-19 10:01:17.380467993 +0200
> +++ gcc/config/i386/i386.cc 2023-07-27 15:03:24.230234508 +0200
> @@ -2121,7 +2121,8 @@ classify_argument (machine_mode mode, co
> return 0;
> }
splitting out the target (i386) support into a separate patch might be helpful
> - if (type && AGGREGATE_TYPE_P (type))
> + if (type && (AGGREGATE_TYPE_P (type)
> + || (TREE_CODE (type) == BITINT_TYPE && words > 1)))
> {
> int i;
> tree field;
> @@ -2270,6 +2271,14 @@ classify_argument (machine_mode mode, co
> }
> break;
>
> + case BITINT_TYPE:
> + /* _BitInt(N) for N > 64 is passed as structure containing
> + (N + 63) / 64 64-bit elements. */
> + if (words > 2)
> + return 0;
> + classes[0] = classes[1] = X86_64_INTEGER_CLASS;
> + return 2;
> +
> default:
> gcc_unreachable ();
> }
> @@ -24799,6 +24808,26 @@ ix86_get_excess_precision (enum excess_p
> return FLT_EVAL_METHOD_UNPREDICTABLE;
> }
>
> +/* Return true if _BitInt(N) is supported and fill details about it into
> + *INFO. */
> +bool
> +ix86_bitint_type_info (int n, struct bitint_info *info)
> +{
> + if (!TARGET_64BIT)
> + return false;
> + if (n <= 8)
> + info->limb_mode = QImode;
> + else if (n <= 16)
> + info->limb_mode = HImode;
> + else if (n <= 32)
> + info->limb_mode = SImode;
> + else
> + info->limb_mode = DImode;
> + info->big_endian = false;
> + info->extended = false;
> + return true;
> +}
> +
> /* Implement PUSH_ROUNDING. On 386, we have pushw instruction that
> decrements by exactly 2 no matter what the position was, there is no pushb.
>
> @@ -25403,6 +25432,8 @@ ix86_run_selftests (void)
>
> #undef TARGET_C_EXCESS_PRECISION
> #define TARGET_C_EXCESS_PRECISION ix86_get_excess_precision
> +#undef TARGET_C_BITINT_TYPE_INFO
> +#define TARGET_C_BITINT_TYPE_INFO ix86_bitint_type_info
> #undef TARGET_PROMOTE_PROTOTYPES
> #define TARGET_PROMOTE_PROTOTYPES hook_bool_const_tree_true
> #undef TARGET_PUSH_ARGUMENT
> --- gcc/convert.cc.jj 2023-01-04 23:12:56.937574700 +0100
> +++ gcc/convert.cc 2023-07-27 15:03:24.258234117 +0200
> @@ -77,6 +77,7 @@ convert_to_pointer_1 (tree type, tree ex
> case INTEGER_TYPE:
> case ENUMERAL_TYPE:
> case BOOLEAN_TYPE:
> + case BITINT_TYPE:
> {
> /* If the input precision differs from the target pointer type
> precision, first convert the input expression to an integer type of
> @@ -316,6 +317,7 @@ convert_to_real_1 (tree type, tree expr,
> case INTEGER_TYPE:
> case ENUMERAL_TYPE:
> case BOOLEAN_TYPE:
> + case BITINT_TYPE:
> return build1 (FLOAT_EXPR, type, expr);
>
> case FIXED_POINT_TYPE:
> @@ -660,6 +662,7 @@ convert_to_integer_1 (tree type, tree ex
> case ENUMERAL_TYPE:
> case BOOLEAN_TYPE:
> case OFFSET_TYPE:
> + case BITINT_TYPE:
> /* If this is a logical operation, which just returns 0 or 1, we can
> change the type of the expression. */
>
> @@ -701,7 +704,9 @@ convert_to_integer_1 (tree type, tree ex
> type corresponding to its mode, then do a nop conversion
> to TYPE. */
> else if (TREE_CODE (type) == ENUMERAL_TYPE
> - || maybe_ne (outprec, GET_MODE_PRECISION (TYPE_MODE (type))))
> + || (TREE_CODE (type) != BITINT_TYPE
> + && maybe_ne (outprec,
> + GET_MODE_PRECISION (TYPE_MODE (type)))))
> {
> expr
> = convert_to_integer_1 (lang_hooks.types.type_for_mode
> @@ -1000,6 +1005,7 @@ convert_to_complex_1 (tree type, tree ex
> case INTEGER_TYPE:
> case ENUMERAL_TYPE:
> case BOOLEAN_TYPE:
> + case BITINT_TYPE:
> return build2 (COMPLEX_EXPR, type, convert (subtype, expr),
> convert (subtype, integer_zero_node));
>
> --- gcc/doc/tm.texi.in.jj 2023-05-30 17:52:34.476857273 +0200
> +++ gcc/doc/tm.texi.in 2023-07-27 15:03:24.286233725 +0200
> @@ -936,6 +936,8 @@ Return a value, with the same meaning as
> @code{FLT_EVAL_METHOD} that describes which excess precision should be
> applied.
>
> +@hook TARGET_C_BITINT_TYPE_INFO
> +
> @hook TARGET_PROMOTE_FUNCTION_MODE
>
> @defmac PARM_BOUNDARY
> --- gcc/doc/tm.texi.jj 2023-05-30 17:52:34.474857301 +0200
> +++ gcc/doc/tm.texi 2023-07-27 15:03:24.284233753 +0200
> @@ -1020,6 +1020,11 @@ Return a value, with the same meaning as
> @code{FLT_EVAL_METHOD} that describes which excess precision should be
> applied.
>
> +@deftypefn {Target Hook} bool TARGET_C_BITINT_TYPE_INFO (int @var{n}, struct bitint_info *@var{info})
> +This target hook returns true if _BitInt(N) is supported and provides some
> +details on it.
> +@end deftypefn
> +
please document here what "details" the hook provides (the bitint_info fields)?
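For reference, a standalone sketch of what the hook's contract could look like, mirroring the fields the patch assigns in ix86_bitint_type_info (limb_mode, big_endian, extended); the mock_ names and the numeric mode enum are hypothetical stand-ins for GCC's machine modes, not the real API:

```cpp
#include <cassert>

// Hypothetical stand-ins for GCC's scalar_int_mode values.
enum mock_mode { QImode = 8, HImode = 16, SImode = 32, DImode = 64 };

// Sketch of the information the hook fills in, mirroring the fields the
// patch assigns.
struct mock_bitint_info {
  mock_mode limb_mode;   // mode of one limb of a multi-limb _BitInt
  bool big_endian;       // are limbs ordered most significant first?
  bool extended;         // are bits above N defined (zero/sign extended)?
};

// Rough analogue of ix86_bitint_type_info: pick the smallest limb mode
// that holds N bits, falling back to DImode limbs for larger N.
static bool
mock_bitint_type_info (int n, mock_bitint_info *info)
{
  if (n < 1)
    return false;
  if (n <= 8)
    info->limb_mode = QImode;
  else if (n <= 16)
    info->limb_mode = HImode;
  else if (n <= 32)
    info->limb_mode = SImode;
  else
    info->limb_mode = DImode;
  info->big_endian = false;
  info->extended = false;
  return true;
}
```

Documenting each of these three fields in tm.texi would answer the question above.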
> @deftypefn {Target Hook} machine_mode TARGET_PROMOTE_FUNCTION_MODE (const_tree @var{type}, machine_mode @var{mode}, int *@var{punsignedp}, const_tree @var{funtype}, int @var{for_return})
> Like @code{PROMOTE_MODE}, but it is applied to outgoing function arguments or
> function return values. The target hook should return the new mode
> --- gcc/dwarf2out.cc.jj 2023-07-19 10:01:17.402467687 +0200
> +++ gcc/dwarf2out.cc 2023-07-27 15:04:07.726625658 +0200
> @@ -13298,6 +13298,14 @@ base_type_die (tree type, bool reverse)
> encoding = DW_ATE_boolean;
> break;
>
> + case BITINT_TYPE:
> + /* C23 _BitInt(N). */
> + if (TYPE_UNSIGNED (type))
> + encoding = DW_ATE_unsigned;
> + else
> + encoding = DW_ATE_signed;
> + break;
> +
> default:
> /* No other TREE_CODEs are Dwarf fundamental types. */
> gcc_unreachable ();
> @@ -13308,6 +13316,8 @@ base_type_die (tree type, bool reverse)
> add_AT_unsigned (base_type_result, DW_AT_byte_size,
> int_size_in_bytes (type));
> add_AT_unsigned (base_type_result, DW_AT_encoding, encoding);
> + if (TREE_CODE (type) == BITINT_TYPE)
> + add_AT_unsigned (base_type_result, DW_AT_bit_size, TYPE_PRECISION (type));
>
> if (need_endianity_attribute_p (reverse))
> add_AT_unsigned (base_type_result, DW_AT_endianity,
> @@ -13392,6 +13402,7 @@ is_base_type (tree type)
> case FIXED_POINT_TYPE:
> case COMPLEX_TYPE:
> case BOOLEAN_TYPE:
> + case BITINT_TYPE:
> return true;
>
> case VOID_TYPE:
> @@ -13990,12 +14001,24 @@ modified_type_die (tree type, int cv_qua
> name = DECL_NAME (name);
> add_name_attribute (mod_type_die, IDENTIFIER_POINTER (name));
> }
> - /* This probably indicates a bug. */
> else if (mod_type_die && mod_type_die->die_tag == DW_TAG_base_type)
> {
> - name = TYPE_IDENTIFIER (type);
> - add_name_attribute (mod_type_die,
> - name ? IDENTIFIER_POINTER (name) : "__unknown__");
> + if (TREE_CODE (type) == BITINT_TYPE)
> + {
> + char name_buf[sizeof ("unsigned _BitInt(2147483647)")];
> + snprintf (name_buf, sizeof (name_buf),
> + "%s_BitInt(%d)", TYPE_UNSIGNED (type) ? "unsigned " : "",
> + TYPE_PRECISION (type));
> + add_name_attribute (mod_type_die, name_buf);
> + }
> + else
> + {
> + /* This probably indicates a bug. */
> + name = TYPE_IDENTIFIER (type);
> + add_name_attribute (mod_type_die,
> + name
> + ? IDENTIFIER_POINTER (name) : "__unknown__");
> + }
> }
>
> if (qualified_type && !reverse_base_type)
> @@ -20523,6 +20546,22 @@ rtl_for_decl_init (tree init, tree type)
> return NULL;
> }
>
> + /* RTL can't deal with BLKmode INTEGER_CSTs. */
> + if (TREE_CODE (init) == INTEGER_CST
> + && TREE_CODE (TREE_TYPE (init)) == BITINT_TYPE
> + && TYPE_MODE (TREE_TYPE (init)) == BLKmode)
> + {
> + if (tree_fits_shwi_p (init))
> + {
> + bool uns = TYPE_UNSIGNED (TREE_TYPE (init));
> + tree type
> + = build_nonstandard_integer_type (HOST_BITS_PER_WIDE_INT, uns);
> + init = fold_convert (type, init);
> + }
> + else
> + return NULL;
> + }
> +
it feels like we should avoid the above and fix expand_expr instead.
The assert immediately following seems to "support" a NULL_RTX return
value, so the above trick should work there too, and we could possibly
avoid creating a new INTEGER_TYPE and INTEGER_CST. Another option
would be to use immed_wide_int_const, or to build a VOIDmode CONST_INT
directly here.
> rtl = expand_expr (init, NULL_RTX, VOIDmode, EXPAND_INITIALIZER);
>
> /* If expand_expr returns a MEM, it wasn't immediate. */
> @@ -26361,6 +26400,7 @@ gen_type_die_with_usage (tree type, dw_d
> case FIXED_POINT_TYPE:
> case COMPLEX_TYPE:
> case BOOLEAN_TYPE:
> + case BITINT_TYPE:
> /* No DIEs needed for fundamental types. */
> break;
>
> --- gcc/expr.cc.jj 2023-07-02 12:07:08.455164393 +0200
> +++ gcc/expr.cc 2023-07-27 15:03:24.253234187 +0200
> @@ -10828,6 +10828,8 @@ expand_expr_real_1 (tree exp, rtx target
> ssa_name = exp;
> decl_rtl = get_rtx_for_ssa_name (ssa_name);
> exp = SSA_NAME_VAR (ssa_name);
> + if (!exp || VAR_P (exp))
> + reduce_bit_field = false;
That needs an explanation. Can we do this and the related changes
as a prerequisite instead?
> goto expand_decl_rtl;
>
> case VAR_DECL:
> @@ -10961,6 +10963,13 @@ expand_expr_real_1 (tree exp, rtx target
> temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
> MEM_ALIGN (temp), NULL_RTX, NULL);
>
> + if (TREE_CODE (type) == BITINT_TYPE
> + && reduce_bit_field
> + && mode != BLKmode
> + && modifier != EXPAND_MEMORY
> + && modifier != EXPAND_WRITE
> + && modifier != EXPAND_CONST_ADDRESS)
> + return reduce_to_bit_field_precision (temp, NULL_RTX, type);
I wonder how much work it would be to "lower" 'reduce_bit_field' earlier
on GIMPLE...
> return temp;
> }
>
> @@ -11007,9 +11016,23 @@ expand_expr_real_1 (tree exp, rtx target
> temp = gen_lowpart_SUBREG (mode, decl_rtl);
> SUBREG_PROMOTED_VAR_P (temp) = 1;
> SUBREG_PROMOTED_SET (temp, unsignedp);
> + if (TREE_CODE (type) == BITINT_TYPE
> + && reduce_bit_field
> + && mode != BLKmode
> + && modifier != EXPAND_MEMORY
> + && modifier != EXPAND_WRITE
> + && modifier != EXPAND_CONST_ADDRESS)
> + return reduce_to_bit_field_precision (temp, NULL_RTX, type);
> return temp;
> }
>
> + if (TREE_CODE (type) == BITINT_TYPE
> + && reduce_bit_field
> + && mode != BLKmode
> + && modifier != EXPAND_MEMORY
> + && modifier != EXPAND_WRITE
> + && modifier != EXPAND_CONST_ADDRESS)
> + return reduce_to_bit_field_precision (decl_rtl, NULL_RTX, type);
> return decl_rtl;
>
> case INTEGER_CST:
> @@ -11192,6 +11215,13 @@ expand_expr_real_1 (tree exp, rtx target
> && align < GET_MODE_ALIGNMENT (mode))
> temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
> align, NULL_RTX, NULL);
> + if (TREE_CODE (type) == BITINT_TYPE
> + && reduce_bit_field
> + && mode != BLKmode
> + && modifier != EXPAND_WRITE
> + && modifier != EXPAND_MEMORY
> + && modifier != EXPAND_CONST_ADDRESS)
> + return reduce_to_bit_field_precision (temp, NULL_RTX, type);
so this is quite repetitive; I suppose the checks ensure we apply the
reduction to rvalues only, but I don't really get why we only reduce
BITINT_TYPE, especially as we are not considering BLKmode here?
> return temp;
> }
>
> @@ -11253,18 +11283,21 @@ expand_expr_real_1 (tree exp, rtx target
> set_mem_addr_space (temp, as);
> if (TREE_THIS_VOLATILE (exp))
> MEM_VOLATILE_P (temp) = 1;
> - if (modifier != EXPAND_WRITE
> - && modifier != EXPAND_MEMORY
> - && !inner_reference_p
> + if (modifier == EXPAND_WRITE || modifier == EXPAND_MEMORY)
> + return temp;
> + if (!inner_reference_p
> && mode != BLKmode
> && align < GET_MODE_ALIGNMENT (mode))
> temp = expand_misaligned_mem_ref (temp, mode, unsignedp, align,
> modifier == EXPAND_STACK_PARM
> ? NULL_RTX : target, alt_rtl);
> - if (reverse
> - && modifier != EXPAND_MEMORY
> - && modifier != EXPAND_WRITE)
> + if (reverse)
the above two changes look like a useful prerequisite, OK to push separately.
> temp = flip_storage_order (mode, temp);
> + if (TREE_CODE (type) == BITINT_TYPE
> + && reduce_bit_field
> + && mode != BLKmode
> + && modifier != EXPAND_CONST_ADDRESS)
> + return reduce_to_bit_field_precision (temp, NULL_RTX, type);
> return temp;
> }
>
> @@ -11817,6 +11850,14 @@ expand_expr_real_1 (tree exp, rtx target
> && modifier != EXPAND_WRITE)
> op0 = flip_storage_order (mode1, op0);
>
> + if (TREE_CODE (type) == BITINT_TYPE
> + && reduce_bit_field
> + && mode != BLKmode
> + && modifier != EXPAND_MEMORY
> + && modifier != EXPAND_WRITE
> + && modifier != EXPAND_CONST_ADDRESS)
> + op0 = reduce_to_bit_field_precision (op0, NULL_RTX, type);
> +
> if (mode == mode1 || mode1 == BLKmode || mode1 == tmode
> || modifier == EXPAND_CONST_ADDRESS
> || modifier == EXPAND_INITIALIZER)
> --- gcc/fold-const.cc.jj 2023-07-19 10:01:17.404467659 +0200
> +++ gcc/fold-const.cc 2023-07-27 15:03:24.294233613 +0200
> @@ -2557,7 +2557,7 @@ fold_convert_loc (location_t loc, tree t
> /* fall through */
>
> case INTEGER_TYPE: case ENUMERAL_TYPE: case BOOLEAN_TYPE:
> - case OFFSET_TYPE:
> + case OFFSET_TYPE: case BITINT_TYPE:
> if (TREE_CODE (arg) == INTEGER_CST)
> {
> tem = fold_convert_const (NOP_EXPR, type, arg);
> @@ -2597,7 +2597,7 @@ fold_convert_loc (location_t loc, tree t
>
> switch (TREE_CODE (orig))
> {
> - case INTEGER_TYPE:
> + case INTEGER_TYPE: case BITINT_TYPE:
> case BOOLEAN_TYPE: case ENUMERAL_TYPE:
> case POINTER_TYPE: case REFERENCE_TYPE:
> return fold_build1_loc (loc, FLOAT_EXPR, type, arg);
> @@ -2632,6 +2632,7 @@ fold_convert_loc (location_t loc, tree t
> case ENUMERAL_TYPE:
> case BOOLEAN_TYPE:
> case REAL_TYPE:
> + case BITINT_TYPE:
> return fold_build1_loc (loc, FIXED_CONVERT_EXPR, type, arg);
>
> case COMPLEX_TYPE:
> @@ -2645,7 +2646,7 @@ fold_convert_loc (location_t loc, tree t
> case COMPLEX_TYPE:
> switch (TREE_CODE (orig))
> {
> - case INTEGER_TYPE:
> + case INTEGER_TYPE: case BITINT_TYPE:
> case BOOLEAN_TYPE: case ENUMERAL_TYPE:
> case POINTER_TYPE: case REFERENCE_TYPE:
> case REAL_TYPE:
> @@ -5324,6 +5325,8 @@ make_range_step (location_t loc, enum tr
> equiv_type
> = lang_hooks.types.type_for_mode (TYPE_MODE (arg0_type),
> TYPE_SATURATING (arg0_type));
> + else if (TREE_CODE (arg0_type) == BITINT_TYPE)
> + equiv_type = arg0_type;
> else
> equiv_type
> = lang_hooks.types.type_for_mode (TYPE_MODE (arg0_type), 1);
> @@ -6850,10 +6853,19 @@ extract_muldiv_1 (tree t, tree c, enum t
> {
> tree type = TREE_TYPE (t);
> enum tree_code tcode = TREE_CODE (t);
> - tree ctype = (wide_type != 0
> - && (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (wide_type))
> - > GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type)))
> - ? wide_type : type);
> + tree ctype = type;
> + if (wide_type)
> + {
> + if (TREE_CODE (type) == BITINT_TYPE
> + || TREE_CODE (wide_type) == BITINT_TYPE)
> + {
> + if (TYPE_PRECISION (wide_type) > TYPE_PRECISION (type))
> + ctype = wide_type;
> + }
> + else if (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (wide_type))
> + > GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type)))
> + ctype = wide_type;
> + }
> tree t1, t2;
> bool same_p = tcode == code;
> tree op0 = NULL_TREE, op1 = NULL_TREE;
> @@ -7714,7 +7726,29 @@ static int
> native_encode_int (const_tree expr, unsigned char *ptr, int len, int off)
> {
> tree type = TREE_TYPE (expr);
> - int total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
> + int total_bytes;
> + if (TREE_CODE (type) == BITINT_TYPE)
> + {
> + struct bitint_info info;
> + gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
> + &info));
> + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> + if (TYPE_PRECISION (type) > GET_MODE_PRECISION (limb_mode))
> + {
> + total_bytes = tree_to_uhwi (TYPE_SIZE_UNIT (type));
> + /* More work is needed when adding _BitInt support to PDP endian
> + if limb is smaller than word, or if _BitInt limb ordering doesn't
> + match target endianity here. */
> + gcc_checking_assert (info.big_endian == WORDS_BIG_ENDIAN
> + && (BYTES_BIG_ENDIAN == WORDS_BIG_ENDIAN
> + || (GET_MODE_SIZE (limb_mode)
> + >= UNITS_PER_WORD)));
> + }
> + else
> + total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
> + }
> + else
> + total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
> int byte, offset, word, words;
> unsigned char value;
>
> @@ -8622,7 +8656,29 @@ native_encode_initializer (tree init, un
> static tree
> native_interpret_int (tree type, const unsigned char *ptr, int len)
> {
> - int total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
> + int total_bytes;
> + if (TREE_CODE (type) == BITINT_TYPE)
> + {
> + struct bitint_info info;
> + gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
> + &info));
> + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> + if (TYPE_PRECISION (type) > GET_MODE_PRECISION (limb_mode))
> + {
> + total_bytes = tree_to_uhwi (TYPE_SIZE_UNIT (type));
> + /* More work is needed when adding _BitInt support to PDP endian
> + if limb is smaller than word, or if _BitInt limb ordering doesn't
> + match target endianity here. */
> + gcc_checking_assert (info.big_endian == WORDS_BIG_ENDIAN
> + && (BYTES_BIG_ENDIAN == WORDS_BIG_ENDIAN
> + || (GET_MODE_SIZE (limb_mode)
> + >= UNITS_PER_WORD)));
> + }
> + else
> + total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
> + }
> + else
> + total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
>
> if (total_bytes > len
> || total_bytes * BITS_PER_UNIT > HOST_BITS_PER_DOUBLE_INT)
> @@ -8824,6 +8880,7 @@ native_interpret_expr (tree type, const
> case POINTER_TYPE:
> case REFERENCE_TYPE:
> case OFFSET_TYPE:
> + case BITINT_TYPE:
> return native_interpret_int (type, ptr, len);
>
> case REAL_TYPE:
> --- gcc/gimple-expr.cc.jj 2023-05-20 15:31:09.197661517 +0200
> +++ gcc/gimple-expr.cc 2023-07-27 15:03:24.219234661 +0200
> @@ -111,6 +111,15 @@ useless_type_conversion_p (tree outer_ty
> && TYPE_PRECISION (outer_type) != 1)
> return false;
>
> + /* Preserve conversions to/from BITINT_TYPE. While we don't
> + need to care that much about such conversions within a function's
> + body, we need to prevent changing BITINT_TYPE to INTEGER_TYPE
> + of the same precision or vice versa when passed to functions,
> + especially for varargs. */
> + if ((TREE_CODE (inner_type) == BITINT_TYPE)
> + != (TREE_CODE (outer_type) == BITINT_TYPE))
> + return false;
> +
> /* We don't need to preserve changes in the types minimum or
> maximum value in general as these do not generate code
> unless the types precisions are different. */
> --- gcc/gimple-fold.cc.jj 2023-07-24 17:48:26.491040563 +0200
> +++ gcc/gimple-fold.cc 2023-07-27 15:03:24.257234131 +0200
> @@ -1475,8 +1475,9 @@ gimple_fold_builtin_memset (gimple_stmt_
> if (TREE_CODE (etype) == ARRAY_TYPE)
> etype = TREE_TYPE (etype);
>
> - if (!INTEGRAL_TYPE_P (etype)
> - && !POINTER_TYPE_P (etype))
> + if ((!INTEGRAL_TYPE_P (etype)
> + && !POINTER_TYPE_P (etype))
> + || TREE_CODE (etype) == BITINT_TYPE)
> return NULL_TREE;
>
> if (! var_decl_component_p (var))
> --- gcc/gimple-lower-bitint.cc.jj 2023-07-27 15:03:24.299233543 +0200
> +++ gcc/gimple-lower-bitint.cc 2023-07-27 15:30:57.839090959 +0200
> @@ -0,0 +1,5495 @@
> +/* Lower _BitInt(N) operations to scalar operations.
> + Copyright (C) 2023 Free Software Foundation, Inc.
> + Contributed by Jakub Jelinek <jakub@redhat.com>.
> +
> +This file is part of GCC.
> +
> +GCC is free software; you can redistribute it and/or modify it
> +under the terms of the GNU General Public License as published by the
> +Free Software Foundation; either version 3, or (at your option) any
> +later version.
> +
> +GCC is distributed in the hope that it will be useful, but WITHOUT
> +ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
> +for more details.
> +
> +You should have received a copy of the GNU General Public License
> +along with GCC; see the file COPYING3. If not see
> +<http://www.gnu.org/licenses/>. */
> +
> +#include "config.h"
> +#include "system.h"
> +#include "coretypes.h"
> +#include "backend.h"
> +#include "rtl.h"
> +#include "tree.h"
> +#include "gimple.h"
> +#include "cfghooks.h"
> +#include "tree-pass.h"
> +#include "ssa.h"
> +#include "fold-const.h"
> +#include "gimplify.h"
> +#include "gimple-iterator.h"
> +#include "tree-cfg.h"
> +#include "tree-dfa.h"
> +#include "cfgloop.h"
> +#include "cfganal.h"
> +#include "target.h"
> +#include "tree-ssa-live.h"
> +#include "tree-ssa-coalesce.h"
> +#include "domwalk.h"
> +#include "memmodel.h"
> +#include "optabs.h"
> +#include "varasm.h"
> +#include "gimple-range.h"
> +#include "value-range.h"
> +#include "langhooks.h"
> +#include "gimplify-me.h"
> +#include "diagnostic-core.h"
> +#include "tree-eh.h"
> +#include "tree-pretty-print.h"
> +#include "alloc-pool.h"
> +#include "tree-into-ssa.h"
> +#include "tree-cfgcleanup.h"
> +#include "tree-switch-conversion.h"
> +#include "ubsan.h"
> +#include "gimple-lower-bitint.h"
> +
> +/* Split BITINT_TYPE precisions in 4 categories. Small _BitInt, where
> + target hook says it is a single limb, middle _BitInt which per ABI
> + does not, but there is some INTEGER_TYPE in which arithmetics can be
> + performed (operations on such _BitInt are lowered to casts to that
> + arithmetic type and cast back; e.g. on x86_64 limb is DImode, but
> + target supports TImode, so _BitInt(65) to _BitInt(128) are middle
> + ones), large _BitInt which should be handled by straight line code and
> + finally huge _BitInt which should be handled by loops over the limbs. */
> +
> +enum bitint_prec_kind {
> + bitint_prec_small,
> + bitint_prec_middle,
> + bitint_prec_large,
> + bitint_prec_huge
> +};
> +
> +/* Caches to speed up bitint_precision_kind. */
> +
> +static int small_max_prec, mid_min_prec, large_min_prec, huge_min_prec;
> +static int limb_prec;
I would appreciate the lowering pass being a separate patch, in
case we need to iterate on it.
> +/* Categorize _BitInt(PREC) as small, middle, large or huge. */
> +
> +static bitint_prec_kind
> +bitint_precision_kind (int prec)
> +{
> + if (prec <= small_max_prec)
> + return bitint_prec_small;
> + if (huge_min_prec && prec >= huge_min_prec)
> + return bitint_prec_huge;
> + if (large_min_prec && prec >= large_min_prec)
> + return bitint_prec_large;
> + if (mid_min_prec && prec >= mid_min_prec)
> + return bitint_prec_middle;
> +
> + struct bitint_info info;
> + gcc_assert (targetm.c.bitint_type_info (prec, &info));
> + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> + if (prec <= GET_MODE_PRECISION (limb_mode))
> + {
> + small_max_prec = prec;
> + return bitint_prec_small;
> + }
> + scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
> + ? TImode : DImode);
> + if (!large_min_prec
> + && GET_MODE_PRECISION (arith_mode) > GET_MODE_PRECISION (limb_mode))
> + large_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
> + if (!limb_prec)
> + limb_prec = GET_MODE_PRECISION (limb_mode);
> + if (!huge_min_prec)
> + {
> + if (4 * limb_prec >= GET_MODE_PRECISION (arith_mode))
> + huge_min_prec = 4 * limb_prec;
> + else
> + huge_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
> + }
> + if (prec <= GET_MODE_PRECISION (arith_mode))
> + {
> + if (!mid_min_prec || prec < mid_min_prec)
> + mid_min_prec = prec;
> + return bitint_prec_middle;
> + }
> + if (large_min_prec && prec <= large_min_prec)
> + return bitint_prec_large;
> + return bitint_prec_huge;
> +}
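To make the four categories concrete, here is a standalone sketch of the classification following the cached checks at the top of bitint_precision_kind, with x86-64 parameters assumed (64-bit DImode limbs, TImode as the widest supported arithmetic mode); the names and hard-coded thresholds are illustrative, not the real GCC code:

```cpp
#include <cassert>

enum prec_kind { small_k, middle_k, large_k, huge_k };

// Sketch of the classification with x86-64 parameters baked in as
// assumptions rather than queried from the target hook.
static prec_kind
classify (int prec)
{
  const int limb_prec = 64;	// DImode limb (assumption)
  const int arith_prec = 128;	// TImode is supported (assumption)
  const int large_min = arith_prec + 1;
  const int huge_min = (4 * limb_prec >= arith_prec
			? 4 * limb_prec : arith_prec + 1);
  if (prec <= limb_prec)
    return small_k;	// single limb: handled like ordinary integers
  if (prec >= huge_min)
    return huge_k;	// lowered to a loop over the limbs
  if (prec >= large_min)
    return large_k;	// lowered to straight line code over the limbs
  return middle_k;	// cast to TImode, operate, cast back
}
```

So on x86-64 this gives small for _BitInt(1)..(64), middle for (65)..(128), large for (129)..(255) and huge from (256) up.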
> +
> +/* Same for a TYPE. */
> +
> +static bitint_prec_kind
> +bitint_precision_kind (tree type)
> +{
> + return bitint_precision_kind (TYPE_PRECISION (type));
> +}
> +
> +/* Return minimum precision needed to describe INTEGER_CST
> + CST. All bits above that precision up to precision of
> + TREE_TYPE (CST) are cleared if EXT is set to 0, or set
> + if EXT is set to -1. */
> +
> +static unsigned
> +bitint_min_cst_precision (tree cst, int &ext)
> +{
> + ext = tree_int_cst_sgn (cst) < 0 ? -1 : 0;
> + wide_int w = wi::to_wide (cst);
> + unsigned min_prec = wi::min_precision (w, TYPE_SIGN (TREE_TYPE (cst)));
> + /* For signed values, we don't need to count the sign bit,
> + we'll use constant 0 or -1 for the upper bits. */
> + if (!TYPE_UNSIGNED (TREE_TYPE (cst)))
> + --min_prec;
> + else
> + {
> + /* For unsigned values, also try signed min_precision
> + in case the constant has lots of most significant bits set. */
> + unsigned min_prec2 = wi::min_precision (w, SIGNED) - 1;
> + if (min_prec2 < min_prec)
> + {
> + ext = -1;
> + return min_prec2;
> + }
> + }
> + return min_prec;
> +}
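The idea behind bitint_min_cst_precision can be illustrated without wide_int: count how many low bits of a value must actually be materialized when all remaining upper bits can be filled with the constant EXT (0 or -1). A simplified sketch covering the signed case only, using int64_t instead of wide_int:

```cpp
#include <cassert>
#include <cstdint>

// Simplified model of bitint_min_cst_precision: the number of low bits
// that must be stored, given that bits above them are recovered by
// filling with EXT (0 for non-negative values, -1 for negative ones).
static unsigned
min_cst_precision (int64_t v, int &ext)
{
  ext = v < 0 ? -1 : 0;
  // Strip the sign-extension pattern, then count the remaining bits.
  uint64_t u = ext ? ~(uint64_t) v : (uint64_t) v;
  unsigned prec = 0;
  while (u)
    {
      ++prec;
      u >>= 1;
    }
  return prec;
}
```

E.g. -5 needs only the low 3 bits (011) plus ext = -1 for everything above, which is why the lowering can use constant 0 or -1 limbs for the upper part of a large constant.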
> +
> +namespace {
> +
> +/* If OP is middle _BitInt, cast it to corresponding INTEGER_TYPE
> + cached in TYPE and return it. */
> +
> +tree
> +maybe_cast_middle_bitint (gimple_stmt_iterator *gsi, tree op, tree &type)
> +{
> + if (op == NULL_TREE
> + || TREE_CODE (TREE_TYPE (op)) != BITINT_TYPE
> + || bitint_precision_kind (TREE_TYPE (op)) != bitint_prec_middle)
> + return op;
> +
> + int prec = TYPE_PRECISION (TREE_TYPE (op));
> + int uns = TYPE_UNSIGNED (TREE_TYPE (op));
> + if (type == NULL_TREE
> + || TYPE_PRECISION (type) != prec
> + || TYPE_UNSIGNED (type) != uns)
> + type = build_nonstandard_integer_type (prec, uns);
> +
> + if (TREE_CODE (op) != SSA_NAME)
> + {
> + tree nop = fold_convert (type, op);
> + if (is_gimple_val (nop))
> + return nop;
> + }
> +
> + tree nop = make_ssa_name (type);
> + gimple *g = gimple_build_assign (nop, NOP_EXPR, op);
> + gsi_insert_before (gsi, g, GSI_SAME_STMT);
> + return nop;
> +}
> +
> +/* Return true if STMT can be handled in a loop from least to most
> + significant limb together with its dependencies. */
> +
> +bool
> +mergeable_op (gimple *stmt)
> +{
> + if (!is_gimple_assign (stmt))
> + return false;
> + switch (gimple_assign_rhs_code (stmt))
> + {
> + case PLUS_EXPR:
> + case MINUS_EXPR:
> + case NEGATE_EXPR:
> + case BIT_AND_EXPR:
> + case BIT_IOR_EXPR:
> + case BIT_XOR_EXPR:
> + case BIT_NOT_EXPR:
> + case SSA_NAME:
> + case INTEGER_CST:
> + return true;
> + case LSHIFT_EXPR:
> + {
> + tree cnt = gimple_assign_rhs2 (stmt);
> + if (tree_fits_uhwi_p (cnt)
> + && tree_to_uhwi (cnt) < (unsigned HOST_WIDE_INT) limb_prec)
> + return true;
> + }
> + break;
> + CASE_CONVERT:
> + case VIEW_CONVERT_EXPR:
> + {
> + tree lhs_type = TREE_TYPE (gimple_assign_lhs (stmt));
> + tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
> + if (TREE_CODE (gimple_assign_rhs1 (stmt)) == SSA_NAME
> + && TREE_CODE (lhs_type) == BITINT_TYPE
> + && TREE_CODE (rhs_type) == BITINT_TYPE
> + && bitint_precision_kind (lhs_type) >= bitint_prec_large
> + && bitint_precision_kind (rhs_type) >= bitint_prec_large
> + && tree_int_cst_equal (TYPE_SIZE (lhs_type), TYPE_SIZE (rhs_type)))
> + {
> + if (TYPE_PRECISION (rhs_type) >= TYPE_PRECISION (lhs_type))
> + return true;
> + if ((unsigned) TYPE_PRECISION (lhs_type) % (2 * limb_prec) != 0)
> + return true;
> + if (bitint_precision_kind (lhs_type) == bitint_prec_large)
> + return true;
> + }
> + break;
> + }
> + default:
> + break;
> + }
> + return false;
> +}
> +
> +/* Return non-zero if stmt is .{ADD,SUB,MUL}_OVERFLOW call with
> + _Complex large/huge _BitInt lhs which has at most two immediate uses,
> + at most one use in REALPART_EXPR stmt in the same bb and exactly one
> + IMAGPART_EXPR use in the same bb with a single use which casts it to
> + non-BITINT_TYPE integral type. If there is a REALPART_EXPR use,
> + return 2. Such cases (most common uses of those builtins) can be
> + optimized by marking their lhs and lhs of IMAGPART_EXPR and maybe lhs
> + of REALPART_EXPR as not needed to be backed up by a stack variable.
> + For .UBSAN_CHECK_{ADD,SUB,MUL} return 3. */
> +
> +int
> +optimizable_arith_overflow (gimple *stmt)
> +{
> + bool is_ubsan = false;
> + if (!is_gimple_call (stmt) || !gimple_call_internal_p (stmt))
> + return false;
> + switch (gimple_call_internal_fn (stmt))
> + {
> + case IFN_ADD_OVERFLOW:
> + case IFN_SUB_OVERFLOW:
> + case IFN_MUL_OVERFLOW:
> + break;
> + case IFN_UBSAN_CHECK_ADD:
> + case IFN_UBSAN_CHECK_SUB:
> + case IFN_UBSAN_CHECK_MUL:
> + is_ubsan = true;
> + break;
> + default:
> + return 0;
> + }
> + tree lhs = gimple_call_lhs (stmt);
> + if (!lhs)
> + return 0;
> + if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs))
> + return 0;
> + tree type = is_ubsan ? TREE_TYPE (lhs) : TREE_TYPE (TREE_TYPE (lhs));
> + if (TREE_CODE (type) != BITINT_TYPE
> + || bitint_precision_kind (type) < bitint_prec_large)
> + return 0;
> +
> + if (is_ubsan)
> + {
> + use_operand_p use_p;
> + gimple *use_stmt;
> + if (!single_imm_use (lhs, &use_p, &use_stmt)
> + || gimple_bb (use_stmt) != gimple_bb (stmt)
> + || !gimple_store_p (use_stmt)
> + || !is_gimple_assign (use_stmt)
> + || gimple_has_volatile_ops (use_stmt)
> + || stmt_ends_bb_p (use_stmt))
> + return 0;
> + return 3;
> + }
> +
> + imm_use_iterator ui;
> + use_operand_p use_p;
> + int seen = 0;
> + FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
> + {
> + gimple *g = USE_STMT (use_p);
> + if (is_gimple_debug (g))
> + continue;
> + if (!is_gimple_assign (g) || gimple_bb (g) != gimple_bb (stmt))
> + return 0;
> + if (gimple_assign_rhs_code (g) == REALPART_EXPR)
> + {
> + if ((seen & 1) != 0)
> + return 0;
> + seen |= 1;
> + }
> + else if (gimple_assign_rhs_code (g) == IMAGPART_EXPR)
> + {
> + if ((seen & 2) != 0)
> + return 0;
> + seen |= 2;
> +
> + use_operand_p use2_p;
> + gimple *use_stmt;
> + tree lhs2 = gimple_assign_lhs (g);
> + if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs2))
> + return 0;
> + if (!single_imm_use (lhs2, &use2_p, &use_stmt)
> + || gimple_bb (use_stmt) != gimple_bb (stmt)
> + || !gimple_assign_cast_p (use_stmt))
> + return 0;
> +
> + lhs2 = gimple_assign_lhs (use_stmt);
> + if (!INTEGRAL_TYPE_P (TREE_TYPE (lhs2))
> + || TREE_CODE (TREE_TYPE (lhs2)) == BITINT_TYPE)
> + return 0;
> + }
> + else
> + return 0;
> + }
> + if ((seen & 2) == 0)
> + return 0;
> + return seen == 3 ? 2 : 1;
> +}
> +
> +/* If STMT is some kind of comparison (GIMPLE_COND, comparison
> + assignment or COND_EXPR) comparing large/huge _BitInt types,
> + return the comparison code and if non-NULL fill in the comparison
> + operands to *POP1 and *POP2. */
> +
> +tree_code
> +comparison_op (gimple *stmt, tree *pop1, tree *pop2)
> +{
> + tree op1 = NULL_TREE, op2 = NULL_TREE;
> + tree_code code = ERROR_MARK;
> + if (gimple_code (stmt) == GIMPLE_COND)
> + {
> + code = gimple_cond_code (stmt);
> + op1 = gimple_cond_lhs (stmt);
> + op2 = gimple_cond_rhs (stmt);
> + }
> + else if (is_gimple_assign (stmt))
> + {
> + code = gimple_assign_rhs_code (stmt);
> + op1 = gimple_assign_rhs1 (stmt);
> + if (TREE_CODE_CLASS (code) == tcc_comparison
> + || TREE_CODE_CLASS (code) == tcc_binary)
> + op2 = gimple_assign_rhs2 (stmt);
> + switch (code)
> + {
> + default:
> + break;
> + case COND_EXPR:
> + tree cond = gimple_assign_rhs1 (stmt);
> + code = TREE_CODE (cond);
> + op1 = TREE_OPERAND (cond, 0);
> + op2 = TREE_OPERAND (cond, 1);
This should ICE; COND_EXPRs now have is_gimple_reg conditions, so the
condition here is an SSA_NAME rather than a comparison tree.
> + break;
> + }
> + }
> + if (TREE_CODE_CLASS (code) != tcc_comparison)
> + return ERROR_MARK;
> + tree type = TREE_TYPE (op1);
> + if (TREE_CODE (type) != BITINT_TYPE
> + || bitint_precision_kind (type) < bitint_prec_large)
> + return ERROR_MARK;
> + if (pop1)
> + {
> + *pop1 = op1;
> + *pop2 = op2;
> + }
> + return code;
> +}
> +
> +/* Class used during large/huge _BitInt lowering containing all the
> + state for the methods. */
> +
> +struct bitint_large_huge
> +{
> + bitint_large_huge ()
> + : m_names (NULL), m_loads (NULL), m_preserved (NULL),
> + m_single_use_names (NULL), m_map (NULL), m_vars (NULL),
> + m_limb_type (NULL_TREE), m_data (vNULL) {}
> +
> + ~bitint_large_huge ();
> +
> + void insert_before (gimple *);
> + tree limb_access_type (tree, tree);
> + tree limb_access (tree, tree, tree, bool);
> + tree handle_operand (tree, tree);
> + tree prepare_data_in_out (tree, tree, tree *);
> + tree add_cast (tree, tree);
> + tree handle_plus_minus (tree_code, tree, tree, tree);
> + tree handle_lshift (tree, tree, tree);
> + tree handle_cast (tree, tree, tree);
> + tree handle_stmt (gimple *, tree);
> + tree handle_operand_addr (tree, gimple *, int *, int *);
> + tree create_loop (tree, tree *);
> + tree lower_mergeable_stmt (gimple *, tree_code &, tree, tree);
> + tree lower_comparison_stmt (gimple *, tree_code &, tree, tree);
> + void lower_shift_stmt (tree, gimple *);
> + void lower_muldiv_stmt (tree, gimple *);
> + void lower_float_conv_stmt (tree, gimple *);
> + tree arith_overflow_extract_bits (unsigned int, unsigned int, tree,
> + unsigned int, bool);
> + void finish_arith_overflow (tree, tree, tree, tree, tree, tree, gimple *,
> + tree_code);
> + void lower_addsub_overflow (tree, gimple *);
> + void lower_mul_overflow (tree, gimple *);
> + void lower_cplxpart_stmt (tree, gimple *);
> + void lower_complexexpr_stmt (gimple *);
> + void lower_call (tree, gimple *);
> + void lower_asm (gimple *);
> + void lower_stmt (gimple *);
> +
> + /* Bitmap of large/huge _BitInt SSA_NAMEs except those that can be
> + merged with their uses. */
> + bitmap m_names;
> + /* Subset of those for lhs of load statements. These will be
> + cleared in m_names if the loads will be mergeable with all
> + their uses. */
> + bitmap m_loads;
> + /* Bitmap of large/huge _BitInt SSA_NAMEs that should survive
> + to later passes (arguments or return values of calls). */
> + bitmap m_preserved;
> + /* Subset of m_names which have a single use. As the lowering
> + can replace various original statements with their lowered
> + form even before it is done iterating over all basic blocks,
> + testing has_single_use for the purpose of emitting clobbers
> + doesn't work properly. */
> + bitmap m_single_use_names;
> + /* Used for coalescing/partitioning of large/huge _BitInt SSA_NAMEs
> + set in m_names. */
> + var_map m_map;
> + /* Mapping of the partitions to corresponding decls. */
> + tree *m_vars;
> + /* Unsigned integer type with limb precision. */
> + tree m_limb_type;
> + /* Its TYPE_SIZE_UNIT. */
> + unsigned HOST_WIDE_INT m_limb_size;
> + /* Location of a gimple stmt which is being currently lowered. */
> + location_t m_loc;
> + /* Current stmt iterator where code is being lowered currently. */
> + gimple_stmt_iterator m_gsi;
> + /* Statement after which any clobbers should be added if non-NULL. */
> + gimple *m_after_stmt;
> + /* Set when creating loops to the loop header bb and its preheader. */
> + basic_block m_bb, m_preheader_bb;
> + /* Stmt iterator after which initialization statements should be emitted. */
> + gimple_stmt_iterator m_init_gsi;
> + /* Decl into which a mergeable statement stores result. */
> + tree m_lhs;
> + /* handle_operand/handle_stmt can be invoked in various ways.
> +
> + lower_mergeable_stmt for large _BitInt calls those with constant
> + idx only, expanding to straight line code; for huge _BitInt it
> + emits a loop from least significant limb upwards, where each loop
> + iteration handles 2 limbs, plus there can be up to one full limb
> + and one partial limb processed after the loop, where handle_operand
> + and/or handle_stmt are called with constant idx. m_upwards_2limb
> + is set for this case and zero otherwise.
> +
> + Another way is used by lower_comparison_stmt, which walks limbs
> + from most significant to least significant, partial limb if any
> + processed first with constant idx and then loop processing a single
> + limb per iteration with non-constant idx.
> +
> + Another way is used in lower_shift_stmt, where for LSHIFT_EXPR
> + destination limbs are processed from most significant to least
> + significant or for RSHIFT_EXPR the other way around, in loops or
> + straight line code, but idx usually is non-constant (so from
> + handle_operand/handle_stmt POV random access). The LSHIFT_EXPR
> + handling there can access even partial limbs using non-constant
> + idx (then m_var_msb should be true; for all the other cases,
> + including lower_mergeable_stmt/lower_comparison_stmt, that is
> + not the case and so m_var_msb should be false).
> +
> + m_first should be set the first time handle_operand/handle_stmt
> + is called and cleared when it is called again for some other limb
> + with the same argument. If the lowering of an operand (e.g. INTEGER_CST)
> + or statement (e.g. +/-/<< with < limb_prec constant) needs some
> + state between the different calls, when m_first is true it should
> + push some trees to m_data vector and also make sure m_data_cnt is
> + incremented by how many trees were pushed, and when m_first is
> + false, it can use the m_data[m_data_cnt] etc. data or update it;
> + it just needs to bump m_data_cnt by the same amount as when it
> + was called with m_first set. The toplevel calls to
> + handle_operand/handle_stmt should set m_data_cnt to 0 and truncate
> + m_data vector when setting m_first to true. */
> + bool m_first;
> + bool m_var_msb;
> + unsigned m_upwards_2limb;
> + vec<tree> m_data;
> + unsigned int m_data_cnt;
> +};
> +
> +bitint_large_huge::~bitint_large_huge ()
> +{
> + BITMAP_FREE (m_names);
> + BITMAP_FREE (m_loads);
> + BITMAP_FREE (m_preserved);
> + BITMAP_FREE (m_single_use_names);
> + if (m_map)
> + delete_var_map (m_map);
> + XDELETEVEC (m_vars);
> + m_data.release ();
> +}
> +
> +/* Insert gimple statement G before current location
> + and set its gimple_location. */
> +
> +void
> +bitint_large_huge::insert_before (gimple *g)
> +{
> + gimple_set_location (g, m_loc);
> + gsi_insert_before (&m_gsi, g, GSI_SAME_STMT);
> +}
> +
> +/* Return type for accessing limb IDX of BITINT_TYPE TYPE.
> + This is normally m_limb_type, except for a partial most
> + significant limb if any. */
> +
> +tree
> +bitint_large_huge::limb_access_type (tree type, tree idx)
> +{
> + if (type == NULL_TREE)
> + return m_limb_type;
> + unsigned HOST_WIDE_INT i = tree_to_uhwi (idx);
> + unsigned int prec = TYPE_PRECISION (type);
> + gcc_assert (i * limb_prec < prec);
> + if ((i + 1) * limb_prec <= prec)
> + return m_limb_type;
> + else
> + return build_nonstandard_integer_type (prec % limb_prec,
> + TYPE_UNSIGNED (type));
> +}
> +
> +/* Return a tree expression for accessing limb IDX of VAR
> + corresponding to BITINT_TYPE TYPE. If WRITE_P is true, it will
> + be a store, otherwise a read. */
> +
> +tree
> +bitint_large_huge::limb_access (tree type, tree var, tree idx, bool write_p)
> +{
> + tree atype = (tree_fits_uhwi_p (idx)
> + ? limb_access_type (type, idx) : m_limb_type);
> + tree ret;
> + if (DECL_P (var) && tree_fits_uhwi_p (idx))
> + {
> + tree ptype = build_pointer_type (strip_array_types (TREE_TYPE (var)));
> + unsigned HOST_WIDE_INT off = tree_to_uhwi (idx) * m_limb_size;
> + ret = build2 (MEM_REF, m_limb_type,
> + build_fold_addr_expr (var),
> + build_int_cst (ptype, off));
> + if (TREE_THIS_VOLATILE (var) || TREE_THIS_VOLATILE (TREE_TYPE (var)))
> + TREE_THIS_VOLATILE (ret) = 1;
Note if we have
volatile int i;
x = *(int *)&i;
we get a non-volatile load from 'i', likewise in the reverse case
where we get a volatile load from a non-volatile decl.  The above
gets this wrong: the volatility should be derived from the original
reference by checking just TREE_THIS_VOLATILE there
(and not on the type).
You possibly also want to set TREE_SIDE_EFFECTS (not sure when
exactly that is set); forwprop for example makes sure to copy
that (and also TREE_THIS_NOTRAP in some cases).
How does "volatile" _BitInt(n) work?  People expect 'volatile'
objects to be operated on as a whole, thus a 'volatile int'
load not split into two, etc.  I guess if we split a volatile
_BitInt access it's reasonable to drop the 'volatile'?
> + }
> + else if (TREE_CODE (var) == MEM_REF && tree_fits_uhwi_p (idx))
> + {
> + ret
> + = build2 (MEM_REF, m_limb_type, TREE_OPERAND (var, 0),
> + size_binop (PLUS_EXPR, TREE_OPERAND (var, 1),
> + build_int_cst (TREE_TYPE (TREE_OPERAND (var, 1)),
> + tree_to_uhwi (idx)
> + * m_limb_size)));
> + if (TREE_THIS_VOLATILE (var))
> + TREE_THIS_VOLATILE (ret) = 1;
> + }
> + else
> + {
> + var = unshare_expr (var);
> + if (TREE_CODE (TREE_TYPE (var)) != ARRAY_TYPE
> + || !useless_type_conversion_p (m_limb_type,
> + TREE_TYPE (TREE_TYPE (var))))
> + {
> + unsigned HOST_WIDE_INT nelts
> + = tree_to_uhwi (TYPE_SIZE (type)) / limb_prec;
> + tree atype = build_array_type_nelts (m_limb_type, nelts);
> + var = build1 (VIEW_CONVERT_EXPR, atype, var);
> + }
> + ret = build4 (ARRAY_REF, m_limb_type, var, idx, NULL_TREE, NULL_TREE);
> + }
maybe the volatile handling can be commonized here?
> + if (!write_p && !useless_type_conversion_p (atype, m_limb_type))
> + {
> + gimple *g = gimple_build_assign (make_ssa_name (m_limb_type), ret);
> + insert_before (g);
> + ret = gimple_assign_lhs (g);
> + ret = build1 (NOP_EXPR, atype, ret);
> + }
> + return ret;
> +}
> +
> +/* Emit code to access limb IDX from OP. */
> +
> +tree
> +bitint_large_huge::handle_operand (tree op, tree idx)
> +{
> + switch (TREE_CODE (op))
> + {
> + case SSA_NAME:
> + if (m_names == NULL
> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (op)))
> + {
> + if (gimple_code (SSA_NAME_DEF_STMT (op)) == GIMPLE_NOP)
SSA_NAME_IS_DEFAULT_DEF
> + {
> + if (m_first)
> + {
> + tree v = create_tmp_var (m_limb_type);
create_tmp_reg?
> + if (SSA_NAME_VAR (op) && VAR_P (SSA_NAME_VAR (op)))
> + {
> + DECL_NAME (v) = DECL_NAME (SSA_NAME_VAR (op));
> + DECL_SOURCE_LOCATION (v)
> + = DECL_SOURCE_LOCATION (SSA_NAME_VAR (op));
> + }
> + v = get_or_create_ssa_default_def (cfun, v);
> + m_data.safe_push (v);
> + }
> + tree ret = m_data[m_data_cnt];
> + m_data_cnt++;
> + if (tree_fits_uhwi_p (idx))
> + {
> + tree type = limb_access_type (TREE_TYPE (op), idx);
> + ret = add_cast (type, ret);
> + }
> + return ret;
> + }
> + location_t loc_save = m_loc;
> + m_loc = gimple_location (SSA_NAME_DEF_STMT (op));
> + tree ret = handle_stmt (SSA_NAME_DEF_STMT (op), idx);
> + m_loc = loc_save;
> + return ret;
> + }
> + int p;
> + gimple *g;
> + tree t;
> + p = var_to_partition (m_map, op);
> + gcc_assert (m_vars[p] != NULL_TREE);
> + t = limb_access (TREE_TYPE (op), m_vars[p], idx, false);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (t)), t);
> + insert_before (g);
> + t = gimple_assign_lhs (g);
> + if (m_first
> + && m_single_use_names
> + && m_vars[p] != m_lhs
> + && m_after_stmt
> + && bitmap_bit_p (m_single_use_names, SSA_NAME_VERSION (op)))
> + {
> + tree clobber = build_clobber (TREE_TYPE (m_vars[p]), CLOBBER_EOL);
> + g = gimple_build_assign (m_vars[p], clobber);
> + gimple_stmt_iterator gsi = gsi_for_stmt (m_after_stmt);
> + gsi_insert_after (&gsi, g, GSI_SAME_STMT);
> + }
> + return t;
> + case INTEGER_CST:
> + if (tree_fits_uhwi_p (idx))
> + {
> + tree c, type = limb_access_type (TREE_TYPE (op), idx);
> + unsigned HOST_WIDE_INT i = tree_to_uhwi (idx);
> + if (m_first)
> + {
> + m_data.safe_push (NULL_TREE);
> + m_data.safe_push (NULL_TREE);
> + }
> + if (limb_prec != HOST_BITS_PER_WIDE_INT)
> + {
> + wide_int w = wi::rshift (wi::to_wide (op), i * limb_prec,
> + TYPE_SIGN (TREE_TYPE (op)));
> + c = wide_int_to_tree (type,
> + wide_int::from (w, TYPE_PRECISION (type),
> + UNSIGNED));
> + }
> + else if (i >= TREE_INT_CST_EXT_NUNITS (op))
> + c = build_int_cst (type,
> + tree_int_cst_sgn (op) < 0 ? -1 : 0);
> + else
> + c = build_int_cst (type, TREE_INT_CST_ELT (op, i));
> + m_data_cnt += 2;
> + return c;
> + }
> + if (m_first
> + || (m_data[m_data_cnt] == NULL_TREE
> + && m_data[m_data_cnt + 1] == NULL_TREE))
> + {
> + unsigned int prec = TYPE_PRECISION (TREE_TYPE (op));
> + unsigned int rem = prec % (2 * limb_prec);
> + int ext;
> + unsigned min_prec = bitint_min_cst_precision (op, ext);
> + if (m_first)
> + {
> + m_data.safe_push (NULL_TREE);
> + m_data.safe_push (NULL_TREE);
> + }
> + if (integer_zerop (op))
> + {
> + tree c = build_zero_cst (m_limb_type);
> + m_data[m_data_cnt] = c;
> + m_data[m_data_cnt + 1] = c;
> + }
> + else if (integer_all_onesp (op))
> + {
> + tree c = build_all_ones_cst (m_limb_type);
> + m_data[m_data_cnt] = c;
> + m_data[m_data_cnt + 1] = c;
> + }
> + else if (m_upwards_2limb && min_prec <= (unsigned) limb_prec)
> + {
> + /* Single limb constant. Use a phi with that limb from
> + the preheader edge and a 0 or -1 constant from the other edge,
> + and the same constant for the second limb in the loop. */
> + tree out;
> + gcc_assert (m_first);
> + m_data.pop ();
> + m_data.pop ();
> + prepare_data_in_out (fold_convert (m_limb_type, op), idx, &out);
> + g = gimple_build_assign (m_data[m_data_cnt + 1],
> + build_int_cst (m_limb_type, ext));
> + insert_before (g);
> + m_data[m_data_cnt + 1] = gimple_assign_rhs1 (g);
> + }
> + else if (min_prec > prec - rem - 2 * limb_prec)
> + {
> + /* Constant which has enough significant bits that it isn't
> + worth trying to save .rodata space by extending from a smaller
> + number. */
> + tree type;
> + if (m_var_msb)
> + type = TREE_TYPE (op);
> + else
> + /* If we have a guarantee the most significant partial limb
> + (if any) will be only accessed through handle_operand
> + with INTEGER_CST idx, we don't need to include the partial
> + limb in .rodata. */
> + type = build_bitint_type (prec - rem, 1);
> + tree c = tree_output_constant_def (fold_convert (type, op));
> + m_data[m_data_cnt] = c;
> + m_data[m_data_cnt + 1] = NULL_TREE;
> + }
> + else if (m_upwards_2limb)
> + {
> + /* Constant with smaller number of bits. Trade conditional
> + code for .rodata space by extending from a smaller number. */
> + min_prec = CEIL (min_prec, 2 * limb_prec) * (2 * limb_prec);
> + tree type = build_bitint_type (min_prec, 1);
> + tree c = tree_output_constant_def (fold_convert (type, op));
> + tree idx2 = make_ssa_name (sizetype);
> + g = gimple_build_assign (idx2, PLUS_EXPR, idx, size_one_node);
> + insert_before (g);
> + g = gimple_build_cond (GE_EXPR, idx,
> + size_int (min_prec / limb_prec),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
> + e3->probability = profile_probability::likely ();
> + if (min_prec >= (prec - rem) / 2)
> + e3->probability = e3->probability.invert ();
> + e1->flags = EDGE_FALSE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + tree c1 = limb_access (TREE_TYPE (op), c, idx, false);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (c1)), c1);
> + insert_before (g);
> + c1 = gimple_assign_lhs (g);
> + tree c2 = limb_access (TREE_TYPE (op), c, idx2, false);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (c2)), c2);
> + insert_before (g);
> + c2 = gimple_assign_lhs (g);
> + tree c3 = build_int_cst (m_limb_type, ext);
> + m_gsi = gsi_after_labels (e2->dest);
> + m_data[m_data_cnt] = make_ssa_name (m_limb_type);
> + m_data[m_data_cnt + 1] = make_ssa_name (m_limb_type);
> + gphi *phi = create_phi_node (m_data[m_data_cnt], e2->dest);
> + add_phi_arg (phi, c1, e2, UNKNOWN_LOCATION);
> + add_phi_arg (phi, c3, e3, UNKNOWN_LOCATION);
> + phi = create_phi_node (m_data[m_data_cnt + 1], e2->dest);
> + add_phi_arg (phi, c2, e2, UNKNOWN_LOCATION);
> + add_phi_arg (phi, c3, e3, UNKNOWN_LOCATION);
> + }
> + else
> + {
> + /* Constant with smaller number of bits. Trade conditional
> + code for .rodata space by extending from a smaller number.
> + Version for loops with random access to the limbs or
> + downwards loops. */
> + min_prec = CEIL (min_prec, limb_prec) * limb_prec;
> + tree c;
> + if (min_prec <= (unsigned) limb_prec)
> + c = fold_convert (m_limb_type, op);
> + else
> + {
> + tree type = build_bitint_type (min_prec, 1);
> + c = tree_output_constant_def (fold_convert (type, op));
> + }
> + m_data[m_data_cnt] = c;
> + m_data[m_data_cnt + 1] = integer_type_node;
> + }
> + t = m_data[m_data_cnt];
> + if (m_data[m_data_cnt + 1] == NULL_TREE)
> + {
> + t = limb_access (TREE_TYPE (op), t, idx, false);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (t)), t);
> + insert_before (g);
> + t = gimple_assign_lhs (g);
> + }
> + }
> + else if (m_data[m_data_cnt + 1] == NULL_TREE)
> + {
> + t = limb_access (TREE_TYPE (op), m_data[m_data_cnt], idx, false);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (t)), t);
> + insert_before (g);
> + t = gimple_assign_lhs (g);
> + }
> + else
> + t = m_data[m_data_cnt + 1];
> + if (m_data[m_data_cnt + 1] == integer_type_node)
> + {
> + unsigned int prec = TYPE_PRECISION (TREE_TYPE (op));
> + unsigned rem = prec % (2 * limb_prec);
> + int ext = tree_int_cst_sgn (op) < 0 ? -1 : 0;
> + tree c = m_data[m_data_cnt];
> + unsigned min_prec = TYPE_PRECISION (TREE_TYPE (c));
> + g = gimple_build_cond (GE_EXPR, idx,
> + size_int (min_prec / limb_prec),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
> + e3->probability = profile_probability::likely ();
> + if (min_prec >= (prec - rem) / 2)
> + e3->probability = e3->probability.invert ();
> + e1->flags = EDGE_FALSE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + if (min_prec > (unsigned) limb_prec)
> + {
> + c = limb_access (TREE_TYPE (op), c, idx, false);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (c)), c);
> + insert_before (g);
> + c = gimple_assign_lhs (g);
> + }
> + tree c2 = build_int_cst (m_limb_type, ext);
> + m_gsi = gsi_after_labels (e2->dest);
> + t = make_ssa_name (m_limb_type);
> + gphi *phi = create_phi_node (t, e2->dest);
> + add_phi_arg (phi, c, e2, UNKNOWN_LOCATION);
> + add_phi_arg (phi, c2, e3, UNKNOWN_LOCATION);
Not sure if I will see more than the two cases above, but maybe
a helper to emit a (half-)diamond for N values (PHI results) would be
helpful (possibly indicating the fallthru edge's truth value, if any)?
> + }
> + m_data_cnt += 2;
> + return t;
> + default:
> + gcc_unreachable ();
> + }
> +}
> +
> +/* Helper method, add a PHI node with VAL from preheader edge if
> + inside of a loop and m_first. Keep state in a pair of m_data
> + elements. */
> +
> +tree
> +bitint_large_huge::prepare_data_in_out (tree val, tree idx, tree *data_out)
> +{
> + if (!m_first)
> + {
> + *data_out = tree_fits_uhwi_p (idx) ? NULL_TREE : m_data[m_data_cnt + 1];
> + return m_data[m_data_cnt];
> + }
> +
> + *data_out = NULL_TREE;
> + if (tree_fits_uhwi_p (idx))
> + {
> + m_data.safe_push (val);
> + m_data.safe_push (NULL_TREE);
> + return val;
> + }
> +
> + tree in = make_ssa_name (TREE_TYPE (val));
> + gphi *phi = create_phi_node (in, m_bb);
> + edge e1 = find_edge (m_preheader_bb, m_bb);
> + edge e2 = EDGE_PRED (m_bb, 0);
> + if (e1 == e2)
> + e2 = EDGE_PRED (m_bb, 1);
> + add_phi_arg (phi, val, e1, UNKNOWN_LOCATION);
> + tree out = make_ssa_name (TREE_TYPE (val));
> + add_phi_arg (phi, out, e2, UNKNOWN_LOCATION);
> + m_data.safe_push (in);
> + m_data.safe_push (out);
> + return in;
> +}
> +
> +/* Return VAL cast to TYPE. If VAL is INTEGER_CST, just
> + convert it without emitting any code, otherwise emit
> + the conversion statement before the current location. */
> +
> +tree
> +bitint_large_huge::add_cast (tree type, tree val)
> +{
> + if (TREE_CODE (val) == INTEGER_CST)
> + return fold_convert (type, val);
> +
> + tree lhs = make_ssa_name (type);
> + gimple *g = gimple_build_assign (lhs, NOP_EXPR, val);
> + insert_before (g);
> + return lhs;
> +}
> +
> +/* Helper of handle_stmt method, handle PLUS_EXPR or MINUS_EXPR. */
> +
> +tree
> +bitint_large_huge::handle_plus_minus (tree_code code, tree rhs1, tree rhs2,
> + tree idx)
> +{
> + tree lhs, data_out, ctype;
> + tree rhs1_type = TREE_TYPE (rhs1);
> + gimple *g;
> + tree data_in = prepare_data_in_out (build_zero_cst (m_limb_type), idx,
> + &data_out);
> +
> + if (optab_handler (code == PLUS_EXPR ? uaddc5_optab : usubc5_optab,
> + TYPE_MODE (m_limb_type)) != CODE_FOR_nothing)
> + {
> + ctype = build_complex_type (m_limb_type);
> + if (!types_compatible_p (rhs1_type, m_limb_type))
> + {
> + if (!TYPE_UNSIGNED (rhs1_type))
> + {
> + tree type = unsigned_type_for (rhs1_type);
> + rhs1 = add_cast (type, rhs1);
> + rhs2 = add_cast (type, rhs2);
> + }
> + rhs1 = add_cast (m_limb_type, rhs1);
> + rhs2 = add_cast (m_limb_type, rhs2);
> + }
> + lhs = make_ssa_name (ctype);
> + g = gimple_build_call_internal (code == PLUS_EXPR
> + ? IFN_UADDC : IFN_USUBC,
> + 3, rhs1, rhs2, data_in);
> + gimple_call_set_lhs (g, lhs);
> + insert_before (g);
> + if (data_out == NULL_TREE)
> + data_out = make_ssa_name (m_limb_type);
> + g = gimple_build_assign (data_out, IMAGPART_EXPR,
> + build1 (IMAGPART_EXPR, m_limb_type, lhs));
> + insert_before (g);
> + }
> + else if (types_compatible_p (rhs1_type, m_limb_type))
> + {
> + ctype = build_complex_type (m_limb_type);
> + lhs = make_ssa_name (ctype);
> + g = gimple_build_call_internal (code == PLUS_EXPR
> + ? IFN_ADD_OVERFLOW : IFN_SUB_OVERFLOW,
> + 2, rhs1, rhs2);
> + gimple_call_set_lhs (g, lhs);
> + insert_before (g);
> + if (data_out == NULL_TREE)
> + data_out = make_ssa_name (m_limb_type);
> + if (!integer_zerop (data_in))
> + {
> + rhs1 = make_ssa_name (m_limb_type);
> + g = gimple_build_assign (rhs1, REALPART_EXPR,
> + build1 (REALPART_EXPR, m_limb_type, lhs));
> + insert_before (g);
> + rhs2 = make_ssa_name (m_limb_type);
> + g = gimple_build_assign (rhs2, IMAGPART_EXPR,
> + build1 (IMAGPART_EXPR, m_limb_type, lhs));
> + insert_before (g);
> + lhs = make_ssa_name (ctype);
> + g = gimple_build_call_internal (code == PLUS_EXPR
> + ? IFN_ADD_OVERFLOW
> + : IFN_SUB_OVERFLOW,
> + 2, rhs1, data_in);
> + gimple_call_set_lhs (g, lhs);
> + insert_before (g);
> + data_in = make_ssa_name (m_limb_type);
> + g = gimple_build_assign (data_in, IMAGPART_EXPR,
> + build1 (IMAGPART_EXPR, m_limb_type, lhs));
> + insert_before (g);
> + g = gimple_build_assign (data_out, PLUS_EXPR, rhs2, data_in);
> + insert_before (g);
> + }
> + else
> + {
> + g = gimple_build_assign (data_out, IMAGPART_EXPR,
> + build1 (IMAGPART_EXPR, m_limb_type, lhs));
> + insert_before (g);
> + }
> + }
> + else
> + {
> + tree in = add_cast (rhs1_type, data_in);
> + lhs = make_ssa_name (rhs1_type);
> + g = gimple_build_assign (lhs, code, rhs1, rhs2);
> + insert_before (g);
> + rhs1 = make_ssa_name (rhs1_type);
> + g = gimple_build_assign (rhs1, code, lhs, in);
> + insert_before (g);
I'll just note there are now gimple_build overloads inserting at an
iterator:
extern tree gimple_build (gimple_stmt_iterator *, bool,
enum gsi_iterator_update,
location_t, code_helper, tree, tree, tree);
I guess there aren't many folding opportunities during the building,
but it would allow writing
rhs1 = gimple_build (&gsi, true, GSI_SAME_STMT, m_loc, code, rhs1_type,
lhs, in);
instead of
> + rhs1 = make_ssa_name (rhs1_type);
> + g = gimple_build_assign (rhs1, code, lhs, in);
> + insert_before (g);
just in case you forgot about those. I think we're missing some
gimple-build "state" class to keep track of common arguments, like
gimple_build gb (&gsi, true, GSI_SAME_STMT, m_loc);
rhs1 = gb.build (code, rhs1_type, lhs, in);
...
anyway, just wanted to note this; no need to change the patch.
> + m_data[m_data_cnt] = NULL_TREE;
> + m_data_cnt += 2;
> + return rhs1;
> + }
> + rhs1 = make_ssa_name (m_limb_type);
> + g = gimple_build_assign (rhs1, REALPART_EXPR,
> + build1 (REALPART_EXPR, m_limb_type, lhs));
> + insert_before (g);
> + if (!types_compatible_p (rhs1_type, m_limb_type))
> + rhs1 = add_cast (rhs1_type, rhs1);
> + m_data[m_data_cnt] = data_out;
> + m_data_cnt += 2;
> + return rhs1;
> +}
> +
> +/* Helper function for handle_stmt method, handle LSHIFT_EXPR by
> + count in [0, limb_prec - 1] range. */
> +
> +tree
> +bitint_large_huge::handle_lshift (tree rhs1, tree rhs2, tree idx)
> +{
> + unsigned HOST_WIDE_INT cnt = tree_to_uhwi (rhs2);
> + gcc_checking_assert (cnt < (unsigned) limb_prec);
> + if (cnt == 0)
> + return rhs1;
> +
> + tree lhs, data_out, rhs1_type = TREE_TYPE (rhs1);
> + gimple *g;
> + tree data_in = prepare_data_in_out (build_zero_cst (m_limb_type), idx,
> + &data_out);
> +
> + if (!integer_zerop (data_in))
> + {
> + lhs = make_ssa_name (m_limb_type);
> + g = gimple_build_assign (lhs, RSHIFT_EXPR, data_in,
> + build_int_cst (unsigned_type_node,
> + limb_prec - cnt));
> + insert_before (g);
> + if (!types_compatible_p (rhs1_type, m_limb_type))
> + lhs = add_cast (rhs1_type, lhs);
> + data_in = lhs;
> + }
> + if (types_compatible_p (rhs1_type, m_limb_type))
> + {
> + if (data_out == NULL_TREE)
> + data_out = make_ssa_name (m_limb_type);
> + g = gimple_build_assign (data_out, rhs1);
> + insert_before (g);
> + }
> + if (cnt < (unsigned) TYPE_PRECISION (rhs1_type))
> + {
> + lhs = make_ssa_name (rhs1_type);
> + g = gimple_build_assign (lhs, LSHIFT_EXPR, rhs1, rhs2);
> + insert_before (g);
> + if (!integer_zerop (data_in))
> + {
> + rhs1 = lhs;
> + lhs = make_ssa_name (rhs1_type);
> + g = gimple_build_assign (lhs, BIT_IOR_EXPR, rhs1, data_in);
> + insert_before (g);
> + }
> + }
> + else
> + lhs = data_in;
> + m_data[m_data_cnt] = data_out;
> + m_data_cnt += 2;
> + return lhs;
> +}
> +
> +/* Helper function for handle_stmt method, handle an integral
> + to integral conversion. */
> +
> +tree
> +bitint_large_huge::handle_cast (tree lhs_type, tree rhs1, tree idx)
> +{
> + tree rhs_type = TREE_TYPE (rhs1);
> + gimple *g;
> + if (TREE_CODE (rhs1) == SSA_NAME
> + && TREE_CODE (lhs_type) == BITINT_TYPE
> + && TREE_CODE (rhs_type) == BITINT_TYPE
> + && bitint_precision_kind (lhs_type) >= bitint_prec_large
> + && bitint_precision_kind (rhs_type) >= bitint_prec_large)
> + {
> + if (TYPE_PRECISION (rhs_type) >= TYPE_PRECISION (lhs_type)
> + /* If lhs has bigger precision than rhs, we can use
> + the simple case only if there is a guarantee that
> + the most significant limb is handled in straight
> + line code. If m_var_msb (on left shifts) or
> + if m_upwards_2limb * limb_prec is equal to
> + lhs precision that is not the case. */
> + || (!m_var_msb
> + && tree_int_cst_equal (TYPE_SIZE (rhs_type),
> + TYPE_SIZE (lhs_type))
> + && (!m_upwards_2limb
> + || (m_upwards_2limb * limb_prec
> + < TYPE_PRECISION (lhs_type)))))
> + {
> + rhs1 = handle_operand (rhs1, idx);
> + if (tree_fits_uhwi_p (idx))
> + {
> + tree type = limb_access_type (lhs_type, idx);
> + if (!types_compatible_p (type, TREE_TYPE (rhs1)))
> + rhs1 = add_cast (type, rhs1);
> + }
> + return rhs1;
> + }
> + tree t;
> + /* Indexes lower than this don't need any special processing. */
> + unsigned low = ((unsigned) TYPE_PRECISION (rhs_type)
> + - !TYPE_UNSIGNED (rhs_type)) / limb_prec;
> + /* Indexes >= this always contain an extension. */
> + unsigned high = CEIL ((unsigned) TYPE_PRECISION (rhs_type), limb_prec);
> + bool save_first = m_first;
> + if (m_first)
> + {
> + m_data.safe_push (NULL_TREE);
> + m_data.safe_push (NULL_TREE);
> + m_data.safe_push (NULL_TREE);
> + if (TYPE_UNSIGNED (rhs_type))
> + /* No need to keep state between iterations. */
> + ;
> + else if (!m_upwards_2limb)
> + {
> + unsigned save_data_cnt = m_data_cnt;
> + gimple_stmt_iterator save_gsi = m_gsi;
> + m_gsi = m_init_gsi;
> + if (gsi_end_p (m_gsi))
> + m_gsi = gsi_after_labels (gsi_bb (m_gsi));
> + else
> + gsi_next (&m_gsi);
> + m_data_cnt = save_data_cnt + 3;
> + t = handle_operand (rhs1, size_int (low));
> + m_first = false;
> + m_data[save_data_cnt + 2]
> + = build_int_cst (NULL_TREE, m_data_cnt);
> + m_data_cnt = save_data_cnt;
> + t = add_cast (signed_type_for (m_limb_type), t);
> + tree lpm1 = build_int_cst (unsigned_type_node, limb_prec - 1);
> + tree n = make_ssa_name (TREE_TYPE (t));
> + g = gimple_build_assign (n, RSHIFT_EXPR, t, lpm1);
> + insert_before (g);
> + m_data[save_data_cnt + 1] = add_cast (m_limb_type, n);
> + m_gsi = save_gsi;
> + }
> + else if (m_upwards_2limb * limb_prec < TYPE_PRECISION (rhs_type))
> + /* We need to keep state between iterations, but
> + fortunately not within the loop, only afterwards. */
> + ;
> + else
> + {
> + tree out;
> + m_data.truncate (m_data_cnt);
> + prepare_data_in_out (build_zero_cst (m_limb_type), idx, &out);
> + m_data.safe_push (NULL_TREE);
> + }
> + }
> +
> + unsigned save_data_cnt = m_data_cnt;
> + m_data_cnt += 3;
> + if (!tree_fits_uhwi_p (idx))
> + {
> + if (m_upwards_2limb
> + && (m_upwards_2limb * limb_prec
> + <= ((unsigned) TYPE_PRECISION (rhs_type)
> + - !TYPE_UNSIGNED (rhs_type))))
> + {
> + rhs1 = handle_operand (rhs1, idx);
> + if (m_first)
> + m_data[save_data_cnt + 2]
> + = build_int_cst (NULL_TREE, m_data_cnt);
> + m_first = save_first;
> + return rhs1;
> + }
> + bool single_comparison
> + = low == high || (m_upwards_2limb && (low & 1) == m_first);
> + g = gimple_build_cond (single_comparison ? LT_EXPR : LE_EXPR,
> + idx, size_int (low), NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + edge e4 = NULL;
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + if (!single_comparison)
> + {
> + m_gsi = gsi_after_labels (e1->dest);
> + g = gimple_build_cond (EQ_EXPR, idx, size_int (low),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + e2 = split_block (gsi_bb (m_gsi), g);
> + basic_block bb = create_empty_bb (e2->dest);
> + add_bb_to_loop (bb, e2->dest->loop_father);
> + e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
> + set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
> + e4->probability = profile_probability::unlikely ();
> + e2->flags = EDGE_FALSE_VALUE;
> + e2->probability = e4->probability.invert ();
> + e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
> + e2 = find_edge (e2->dest, e3->dest);
> + }
> + m_gsi = gsi_after_labels (e2->src);
> + tree t1 = handle_operand (rhs1, idx), t2 = NULL_TREE;
> + if (m_first)
> + m_data[save_data_cnt + 2]
> + = build_int_cst (NULL_TREE, m_data_cnt);
> + tree ext = NULL_TREE;
> + if (!single_comparison)
> + {
> + m_gsi = gsi_after_labels (e4->src);
> + m_first = false;
> + m_data_cnt = save_data_cnt + 3;
> + t2 = handle_operand (rhs1, size_int (low));
> + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (t2)))
> + t2 = add_cast (m_limb_type, t2);
> + if (!TYPE_UNSIGNED (rhs_type) && m_upwards_2limb)
> + {
> + ext = add_cast (signed_type_for (m_limb_type), t2);
> + tree lpm1 = build_int_cst (unsigned_type_node,
> + limb_prec - 1);
> + tree n = make_ssa_name (TREE_TYPE (ext));
> + g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
> + insert_before (g);
> + ext = add_cast (m_limb_type, n);
> + }
> + }
> + tree t3;
> + if (TYPE_UNSIGNED (rhs_type))
> + t3 = build_zero_cst (m_limb_type);
> + else if (m_upwards_2limb && (save_first || ext != NULL_TREE))
> + t3 = m_data[save_data_cnt];
> + else
> + t3 = m_data[save_data_cnt + 1];
> + m_gsi = gsi_after_labels (e2->dest);
> + t = make_ssa_name (m_limb_type);
> + gphi *phi = create_phi_node (t, e2->dest);
> + add_phi_arg (phi, t1, e2, UNKNOWN_LOCATION);
> + add_phi_arg (phi, t3, e3, UNKNOWN_LOCATION);
> + if (e4)
> + add_phi_arg (phi, t2, e4, UNKNOWN_LOCATION);
> + if (ext)
> + {
> + tree t4 = make_ssa_name (m_limb_type);
> + phi = create_phi_node (t4, e2->dest);
> + add_phi_arg (phi, build_zero_cst (m_limb_type), e2,
> + UNKNOWN_LOCATION);
> + add_phi_arg (phi, m_data[save_data_cnt], e3, UNKNOWN_LOCATION);
> + add_phi_arg (phi, ext, e4, UNKNOWN_LOCATION);
> + g = gimple_build_assign (m_data[save_data_cnt + 1], t4);
> + insert_before (g);
> + }
> + m_first = save_first;
> + return t;
> + }
> + else
> + {
> + if (tree_to_uhwi (idx) < low)
> + {
> + t = handle_operand (rhs1, idx);
> + if (m_first)
> + m_data[save_data_cnt + 2]
> + = build_int_cst (NULL_TREE, m_data_cnt);
> + }
> + else if (tree_to_uhwi (idx) < high)
> + {
> + t = handle_operand (rhs1, size_int (low));
> + if (m_first)
> + m_data[save_data_cnt + 2]
> + = build_int_cst (NULL_TREE, m_data_cnt);
> + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (t)))
> + t = add_cast (m_limb_type, t);
> + tree ext = NULL_TREE;
> + if (!TYPE_UNSIGNED (rhs_type) && m_upwards_2limb)
> + {
> + ext = add_cast (signed_type_for (m_limb_type), t);
> + tree lpm1 = build_int_cst (unsigned_type_node,
> + limb_prec - 1);
> + tree n = make_ssa_name (TREE_TYPE (ext));
> + g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
> + insert_before (g);
> + ext = add_cast (m_limb_type, n);
> + m_data[save_data_cnt + 1] = ext;
> + }
> + }
> + else
> + {
> + if (TYPE_UNSIGNED (rhs_type) && m_first)
> + {
> + handle_operand (rhs1, size_zero_node);
> + m_data[save_data_cnt + 2]
> + = build_int_cst (NULL_TREE, m_data_cnt);
> + }
> + else
> + m_data_cnt = tree_to_uhwi (m_data[save_data_cnt + 2]);
> + if (TYPE_UNSIGNED (rhs_type))
> + t = build_zero_cst (m_limb_type);
> + else
> + t = m_data[save_data_cnt + 1];
> + }
> + tree type = limb_access_type (lhs_type, idx);
> + if (!useless_type_conversion_p (type, m_limb_type))
> + t = add_cast (type, t);
> + m_first = save_first;
> + return t;
> + }
> + }
> + else if (TREE_CODE (lhs_type) == BITINT_TYPE
> + && bitint_precision_kind (lhs_type) >= bitint_prec_large
> + && INTEGRAL_TYPE_P (rhs_type))
> + {
> + /* Add support for 3 or more limbs filled in from normal integral
> + type if this assert fails. If no target chooses limb mode smaller
> + than half of largest supported normal integral type, this will not
> + be needed. */
> + gcc_assert (TYPE_PRECISION (rhs_type) <= 2 * limb_prec);
> + tree r1 = NULL_TREE, r2 = NULL_TREE, rext = NULL_TREE;
> + if (m_first)
> + {
> + gimple_stmt_iterator save_gsi = m_gsi;
> + m_gsi = m_init_gsi;
> + if (gsi_end_p (m_gsi))
> + m_gsi = gsi_after_labels (gsi_bb (m_gsi));
> + else
> + gsi_next (&m_gsi);
> + if (TREE_CODE (rhs_type) == BITINT_TYPE
> + && bitint_precision_kind (rhs_type) == bitint_prec_middle)
> + {
> + tree type = NULL_TREE;
> + rhs1 = maybe_cast_middle_bitint (&m_gsi, rhs1, type);
> + rhs_type = TREE_TYPE (rhs1);
> + }
> + r1 = rhs1;
> + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs1)))
> + r1 = add_cast (m_limb_type, rhs1);
> + if (TYPE_PRECISION (rhs_type) > limb_prec)
> + {
> + g = gimple_build_assign (make_ssa_name (rhs_type),
> + RSHIFT_EXPR, rhs1,
> + build_int_cst (unsigned_type_node,
> + limb_prec));
> + insert_before (g);
> + r2 = add_cast (m_limb_type, gimple_assign_lhs (g));
> + }
> + if (TYPE_UNSIGNED (rhs_type))
> + rext = build_zero_cst (m_limb_type);
> + else
> + {
> + rext = add_cast (signed_type_for (m_limb_type), r2 ? r2 : r1);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rext)),
> + RSHIFT_EXPR, rext,
> + build_int_cst (unsigned_type_node,
> + limb_prec - 1));
> + insert_before (g);
> + rext = add_cast (m_limb_type, gimple_assign_lhs (g));
> + }
> + m_gsi = save_gsi;
> + }
> + tree t;
> + if (m_upwards_2limb)
> + {
> + if (m_first)
> + {
> + tree out1, out2;
> + prepare_data_in_out (r1, idx, &out1);
> + g = gimple_build_assign (m_data[m_data_cnt + 1], rext);
> + insert_before (g);
> + if (TYPE_PRECISION (rhs_type) > limb_prec)
> + {
> + prepare_data_in_out (r2, idx, &out2);
> + g = gimple_build_assign (m_data[m_data_cnt + 3], rext);
> + insert_before (g);
> + m_data.pop ();
> + t = m_data.pop ();
> + m_data[m_data_cnt + 1] = t;
> + }
> + else
> + m_data[m_data_cnt + 1] = rext;
> + m_data.safe_push (rext);
> + t = m_data[m_data_cnt];
> + }
> + else if (!tree_fits_uhwi_p (idx))
> + t = m_data[m_data_cnt + 1];
> + else
> + {
> + tree type = limb_access_type (lhs_type, idx);
> + t = m_data[m_data_cnt + 2];
> + if (!useless_type_conversion_p (type, m_limb_type))
> + t = add_cast (type, t);
> + }
> + m_data_cnt += 3;
> + return t;
> + }
> + else if (m_first)
> + {
> + m_data.safe_push (r1);
> + m_data.safe_push (r2);
> + m_data.safe_push (rext);
> + }
> + if (tree_fits_uhwi_p (idx))
> + {
> + tree type = limb_access_type (lhs_type, idx);
> + if (integer_zerop (idx))
> + t = m_data[m_data_cnt];
> + else if (TYPE_PRECISION (rhs_type) > limb_prec
> + && integer_onep (idx))
> + t = m_data[m_data_cnt + 1];
> + else
> + t = m_data[m_data_cnt + 2];
> + if (!useless_type_conversion_p (type, m_limb_type))
> + t = add_cast (type, t);
> + m_data_cnt += 3;
> + return t;
> + }
> + g = gimple_build_cond (EQ_EXPR, idx, size_zero_node,
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
> + edge e4 = NULL;
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_FALSE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + if (m_data[m_data_cnt + 1])
> + {
> + m_gsi = gsi_after_labels (e1->dest);
> + g = gimple_build_cond (EQ_EXPR, idx, size_one_node,
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e5 = split_block (gsi_bb (m_gsi), g);
> + e4 = make_edge (e5->src, e2->dest, EDGE_TRUE_VALUE);
> + e2 = find_edge (e5->dest, e2->dest);
> + e4->probability = profile_probability::unlikely ();
> + e5->flags = EDGE_FALSE_VALUE;
> + e5->probability = e4->probability.invert ();
> + }
> + m_gsi = gsi_after_labels (e2->dest);
> + t = make_ssa_name (m_limb_type);
> + gphi *phi = create_phi_node (t, e2->dest);
> + add_phi_arg (phi, m_data[m_data_cnt + 2], e2, UNKNOWN_LOCATION);
> + add_phi_arg (phi, m_data[m_data_cnt], e3, UNKNOWN_LOCATION);
> + if (e4)
> + add_phi_arg (phi, m_data[m_data_cnt + 1], e4, UNKNOWN_LOCATION);
> + m_data_cnt += 3;
> + return t;
> + }
> + return NULL_TREE;
> +}
> +
> +/* Return a limb IDX from a mergeable statement STMT. */
> +
> +tree
> +bitint_large_huge::handle_stmt (gimple *stmt, tree idx)
> +{
> + tree lhs, rhs1, rhs2 = NULL_TREE;
> + gimple *g;
> + switch (gimple_code (stmt))
> + {
> + case GIMPLE_ASSIGN:
> + if (gimple_assign_load_p (stmt))
> + {
> + rhs1 = gimple_assign_rhs1 (stmt);
so TREE_THIS_VOLATILE/TREE_SIDE_EFFECTS (rhs1) would be the thing
to eventually preserve
> + tree rhs_type = TREE_TYPE (rhs1);
> + bool eh = stmt_ends_bb_p (stmt);
> + /* Use write_p = true for loads with EH edges to make
> + sure limb_access doesn't add a cast as separate
> + statement after it. */
> + rhs1 = limb_access (rhs_type, rhs1, idx, eh);
> + lhs = make_ssa_name (TREE_TYPE (rhs1));
> + g = gimple_build_assign (lhs, rhs1);
> + insert_before (g);
> + if (eh)
> + {
> + maybe_duplicate_eh_stmt (g, stmt);
> + edge e1;
> + edge_iterator ei;
> + basic_block bb = gimple_bb (stmt);
> +
> + FOR_EACH_EDGE (e1, ei, bb->succs)
> + if (e1->flags & EDGE_EH)
> + break;
> + if (e1)
> + {
> + edge e2 = split_block (gsi_bb (m_gsi), g);
> + m_gsi = gsi_after_labels (e2->dest);
> + make_edge (e2->src, e1->dest, EDGE_EH)->probability
> + = profile_probability::very_unlikely ();
> + }
> + if (tree_fits_uhwi_p (idx))
> + {
> + tree atype = limb_access_type (rhs_type, idx);
> + if (!useless_type_conversion_p (atype, TREE_TYPE (rhs1)))
> + lhs = add_cast (atype, lhs);
> + }
> + }
> + return lhs;
> + }
> + switch (gimple_assign_rhs_code (stmt))
> + {
> + case BIT_AND_EXPR:
> + case BIT_IOR_EXPR:
> + case BIT_XOR_EXPR:
> + rhs2 = handle_operand (gimple_assign_rhs2 (stmt), idx);
> + /* FALLTHRU */
> + case BIT_NOT_EXPR:
> + rhs1 = handle_operand (gimple_assign_rhs1 (stmt), idx);
> + lhs = make_ssa_name (TREE_TYPE (rhs1));
> + g = gimple_build_assign (lhs, gimple_assign_rhs_code (stmt),
> + rhs1, rhs2);
> + insert_before (g);
> + return lhs;
> + case PLUS_EXPR:
> + case MINUS_EXPR:
> + rhs1 = handle_operand (gimple_assign_rhs1 (stmt), idx);
> + rhs2 = handle_operand (gimple_assign_rhs2 (stmt), idx);
> + return handle_plus_minus (gimple_assign_rhs_code (stmt),
> + rhs1, rhs2, idx);
> + case NEGATE_EXPR:
> + rhs2 = handle_operand (gimple_assign_rhs1 (stmt), idx);
> + rhs1 = build_zero_cst (TREE_TYPE (rhs2));
> + return handle_plus_minus (MINUS_EXPR, rhs1, rhs2, idx);
> + case LSHIFT_EXPR:
> + return handle_lshift (handle_operand (gimple_assign_rhs1 (stmt),
> + idx),
> + gimple_assign_rhs2 (stmt), idx);
> + case SSA_NAME:
> + case INTEGER_CST:
> + return handle_operand (gimple_assign_rhs1 (stmt), idx);
> + CASE_CONVERT:
> + case VIEW_CONVERT_EXPR:
> + return handle_cast (TREE_TYPE (gimple_assign_lhs (stmt)),
> + gimple_assign_rhs1 (stmt), idx);
> + default:
> + break;
> + }
> + break;
> + default:
> + break;
> + }
> + gcc_unreachable ();
> +}
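Aside for readers: the mergeable cases handle_stmt dispatches on share the property that limb IDX of the result depends only on limb IDX of the operands (plus carries handled elsewhere). The bitwise subset can be modeled in plain C like this — a hypothetical 64-bit limb and invented names, purely illustrative, not the GIMPLE lowering itself:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of per-limb lowering for mergeable bitwise ops:
   each result limb is computed from the same-indexed operand limbs.  */
enum limb_op { OP_AND, OP_IOR, OP_XOR, OP_NOT };

static uint64_t
handle_limb (enum limb_op code, uint64_t rhs1, uint64_t rhs2)
{
  switch (code)
    {
    case OP_AND: return rhs1 & rhs2;
    case OP_IOR: return rhs1 | rhs2;
    case OP_XOR: return rhs1 ^ rhs2;
    case OP_NOT: return ~rhs1;		/* rhs2 unused */
    }
  return 0;
}

/* Lower CODE on NLIMBS-limb operands A and B into RES, limb by limb.
   B may be NULL for the unary OP_NOT.  */
static void
lower_bitwise (enum limb_op code, const uint64_t *a, const uint64_t *b,
	       uint64_t *res, unsigned nlimbs)
{
  for (unsigned idx = 0; idx < nlimbs; idx++)
    res[idx] = handle_limb (code, a[idx], b ? b[idx] : 0);
}
```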
> +
> +/* Return minimum precision of OP at STMT.
> + A positive value is the minimum precision above which all bits
> + are zero; a negative value means all bits above the negation of
> + the value are copies of the sign bit. */
> +
> +static int
> +range_to_prec (tree op, gimple *stmt)
> +{
> + int_range_max r;
> + wide_int w;
> + tree type = TREE_TYPE (op);
> + unsigned int prec = TYPE_PRECISION (type);
> +
> + if (!optimize
> + || !get_range_query (cfun)->range_of_expr (r, op, stmt))
> + {
> + if (TYPE_UNSIGNED (type))
> + return prec;
> + else
> + return -prec;
> + }
> +
> + if (!TYPE_UNSIGNED (TREE_TYPE (op)))
> + {
> + w = r.lower_bound ();
> + if (wi::neg_p (w))
> + {
> + int min_prec1 = wi::min_precision (w, SIGNED);
> + w = r.upper_bound ();
> + int min_prec2 = wi::min_precision (w, SIGNED);
> + int min_prec = MAX (min_prec1, min_prec2);
> + return MIN (-min_prec, -2);
> + }
> + }
> +
> + w = r.upper_bound ();
> + int min_prec = wi::min_precision (w, UNSIGNED);
> + return MAX (min_prec, 1);
> +}
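The prec convention this returns can be modeled in plain C as follows. The helpers `bits_unsigned`/`bits_signed` are hypothetical stand-ins for `wi::min_precision`, and the fixed 64-bit value width is an assumption for illustration:

```c
#include <assert.h>
#include <stdint.h>

static int
bits_unsigned (uint64_t w)	/* like wi::min_precision (w, UNSIGNED) */
{
  int n = 0;
  while (w)
    {
      n++;
      w >>= 1;
    }
  return n;
}

static int
bits_signed (int64_t w)		/* like wi::min_precision (w, SIGNED) */
{
  if (w < 0)
    w = ~w;			/* same magnitude bit count as -w - 1 */
  return bits_unsigned ((uint64_t) w) + 1;	/* plus the sign bit */
}

/* Model of range_to_prec for an operand with known range [LO, HI]:
   a positive result P means all bits above P are zero; a negative
   result -P means all bits above P are sign-bit copies.  */
static int
range_to_prec_model (int64_t lo, int64_t hi)
{
  if (lo < 0)
    {
      int p1 = bits_signed (lo), p2 = bits_signed (hi);
      int p = p1 > p2 ? p1 : p2;
      return p > 2 ? -p : -2;	/* MIN (-min_prec, -2) */
    }
  int p = bits_unsigned ((uint64_t) hi);
  return p > 1 ? p : 1;		/* MAX (min_prec, 1) */
}
```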
> +
> +/* Return address of the first limb of OP and write into *PREC
> + its precision. If positive, the operand is zero-extended
> + from that precision; if negative, the operand is sign-extended
> + from -*PREC. If PREC_STORED is NULL, it is the toplevel call,
> + otherwise *PREC_STORED is prec from the innermost call without
> + range optimizations. */
> +
> +tree
> +bitint_large_huge::handle_operand_addr (tree op, gimple *stmt,
> + int *prec_stored, int *prec)
> +{
> + wide_int w;
> + location_t loc_save = m_loc;
> + if ((TREE_CODE (TREE_TYPE (op)) != BITINT_TYPE
> + || bitint_precision_kind (TREE_TYPE (op)) < bitint_prec_large)
> + && TREE_CODE (op) != INTEGER_CST)
> + {
> + do_int:
> + *prec = range_to_prec (op, stmt);
> + bitint_prec_kind kind = bitint_prec_small;
> + gcc_assert (INTEGRAL_TYPE_P (TREE_TYPE (op)));
> + if (TREE_CODE (TREE_TYPE (op)) == BITINT_TYPE)
> + kind = bitint_precision_kind (TREE_TYPE (op));
> + if (kind == bitint_prec_middle)
> + {
> + tree type = NULL_TREE;
> + op = maybe_cast_middle_bitint (&m_gsi, op, type);
> + }
> + tree op_type = TREE_TYPE (op);
> + unsigned HOST_WIDE_INT nelts
> + = CEIL (TYPE_PRECISION (op_type), limb_prec);
> + /* Add support for 3 or more limbs filled in from normal
> + integral type if this assert fails. If no target chooses
> + limb mode smaller than half of largest supported normal
> + integral type, this will not be needed. */
> + gcc_assert (nelts <= 2);
> + if (prec_stored)
> + *prec_stored = (TYPE_UNSIGNED (op_type)
> + ? TYPE_PRECISION (op_type)
> + : -TYPE_PRECISION (op_type));
> + if (*prec <= limb_prec && *prec >= -limb_prec)
> + {
> + nelts = 1;
> + if (prec_stored)
> + {
> + if (TYPE_UNSIGNED (op_type))
> + {
> + if (*prec_stored > limb_prec)
> + *prec_stored = limb_prec;
> + }
> + else if (*prec_stored < -limb_prec)
> + *prec_stored = -limb_prec;
> + }
> + }
> + tree atype = build_array_type_nelts (m_limb_type, nelts);
> + tree var = create_tmp_var (atype);
> + tree t1 = op;
> + if (!useless_type_conversion_p (m_limb_type, op_type))
> + t1 = add_cast (m_limb_type, t1);
> + tree v = build4 (ARRAY_REF, m_limb_type, var, size_zero_node,
> + NULL_TREE, NULL_TREE);
> + gimple *g = gimple_build_assign (v, t1);
> + insert_before (g);
> + if (nelts > 1)
> + {
> + tree lp = build_int_cst (unsigned_type_node, limb_prec);
> + g = gimple_build_assign (make_ssa_name (op_type),
> + RSHIFT_EXPR, op, lp);
> + insert_before (g);
> + tree t2 = gimple_assign_lhs (g);
> + t2 = add_cast (m_limb_type, t2);
> + v = build4 (ARRAY_REF, m_limb_type, var, size_one_node,
> + NULL_TREE, NULL_TREE);
> + g = gimple_build_assign (v, t2);
> + insert_before (g);
> + }
> + tree ret = build_fold_addr_expr (var);
> + if (!stmt_ends_bb_p (gsi_stmt (m_gsi)))
> + {
> + tree clobber = build_clobber (atype, CLOBBER_EOL);
> + g = gimple_build_assign (var, clobber);
> + gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
> + }
> + m_loc = loc_save;
> + return ret;
> + }
> + switch (TREE_CODE (op))
> + {
> + case SSA_NAME:
> + if (m_names == NULL
> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (op)))
> + {
> + gimple *g = SSA_NAME_DEF_STMT (op);
> + tree ret;
> + m_loc = gimple_location (g);
> + if (gimple_assign_load_p (g))
> + {
> + *prec = range_to_prec (op, NULL);
> + if (prec_stored)
> + *prec_stored = (TYPE_UNSIGNED (TREE_TYPE (op))
> + ? TYPE_PRECISION (TREE_TYPE (op))
> + : -TYPE_PRECISION (TREE_TYPE (op)));
> + ret = build_fold_addr_expr (gimple_assign_rhs1 (g));
> + ret = force_gimple_operand_gsi (&m_gsi, ret, true,
> + NULL_TREE, true, GSI_SAME_STMT);
> + }
> + else if (gimple_code (g) == GIMPLE_NOP)
> + {
> + tree var = create_tmp_var (m_limb_type);
> + TREE_ADDRESSABLE (var) = 1;
> + ret = build_fold_addr_expr (var);
> + if (!stmt_ends_bb_p (gsi_stmt (m_gsi)))
> + {
> + tree clobber = build_clobber (m_limb_type, CLOBBER_EOL);
> + g = gimple_build_assign (var, clobber);
> + gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
> + }
> + }
> + else
> + {
> + gcc_assert (gimple_assign_cast_p (g));
> + tree rhs1 = gimple_assign_rhs1 (g);
> + bitint_prec_kind kind = bitint_prec_small;
> + gcc_assert (INTEGRAL_TYPE_P (TREE_TYPE (rhs1)));
> + if (TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE)
> + kind = bitint_precision_kind (TREE_TYPE (rhs1));
> + if (kind >= bitint_prec_large)
> + {
> + tree lhs_type = TREE_TYPE (op);
> + tree rhs_type = TREE_TYPE (rhs1);
> + int prec_stored_val = 0;
> + ret = handle_operand_addr (rhs1, g, &prec_stored_val, prec);
> + if (TYPE_PRECISION (lhs_type) > TYPE_PRECISION (rhs_type))
> + {
> + if (TYPE_UNSIGNED (lhs_type)
> + && !TYPE_UNSIGNED (rhs_type))
> + gcc_assert (*prec >= 0 || prec_stored == NULL);
> + }
> + else
> + {
> + if (*prec > 0 && *prec < TYPE_PRECISION (lhs_type))
> + ;
> + else if (TYPE_UNSIGNED (lhs_type))
> + {
> + gcc_assert (*prec > 0
> + || prec_stored_val > 0
> + || (-prec_stored_val
> + >= TYPE_PRECISION (lhs_type)));
> + *prec = TYPE_PRECISION (lhs_type);
> + }
> + else if (*prec < 0 && -*prec < TYPE_PRECISION (lhs_type))
> + ;
> + else
> + *prec = -TYPE_PRECISION (lhs_type);
> + }
> + }
> + else
> + {
> + op = rhs1;
> + stmt = g;
> + goto do_int;
> + }
> + }
> + m_loc = loc_save;
> + return ret;
> + }
> + else
> + {
> + int p = var_to_partition (m_map, op);
> + gcc_assert (m_vars[p] != NULL_TREE);
> + *prec = range_to_prec (op, stmt);
> + if (prec_stored)
> + *prec_stored = (TYPE_UNSIGNED (TREE_TYPE (op))
> + ? TYPE_PRECISION (TREE_TYPE (op))
> + : -TYPE_PRECISION (TREE_TYPE (op)));
> + return build_fold_addr_expr (m_vars[p]);
> + }
> + case INTEGER_CST:
> + unsigned int min_prec, mp;
> + tree type;
> + w = wi::to_wide (op);
> + if (tree_int_cst_sgn (op) >= 0)
> + {
> + min_prec = wi::min_precision (w, UNSIGNED);
> + *prec = MAX (min_prec, 1);
> + }
> + else
> + {
> + min_prec = wi::min_precision (w, SIGNED);
> + *prec = MIN ((int) -min_prec, -2);
> + }
> + mp = CEIL (min_prec, limb_prec) * limb_prec;
> + if (mp >= (unsigned) TYPE_PRECISION (TREE_TYPE (op)))
> + type = TREE_TYPE (op);
> + else
> + type = build_bitint_type (mp, 1);
> + if (TREE_CODE (type) != BITINT_TYPE
> + || bitint_precision_kind (type) == bitint_prec_small)
> + {
> + if (TYPE_PRECISION (type) <= limb_prec)
> + type = m_limb_type;
> + else
> + /* This case is for targets which e.g. have a 64-bit
> + limb but categorize _BitInts up to 128 bits as
> + small. We could use type of m_limb_type[2] and
> + similar instead to save space. */
> + type = build_bitint_type (mid_min_prec, 1);
> + }
> + if (prec_stored)
> + {
> + if (tree_int_cst_sgn (op) >= 0)
> + *prec_stored = MAX (TYPE_PRECISION (type), 1);
> + else
> + *prec_stored = MIN ((int) -TYPE_PRECISION (type), -2);
> + }
> + op = tree_output_constant_def (fold_convert (type, op));
> + return build_fold_addr_expr (op);
> + default:
> + gcc_unreachable ();
> + }
> +}
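The do_int spill path above (ordinary integer stored into a one- or two-limb temporary whose address is returned) corresponds to roughly this in C — a hypothetical 32-bit limb and little-endian limb order, as assumed throughout the pass:

```c
#include <assert.h>
#include <stdint.h>

#define LIMB_PREC 32	/* illustrative limb width */

/* Spill an ordinary integer OP of precision OP_PREC (at most two limbs
   wide, matching the gcc_assert in the patch) into a little-endian limb
   array; return the number of limbs written.  */
static unsigned
spill_int (uint64_t op, unsigned op_prec, uint32_t limbs[2])
{
  unsigned nelts = (op_prec + LIMB_PREC - 1) / LIMB_PREC;	/* CEIL */
  limbs[0] = (uint32_t) op;			/* limb 0: low bits */
  if (nelts > 1)
    limbs[1] = (uint32_t) (op >> LIMB_PREC);	/* limb 1: op >> limb_prec */
  return nelts;
}
```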
> +
> +/* Helper function, create a loop before the current location,
> + start with sizetype INIT value from the preheader edge. Return
> + a PHI result and set *IDX_NEXT to the SSA_NAME it creates and uses
> + from the latch edge. */
> +
> +tree
> +bitint_large_huge::create_loop (tree init, tree *idx_next)
> +{
> + if (!gsi_end_p (m_gsi))
> + gsi_prev (&m_gsi);
> + else
> + m_gsi = gsi_last_bb (gsi_bb (m_gsi));
> + edge e1 = split_block (gsi_bb (m_gsi), gsi_stmt (m_gsi));
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->dest, e1->dest, EDGE_TRUE_VALUE);
> + e3->probability = profile_probability::very_unlikely ();
> + e2->flags = EDGE_FALSE_VALUE;
> + e2->probability = e3->probability.invert ();
> + tree idx = make_ssa_name (sizetype);
maybe you want integer_type_node instead?
> + gphi *phi = create_phi_node (idx, e1->dest);
> + add_phi_arg (phi, init, e1, UNKNOWN_LOCATION);
> + *idx_next = make_ssa_name (sizetype);
> + add_phi_arg (phi, *idx_next, e3, UNKNOWN_LOCATION);
> + m_gsi = gsi_after_labels (e1->dest);
> + m_bb = e1->dest;
> + m_preheader_bb = e1->src;
> + class loop *loop = alloc_loop ();
> + loop->header = e1->dest;
> + add_loop (loop, e1->src->loop_father);
There is create_empty_loop_on_edge; it does a little bit more
than the above, though.
> + return idx;
> +}
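In C terms, the control-flow shape this helper creates looks like the function below (illustrated on a simple limb sum; the exit condition itself is emitted by the caller, as lower_mergeable_stmt does, and the names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Shape of the loop create_loop builds: IDX is a PHI fed INIT on the
   preheader edge and IDX_NEXT on the latch edge; the body is emitted
   after the header's labels.  */
static uint64_t
sum_limbs (const uint64_t *limbs, unsigned init, unsigned nlimbs)
{
  uint64_t sum = 0;
  unsigned idx = init;		/* PHI result; INIT from the preheader */
  do
    {
      sum += limbs[idx];	/* loop body emitted at m_gsi */
      unsigned idx_next = idx + 1;
      idx = idx_next;		/* PHI value from the latch edge */
    }
  while (idx < nlimbs);		/* condition added by the caller */
  return sum;
}
```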
> +
> +/* Lower a mergeable or similar large/huge _BitInt statement STMT which can be
> + lowered using iteration from the least significant limb up to the most
> + significant limb. For large _BitInt it is emitted as straight line code
> + before current location, for huge _BitInt as a loop handling two limbs
> + at once, followed by handling the remaining limbs in straight line
> + code (at most one full and one partial limb). It can also handle
> + EQ_EXPR/NE_EXPR comparisons; in that case CMP_CODE should be the
> + comparison code and CMP_OP1/CMP_OP2 the comparison operands. */
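To make the huge-_BitInt iteration scheme concrete, here is a plain C sketch on a trivially mergeable op (bitwise OR), assuming a 64-bit limb. The real pass of course emits this as a GIMPLE loop via create_loop rather than C source:

```c
#include <assert.h>
#include <stdint.h>

#define LIMB_PREC 64	/* illustrative limb width */

/* Huge-_BitInt shape: a loop handling two limbs per iteration, followed
   by at most one full and one partial limb of straight line code.  */
static void
mergeable_or (const uint64_t *a, const uint64_t *b, uint64_t *res,
	      unsigned prec)
{
  unsigned rem = prec % (2 * LIMB_PREC);
  unsigned end = (prec - rem) / LIMB_PREC;	/* limbs done in the loop */
  unsigned tail = (rem + LIMB_PREC - 1) / LIMB_PREC;	/* CEIL (rem, limb_prec) */
  for (unsigned idx = 0; idx < end; idx += 2)
    {
      res[idx] = a[idx] | b[idx];
      res[idx + 1] = a[idx + 1] | b[idx + 1];
    }
  for (unsigned i = 0; i < tail; i++)		/* straight line tail */
    res[end + i] = a[end + i] | b[end + i];
}
```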
> +
> +tree
> +bitint_large_huge::lower_mergeable_stmt (gimple *stmt, tree_code &cmp_code,
> + tree cmp_op1, tree cmp_op2)
> +{
> + bool eq_p = cmp_code != ERROR_MARK;
> + tree type;
> + if (eq_p)
> + type = TREE_TYPE (cmp_op1);
> + else
> + type = TREE_TYPE (gimple_assign_lhs (stmt));
> + gcc_assert (TREE_CODE (type) == BITINT_TYPE);
> + bitint_prec_kind kind = bitint_precision_kind (type);
> + gcc_assert (kind >= bitint_prec_large);
> + gimple *g;
> + tree lhs = gimple_get_lhs (stmt);
> + tree rhs1, lhs_type = lhs ? TREE_TYPE (lhs) : NULL_TREE;
> + if (lhs
> + && TREE_CODE (lhs) == SSA_NAME
> + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> + {
> + int p = var_to_partition (m_map, lhs);
> + gcc_assert (m_vars[p] != NULL_TREE);
> + m_lhs = lhs = m_vars[p];
> + }
> + unsigned cnt, rem = 0, end = 0, prec = TYPE_PRECISION (type);
> + bool sext = false;
> + tree ext = NULL_TREE, store_operand = NULL_TREE;
> + bool eh = false;
> + basic_block eh_pad = NULL;
> + if (gimple_store_p (stmt))
> + {
> + store_operand = gimple_assign_rhs1 (stmt);
> + eh = stmt_ends_bb_p (stmt);
> + if (eh)
> + {
> + edge e;
> + edge_iterator ei;
> + basic_block bb = gimple_bb (stmt);
> +
> + FOR_EACH_EDGE (e, ei, bb->succs)
> + if (e->flags & EDGE_EH)
> + {
> + eh_pad = e->dest;
> + break;
> + }
> + }
> + }
> + if ((store_operand
> + && TREE_CODE (store_operand) == SSA_NAME
> + && (m_names == NULL
> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (store_operand)))
> + && gimple_assign_cast_p (SSA_NAME_DEF_STMT (store_operand)))
> + || gimple_assign_cast_p (stmt))
> + {
> + rhs1 = gimple_assign_rhs1 (store_operand
> + ? SSA_NAME_DEF_STMT (store_operand)
> + : stmt);
> + /* Optimize mergeable ops ending with widening cast to _BitInt
> + (or followed by store). We can lower just the limbs of the
> + cast operand and widen afterwards. */
> + if (TREE_CODE (rhs1) == SSA_NAME
> + && (m_names == NULL
> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1)))
> + && TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
> + && (CEIL ((unsigned) TYPE_PRECISION (TREE_TYPE (rhs1)),
> + limb_prec) < CEIL (prec, limb_prec)
> + || (kind == bitint_prec_huge
> + && TYPE_PRECISION (TREE_TYPE (rhs1)) < prec)))
> + {
> + store_operand = rhs1;
> + prec = TYPE_PRECISION (TREE_TYPE (rhs1));
> + kind = bitint_precision_kind (TREE_TYPE (rhs1));
> + if (!TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> + sext = true;
> + }
> + }
> + tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
> + if (kind == bitint_prec_large)
> + cnt = CEIL (prec, limb_prec);
> + else
> + {
> + rem = (prec % (2 * limb_prec));
> + end = (prec - rem) / limb_prec;
> + cnt = 2 + CEIL (rem, limb_prec);
> + idx = idx_first = create_loop (size_zero_node, &idx_next);
> + }
> +
> + basic_block edge_bb = NULL;
> + if (eq_p)
> + {
> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> + gsi_prev (&gsi);
> + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> + edge_bb = e->src;
> + if (kind == bitint_prec_large)
> + {
> + m_gsi = gsi_last_bb (edge_bb);
> + if (!gsi_end_p (m_gsi))
> + gsi_next (&m_gsi);
> + }
> + }
> + else
> + m_after_stmt = stmt;
> + if (kind != bitint_prec_large)
> + m_upwards_2limb = end;
> +
> + for (unsigned i = 0; i < cnt; i++)
> + {
> + m_data_cnt = 0;
> + if (kind == bitint_prec_large)
> + idx = size_int (i);
> + else if (i >= 2)
> + idx = size_int (end + (i > 2));
> + if (eq_p)
> + {
> + rhs1 = handle_operand (cmp_op1, idx);
> + tree rhs2 = handle_operand (cmp_op2, idx);
> + g = gimple_build_cond (NE_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + e1->flags = EDGE_FALSE_VALUE;
> + edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> + e1->probability = profile_probability::unlikely ();
> + e2->probability = e1->probability.invert ();
> + if (i == 0)
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + }
> + else
> + {
> + if (store_operand)
> + rhs1 = handle_operand (store_operand, idx);
> + else
> + rhs1 = handle_stmt (stmt, idx);
> + tree l = limb_access (lhs_type, lhs, idx, true);
> + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
> + rhs1 = add_cast (TREE_TYPE (l), rhs1);
> + if (sext && i == cnt - 1)
> + ext = rhs1;
> + g = gimple_build_assign (l, rhs1);
> + insert_before (g);
> + if (eh)
> + {
> + maybe_duplicate_eh_stmt (g, stmt);
> + if (eh_pad)
> + {
> + edge e = split_block (gsi_bb (m_gsi), g);
> + m_gsi = gsi_after_labels (e->dest);
> + make_edge (e->src, eh_pad, EDGE_EH)->probability
> + = profile_probability::very_unlikely ();
> + }
> + }
> + }
> + m_first = false;
> + if (kind == bitint_prec_huge && i <= 1)
> + {
> + if (i == 0)
> + {
> + idx = make_ssa_name (sizetype);
> + g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
> + size_one_node);
> + insert_before (g);
> + }
> + else
> + {
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
> + size_int (2));
> + insert_before (g);
> + g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + if (eq_p)
> + m_gsi = gsi_after_labels (edge_bb);
> + else
> + m_gsi = gsi_for_stmt (stmt);
> + }
> + }
> + }
> +
> + if (prec != (unsigned) TYPE_PRECISION (type)
> + && (CEIL ((unsigned) TYPE_PRECISION (type), limb_prec)
> + > CEIL (prec, limb_prec)))
> + {
> + if (sext)
> + {
> + ext = add_cast (signed_type_for (m_limb_type), ext);
> + tree lpm1 = build_int_cst (unsigned_type_node,
> + limb_prec - 1);
> + tree n = make_ssa_name (TREE_TYPE (ext));
> + g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
> + insert_before (g);
> + ext = add_cast (m_limb_type, n);
> + }
> + else
> + ext = build_zero_cst (m_limb_type);
> + kind = bitint_precision_kind (type);
> + unsigned start = CEIL (prec, limb_prec);
> + prec = TYPE_PRECISION (type);
> + idx = idx_first = idx_next = NULL_TREE;
> + if (prec <= (start + 2) * limb_prec)
> + kind = bitint_prec_large;
> + if (kind == bitint_prec_large)
> + cnt = CEIL (prec, limb_prec) - start;
> + else
> + {
> + rem = prec % limb_prec;
> + end = (prec - rem) / limb_prec;
> + cnt = 1 + (rem != 0);
> + idx = create_loop (size_int (start), &idx_next);
> + }
> + for (unsigned i = 0; i < cnt; i++)
> + {
> + if (kind == bitint_prec_large)
> + idx = size_int (start + i);
> + else if (i == 1)
> + idx = size_int (end);
> + rhs1 = ext;
> + tree l = limb_access (lhs_type, lhs, idx, true);
> + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
> + rhs1 = add_cast (TREE_TYPE (l), rhs1);
> + g = gimple_build_assign (l, rhs1);
> + insert_before (g);
> + if (kind == bitint_prec_huge && i == 0)
> + {
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
> + size_one_node);
> + insert_before (g);
> + g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + m_gsi = gsi_for_stmt (stmt);
> + }
> + }
> + }
> +
> + if (gimple_store_p (stmt))
> + {
> + unlink_stmt_vdef (stmt);
> + release_ssa_name (gimple_vdef (stmt));
> + gsi_remove (&m_gsi, true);
> + }
> + if (eq_p)
> + {
> + lhs = make_ssa_name (boolean_type_node);
> + basic_block bb = gimple_bb (stmt);
> + gphi *phi = create_phi_node (lhs, bb);
> + edge e = find_edge (gsi_bb (m_gsi), bb);
> + unsigned int n = EDGE_COUNT (bb->preds);
> + for (unsigned int i = 0; i < n; i++)
> + {
> + edge e2 = EDGE_PRED (bb, i);
> + add_phi_arg (phi, e == e2 ? boolean_true_node : boolean_false_node,
> + e2, UNKNOWN_LOCATION);
> + }
> + cmp_code = cmp_code == EQ_EXPR ? NE_EXPR : EQ_EXPR;
> + return lhs;
> + }
> + else
> + return NULL_TREE;
> +}
> +
> +/* Handle a large/huge _BitInt comparison statement STMT other than
> + EQ_EXPR/NE_EXPR. The meaning of CMP_CODE, CMP_OP1 and CMP_OP2 is as in
> + lower_mergeable_stmt. The {GT,GE,LT,LE}_EXPR comparisons are
> + lowered by iteration from the most significant limb downwards to
> + the least significant one, for large _BitInt in straight line code,
> + otherwise with most significant limb handled in
> + straight line code followed by a loop handling one limb at a time.
> + Comparisons with unsigned huge _BitInt with precisions which are
> + multiples of limb precision can use just the loop and don't need to
> + handle most significant limb before the loop. The loop or straight
> + line code jumps to the final basic block if a particular pair of limbs
> + is not equal. */
> +
> +tree
> +bitint_large_huge::lower_comparison_stmt (gimple *stmt, tree_code &cmp_code,
> + tree cmp_op1, tree cmp_op2)
> +{
> + tree type = TREE_TYPE (cmp_op1);
> + gcc_assert (TREE_CODE (type) == BITINT_TYPE);
> + bitint_prec_kind kind = bitint_precision_kind (type);
> + gcc_assert (kind >= bitint_prec_large);
> + gimple *g;
> + if (!TYPE_UNSIGNED (type)
> + && integer_zerop (cmp_op2)
> + && (cmp_code == GE_EXPR || cmp_code == LT_EXPR))
> + {
> + unsigned end = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec) - 1;
> + tree idx = size_int (end);
> + m_data_cnt = 0;
> + tree rhs1 = handle_operand (cmp_op1, idx);
> + if (TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> + {
> + tree stype = signed_type_for (TREE_TYPE (rhs1));
> + rhs1 = add_cast (stype, rhs1);
> + }
> + tree lhs = make_ssa_name (boolean_type_node);
> + g = gimple_build_assign (lhs, cmp_code, rhs1,
> + build_zero_cst (TREE_TYPE (rhs1)));
> + insert_before (g);
> + cmp_code = NE_EXPR;
> + return lhs;
> + }
> +
> + unsigned cnt, rem = 0, end = 0;
> + tree idx = NULL_TREE, idx_next = NULL_TREE;
> + if (kind == bitint_prec_large)
> + cnt = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec);
> + else
> + {
> + rem = ((unsigned) TYPE_PRECISION (type) % limb_prec);
> + if (rem == 0 && !TYPE_UNSIGNED (type))
> + rem = limb_prec;
> + end = ((unsigned) TYPE_PRECISION (type) - rem) / limb_prec;
> + cnt = 1 + (rem != 0);
> + }
> +
> + basic_block edge_bb = NULL;
> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> + gsi_prev (&gsi);
> + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> + edge_bb = e->src;
> + m_gsi = gsi_last_bb (edge_bb);
> + if (!gsi_end_p (m_gsi))
> + gsi_next (&m_gsi);
> +
> + edge *edges = XALLOCAVEC (edge, cnt * 2);
> + for (unsigned i = 0; i < cnt; i++)
> + {
> + m_data_cnt = 0;
> + if (kind == bitint_prec_large)
> + idx = size_int (cnt - i - 1);
> + else if (i == cnt - 1)
> + idx = create_loop (size_int (end - 1), &idx_next);
> + else
> + idx = size_int (end);
> + tree rhs1 = handle_operand (cmp_op1, idx);
> + tree rhs2 = handle_operand (cmp_op2, idx);
> + if (i == 0
> + && !TYPE_UNSIGNED (type)
> + && TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> + {
> + tree stype = signed_type_for (TREE_TYPE (rhs1));
> + rhs1 = add_cast (stype, rhs1);
> + rhs2 = add_cast (stype, rhs2);
> + }
> + g = gimple_build_cond (GT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + e1->flags = EDGE_FALSE_VALUE;
> + edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> + e1->probability = profile_probability::likely ();
> + e2->probability = e1->probability.invert ();
> + if (i == 0)
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + edges[2 * i] = e2;
> + g = gimple_build_cond (LT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + e1 = split_block (gsi_bb (m_gsi), g);
> + e1->flags = EDGE_FALSE_VALUE;
> + e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> + e1->probability = profile_probability::unlikely ();
> + e2->probability = e1->probability.invert ();
> + m_gsi = gsi_after_labels (e1->dest);
> + edges[2 * i + 1] = e2;
> + m_first = false;
> + if (kind == bitint_prec_huge && i == cnt - 1)
> + {
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> + insert_before (g);
> + g = gimple_build_cond (NE_EXPR, idx, size_zero_node,
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge true_edge, false_edge;
> + extract_true_false_edges_from_block (gsi_bb (m_gsi),
> + &true_edge, &false_edge);
> + m_gsi = gsi_after_labels (false_edge->dest);
> + }
> + }
> +
> + tree lhs = make_ssa_name (boolean_type_node);
> + basic_block bb = gimple_bb (stmt);
> + gphi *phi = create_phi_node (lhs, bb);
> + for (unsigned int i = 0; i < cnt * 2; i++)
> + {
> + tree val = ((cmp_code == GT_EXPR || cmp_code == GE_EXPR)
> + ^ (i & 1)) ? boolean_true_node : boolean_false_node;
> + add_phi_arg (phi, val, edges[i], UNKNOWN_LOCATION);
> + }
> + add_phi_arg (phi, (cmp_code == GE_EXPR || cmp_code == LE_EXPR)
> + ? boolean_true_node : boolean_false_node,
> + find_edge (gsi_bb (m_gsi), bb), UNKNOWN_LOCATION);
> + cmp_code = NE_EXPR;
> + return lhs;
> +}
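For readers following the CFG construction above: the emitted control flow is, in effect, the classic most-significant-limb-first comparison sketched below (hypothetical 64-bit limbs; the signed compare of the top limb of a signed operand is elided here for brevity):

```c
#include <assert.h>
#include <stdint.h>

/* Walk from the most significant limb down; the first unequal pair
   decides (the GT_EXPR/LT_EXPR edges in the lowering), and fully equal
   operands fall through.  Returns -1, 0 or 1.  */
static int
cmp_limbs (const uint64_t *a, const uint64_t *b, unsigned nlimbs)
{
  for (unsigned i = nlimbs; i-- > 0; )
    {
      if (a[i] > b[i])
	return 1;	/* GT_EXPR edge taken */
      if (a[i] < b[i])
	return -1;	/* LT_EXPR edge taken */
    }
  return 0;		/* fall through: operands equal */
}
```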
> +
> +/* Lower large/huge _BitInt left and right shift except for left
> + shift by < limb_prec constant. */
> +
> +void
> +bitint_large_huge::lower_shift_stmt (tree obj, gimple *stmt)
> +{
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + tree lhs = gimple_assign_lhs (stmt);
> + tree_code rhs_code = gimple_assign_rhs_code (stmt);
> + tree type = TREE_TYPE (rhs1);
> + gimple *final_stmt = gsi_stmt (m_gsi);
> + gcc_assert (TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) >= bitint_prec_large);
> + int prec = TYPE_PRECISION (type);
> + tree n = gimple_assign_rhs2 (stmt), n1, n2, n3, n4;
> + gimple *g;
> + if (obj == NULL_TREE)
> + {
> + int part = var_to_partition (m_map, lhs);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + obj = m_vars[part];
> + }
> + /* Preparation code common for both left and right shifts.
> + unsigned n1 = n % limb_prec;
> + size_t n2 = n / limb_prec;
> + size_t n3 = n1 != 0;
> + unsigned n4 = (limb_prec - n1) % limb_prec;
> + (for power of 2 limb_prec n4 can be -n1 & (limb_prec - 1)). */
> + if (TREE_CODE (n) == INTEGER_CST)
> + {
> + tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
> + n1 = int_const_binop (TRUNC_MOD_EXPR, n, lp);
> + n2 = fold_convert (sizetype, int_const_binop (TRUNC_DIV_EXPR, n, lp));
> + n3 = size_int (!integer_zerop (n1));
> + n4 = int_const_binop (TRUNC_MOD_EXPR,
> + int_const_binop (MINUS_EXPR, lp, n1), lp);
> + }
> + else
> + {
> + n1 = make_ssa_name (TREE_TYPE (n));
> + n2 = make_ssa_name (sizetype);
> + n3 = make_ssa_name (sizetype);
> + n4 = make_ssa_name (TREE_TYPE (n));
> + if (pow2p_hwi (limb_prec))
> + {
> + tree lpm1 = build_int_cst (TREE_TYPE (n), limb_prec - 1);
> + g = gimple_build_assign (n1, BIT_AND_EXPR, n, lpm1);
> + insert_before (g);
> + g = gimple_build_assign (useless_type_conversion_p (sizetype,
> + TREE_TYPE (n))
> + ? n2 : make_ssa_name (TREE_TYPE (n)),
> + RSHIFT_EXPR, n,
> + build_int_cst (TREE_TYPE (n),
> + exact_log2 (limb_prec)));
> + insert_before (g);
> + if (gimple_assign_lhs (g) != n2)
> + {
> + g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
> + insert_before (g);
> + }
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
> + NEGATE_EXPR, n1);
> + insert_before (g);
> + g = gimple_build_assign (n4, BIT_AND_EXPR, gimple_assign_lhs (g),
> + lpm1);
> + insert_before (g);
> + }
> + else
> + {
> + tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
> + g = gimple_build_assign (n1, TRUNC_MOD_EXPR, n, lp);
> + insert_before (g);
> + g = gimple_build_assign (useless_type_conversion_p (sizetype,
> + TREE_TYPE (n))
> + ? n2 : make_ssa_name (TREE_TYPE (n)),
> + TRUNC_DIV_EXPR, n, lp);
> + insert_before (g);
> + if (gimple_assign_lhs (g) != n2)
> + {
> + g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
> + insert_before (g);
> + }
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
> + MINUS_EXPR, lp, n1);
> + insert_before (g);
> + g = gimple_build_assign (n4, TRUNC_MOD_EXPR, gimple_assign_lhs (g),
> + lp);
> + insert_before (g);
> + }
> + g = gimple_build_assign (make_ssa_name (boolean_type_node), NE_EXPR, n1,
> + build_zero_cst (TREE_TYPE (n)));
> + insert_before (g);
> + g = gimple_build_assign (n3, NOP_EXPR, gimple_assign_lhs (g));
> + insert_before (g);
> + }
> + tree p = build_int_cst (sizetype,
> + prec / limb_prec - (prec % limb_prec == 0));
> + if (rhs_code == RSHIFT_EXPR)
> + {
> + /* Lower
> + dst = src >> n;
> + as
> + unsigned n1 = n % limb_prec;
> + size_t n2 = n / limb_prec;
> + size_t n3 = n1 != 0;
> + unsigned n4 = (limb_prec - n1) % limb_prec;
> + size_t idx;
> + size_t p = prec / limb_prec - (prec % limb_prec == 0);
> + int signed_p = (typeof (src) -1) < 0;
> + for (idx = n2; idx < ((!signed_p && (prec % limb_prec == 0))
> + ? p : p - n3); ++idx)
> + dst[idx - n2] = (src[idx] >> n1) | (src[idx + n3] << n4);
> + limb_type ext;
> + if (prec % limb_prec == 0)
> + ext = src[p];
> + else if (signed_p)
> + ext = ((signed limb_type) (src[p] << (limb_prec
> + - (prec % limb_prec))))
> + >> (limb_prec - (prec % limb_prec));
> + else
> + ext = src[p] & (((limb_type) 1 << (prec % limb_prec)) - 1);
> + if (!signed_p && (prec % limb_prec == 0))
> + ;
> + else if (idx < prec / limb_prec)
> + {
> + dst[idx - n2] = (src[idx] >> n1) | (ext << n4);
> + ++idx;
> + }
> + idx -= n2;
> + if (signed_p)
> + {
> + dst[idx] = ((signed limb_type) ext) >> n1;
> + ext = ((signed limb_type) ext) >> (limb_prec - 1);
> + }
> + else
> + {
> + dst[idx] = ext >> n1;
> + ext = 0;
> + }
> + for (++idx; idx <= p; ++idx)
> + dst[idx] = ext; */
> + tree pmn3;
> + if (TYPE_UNSIGNED (type) && prec % limb_prec == 0)
> + pmn3 = p;
> + else if (TREE_CODE (n3) == INTEGER_CST)
> + pmn3 = int_const_binop (MINUS_EXPR, p, n3);
> + else
> + {
> + pmn3 = make_ssa_name (sizetype);
> + g = gimple_build_assign (pmn3, MINUS_EXPR, p, n3);
> + insert_before (g);
> + }
> + g = gimple_build_cond (LT_EXPR, n2, pmn3, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + tree idx_next;
> + tree idx = create_loop (n2, &idx_next);
> + tree idxmn2 = make_ssa_name (sizetype);
> + tree idxpn3 = make_ssa_name (sizetype);
> + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> + insert_before (g);
> + g = gimple_build_assign (idxpn3, PLUS_EXPR, idx, n3);
> + insert_before (g);
> + m_data_cnt = 0;
> + tree t1 = handle_operand (rhs1, idx);
> + m_first = false;
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + RSHIFT_EXPR, t1, n1);
> + insert_before (g);
> + t1 = gimple_assign_lhs (g);
> + if (!integer_zerop (n3))
> + {
> + m_data_cnt = 0;
> + tree t2 = handle_operand (rhs1, idxpn3);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + LSHIFT_EXPR, t2, n4);
> + insert_before (g);
> + t2 = gimple_assign_lhs (g);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + BIT_IOR_EXPR, t1, t2);
> + insert_before (g);
> + t1 = gimple_assign_lhs (g);
> + }
> + tree l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
> + g = gimple_build_assign (l, t1);
> + insert_before (g);
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> + insert_before (g);
> + g = gimple_build_cond (LT_EXPR, idx_next, pmn3, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + idx = make_ssa_name (sizetype);
> + m_gsi = gsi_for_stmt (final_stmt);
> + gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
> + e1 = find_edge (e1->src, gsi_bb (m_gsi));
> + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> + add_phi_arg (phi, n2, e1, UNKNOWN_LOCATION);
> + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> + m_data_cnt = 0;
> + tree ms = handle_operand (rhs1, p);
> + tree ext = ms;
> + if (!types_compatible_p (TREE_TYPE (ms), m_limb_type))
> + ext = add_cast (m_limb_type, ms);
> + if (!(TYPE_UNSIGNED (type) && prec % limb_prec == 0)
> + && !integer_zerop (n3))
> + {
> + g = gimple_build_cond (LT_EXPR, idx, p, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + e1 = split_block (gsi_bb (m_gsi), g);
> + e2 = split_block (e1->dest, (gimple *) NULL);
> + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + m_data_cnt = 0;
> + t1 = handle_operand (rhs1, idx);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + RSHIFT_EXPR, t1, n1);
> + insert_before (g);
> + t1 = gimple_assign_lhs (g);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + LSHIFT_EXPR, ext, n4);
> + insert_before (g);
> + tree t2 = gimple_assign_lhs (g);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + BIT_IOR_EXPR, t1, t2);
> + insert_before (g);
> + t1 = gimple_assign_lhs (g);
> + idxmn2 = make_ssa_name (sizetype);
> + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> + insert_before (g);
> + l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
> + g = gimple_build_assign (l, t1);
> + insert_before (g);
> + idx_next = make_ssa_name (sizetype);
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> + insert_before (g);
> + m_gsi = gsi_for_stmt (final_stmt);
> + tree nidx = make_ssa_name (sizetype);
> + phi = create_phi_node (nidx, gsi_bb (m_gsi));
> + e1 = find_edge (e1->src, gsi_bb (m_gsi));
> + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> + add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
> + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> + idx = nidx;
> + }
> + g = gimple_build_assign (make_ssa_name (sizetype), MINUS_EXPR, idx, n2);
> + insert_before (g);
> + idx = gimple_assign_lhs (g);
> + tree sext = ext;
> + if (!TYPE_UNSIGNED (type))
> + sext = add_cast (signed_type_for (m_limb_type), ext);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
> + RSHIFT_EXPR, sext, n1);
> + insert_before (g);
> + t1 = gimple_assign_lhs (g);
> + if (!TYPE_UNSIGNED (type))
> + {
> + t1 = add_cast (m_limb_type, t1);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
> + RSHIFT_EXPR, sext,
> + build_int_cst (TREE_TYPE (n),
> + limb_prec - 1));
> + insert_before (g);
> + ext = add_cast (m_limb_type, gimple_assign_lhs (g));
> + }
> + else
> + ext = build_zero_cst (m_limb_type);
> + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> + g = gimple_build_assign (l, t1);
> + insert_before (g);
> + g = gimple_build_assign (make_ssa_name (sizetype), PLUS_EXPR, idx,
> + size_one_node);
> + insert_before (g);
> + idx = gimple_assign_lhs (g);
> + g = gimple_build_cond (LE_EXPR, idx, p, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + e1 = split_block (gsi_bb (m_gsi), g);
> + e2 = split_block (e1->dest, (gimple *) NULL);
> + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + idx = create_loop (idx, &idx_next);
> + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> + g = gimple_build_assign (l, ext);
> + insert_before (g);
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> + insert_before (g);
> + g = gimple_build_cond (LE_EXPR, idx_next, p, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + }
> + else
> + {
> + /* Lower
> + dst = src << n;
> + as
> + unsigned n1 = n % limb_prec;
> + size_t n2 = n / limb_prec;
> + size_t n3 = n1 != 0;
> + unsigned n4 = (limb_prec - n1) % limb_prec;
> + size_t idx;
> + size_t p = prec / limb_prec - (prec % limb_prec == 0);
> + for (idx = p; (ssize_t) idx >= (ssize_t) (n2 + n3); --idx)
> + dst[idx] = (src[idx - n2] << n1) | (src[idx - n2 - n3] >> n4);
> + if (n1)
> + {
> + dst[idx] = src[idx - n2] << n1;
> + --idx;
> + }
> + for (; (ssize_t) idx >= 0; --idx)
> + dst[idx] = 0; */
> + tree n2pn3;
> + if (TREE_CODE (n2) == INTEGER_CST && TREE_CODE (n3) == INTEGER_CST)
> + n2pn3 = int_const_binop (PLUS_EXPR, n2, n3);
> + else
> + {
> + n2pn3 = make_ssa_name (sizetype);
> + g = gimple_build_assign (n2pn3, PLUS_EXPR, n2, n3);
> + insert_before (g);
> + }
> + /* For LSHIFT_EXPR, we can use handle_operand with non-INTEGER_CST
> + idx even to access the most significant partial limb. */
> + m_var_msb = true;
> + if (integer_zerop (n3))
> + /* For n3 == 0, p >= n2 + n3 is always true for all valid shift
> + counts. Emit an if (true) condition that can be optimized later. */
> + g = gimple_build_cond (NE_EXPR, boolean_true_node, boolean_false_node,
> + NULL_TREE, NULL_TREE);
> + else
> + g = gimple_build_cond (LE_EXPR, n2pn3, p, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + tree idx_next;
> + tree idx = create_loop (p, &idx_next);
> + tree idxmn2 = make_ssa_name (sizetype);
> + tree idxmn2mn3 = make_ssa_name (sizetype);
> + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> + insert_before (g);
> + g = gimple_build_assign (idxmn2mn3, MINUS_EXPR, idxmn2, n3);
> + insert_before (g);
> + m_data_cnt = 0;
> + tree t1 = handle_operand (rhs1, idxmn2);
> + m_first = false;
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + LSHIFT_EXPR, t1, n1);
> + insert_before (g);
> + t1 = gimple_assign_lhs (g);
> + if (!integer_zerop (n3))
> + {
> + m_data_cnt = 0;
> + tree t2 = handle_operand (rhs1, idxmn2mn3);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + RSHIFT_EXPR, t2, n4);
> + insert_before (g);
> + t2 = gimple_assign_lhs (g);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + BIT_IOR_EXPR, t1, t2);
> + insert_before (g);
> + t1 = gimple_assign_lhs (g);
> + }
> + tree l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> + g = gimple_build_assign (l, t1);
> + insert_before (g);
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> + insert_before (g);
> + tree sn2pn3 = add_cast (ssizetype, n2pn3);
> + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next), sn2pn3,
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + idx = make_ssa_name (sizetype);
> + m_gsi = gsi_for_stmt (final_stmt);
> + gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
> + e1 = find_edge (e1->src, gsi_bb (m_gsi));
> + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> + add_phi_arg (phi, p, e1, UNKNOWN_LOCATION);
> + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> + m_data_cnt = 0;
> + if (!integer_zerop (n3))
> + {
> + g = gimple_build_cond (NE_EXPR, n3, size_zero_node,
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + e1 = split_block (gsi_bb (m_gsi), g);
> + e2 = split_block (e1->dest, (gimple *) NULL);
> + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + idxmn2 = make_ssa_name (sizetype);
> + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> + insert_before (g);
> + m_data_cnt = 0;
> + t1 = handle_operand (rhs1, idxmn2);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + LSHIFT_EXPR, t1, n1);
> + insert_before (g);
> + t1 = gimple_assign_lhs (g);
> + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> + g = gimple_build_assign (l, t1);
> + insert_before (g);
> + idx_next = make_ssa_name (sizetype);
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> + insert_before (g);
> + m_gsi = gsi_for_stmt (final_stmt);
> + tree nidx = make_ssa_name (sizetype);
> + phi = create_phi_node (nidx, gsi_bb (m_gsi));
> + e1 = find_edge (e1->src, gsi_bb (m_gsi));
> + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> + add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
> + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> + idx = nidx;
> + }
> + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx),
> + ssize_int (0), NULL_TREE, NULL_TREE);
> + insert_before (g);
> + e1 = split_block (gsi_bb (m_gsi), g);
> + e2 = split_block (e1->dest, (gimple *) NULL);
> + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + idx = create_loop (idx, &idx_next);
> + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> + g = gimple_build_assign (l, build_zero_cst (m_limb_type));
> + insert_before (g);
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> + insert_before (g);
> + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next),
> + ssize_int (0), NULL_TREE, NULL_TREE);
> + insert_before (g);
> + }
> +}
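The pseudocode in the comments of lower_shift_stmt can be exercised directly. Below is a hedged C model of the right-shift case, simplified to unsigned values whose precision is an exact multiple of a 64-bit limb, so the `ext`/partial-limb and sign handling drop out; the variable names follow the n1/n2/n3/n4 scheme from the comment:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define LIMB_PREC 64

/* Model of the lowered dst = src >> n for an unsigned _BitInt whose
   precision is an exact multiple of the limb size.  Assumes
   0 <= n < nlimbs * LIMB_PREC.  */
static void
bitint_shr (uint64_t *dst, const uint64_t *src, size_t nlimbs, unsigned n)
{
  unsigned n1 = n % LIMB_PREC;           /* sub-limb shift count */
  size_t n2 = n / LIMB_PREC;             /* whole limbs shifted out */
  size_t n3 = n1 != 0;                   /* need bits from the next limb? */
  unsigned n4 = (LIMB_PREC - n1) % LIMB_PREC;
  size_t p = nlimbs - 1;                 /* most significant limb index */

  /* The main loop: each result limb combines bits of two source limbs
     (or just one when n is a multiple of the limb size).  */
  for (size_t idx = n2; idx < p; ++idx)
    dst[idx - n2] = (src[idx] >> n1) | (n3 ? src[idx + n3] << n4 : 0);
  dst[p - n2] = src[p] >> n1;            /* top source limb; ext == 0 here */
  for (size_t idx = p - n2 + 1; idx <= p; ++idx)
    dst[idx] = 0;                        /* zero-extend the result */
}
```

The signed and partial-top-limb variants add the `ext` computation and the extra conditional limb exactly as the pseudocode comment spells out.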
> +
> +/* Lower large/huge _BitInt multiplication or division. */
> +
> +void
> +bitint_large_huge::lower_muldiv_stmt (tree obj, gimple *stmt)
> +{
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + tree rhs2 = gimple_assign_rhs2 (stmt);
> + tree lhs = gimple_assign_lhs (stmt);
> + tree_code rhs_code = gimple_assign_rhs_code (stmt);
> + tree type = TREE_TYPE (rhs1);
> + gcc_assert (TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) >= bitint_prec_large);
> + int prec = TYPE_PRECISION (type), prec1, prec2;
> + rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec1);
> + rhs2 = handle_operand_addr (rhs2, stmt, NULL, &prec2);
> + if (obj == NULL_TREE)
> + {
> + int part = var_to_partition (m_map, lhs);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + obj = m_vars[part];
> + lhs = build_fold_addr_expr (obj);
> + }
> + else
> + {
> + lhs = build_fold_addr_expr (obj);
> + lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
> + NULL_TREE, true, GSI_SAME_STMT);
> + }
> + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> + gimple *g;
> + switch (rhs_code)
> + {
> + case MULT_EXPR:
> + g = gimple_build_call_internal (IFN_MULBITINT, 6,
> + lhs, build_int_cst (sitype, prec),
> + rhs1, build_int_cst (sitype, prec1),
> + rhs2, build_int_cst (sitype, prec2));
> + insert_before (g);
> + break;
> + case TRUNC_DIV_EXPR:
> + g = gimple_build_call_internal (IFN_DIVMODBITINT, 8,
> + lhs, build_int_cst (sitype, prec),
> + null_pointer_node,
> + build_int_cst (sitype, 0),
> + rhs1, build_int_cst (sitype, prec1),
> + rhs2, build_int_cst (sitype, prec2));
> + if (!stmt_ends_bb_p (stmt))
> + gimple_call_set_nothrow (as_a <gcall *> (g), true);
> + insert_before (g);
> + break;
> + case TRUNC_MOD_EXPR:
> + g = gimple_build_call_internal (IFN_DIVMODBITINT, 8, null_pointer_node,
> + build_int_cst (sitype, 0),
> + lhs, build_int_cst (sitype, prec),
> + rhs1, build_int_cst (sitype, prec1),
> + rhs2, build_int_cst (sitype, prec2));
> + if (!stmt_ends_bb_p (stmt))
> + gimple_call_set_nothrow (as_a <gcall *> (g), true);
> + insert_before (g);
> + break;
> + default:
> + gcc_unreachable ();
> + }
> + if (stmt_ends_bb_p (stmt))
> + {
> + maybe_duplicate_eh_stmt (g, stmt);
> + edge e1;
> + edge_iterator ei;
> + basic_block bb = gimple_bb (stmt);
> +
> + FOR_EACH_EDGE (e1, ei, bb->succs)
> + if (e1->flags & EDGE_EH)
> + break;
> + if (e1)
> + {
> + edge e2 = split_block (gsi_bb (m_gsi), g);
> + m_gsi = gsi_after_labels (e2->dest);
> + make_edge (e2->src, e1->dest, EDGE_EH)->probability
> + = profile_probability::very_unlikely ();
> + }
> + }
> +}
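As lower_muldiv_stmt shows, multiplication and division are not expanded inline: the pass emits a call to the IFN_MULBITINT or IFN_DIVMODBITINT runtime routine, passing a destination pointer plus operand pointers and their (sign-encoded) precisions. As an illustration of what the multiplication routine conceptually has to compute, not the actual libgcc implementation, here is a truncating schoolbook multiply over 64-bit limbs (it uses GCC's `unsigned __int128` extension for the 64x64 -> 128 partial products):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of a truncating schoolbook multiply for unsigned operands of
   equal width, dst = (a * b) mod 2^(nlimbs * 64).  Illustrative only;
   the real runtime routine handles mixed precisions and signs.  */
static void
bitint_mul (uint64_t *dst, const uint64_t *a, const uint64_t *b,
            size_t nlimbs)
{
  for (size_t i = 0; i < nlimbs; i++)
    dst[i] = 0;
  for (size_t i = 0; i < nlimbs; i++)
    {
      uint64_t carry = 0;
      /* Partial products with i + j >= nlimbs would only affect bits
         above the result precision, so they are never computed.  */
      for (size_t j = 0; i + j < nlimbs; j++)
        {
          unsigned __int128 t = (unsigned __int128) a[i] * b[j]
                                + dst[i + j] + carry;
          dst[i + j] = (uint64_t) t;
          carry = (uint64_t) (t >> 64);
        }
    }
}
```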
> +
> +/* Lower large/huge _BitInt conversion to/from floating point. */
> +
> +void
> +bitint_large_huge::lower_float_conv_stmt (tree obj, gimple *stmt)
> +{
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + tree lhs = gimple_assign_lhs (stmt);
> + tree_code rhs_code = gimple_assign_rhs_code (stmt);
> + if (DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (rhs1)))
> + || DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (lhs))))
> + {
> + sorry_at (gimple_location (stmt),
> + "unsupported conversion between %<_BitInt(%d)%> and %qT",
> + rhs_code == FIX_TRUNC_EXPR
> + ? TYPE_PRECISION (TREE_TYPE (lhs))
> + : TYPE_PRECISION (TREE_TYPE (rhs1)),
> + rhs_code == FIX_TRUNC_EXPR
> + ? TREE_TYPE (rhs1) : TREE_TYPE (lhs));
> + if (rhs_code == FLOAT_EXPR)
> + {
> + gimple *g
> + = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
> + gsi_replace (&m_gsi, g, true);
> + }
> + return;
> + }
> + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> + gimple *g;
> + if (rhs_code == FIX_TRUNC_EXPR)
> + {
> + int prec = TYPE_PRECISION (TREE_TYPE (lhs));
> + if (!TYPE_UNSIGNED (TREE_TYPE (lhs)))
> + prec = -prec;
> + if (obj == NULL_TREE)
> + {
> + int part = var_to_partition (m_map, lhs);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + obj = m_vars[part];
> + lhs = build_fold_addr_expr (obj);
> + }
> + else
> + {
> + lhs = build_fold_addr_expr (obj);
> + lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
> + NULL_TREE, true, GSI_SAME_STMT);
> + }
> + scalar_mode from_mode
> + = as_a <scalar_mode> (TYPE_MODE (TREE_TYPE (rhs1)));
> +#ifdef HAVE_SFmode
> + /* IEEE single is a full superset of both IEEE half and
> + bfloat formats; convert to float first and then to _BitInt
> + to avoid the need for another 2 library routines. */
> + if ((REAL_MODE_FORMAT (from_mode) == &arm_bfloat_half_format
> + || REAL_MODE_FORMAT (from_mode) == &ieee_half_format)
> + && REAL_MODE_FORMAT (SFmode) == &ieee_single_format)
> + {
> + tree type = lang_hooks.types.type_for_mode (SFmode, 0);
> + if (type)
> + rhs1 = add_cast (type, rhs1);
> + }
> +#endif
> + g = gimple_build_call_internal (IFN_FLOATTOBITINT, 3,
> + lhs, build_int_cst (sitype, prec),
> + rhs1);
> + insert_before (g);
> + }
> + else
> + {
> + int prec;
> + rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec);
> + g = gimple_build_call_internal (IFN_BITINTTOFLOAT, 2,
> + rhs1, build_int_cst (sitype, prec));
> + gimple_call_set_lhs (g, lhs);
> + if (!stmt_ends_bb_p (stmt))
> + gimple_call_set_nothrow (as_a <gcall *> (g), true);
> + gsi_replace (&m_gsi, g, true);
> + }
> +}
> +
> +/* Helper method for lower_addsub_overflow and lower_mul_overflow.
> + If check_zero is true, caller wants to check if all bits in [start, end)
> + are zero, otherwise if bits in [start, end) are either all zero or
> + all ones. L is the limb with index LIMB, START and END are measured
> + in bits. */
> +
> +tree
> +bitint_large_huge::arith_overflow_extract_bits (unsigned int start,
> + unsigned int end, tree l,
> + unsigned int limb,
> + bool check_zero)
> +{
> + unsigned startlimb = start / limb_prec;
> + unsigned endlimb = (end - 1) / limb_prec;
> + gimple *g;
> +
> + if ((start % limb_prec) == 0 && (end % limb_prec) == 0)
> + return l;
> + if (startlimb == endlimb && limb == startlimb)
> + {
> + if (check_zero)
> + {
> + wide_int w = wi::shifted_mask (start % limb_prec,
> + end - start, false, limb_prec);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + BIT_AND_EXPR, l,
> + wide_int_to_tree (m_limb_type, w));
> + insert_before (g);
> + return gimple_assign_lhs (g);
> + }
> + unsigned int shift = start % limb_prec;
> + if ((end % limb_prec) != 0)
> + {
> + unsigned int lshift = (-end) % limb_prec;
> + shift += lshift;
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + LSHIFT_EXPR, l,
> + build_int_cst (unsigned_type_node,
> + lshift));
> + insert_before (g);
> + l = gimple_assign_lhs (g);
> + }
> + l = add_cast (signed_type_for (m_limb_type), l);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> + RSHIFT_EXPR, l,
> + build_int_cst (unsigned_type_node, shift));
> + insert_before (g);
> + return add_cast (m_limb_type, gimple_assign_lhs (g));
> + }
> + else if (limb == startlimb)
> + {
> + if ((start % limb_prec) == 0)
> + return l;
> + if (!check_zero)
> + l = add_cast (signed_type_for (m_limb_type), l);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> + RSHIFT_EXPR, l,
> + build_int_cst (unsigned_type_node,
> + start % limb_prec));
> + insert_before (g);
> + l = gimple_assign_lhs (g);
> + if (!check_zero)
> + l = add_cast (m_limb_type, l);
> + return l;
> + }
> + else if (limb == endlimb)
> + {
> + if ((end % limb_prec) == 0)
> + return l;
> + if (check_zero)
> + {
> + wide_int w = wi::mask (end % limb_prec, false, limb_prec);
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + BIT_AND_EXPR, l,
> + wide_int_to_tree (m_limb_type, w));
> + insert_before (g);
> + return gimple_assign_lhs (g);
> + }
> + unsigned int shift = (-end) % limb_prec;
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + LSHIFT_EXPR, l,
> + build_int_cst (unsigned_type_node, shift));
> + insert_before (g);
> + l = add_cast (signed_type_for (m_limb_type), gimple_assign_lhs (g));
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> + RSHIFT_EXPR, l,
> + build_int_cst (unsigned_type_node, shift));
> + insert_before (g);
> + return add_cast (m_limb_type, gimple_assign_lhs (g));
> + }
> + return l;
> +}
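In the single-limb case (startlimb == endlimb == limb), the extraction above reduces to masking for the check_zero form, and to a shift-left/arithmetic-shift-right pair for the "all zeros or all ones" form, which turns the range check into a compare against 0 or -1. A hypothetical scalar model, assuming 64-bit limbs and GCC's arithmetic right shift of signed values (implementation-defined in ISO C):

```c
#include <assert.h>
#include <stdint.h>

/* Model of arith_overflow_extract_bits for the single-limb case.
   check_zero: mask out bits [start, end) so the caller compares the
   result against 0.  Otherwise: sign-extend the field so that an
   all-zeros field yields 0 and an all-ones field yields ~0ULL.  */
static uint64_t
extract_bits (uint64_t l, unsigned start, unsigned end, int check_zero)
{
  const unsigned limb_prec = 64;
  if (check_zero)
    {
      /* Models wi::shifted_mask + BIT_AND_EXPR.  */
      uint64_t mask = (end - start == limb_prec
                       ? ~0ULL : ((1ULL << (end - start)) - 1) << start);
      return l & mask;
    }
  /* Models the LSHIFT_EXPR followed by the signed RSHIFT_EXPR.  */
  unsigned lshift = (limb_prec - end) % limb_prec;
  return (uint64_t) (((int64_t) (l << lshift)) >> (lshift + start));
}
```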
> +
> +/* Helper method for lower_addsub_overflow and lower_mul_overflow. Store
> + result including overflow flag into the right locations. */
> +
> +void
> +bitint_large_huge::finish_arith_overflow (tree var, tree obj, tree type,
> + tree ovf, tree lhs, tree orig_obj,
> + gimple *stmt, tree_code code)
> +{
> + gimple *g;
> +
> + if (obj == NULL_TREE
> + && (TREE_CODE (type) != BITINT_TYPE
> + || bitint_precision_kind (type) < bitint_prec_large))
> + {
> + /* Add support for 3 or more limbs filled in from normal integral
> + type if this assert fails. If no target chooses limb mode smaller
> + than half of largest supported normal integral type, this will not
> + be needed. */
> + gcc_assert (TYPE_PRECISION (type) <= 2 * limb_prec);
> + tree lhs_type = type;
> + if (TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) == bitint_prec_middle)
> + lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (type),
> + TYPE_UNSIGNED (type));
> + tree r1 = limb_access (NULL_TREE, var, size_int (0), true);
> + g = gimple_build_assign (make_ssa_name (m_limb_type), r1);
> + insert_before (g);
> + r1 = gimple_assign_lhs (g);
> + if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
> + r1 = add_cast (lhs_type, r1);
> + if (TYPE_PRECISION (lhs_type) > limb_prec)
> + {
> + tree r2 = limb_access (NULL_TREE, var, size_int (1), true);
> + g = gimple_build_assign (make_ssa_name (m_limb_type), r2);
> + insert_before (g);
> + r2 = gimple_assign_lhs (g);
> + r2 = add_cast (lhs_type, r2);
> + g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
> + build_int_cst (unsigned_type_node,
> + limb_prec));
> + insert_before (g);
> + g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
> + gimple_assign_lhs (g));
> + insert_before (g);
> + r1 = gimple_assign_lhs (g);
> + }
> + if (lhs_type != type)
> + r1 = add_cast (type, r1);
> + ovf = add_cast (lhs_type, ovf);
> + if (lhs_type != type)
> + ovf = add_cast (type, ovf);
> + g = gimple_build_assign (lhs, COMPLEX_EXPR, r1, ovf);
> + m_gsi = gsi_for_stmt (stmt);
> + gsi_replace (&m_gsi, g, true);
> + }
> + else
> + {
> + unsigned HOST_WIDE_INT nelts = 0;
> + tree atype = NULL_TREE;
> + if (obj)
> + {
> + nelts = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
> + if (orig_obj == NULL_TREE)
> + nelts >>= 1;
> + atype = build_array_type_nelts (m_limb_type, nelts);
> + }
> + if (var && obj)
> + {
> + tree v1, v2;
> + tree zero;
> + if (orig_obj == NULL_TREE)
> + {
> + zero = build_zero_cst (build_pointer_type (TREE_TYPE (obj)));
> + v1 = build2 (MEM_REF, atype,
> + build_fold_addr_expr (unshare_expr (obj)), zero);
> + }
> + else if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
> + v1 = build1 (VIEW_CONVERT_EXPR, atype, unshare_expr (obj));
> + else
> + v1 = unshare_expr (obj);
> + zero = build_zero_cst (build_pointer_type (TREE_TYPE (var)));
> + v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), zero);
> + g = gimple_build_assign (v1, v2);
> + insert_before (g);
> + }
> + if (orig_obj == NULL_TREE && obj)
> + {
> + ovf = add_cast (m_limb_type, ovf);
> + tree l = limb_access (NULL_TREE, obj, size_int (nelts), true);
> + g = gimple_build_assign (l, ovf);
> + insert_before (g);
> + if (nelts > 1)
> + {
> + atype = build_array_type_nelts (m_limb_type, nelts - 1);
> + tree off = build_int_cst (build_pointer_type (TREE_TYPE (obj)),
> + (nelts + 1) * m_limb_size);
> + tree v1 = build2 (MEM_REF, atype,
> + build_fold_addr_expr (unshare_expr (obj)),
> + off);
> + g = gimple_build_assign (v1, build_zero_cst (atype));
> + insert_before (g);
> + }
> + }
> + else if (TREE_CODE (TREE_TYPE (lhs)) == COMPLEX_TYPE)
> + {
> + imm_use_iterator ui;
> + use_operand_p use_p;
> + FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
> + {
> + g = USE_STMT (use_p);
> + if (!is_gimple_assign (g)
> + || gimple_assign_rhs_code (g) != IMAGPART_EXPR)
> + continue;
> + tree lhs2 = gimple_assign_lhs (g);
> + gimple *use_stmt;
> + single_imm_use (lhs2, &use_p, &use_stmt);
> + lhs2 = gimple_assign_lhs (use_stmt);
> + gimple_stmt_iterator gsi = gsi_for_stmt (use_stmt);
> + if (useless_type_conversion_p (TREE_TYPE (lhs2), TREE_TYPE (ovf)))
> + g = gimple_build_assign (lhs2, ovf);
> + else
> + g = gimple_build_assign (lhs2, NOP_EXPR, ovf);
> + gsi_replace (&gsi, g, true);
> + break;
> + }
> + }
> + else if (ovf != boolean_false_node)
> + {
> + g = gimple_build_cond (NE_EXPR, ovf, boolean_false_node,
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + e3->probability = profile_probability::very_likely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + tree zero = build_zero_cst (TREE_TYPE (lhs));
> + tree fn = ubsan_build_overflow_builtin (code, m_loc,
> + TREE_TYPE (lhs),
> + zero, zero, NULL);
> + force_gimple_operand_gsi (&m_gsi, fn, true, NULL_TREE,
> + true, GSI_SAME_STMT);
> + m_gsi = gsi_after_labels (e2->dest);
> + }
> + }
> + if (var)
> + {
> + tree clobber = build_clobber (TREE_TYPE (var), CLOBBER_EOL);
> + g = gimple_build_assign (var, clobber);
> + gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
> + }
> +}
> +
> +/* Helper function for lower_addsub_overflow and lower_mul_overflow.
> + Given precisions of result TYPE (PREC), argument 0 precision PREC0,
> + argument 1 precision PREC1 and minimum precision for the result
> + PREC2, compute *START, *END, *CHECK_ZERO and return OVF. */
> +
> +static tree
> +arith_overflow (tree_code code, tree type, int prec, int prec0, int prec1,
> + int prec2, unsigned *start, unsigned *end, bool *check_zero)
> +{
> + *start = 0;
> + *end = 0;
> + *check_zero = true;
> + /* Ignore this special rule for subtraction: even if both
> + prec0 >= 0 and prec1 >= 0, their difference can be negative
> + in infinite precision. */
> + if (code != MINUS_EXPR && prec0 >= 0 && prec1 >= 0)
> + {
> + /* Result in [0, prec2) is unsigned, if prec > prec2,
> + all bits above it will be zero. */
> + if ((prec - !TYPE_UNSIGNED (type)) >= prec2)
> + return boolean_false_node;
> + else
> + {
> + /* ovf if any of bits in [start, end) is non-zero. */
> + *start = prec - !TYPE_UNSIGNED (type);
> + *end = prec2;
> + }
> + }
> + else if (TYPE_UNSIGNED (type))
> + {
> + /* If result in [0, prec2) is signed and if prec > prec2,
> + all bits above it will be sign bit copies. */
> + if (prec >= prec2)
> + {
> + /* ovf if bit prec - 1 is non-zero. */
> + *start = prec - 1;
> + *end = prec;
> + }
> + else
> + {
> + /* ovf if any of bits in [start, end) is non-zero. */
> + *start = prec;
> + *end = prec2;
> + }
> + }
> + else if (prec >= prec2)
> + return boolean_false_node;
> + else
> + {
> + /* ovf if [start, end) bits aren't all zeros or all ones. */
> + *start = prec - 1;
> + *end = prec2;
> + *check_zero = false;
> + }
> + return NULL_TREE;
> +}
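The decision table in arith_overflow can be mirrored as a plain C function, which makes its four cases easy to test; `is_minus` and `type_unsigned` stand in for the tree_code and TYPE_UNSIGNED checks (a sketch of the logic, not the GCC function itself):

```c
#include <assert.h>
#include <stdbool.h>

/* Given the result precision PREC and the minimum precision PREC2
   needed for the infinite-precision result, return 1 when overflow is
   provably impossible; otherwise set the bit range [*start, *end)
   that must be all zero (or, with *check_zero false, all sign-bit
   copies) for the operation not to overflow.  */
static int
arith_overflow_range (int is_minus, int type_unsigned, int prec,
                      int prec0, int prec1, int prec2,
                      unsigned *start, unsigned *end, bool *check_zero)
{
  *start = 0;
  *end = 0;
  *check_zero = true;
  if (!is_minus && prec0 >= 0 && prec1 >= 0)
    {
      /* Non-negative result: overflow iff any bit in [start, end) set.  */
      if (prec - !type_unsigned >= prec2)
        return 1;
      *start = prec - !type_unsigned;
      *end = prec2;
    }
  else if (type_unsigned)
    {
      if (prec >= prec2)
        {
          /* Possibly negative result in an unsigned type: bits above
             prec2 are sign copies, so testing bit prec - 1 suffices.  */
          *start = prec - 1;
          *end = prec;
        }
      else
        {
          *start = prec;
          *end = prec2;
        }
    }
  else if (prec >= prec2)
    return 1;
  else
    {
      /* Signed: bits [prec - 1, prec2) must all be sign-bit copies.  */
      *start = prec - 1;
      *end = prec2;
      *check_zero = false;
    }
  return 0;
}
```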
> +
> +/* Lower a .{ADD,SUB}_OVERFLOW call with at least one large/huge _BitInt
> + argument or return type _Complex large/huge _BitInt. */
> +
> +void
> +bitint_large_huge::lower_addsub_overflow (tree obj, gimple *stmt)
> +{
> + tree arg0 = gimple_call_arg (stmt, 0);
> + tree arg1 = gimple_call_arg (stmt, 1);
> + tree lhs = gimple_call_lhs (stmt);
> + gimple *g;
> +
> + if (!lhs)
> + {
> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> + gsi_remove (&gsi, true);
> + return;
> + }
> + gimple *final_stmt = gsi_stmt (m_gsi);
> + tree type = TREE_TYPE (lhs);
> + if (TREE_CODE (type) == COMPLEX_TYPE)
> + type = TREE_TYPE (type);
> + int prec = TYPE_PRECISION (type);
> + int prec0 = range_to_prec (arg0, stmt);
> + int prec1 = range_to_prec (arg1, stmt);
> + int prec2 = ((prec0 < 0) == (prec1 < 0)
> + ? MAX (prec0 < 0 ? -prec0 : prec0,
> + prec1 < 0 ? -prec1 : prec1) + 1
> + : MAX (prec0 < 0 ? -prec0 : prec0 + 1,
> + prec1 < 0 ? -prec1 : prec1 + 1) + 1);
> + int prec3 = MAX (prec0 < 0 ? -prec0 : prec0,
> + prec1 < 0 ? -prec1 : prec1);
> + prec3 = MAX (prec3, prec);
> + tree var = NULL_TREE;
> + tree orig_obj = obj;
> + if (obj == NULL_TREE
> + && TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) >= bitint_prec_large
> + && m_names
> + && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> + {
> + int part = var_to_partition (m_map, lhs);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + obj = m_vars[part];
> + if (TREE_TYPE (lhs) == type)
> + orig_obj = obj;
> + }
> + if (TREE_CODE (type) != BITINT_TYPE
> + || bitint_precision_kind (type) < bitint_prec_large)
> + {
> + unsigned HOST_WIDE_INT nelts = CEIL (prec, limb_prec);
> + tree atype = build_array_type_nelts (m_limb_type, nelts);
> + var = create_tmp_var (atype);
> + }
> +
> + enum tree_code code;
> + switch (gimple_call_internal_fn (stmt))
> + {
> + case IFN_ADD_OVERFLOW:
> + case IFN_UBSAN_CHECK_ADD:
> + code = PLUS_EXPR;
> + break;
> + case IFN_SUB_OVERFLOW:
> + case IFN_UBSAN_CHECK_SUB:
> + code = MINUS_EXPR;
> + break;
> + default:
> + gcc_unreachable ();
> + }
> + unsigned start, end;
> + bool check_zero;
> + tree ovf = arith_overflow (code, type, prec, prec0, prec1, prec2,
> + &start, &end, &check_zero);
> +
> + unsigned startlimb, endlimb;
> + if (ovf)
> + {
> + startlimb = ~0U;
> + endlimb = ~0U;
> + }
> + else
> + {
> + startlimb = start / limb_prec;
> + endlimb = (end - 1) / limb_prec;
> + }
> +
> + int prec4 = ovf != NULL_TREE ? prec : prec3;
> + bitint_prec_kind kind = bitint_precision_kind (prec4);
> + unsigned cnt, rem = 0, fin = 0;
> + tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
> + bool last_ovf = (ovf == NULL_TREE
> + && CEIL (prec2, limb_prec) > CEIL (prec3, limb_prec));
> + if (kind != bitint_prec_huge)
> + cnt = CEIL (prec4, limb_prec) + last_ovf;
> + else
> + {
> + rem = (prec4 % (2 * limb_prec));
> + fin = (prec4 - rem) / limb_prec;
> + cnt = 2 + CEIL (rem, limb_prec) + last_ovf;
> + idx = idx_first = create_loop (size_zero_node, &idx_next);
> + }
> +
> + if (kind == bitint_prec_huge)
> + m_upwards_2limb = fin;
> +
> + tree type0 = TREE_TYPE (arg0);
> + tree type1 = TREE_TYPE (arg1);
> + if (TYPE_PRECISION (type0) < prec3)
> + {
> + type0 = build_bitint_type (prec3, TYPE_UNSIGNED (type0));
> + if (TREE_CODE (arg0) == INTEGER_CST)
> + arg0 = fold_convert (type0, arg0);
> + }
> + if (TYPE_PRECISION (type1) < prec3)
> + {
> + type1 = build_bitint_type (prec3, TYPE_UNSIGNED (type1));
> + if (TREE_CODE (arg1) == INTEGER_CST)
> + arg1 = fold_convert (type1, arg1);
> + }
> + unsigned int data_cnt = 0;
> + tree last_rhs1 = NULL_TREE, last_rhs2 = NULL_TREE;
> + tree cmp = build_zero_cst (m_limb_type);
> + unsigned prec_limbs = CEIL ((unsigned) prec, limb_prec);
> + tree ovf_out = NULL_TREE, cmp_out = NULL_TREE;
> + for (unsigned i = 0; i < cnt; i++)
> + {
> + m_data_cnt = 0;
> + tree rhs1, rhs2;
> + if (kind != bitint_prec_huge)
> + idx = size_int (i);
> + else if (i >= 2)
> + idx = size_int (fin + (i > 2));
> + if (!last_ovf || i < cnt - 1)
> + {
> + if (type0 != TREE_TYPE (arg0))
> + rhs1 = handle_cast (type0, arg0, idx);
> + else
> + rhs1 = handle_operand (arg0, idx);
> + if (type1 != TREE_TYPE (arg1))
> + rhs2 = handle_cast (type1, arg1, idx);
> + else
> + rhs2 = handle_operand (arg1, idx);
> + if (i == 0)
> + data_cnt = m_data_cnt;
> + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs1)))
> + rhs1 = add_cast (m_limb_type, rhs1);
> + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs2)))
> + rhs2 = add_cast (m_limb_type, rhs2);
> + last_rhs1 = rhs1;
> + last_rhs2 = rhs2;
> + }
> + else
> + {
> + m_data_cnt = data_cnt;
> + if (TYPE_UNSIGNED (type0))
> + rhs1 = build_zero_cst (m_limb_type);
> + else
> + {
> + rhs1 = add_cast (signed_type_for (m_limb_type), last_rhs1);
> + if (TREE_CODE (rhs1) == INTEGER_CST)
> + rhs1 = build_int_cst (m_limb_type,
> + tree_int_cst_sgn (rhs1) < 0 ? -1 : 0);
> + else
> + {
> + tree lpm1 = build_int_cst (unsigned_type_node,
> + limb_prec - 1);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
> + RSHIFT_EXPR, rhs1, lpm1);
> + insert_before (g);
> + rhs1 = add_cast (m_limb_type, gimple_assign_lhs (g));
> + }
> + }
> + if (TYPE_UNSIGNED (type1))
> + rhs2 = build_zero_cst (m_limb_type);
> + else
> + {
> + rhs2 = add_cast (signed_type_for (m_limb_type), last_rhs2);
> + if (TREE_CODE (rhs2) == INTEGER_CST)
> + rhs2 = build_int_cst (m_limb_type,
> + tree_int_cst_sgn (rhs2) < 0 ? -1 : 0);
> + else
> + {
> + tree lpm1 = build_int_cst (unsigned_type_node,
> + limb_prec - 1);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs2)),
> + RSHIFT_EXPR, rhs2, lpm1);
> + insert_before (g);
> + rhs2 = add_cast (m_limb_type, gimple_assign_lhs (g));
> + }
> + }
> + }
> + tree rhs = handle_plus_minus (code, rhs1, rhs2, idx);
> + if (ovf != boolean_false_node)
> + {
> + if (tree_fits_uhwi_p (idx))
> + {
> + unsigned limb = tree_to_uhwi (idx);
> + if (limb >= startlimb && limb <= endlimb)
> + {
> + tree l = arith_overflow_extract_bits (start, end, rhs,
> + limb, check_zero);
> + tree this_ovf = make_ssa_name (boolean_type_node);
> + if (ovf == NULL_TREE && !check_zero)
> + {
> + cmp = l;
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + PLUS_EXPR, l,
> + build_int_cst (m_limb_type, 1));
> + insert_before (g);
> + g = gimple_build_assign (this_ovf, GT_EXPR,
> + gimple_assign_lhs (g),
> + build_int_cst (m_limb_type, 1));
> + }
> + else
> + g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
> + insert_before (g);
> + if (ovf == NULL_TREE)
> + ovf = this_ovf;
> + else
> + {
> + tree b = make_ssa_name (boolean_type_node);
> + g = gimple_build_assign (b, BIT_IOR_EXPR, ovf, this_ovf);
> + insert_before (g);
> + ovf = b;
> + }
> + }
> + }
> + else if (startlimb < fin)
> + {
> + if (m_first && startlimb + 2 < fin)
> + {
> + tree data_out;
> + ovf = prepare_data_in_out (boolean_false_node, idx, &data_out);
> + ovf_out = m_data.pop ();
> + m_data.pop ();
> + if (!check_zero)
> + {
> + cmp = prepare_data_in_out (cmp, idx, &data_out);
> + cmp_out = m_data.pop ();
> + m_data.pop ();
> + }
> + }
> + if (i != 0 || startlimb != fin - 1)
> + {
> + tree_code cmp_code;
> + bool single_comparison
> + = (startlimb + 2 >= fin || (startlimb & 1) != (i & 1));
> + if (!single_comparison)
> + {
> + cmp_code = GE_EXPR;
> + if (!check_zero && (start % limb_prec) == 0)
> + single_comparison = true;
> + }
> + else if ((startlimb & 1) == (i & 1))
> + cmp_code = EQ_EXPR;
> + else
> + cmp_code = GT_EXPR;
> + g = gimple_build_cond (cmp_code, idx, size_int (startlimb),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + edge e4 = NULL;
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + if (!single_comparison)
> + {
> + m_gsi = gsi_after_labels (e1->dest);
> + g = gimple_build_cond (EQ_EXPR, idx,
> + size_int (startlimb), NULL_TREE,
> + NULL_TREE);
> + insert_before (g);
> + e2 = split_block (gsi_bb (m_gsi), g);
> + basic_block bb = create_empty_bb (e2->dest);
> + add_bb_to_loop (bb, e2->dest->loop_father);
> + e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
> + set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
> + e4->probability = profile_probability::unlikely ();
> + e2->flags = EDGE_FALSE_VALUE;
> + e2->probability = e4->probability.invert ();
> + e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
> + e2 = find_edge (e2->dest, e3->dest);
> + }
> + m_gsi = gsi_after_labels (e2->src);
> + unsigned tidx = startlimb + (cmp_code == GT_EXPR);
> + tree l = arith_overflow_extract_bits (start, end, rhs, tidx,
> + check_zero);
> + tree this_ovf = make_ssa_name (boolean_type_node);
> + if (cmp_code != GT_EXPR && !check_zero)
> + {
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + PLUS_EXPR, l,
> + build_int_cst (m_limb_type, 1));
> + insert_before (g);
> + g = gimple_build_assign (this_ovf, GT_EXPR,
> + gimple_assign_lhs (g),
> + build_int_cst (m_limb_type, 1));
> + }
> + else
> + g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
> + insert_before (g);
> + if (cmp_code == GT_EXPR)
> + {
> + tree t = make_ssa_name (boolean_type_node);
> + g = gimple_build_assign (t, BIT_IOR_EXPR, ovf, this_ovf);
> + insert_before (g);
> + this_ovf = t;
> + }
> + tree this_ovf2 = NULL_TREE;
> + if (!single_comparison)
> + {
> + m_gsi = gsi_after_labels (e4->src);
> + tree t = make_ssa_name (boolean_type_node);
> + g = gimple_build_assign (t, NE_EXPR, rhs, cmp);
> + insert_before (g);
> + this_ovf2 = make_ssa_name (boolean_type_node);
> + g = gimple_build_assign (this_ovf2, BIT_IOR_EXPR,
> + ovf, t);
> + insert_before (g);
> + }
> + m_gsi = gsi_after_labels (e2->dest);
> + tree t;
> + if (i == 1 && ovf_out)
> + t = ovf_out;
> + else
> + t = make_ssa_name (boolean_type_node);
> + gphi *phi = create_phi_node (t, e2->dest);
> + add_phi_arg (phi, this_ovf, e2, UNKNOWN_LOCATION);
> + add_phi_arg (phi, ovf ? ovf
> + : boolean_false_node, e3,
> + UNKNOWN_LOCATION);
> + if (e4)
> + add_phi_arg (phi, this_ovf2, e4, UNKNOWN_LOCATION);
> + ovf = t;
> + if (!check_zero && cmp_code != GT_EXPR)
> + {
> + t = cmp_out ? cmp_out : make_ssa_name (m_limb_type);
> + phi = create_phi_node (t, e2->dest);
> + add_phi_arg (phi, l, e2, UNKNOWN_LOCATION);
> + add_phi_arg (phi, cmp, e3, UNKNOWN_LOCATION);
> + if (e4)
> + add_phi_arg (phi, cmp, e4, UNKNOWN_LOCATION);
> + cmp = t;
> + }
> + }
> + }
> + }
> +
> + if (var || obj)
> + {
> + if (tree_fits_uhwi_p (idx) && tree_to_uhwi (idx) >= prec_limbs)
> + ;
> + else if (!tree_fits_uhwi_p (idx)
> + && (unsigned) prec < (fin - (i == 0)) * limb_prec)
> + {
> + bool single_comparison
> + = (((unsigned) prec % limb_prec) == 0
> + || prec_limbs + 1 >= fin
> + || (prec_limbs & 1) == (i & 1));
> + g = gimple_build_cond (LE_EXPR, idx, size_int (prec_limbs - 1),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + edge e4 = NULL;
> + e3->probability = profile_probability::unlikely ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + if (!single_comparison)
> + {
> + m_gsi = gsi_after_labels (e1->dest);
> + g = gimple_build_cond (LT_EXPR, idx,
> + size_int (prec_limbs - 1),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + e2 = split_block (gsi_bb (m_gsi), g);
> + basic_block bb = create_empty_bb (e2->dest);
> + add_bb_to_loop (bb, e2->dest->loop_father);
> + e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
> + set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
> + e4->probability = profile_probability::unlikely ();
> + e2->flags = EDGE_FALSE_VALUE;
> + e2->probability = e4->probability.invert ();
> + e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
> + e2 = find_edge (e2->dest, e3->dest);
> + }
> + m_gsi = gsi_after_labels (e2->src);
> + tree l = limb_access (type, var ? var : obj, idx, true);
> + g = gimple_build_assign (l, rhs);
> + insert_before (g);
> + if (!single_comparison)
> + {
> + m_gsi = gsi_after_labels (e4->src);
> + l = limb_access (type, var ? var : obj,
> + size_int (prec_limbs - 1), true);
> + if (!useless_type_conversion_p (TREE_TYPE (l),
> + TREE_TYPE (rhs)))
> + rhs = add_cast (TREE_TYPE (l), rhs);
> + g = gimple_build_assign (l, rhs);
> + insert_before (g);
> + }
> + m_gsi = gsi_after_labels (e2->dest);
> + }
> + else
> + {
> + tree l = limb_access (type, var ? var : obj, idx, true);
> + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs)))
> + rhs = add_cast (TREE_TYPE (l), rhs);
> + g = gimple_build_assign (l, rhs);
> + insert_before (g);
> + }
> + }
> + m_first = false;
> + if (kind == bitint_prec_huge && i <= 1)
> + {
> + if (i == 0)
> + {
> + idx = make_ssa_name (sizetype);
> + g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
> + size_one_node);
> + insert_before (g);
> + }
> + else
> + {
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
> + size_int (2));
> + insert_before (g);
> + g = gimple_build_cond (NE_EXPR, idx_next, size_int (fin),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + m_gsi = gsi_for_stmt (final_stmt);
> + }
> + }
> + }
> +
> + finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, code);
> +}
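As a standalone illustration of the limb loop built above (assumptions: 64-bit limbs, little-endian limb order, and carry threaded via __builtin_add_overflow rather than the internal-function calls the pass actually emits), the addition and the sign-extension limb used in the last_ovf case correspond roughly to:

```c
#include <stdint.h>

/* Sketch only, not patch code: add two multi-limb numbers, least
   significant limb first, propagating the carry through the loop the
   way handle_plus_minus does across iterations.  */
static void
limb_add (const uint64_t *a, const uint64_t *b, uint64_t *r, int nlimbs)
{
  unsigned carry = 0;
  for (int i = 0; i < nlimbs; i++)
    {
      uint64_t t;
      unsigned c1 = __builtin_add_overflow (a[i], b[i], &t);
      unsigned c2 = __builtin_add_overflow (t, (uint64_t) carry, &r[i]);
      carry = c1 | c2;		/* At most one of c1/c2 can be set.  */
    }
}

/* Extra most-significant limb used when prec2 needs one more limb than
   prec3: zero for an unsigned operand, otherwise the arithmetic right
   shift by limb_prec - 1 yielding all sign-bit copies.  */
static uint64_t
ext_limb (uint64_t most_significant_limb, int is_signed)
{
  return is_signed ? (uint64_t) ((int64_t) most_significant_limb >> 63) : 0;
}
```

This mirrors why the lowering reuses last_rhs1/last_rhs2 for the final iteration: the extra limb is fully determined by the sign of the operand's most significant limb.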
> +
> +/* Lower a .MUL_OVERFLOW call with at least one large/huge _BitInt
> + argument or a _Complex large/huge _BitInt return type. */
> +
> +void
> +bitint_large_huge::lower_mul_overflow (tree obj, gimple *stmt)
> +{
> + tree arg0 = gimple_call_arg (stmt, 0);
> + tree arg1 = gimple_call_arg (stmt, 1);
> + tree lhs = gimple_call_lhs (stmt);
> + if (!lhs)
> + {
> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> + gsi_remove (&gsi, true);
> + return;
> + }
> + gimple *final_stmt = gsi_stmt (m_gsi);
> + tree type = TREE_TYPE (lhs);
> + if (TREE_CODE (type) == COMPLEX_TYPE)
> + type = TREE_TYPE (type);
> + int prec = TYPE_PRECISION (type), prec0, prec1;
> + arg0 = handle_operand_addr (arg0, stmt, NULL, &prec0);
> + arg1 = handle_operand_addr (arg1, stmt, NULL, &prec1);
> + int prec2 = ((prec0 < 0 ? -prec0 : prec0)
> + + (prec1 < 0 ? -prec1 : prec1)
> + + ((prec0 < 0) != (prec1 < 0)));
> + tree var = NULL_TREE;
> + tree orig_obj = obj;
> + bool force_var = false;
> + if (obj == NULL_TREE
> + && TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) >= bitint_prec_large
> + && m_names
> + && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> + {
> + int part = var_to_partition (m_map, lhs);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + obj = m_vars[part];
> + if (TREE_TYPE (lhs) == type)
> + orig_obj = obj;
> + }
> + else if (obj != NULL_TREE && DECL_P (obj))
> + {
> + for (int i = 0; i < 2; ++i)
> + {
> + tree arg = i ? arg1 : arg0;
> + if (TREE_CODE (arg) == ADDR_EXPR)
> + arg = TREE_OPERAND (arg, 0);
> + if (get_base_address (arg) == obj)
> + {
> + force_var = true;
> + break;
> + }
> + }
> + }
> + if (obj == NULL_TREE
> + || force_var
> + || TREE_CODE (type) != BITINT_TYPE
> + || bitint_precision_kind (type) < bitint_prec_large
> + || prec2 > (CEIL (prec, limb_prec) * limb_prec * (orig_obj ? 1 : 2)))
> + {
> + unsigned HOST_WIDE_INT nelts = CEIL (MAX (prec, prec2), limb_prec);
> + tree atype = build_array_type_nelts (m_limb_type, nelts);
> + var = create_tmp_var (atype);
> + }
> + tree addr = build_fold_addr_expr (var ? var : obj);
> + addr = force_gimple_operand_gsi (&m_gsi, addr, true,
> + NULL_TREE, true, GSI_SAME_STMT);
> + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> + gimple *g
> + = gimple_build_call_internal (IFN_MULBITINT, 6,
> + addr, build_int_cst (sitype,
> + MAX (prec2, prec)),
> + arg0, build_int_cst (sitype, prec0),
> + arg1, build_int_cst (sitype, prec1));
> + insert_before (g);
> +
> + unsigned start, end;
> + bool check_zero;
> + tree ovf = arith_overflow (MULT_EXPR, type, prec, prec0, prec1, prec2,
> + &start, &end, &check_zero);
> + if (ovf == NULL_TREE)
> + {
> + unsigned startlimb = start / limb_prec;
> + unsigned endlimb = (end - 1) / limb_prec;
> + unsigned cnt;
> + bool use_loop = false;
> + if (startlimb == endlimb)
> + cnt = 1;
> + else if (startlimb + 1 == endlimb)
> + cnt = 2;
> + else if ((end % limb_prec) == 0)
> + {
> + cnt = 2;
> + use_loop = true;
> + }
> + else
> + {
> + cnt = 3;
> + use_loop = startlimb + 2 < endlimb;
> + }
> + if (cnt == 1)
> + {
> + tree l = limb_access (NULL_TREE, var ? var : obj,
> + size_int (startlimb), true);
> + g = gimple_build_assign (make_ssa_name (m_limb_type), l);
> + insert_before (g);
> + l = arith_overflow_extract_bits (start, end, gimple_assign_lhs (g),
> + startlimb, check_zero);
> + ovf = make_ssa_name (boolean_type_node);
> + if (check_zero)
> + g = gimple_build_assign (ovf, NE_EXPR, l,
> + build_zero_cst (m_limb_type));
> + else
> + {
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + PLUS_EXPR, l,
> + build_int_cst (m_limb_type, 1));
> + insert_before (g);
> + g = gimple_build_assign (ovf, GT_EXPR, gimple_assign_lhs (g),
> + build_int_cst (m_limb_type, 1));
> + }
> + insert_before (g);
> + }
> + else
> + {
> + basic_block edge_bb = NULL;
> + gimple_stmt_iterator gsi = m_gsi;
> + gsi_prev (&gsi);
> + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> + edge_bb = e->src;
> + m_gsi = gsi_last_bb (edge_bb);
> + if (!gsi_end_p (m_gsi))
> + gsi_next (&m_gsi);
> +
> + tree cmp = build_zero_cst (m_limb_type);
> + for (unsigned i = 0; i < cnt; i++)
> + {
> + tree idx, idx_next = NULL_TREE;
> + if (i == 0)
> + idx = size_int (startlimb);
> + else if (i == 2)
> + idx = size_int (endlimb);
> + else if (use_loop)
> + idx = create_loop (size_int (startlimb + 1), &idx_next);
> + else
> + idx = size_int (startlimb + 1);
> + tree l = limb_access (NULL_TREE, var ? var : obj, idx, true);
> + g = gimple_build_assign (make_ssa_name (m_limb_type), l);
> + insert_before (g);
> + l = gimple_assign_lhs (g);
> + if (i == 0 || i == 2)
> + l = arith_overflow_extract_bits (start, end, l,
> + tree_to_uhwi (idx),
> + check_zero);
> + if (i == 0 && !check_zero)
> + {
> + cmp = l;
> + g = gimple_build_assign (make_ssa_name (m_limb_type),
> + PLUS_EXPR, l,
> + build_int_cst (m_limb_type, 1));
> + insert_before (g);
> + g = gimple_build_cond (GT_EXPR, gimple_assign_lhs (g),
> + build_int_cst (m_limb_type, 1),
> + NULL_TREE, NULL_TREE);
> + }
> + else
> + g = gimple_build_cond (NE_EXPR, l, cmp, NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge e1 = split_block (gsi_bb (m_gsi), g);
> + e1->flags = EDGE_FALSE_VALUE;
> + edge e2 = make_edge (e1->src, gimple_bb (final_stmt),
> + EDGE_TRUE_VALUE);
> + e1->probability = profile_probability::likely ();
> + e2->probability = e1->probability.invert ();
> + if (i == 0)
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> + m_gsi = gsi_after_labels (e1->dest);
> + if (i == 1 && use_loop)
> + {
> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
> + size_one_node);
> + insert_before (g);
> + g = gimple_build_cond (NE_EXPR, idx_next,
> + size_int (endlimb + (cnt == 1)),
> + NULL_TREE, NULL_TREE);
> + insert_before (g);
> + edge true_edge, false_edge;
> + extract_true_false_edges_from_block (gsi_bb (m_gsi),
> + &true_edge,
> + &false_edge);
> + m_gsi = gsi_after_labels (false_edge->dest);
> + }
> + }
> +
> + ovf = make_ssa_name (boolean_type_node);
> + basic_block bb = gimple_bb (final_stmt);
> + gphi *phi = create_phi_node (ovf, bb);
> + edge e1 = find_edge (gsi_bb (m_gsi), bb);
> + edge_iterator ei;
> + FOR_EACH_EDGE (e, ei, bb->preds)
> + {
> + tree val = e == e1 ? boolean_false_node : boolean_true_node;
> + add_phi_arg (phi, val, e, UNKNOWN_LOCATION);
> + }
> + m_gsi = gsi_for_stmt (final_stmt);
> + }
> + }
> +
> + finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, MULT_EXPR);
> +}
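A standalone sketch (not patch code) of the check emitted after the IFN_MULBITINT call, collapsed to a single machine word: the full-width product plays the role of the prec2-bit libgcc result, and for a signed result overflow is detected exactly when the bits above the result precision are not all sign-bit copies (the check_zero == false case). `mul_ovf_p` is a hypothetical helper for prec == 32, prec2 == 64:

```c
#include <stdint.h>

/* Hypothetical single-word analogue of lower_mul_overflow's check:
   compute the exact 64-bit product and test whether bits [31, 64) are
   all zeros or all ones, i.e. whether the product fits in a signed
   32-bit result.  */
static int
mul_ovf_p (int32_t x, int32_t y)
{
  int64_t prod = (int64_t) x * y;	/* Exact; stands in for the
					   IFN_MULBITINT output.  */
  int64_t hi = prod >> 31;		/* Bits [31, 64), sign-extended.  */
  return hi != 0 && hi != -1;
}
```

The multi-limb version above does the same test limb by limb, which is why it only needs the bit-extraction on the first and last limb of the [start, end) range and a plain compare against zero or all-ones in between.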
> +
> +/* Lower a REALPART_EXPR or IMAGPART_EXPR stmt extracting part of the
> + result of a .{ADD,SUB,MUL}_OVERFLOW call. */
> +
> +void
> +bitint_large_huge::lower_cplxpart_stmt (tree obj, gimple *stmt)
> +{
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + rhs1 = TREE_OPERAND (rhs1, 0);
> + if (obj == NULL_TREE)
> + {
> + int part = var_to_partition (m_map, gimple_assign_lhs (stmt));
> + gcc_assert (m_vars[part] != NULL_TREE);
> + obj = m_vars[part];
> + }
> + if (TREE_CODE (rhs1) == SSA_NAME
> + && (m_names == NULL
> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> + {
> + lower_call (obj, SSA_NAME_DEF_STMT (rhs1));
> + return;
> + }
> + int part = var_to_partition (m_map, rhs1);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + tree var = m_vars[part];
> + unsigned HOST_WIDE_INT nelts
> + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
> + tree atype = build_array_type_nelts (m_limb_type, nelts);
> + if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
> + obj = build1 (VIEW_CONVERT_EXPR, atype, obj);
> + tree off = build_int_cst (build_pointer_type (TREE_TYPE (var)),
> + gimple_assign_rhs_code (stmt) == REALPART_EXPR
> + ? 0 : nelts * m_limb_size);
> + tree v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), off);
> + gimple *g = gimple_build_assign (obj, v2);
> + insert_before (g);
> +}
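The memory layout lower_cplxpart_stmt relies on can be pictured with a short sketch (assumption: 64-bit limbs; `extract_part` is illustrative, not patch code): the _Complex large/huge _BitInt result is stored as two back-to-back limb arrays, so the real part is read at offset 0 and the imaginary part at nelts * limb size, matching the MEM_REF offset built above.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: copy the real (IMAG == 0) or imaginary (IMAG == 1)
   half out of a complex value stored as 2 * NELTS consecutive limbs,
   mirroring the 0 vs. nelts * m_limb_size offset in the MEM_REF.  */
static void
extract_part (uint64_t *dst, const uint64_t *cplx, unsigned nelts, int imag)
{
  memcpy (dst, cplx + (imag ? nelts : 0), nelts * sizeof (uint64_t));
}
```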
> +
> +/* Lower COMPLEX_EXPR stmt. */
> +
> +void
> +bitint_large_huge::lower_complexexpr_stmt (gimple *stmt)
> +{
> + tree lhs = gimple_assign_lhs (stmt);
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + tree rhs2 = gimple_assign_rhs2 (stmt);
> + int part = var_to_partition (m_map, lhs);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + lhs = m_vars[part];
> + unsigned HOST_WIDE_INT nelts
> + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (rhs1))) / limb_prec;
> + tree atype = build_array_type_nelts (m_limb_type, nelts);
> + tree zero = build_zero_cst (build_pointer_type (TREE_TYPE (lhs)));
> + tree v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), zero);
> + tree v2;
> + if (TREE_CODE (rhs1) == SSA_NAME)
> + {
> + part = var_to_partition (m_map, rhs1);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + v2 = m_vars[part];
> + }
> + else if (integer_zerop (rhs1))
> + v2 = build_zero_cst (atype);
> + else
> + v2 = tree_output_constant_def (rhs1);
> + if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
> + v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
> + gimple *g = gimple_build_assign (v1, v2);
> + insert_before (g);
> + tree off = fold_convert (build_pointer_type (TREE_TYPE (lhs)),
> + TYPE_SIZE_UNIT (atype));
> + v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), off);
> + if (TREE_CODE (rhs2) == SSA_NAME)
> + {
> + part = var_to_partition (m_map, rhs2);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + v2 = m_vars[part];
> + }
> + else if (integer_zerop (rhs2))
> + v2 = build_zero_cst (atype);
> + else
> + v2 = tree_output_constant_def (rhs2);
> + if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
> + v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
> + g = gimple_build_assign (v1, v2);
> + insert_before (g);
> +}
> +
> +/* Lower a call statement with one or more large/huge _BitInt
> + arguments or a large/huge _BitInt return value. */
> +
> +void
> +bitint_large_huge::lower_call (tree obj, gimple *stmt)
> +{
> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> + unsigned int nargs = gimple_call_num_args (stmt);
> + if (gimple_call_internal_p (stmt))
> + switch (gimple_call_internal_fn (stmt))
> + {
> + case IFN_ADD_OVERFLOW:
> + case IFN_SUB_OVERFLOW:
> + case IFN_UBSAN_CHECK_ADD:
> + case IFN_UBSAN_CHECK_SUB:
> + lower_addsub_overflow (obj, stmt);
> + return;
> + case IFN_MUL_OVERFLOW:
> + case IFN_UBSAN_CHECK_MUL:
> + lower_mul_overflow (obj, stmt);
> + return;
> + default:
> + break;
> + }
> + for (unsigned int i = 0; i < nargs; ++i)
> + {
> + tree arg = gimple_call_arg (stmt, i);
> + if (TREE_CODE (arg) != SSA_NAME
> + || TREE_CODE (TREE_TYPE (arg)) != BITINT_TYPE
> + || bitint_precision_kind (TREE_TYPE (arg)) <= bitint_prec_middle)
> + continue;
> + int p = var_to_partition (m_map, arg);
> + tree v = m_vars[p];
> + gcc_assert (v != NULL_TREE);
> + if (!types_compatible_p (TREE_TYPE (arg), TREE_TYPE (v)))
> + v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (arg), v);
> + arg = make_ssa_name (TREE_TYPE (arg));
> + gimple *g = gimple_build_assign (arg, v);
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_call_set_arg (stmt, i, arg);
> + if (m_preserved == NULL)
> + m_preserved = BITMAP_ALLOC (NULL);
> + bitmap_set_bit (m_preserved, SSA_NAME_VERSION (arg));
> + }
> + tree lhs = gimple_call_lhs (stmt);
> + if (lhs
> + && TREE_CODE (lhs) == SSA_NAME
> + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> + {
> + int p = var_to_partition (m_map, lhs);
> + tree v = m_vars[p];
> + gcc_assert (v != NULL_TREE);
> + if (!types_compatible_p (TREE_TYPE (lhs), TREE_TYPE (v)))
> + v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (lhs), v);
> + gimple_call_set_lhs (stmt, v);
> + SSA_NAME_DEF_STMT (lhs) = gimple_build_nop ();
> + }
> + update_stmt (stmt);
> +}
> +
> +/* Lower __asm STMT which involves large/huge _BitInt values. */
> +
> +void
> +bitint_large_huge::lower_asm (gimple *stmt)
> +{
> + gasm *g = as_a <gasm *> (stmt);
> + unsigned noutputs = gimple_asm_noutputs (g);
> + unsigned ninputs = gimple_asm_ninputs (g);
> +
> + for (unsigned i = 0; i < noutputs; ++i)
> + {
> + tree t = gimple_asm_output_op (g, i);
> + tree s = TREE_VALUE (t);
> + if (TREE_CODE (s) == SSA_NAME
> + && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> + {
> + int part = var_to_partition (m_map, s);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + TREE_VALUE (t) = m_vars[part];
> + }
> + }
> + for (unsigned i = 0; i < ninputs; ++i)
> + {
> + tree t = gimple_asm_input_op (g, i);
> + tree s = TREE_VALUE (t);
> + if (TREE_CODE (s) == SSA_NAME
> + && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> + {
> + int part = var_to_partition (m_map, s);
> + gcc_assert (m_vars[part] != NULL_TREE);
> + TREE_VALUE (t) = m_vars[part];
> + }
> + }
> + update_stmt (stmt);
> +}
> +
> +/* Lower statement STMT which involves large/huge _BitInt values
> + into code accessing individual limbs. */
> +
> +void
> +bitint_large_huge::lower_stmt (gimple *stmt)
> +{
> + m_first = true;
> + m_lhs = NULL_TREE;
> + m_data.truncate (0);
> + m_data_cnt = 0;
> + m_gsi = gsi_for_stmt (stmt);
> + m_after_stmt = NULL;
> + m_bb = NULL;
> + m_init_gsi = m_gsi;
> + gsi_prev (&m_init_gsi);
> + m_preheader_bb = NULL;
> + m_upwards_2limb = 0;
> + m_var_msb = false;
> + m_loc = gimple_location (stmt);
> + if (is_gimple_call (stmt))
> + {
> + lower_call (NULL_TREE, stmt);
> + return;
> + }
> + if (gimple_code (stmt) == GIMPLE_ASM)
> + {
> + lower_asm (stmt);
> + return;
> + }
> + tree lhs = NULL_TREE, cmp_op1 = NULL_TREE, cmp_op2 = NULL_TREE;
> + tree_code cmp_code = comparison_op (stmt, &cmp_op1, &cmp_op2);
> + bool eq_p = (cmp_code == EQ_EXPR || cmp_code == NE_EXPR);
> + bool mergeable_cast_p = false;
> + bool final_cast_p = false;
> + if (gimple_assign_cast_p (stmt))
> + {
> + lhs = gimple_assign_lhs (stmt);
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
> + && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
> + mergeable_cast_p = true;
> + else if (TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
> + && INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
> + {
> + final_cast_p = true;
> + if (TREE_CODE (rhs1) == SSA_NAME
> + && (m_names == NULL
> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> + {
> + gimple *g = SSA_NAME_DEF_STMT (rhs1);
> + if (is_gimple_assign (g)
> + && gimple_assign_rhs_code (g) == IMAGPART_EXPR)
> + {
> + tree rhs2 = TREE_OPERAND (gimple_assign_rhs1 (g), 0);
> + if (TREE_CODE (rhs2) == SSA_NAME
> + && (m_names == NULL
> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs2))))
> + {
> + g = SSA_NAME_DEF_STMT (rhs2);
> + int ovf = optimizable_arith_overflow (g);
> + if (ovf == 2)
> + /* If .{ADD,SUB,MUL}_OVERFLOW has both REALPART_EXPR
> + and IMAGPART_EXPR uses, where the latter is cast to
> + non-_BitInt, it will be optimized when handling
> + the REALPART_EXPR. */
> + return;
> + if (ovf == 1)
> + {
> + lower_call (NULL_TREE, g);
> + return;
> + }
> + }
> + }
> + }
> + }
> + }
> + if (gimple_store_p (stmt))
> + {
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + if (TREE_CODE (rhs1) == SSA_NAME
> + && (m_names == NULL
> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> + {
> + gimple *g = SSA_NAME_DEF_STMT (rhs1);
> + m_loc = gimple_location (g);
> + lhs = gimple_assign_lhs (stmt);
> + if (is_gimple_assign (g) && !mergeable_op (g))
> + switch (gimple_assign_rhs_code (g))
> + {
> + case LSHIFT_EXPR:
> + case RSHIFT_EXPR:
> + lower_shift_stmt (lhs, g);
> + handled:
> + m_gsi = gsi_for_stmt (stmt);
> + unlink_stmt_vdef (stmt);
> + release_ssa_name (gimple_vdef (stmt));
> + gsi_remove (&m_gsi, true);
> + return;
> + case MULT_EXPR:
> + case TRUNC_DIV_EXPR:
> + case TRUNC_MOD_EXPR:
> + lower_muldiv_stmt (lhs, g);
> + goto handled;
> + case FIX_TRUNC_EXPR:
> + lower_float_conv_stmt (lhs, g);
> + goto handled;
> + case REALPART_EXPR:
> + case IMAGPART_EXPR:
> + lower_cplxpart_stmt (lhs, g);
> + goto handled;
> + default:
> + break;
> + }
> + else if (optimizable_arith_overflow (g) == 3)
> + {
> + lower_call (lhs, g);
> + goto handled;
> + }
> + m_loc = gimple_location (stmt);
> + }
> + }
> + if (mergeable_op (stmt)
> + || gimple_store_p (stmt)
> + || gimple_assign_load_p (stmt)
> + || eq_p
> + || mergeable_cast_p)
> + {
> + lhs = lower_mergeable_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
> + if (!eq_p)
> + return;
> + }
> + else if (cmp_code != ERROR_MARK)
> + lhs = lower_comparison_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
> + if (cmp_code != ERROR_MARK)
> + {
> + if (gimple_code (stmt) == GIMPLE_COND)
> + {
> + gcond *cstmt = as_a <gcond *> (stmt);
> + gimple_cond_set_lhs (cstmt, lhs);
> + gimple_cond_set_rhs (cstmt, boolean_false_node);
> + gimple_cond_set_code (cstmt, cmp_code);
> + update_stmt (stmt);
> + return;
> + }
> + if (gimple_assign_rhs_code (stmt) == COND_EXPR)
> + {
> + tree cond = build2 (cmp_code, boolean_type_node, lhs,
> + boolean_false_node);
> + gimple_assign_set_rhs1 (stmt, cond);
> + lhs = gimple_assign_lhs (stmt);
> + gcc_assert (TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
> + || (bitint_precision_kind (TREE_TYPE (lhs))
> + <= bitint_prec_middle));
> + update_stmt (stmt);
> + return;
> + }
> + gimple_assign_set_rhs1 (stmt, lhs);
> + gimple_assign_set_rhs2 (stmt, boolean_false_node);
> + gimple_assign_set_rhs_code (stmt, cmp_code);
> + update_stmt (stmt);
> + return;
> + }
> + if (final_cast_p)
> + {
> + tree lhs_type = TREE_TYPE (lhs);
> + /* Add support for 3 or more limbs filled in from a normal integral
> + type if this assert fails. If no target chooses a limb mode smaller
> + than half of the largest supported normal integral type, this will
> + not be needed. */
> + gcc_assert (TYPE_PRECISION (lhs_type) <= 2 * limb_prec);
> + gimple *g;
> + if (TREE_CODE (lhs_type) == BITINT_TYPE
> + && bitint_precision_kind (lhs_type) == bitint_prec_middle)
> + lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (lhs_type),
> + TYPE_UNSIGNED (lhs_type));
> + m_data_cnt = 0;
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + tree r1 = handle_operand (rhs1, size_int (0));
> + if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
> + r1 = add_cast (lhs_type, r1);
> + if (TYPE_PRECISION (lhs_type) > limb_prec)
> + {
> + m_data_cnt = 0;
> + m_first = false;
> + tree r2 = handle_operand (rhs1, size_int (1));
> + r2 = add_cast (lhs_type, r2);
> + g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
> + build_int_cst (unsigned_type_node,
> + limb_prec));
> + insert_before (g);
> + g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
> + gimple_assign_lhs (g));
> + insert_before (g);
> + r1 = gimple_assign_lhs (g);
> + }
> + if (lhs_type != TREE_TYPE (lhs))
> + g = gimple_build_assign (lhs, NOP_EXPR, r1);
> + else
> + g = gimple_build_assign (lhs, r1);
> + gsi_replace (&m_gsi, g, true);
> + return;
> + }
> + if (is_gimple_assign (stmt))
> + switch (gimple_assign_rhs_code (stmt))
> + {
> + case LSHIFT_EXPR:
> + case RSHIFT_EXPR:
> + lower_shift_stmt (NULL_TREE, stmt);
> + return;
> + case MULT_EXPR:
> + case TRUNC_DIV_EXPR:
> + case TRUNC_MOD_EXPR:
> + lower_muldiv_stmt (NULL_TREE, stmt);
> + return;
> + case FIX_TRUNC_EXPR:
> + case FLOAT_EXPR:
> + lower_float_conv_stmt (NULL_TREE, stmt);
> + return;
> + case REALPART_EXPR:
> + case IMAGPART_EXPR:
> + lower_cplxpart_stmt (NULL_TREE, stmt);
> + return;
> + case COMPLEX_EXPR:
> + lower_complexexpr_stmt (stmt);
> + return;
> + default:
> + break;
> + }
> + gcc_unreachable ();
> +}
> +
> +/* Helper for walk_non_aliased_vuses. Determine if we arrived at
> + the desired memory state. */
> +
> +void *
> +vuse_eq (ao_ref *, tree vuse1, void *data)
> +{
> + tree vuse2 = (tree) data;
> + if (vuse1 == vuse2)
> + return data;
> +
> + return NULL;
> +}
> +
> +/* Dominator walker used to discover which large/huge _BitInt
> + loads could be sunk into all their uses. */
> +
> +class bitint_dom_walker : public dom_walker
> +{
> +public:
> + bitint_dom_walker (bitmap names, bitmap loads)
> + : dom_walker (CDI_DOMINATORS), m_names (names), m_loads (loads) {}
> +
> + edge before_dom_children (basic_block) final override;
> +
> +private:
> + bitmap m_names, m_loads;
> +};
> +
> +edge
> +bitint_dom_walker::before_dom_children (basic_block bb)
> +{
> + gphi *phi = get_virtual_phi (bb);
> + tree vop;
> + if (phi)
> + vop = gimple_phi_result (phi);
> + else if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
> + vop = NULL_TREE;
> + else
> + vop = (tree) get_immediate_dominator (CDI_DOMINATORS, bb)->aux;
> +
> + auto_vec<tree, 16> worklist;
> + for (gimple_stmt_iterator gsi = gsi_start_bb (bb);
> + !gsi_end_p (gsi); gsi_next (&gsi))
> + {
> + gimple *stmt = gsi_stmt (gsi);
> + if (is_gimple_debug (stmt))
> + continue;
> +
> + if (!vop && gimple_vuse (stmt))
> + vop = gimple_vuse (stmt);
> +
> + tree cvop = vop;
> + if (gimple_vdef (stmt))
> + vop = gimple_vdef (stmt);
> +
> + tree lhs = gimple_get_lhs (stmt);
> + if (lhs
> + && TREE_CODE (lhs) == SSA_NAME
> + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
> + && !bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> + /* If lhs of stmt is large/huge _BitInt SSA_NAME not in m_names,
> + it means it will be handled in a loop or straight line code
> + at the location of its (ultimate) immediate use, so for
> + vop checking purposes check these only at the ultimate
> + immediate use. */
> + continue;
> +
> + ssa_op_iter oi;
> + use_operand_p use_p;
> + FOR_EACH_SSA_USE_OPERAND (use_p, stmt, oi, SSA_OP_USE)
> + {
> + tree s = USE_FROM_PTR (use_p);
> + if (TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> + worklist.safe_push (s);
> + }
> +
> + while (worklist.length () > 0)
> + {
> + tree s = worklist.pop ();
> +
> + if (!bitmap_bit_p (m_names, SSA_NAME_VERSION (s)))
> + {
> + FOR_EACH_SSA_USE_OPERAND (use_p, SSA_NAME_DEF_STMT (s),
> + oi, SSA_OP_USE)
> + {
> + tree s2 = USE_FROM_PTR (use_p);
> + if (TREE_CODE (TREE_TYPE (s2)) == BITINT_TYPE
> + && (bitint_precision_kind (TREE_TYPE (s2))
> + >= bitint_prec_large))
> + worklist.safe_push (s2);
> + }
> + continue;
> + }
> + if (!SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
> + && gimple_assign_cast_p (SSA_NAME_DEF_STMT (s)))
> + {
> + tree rhs = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
> + if (TREE_CODE (rhs) == SSA_NAME
> + && bitmap_bit_p (m_loads, SSA_NAME_VERSION (rhs)))
> + s = rhs;
> + else
> + continue;
> + }
> + else if (!bitmap_bit_p (m_loads, SSA_NAME_VERSION (s)))
> + continue;
> +
> + ao_ref ref;
> + ao_ref_init (&ref, gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)));
> + tree lvop = gimple_vuse (SSA_NAME_DEF_STMT (s));
> + unsigned limit = 64;
> + tree vuse = cvop;
> + if (vop != cvop
> + && is_gimple_assign (stmt)
> + && gimple_store_p (stmt)
> + && !operand_equal_p (lhs,
> + gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)),
> + 0))
> + vuse = vop;
> + if (vuse != lvop
> + && walk_non_aliased_vuses (&ref, vuse, false, vuse_eq,
> + NULL, NULL, limit, lvop) == NULL)
> + bitmap_clear_bit (m_loads, SSA_NAME_VERSION (s));
> + }
> + }
> +
> + bb->aux = (void *) vop;
> + return NULL;
> +}
> +
> +}
> +
> +/* Replacement for normal processing of STMT in tree-ssa-coalesce.cc
> + build_ssa_conflict_graph.
> + The differences are:
> + 1) don't process assignments with large/huge _BitInt lhs not in NAMES
> + 2) for large/huge _BitInt multiplication/division/modulo process def
> + only after processing uses rather than before to make uses conflict
> + with the definition
> + 3) for large/huge _BitInt uses not in NAMES mark the uses of their
> + SSA_NAME_DEF_STMT (recursively), because those uses will be sunk into
> + the final statement. */
> +
> +void
> +build_bitint_stmt_ssa_conflicts (gimple *stmt, live_track *live,
> + ssa_conflicts *graph, bitmap names,
> + void (*def) (live_track *, tree,
> + ssa_conflicts *),
> + void (*use) (live_track *, tree))
> +{
> + bool muldiv_p = false;
> + tree lhs = NULL_TREE;
> + if (is_gimple_assign (stmt))
> + {
> + lhs = gimple_assign_lhs (stmt);
> + if (TREE_CODE (lhs) == SSA_NAME
> + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> + {
> + if (!bitmap_bit_p (names, SSA_NAME_VERSION (lhs)))
> + return;
> + switch (gimple_assign_rhs_code (stmt))
> + {
> + case MULT_EXPR:
> + case TRUNC_DIV_EXPR:
> + case TRUNC_MOD_EXPR:
> + muldiv_p = true;
> + default:
> + break;
> + }
> + }
> + }
> +
> + ssa_op_iter iter;
> + tree var;
> + if (!muldiv_p)
> + {
> + /* For stmts with more than one SSA_NAME definition pretend all the
> + SSA_NAME outputs but the first one are live at this point, so
> + that conflicts are added in between all those even when they are
> + actually not really live after the asm, because expansion might
> + copy those into pseudos after the asm and if multiple outputs
> + share the same partition, it might overwrite those that should
> + be live. E.g.
> + asm volatile (".." : "=r" (a) : "=r" (b) : "0" (a), "1" (a));
> + return a;
> + See PR70593. */
> + bool first = true;
> + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
> + if (first)
> + first = false;
> + else
> + use (live, var);
> +
> + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
> + def (live, var, graph);
> + }
> +
> + auto_vec<tree, 16> worklist;
> + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_USE)
> + if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
> + {
> + if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
> + use (live, var);
> + else
> + worklist.safe_push (var);
> + }
> +
> + while (worklist.length () > 0)
> + {
> + tree s = worklist.pop ();
> + FOR_EACH_SSA_TREE_OPERAND (var, SSA_NAME_DEF_STMT (s), iter, SSA_OP_USE)
> + if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
> + {
> + if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
> + use (live, var);
> + else
> + worklist.safe_push (var);
> + }
> + }
> +
> + if (muldiv_p)
> + def (live, lhs, graph);
> +}
> +
> +/* Entry point for _BitInt(N) operation lowering during optimization. */
> +
> +static unsigned int
> +gimple_lower_bitint (void)
> +{
> + small_max_prec = mid_min_prec = large_min_prec = huge_min_prec = 0;
> + limb_prec = 0;
> +
> + unsigned int i;
> + tree vop = gimple_vop (cfun);
> + for (i = 0; i < num_ssa_names; ++i)
> + {
> + tree s = ssa_name (i);
> + if (s == NULL)
> + continue;
> + tree type = TREE_TYPE (s);
> + if (TREE_CODE (type) == COMPLEX_TYPE)
> + type = TREE_TYPE (type);
> + if (TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) != bitint_prec_small)
> + break;
> + /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
> + into memory. Such functions could have no large/huge SSA_NAMEs. */
> + if (vop && SSA_NAME_VAR (s) == vop)
This comparison against the VOP can be written as SSA_NAME_IS_VIRTUAL_OPERAND (s).
> + {
> + gimple *g = SSA_NAME_DEF_STMT (s);
> + if (is_gimple_assign (g) && gimple_store_p (g))
> + {
What about functions returning a large _BitInt(N) where the ABI
specifies it isn't returned by invisible reference?
The other defs not handled here are ASMs ...
> + tree t = gimple_assign_rhs1 (g);
> + if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE
> + && (bitint_precision_kind (TREE_TYPE (t))
> + >= bitint_prec_large))
> + break;
> + }
> + }
> + }
> + if (i == num_ssa_names)
> + return 0;
> +
> + basic_block bb;
> + auto_vec<gimple *, 4> switch_statements;
> + FOR_EACH_BB_FN (bb, cfun)
> + {
> + if (gswitch *swtch = safe_dyn_cast <gswitch *> (*gsi_last_bb (bb)))
> + {
> + tree idx = gimple_switch_index (swtch);
> + if (TREE_CODE (TREE_TYPE (idx)) != BITINT_TYPE
> + || bitint_precision_kind (TREE_TYPE (idx)) < bitint_prec_large)
> + continue;
> +
> + if (optimize)
> + group_case_labels_stmt (swtch);
> + switch_statements.safe_push (swtch);
> + }
> + }
> +
> + if (!switch_statements.is_empty ())
> + {
> + bool expanded = false;
> + gimple *stmt;
> + unsigned int j;
> + i = 0;
> + FOR_EACH_VEC_ELT (switch_statements, j, stmt)
> + {
> + gswitch *swtch = as_a<gswitch *> (stmt);
> + tree_switch_conversion::switch_decision_tree dt (swtch);
> + expanded |= dt.analyze_switch_statement ();
> + }
> +
> + if (expanded)
> + {
> + free_dominance_info (CDI_DOMINATORS);
> + free_dominance_info (CDI_POST_DOMINATORS);
> + mark_virtual_operands_for_renaming (cfun);
> + cleanup_tree_cfg (TODO_update_ssa);
> + }
> + }
> +
> + struct bitint_large_huge large_huge;
> + bool has_large_huge_parm_result = false;
> + bool has_large_huge = false;
> + unsigned int ret = 0, first_large_huge = ~0U;
> + bool edge_insertions = false;
> + for (; i < num_ssa_names; ++i)
the above SSA update could end up re-using a smaller SSA name number,
so I wonder if you can really avoid starting at 1 again.
> + {
> + tree s = ssa_name (i);
> + if (s == NULL)
> + continue;
> + tree type = TREE_TYPE (s);
> + if (TREE_CODE (type) == COMPLEX_TYPE)
> + type = TREE_TYPE (type);
> + if (TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) >= bitint_prec_large)
> + {
> + if (first_large_huge == ~0U)
> + first_large_huge = i;
> + gimple *stmt = SSA_NAME_DEF_STMT (s), *g;
> + gimple_stmt_iterator gsi;
> + tree_code rhs_code;
> + /* Unoptimize certain constructs to simpler alternatives to
> + avoid having to lower all of them. */
> + if (is_gimple_assign (stmt))
> + switch (rhs_code = gimple_assign_rhs_code (stmt))
> + {
> + default:
> + break;
> + case LROTATE_EXPR:
> + case RROTATE_EXPR:
> + {
> + first_large_huge = 0;
> + location_t loc = gimple_location (stmt);
> + gsi = gsi_for_stmt (stmt);
> + tree rhs1 = gimple_assign_rhs1 (stmt);
> + tree type = TREE_TYPE (rhs1);
> + tree n = gimple_assign_rhs2 (stmt), m;
> + tree p = build_int_cst (TREE_TYPE (n),
> + TYPE_PRECISION (type));
> + if (TREE_CODE (n) == INTEGER_CST)
> + m = fold_build2 (MINUS_EXPR, TREE_TYPE (n), p, n);
> + else
> + {
> + m = make_ssa_name (TREE_TYPE (n));
> + g = gimple_build_assign (m, MINUS_EXPR, p, n);
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_set_location (g, loc);
> + }
> + if (!TYPE_UNSIGNED (type))
> + {
> + tree utype = build_bitint_type (TYPE_PRECISION (type),
> + 1);
> + if (TREE_CODE (rhs1) == INTEGER_CST)
> + rhs1 = fold_convert (utype, rhs1);
> + else
> + {
> + tree t = make_ssa_name (utype);
> + g = gimple_build_assign (t, NOP_EXPR, rhs1);
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_set_location (g, loc);
> + rhs1 = t;
> + }
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
> + rhs_code == LROTATE_EXPR
> + ? LSHIFT_EXPR : RSHIFT_EXPR,
> + rhs1, n);
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_set_location (g, loc);
> + tree op1 = gimple_assign_lhs (g);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
> + rhs_code == LROTATE_EXPR
> + ? RSHIFT_EXPR : LSHIFT_EXPR,
> + rhs1, m);
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_set_location (g, loc);
> + tree op2 = gimple_assign_lhs (g);
> + tree lhs = gimple_assign_lhs (stmt);
> + if (!TYPE_UNSIGNED (type))
> + {
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (op1)),
> + BIT_IOR_EXPR, op1, op2);
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_set_location (g, loc);
> + g = gimple_build_assign (lhs, NOP_EXPR,
> + gimple_assign_lhs (g));
> + }
> + else
> + g = gimple_build_assign (lhs, BIT_IOR_EXPR, op1, op2);
> + gsi_replace (&gsi, g, true);
> + gimple_set_location (g, loc);
> + }
> + break;
> + case ABS_EXPR:
> + case ABSU_EXPR:
> + case MIN_EXPR:
> + case MAX_EXPR:
> + case COND_EXPR:
> + first_large_huge = 0;
> + gsi = gsi_for_stmt (stmt);
> + tree lhs = gimple_assign_lhs (stmt);
> + tree rhs1 = gimple_assign_rhs1 (stmt), rhs2 = NULL_TREE;
> + location_t loc = gimple_location (stmt);
> + if (rhs_code == ABS_EXPR)
> + g = gimple_build_cond (LT_EXPR, rhs1,
> + build_zero_cst (TREE_TYPE (rhs1)),
> + NULL_TREE, NULL_TREE);
> + else if (rhs_code == ABSU_EXPR)
> + {
> + rhs2 = make_ssa_name (TREE_TYPE (lhs));
> + g = gimple_build_assign (rhs2, NOP_EXPR, rhs1);
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_set_location (g, loc);
> + g = gimple_build_cond (LT_EXPR, rhs1,
> + build_zero_cst (TREE_TYPE (rhs1)),
> + NULL_TREE, NULL_TREE);
> + rhs1 = rhs2;
> + }
> + else if (rhs_code == MIN_EXPR || rhs_code == MAX_EXPR)
> + {
> + rhs2 = gimple_assign_rhs2 (stmt);
> + if (TREE_CODE (rhs1) == INTEGER_CST)
> + std::swap (rhs1, rhs2);
> + g = gimple_build_cond (LT_EXPR, rhs1, rhs2,
> + NULL_TREE, NULL_TREE);
> + if (rhs_code == MAX_EXPR)
> + std::swap (rhs1, rhs2);
> + }
> + else
> + {
> + g = gimple_build_cond (TREE_CODE (rhs1),
> + TREE_OPERAND (rhs1, 0),
> + TREE_OPERAND (rhs1, 1),
> + NULL_TREE, NULL_TREE);
> + rhs1 = gimple_assign_rhs2 (stmt);
> + rhs2 = gimple_assign_rhs3 (stmt);
> + }
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_set_location (g, loc);
> + edge e1 = split_block (gsi_bb (gsi), g);
> + edge e2 = split_block (e1->dest, (gimple *) NULL);
> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> + e3->probability = profile_probability::even ();
> + e1->flags = EDGE_TRUE_VALUE;
> + e1->probability = e3->probability.invert ();
> + if (dom_info_available_p (CDI_DOMINATORS))
> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> + if (rhs_code == ABS_EXPR || rhs_code == ABSU_EXPR)
> + {
> + gsi = gsi_after_labels (e1->dest);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
> + NEGATE_EXPR, rhs1);
> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> + gimple_set_location (g, loc);
> + rhs2 = gimple_assign_lhs (g);
> + std::swap (rhs1, rhs2);
> + }
> + gsi = gsi_for_stmt (stmt);
> + gsi_remove (&gsi, true);
> + gphi *phi = create_phi_node (lhs, e2->dest);
> + add_phi_arg (phi, rhs1, e2, UNKNOWN_LOCATION);
> + add_phi_arg (phi, rhs2, e3, UNKNOWN_LOCATION);
> + break;
> + }
> + }
> + /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
> + into memory. Such functions could have no large/huge SSA_NAMEs. */
> + else if (vop && SSA_NAME_VAR (s) == vop)
> + {
> + gimple *g = SSA_NAME_DEF_STMT (s);
> + if (is_gimple_assign (g) && gimple_store_p (g))
> + {
> + tree t = gimple_assign_rhs1 (g);
> + if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE
> + && (bitint_precision_kind (TREE_TYPE (t))
> + >= bitint_prec_large))
> + has_large_huge = true;
> + }
> + }
> + }
> + for (i = first_large_huge; i < num_ssa_names; ++i)
> + {
> + tree s = ssa_name (i);
> + if (s == NULL)
> + continue;
> + tree type = TREE_TYPE (s);
> + if (TREE_CODE (type) == COMPLEX_TYPE)
> + type = TREE_TYPE (type);
> + if (TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) >= bitint_prec_large)
> + {
> + use_operand_p use_p;
> + gimple *use_stmt;
> + has_large_huge = true;
> + if (optimize
> + && optimizable_arith_overflow (SSA_NAME_DEF_STMT (s)))
> + continue;
> + /* Ignore large/huge _BitInt SSA_NAMEs which have single use in
> + the same bb and could be handled in the same loop with the
> + immediate use. */
> + if (optimize
> + && !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
> + && single_imm_use (s, &use_p, &use_stmt)
> + && gimple_bb (SSA_NAME_DEF_STMT (s)) == gimple_bb (use_stmt))
> + {
> + if (mergeable_op (SSA_NAME_DEF_STMT (s)))
> + {
> + if (mergeable_op (use_stmt))
> + continue;
> + tree_code cmp_code = comparison_op (use_stmt, NULL, NULL);
> + if (cmp_code == EQ_EXPR || cmp_code == NE_EXPR)
> + continue;
> + if (gimple_assign_cast_p (use_stmt))
> + {
> + tree lhs = gimple_assign_lhs (use_stmt);
> + if (INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
> + continue;
> + }
> + else if (gimple_store_p (use_stmt)
> + && is_gimple_assign (use_stmt)
> + && !gimple_has_volatile_ops (use_stmt)
> + && !stmt_ends_bb_p (use_stmt))
> + continue;
> + }
> + if (gimple_assign_cast_p (SSA_NAME_DEF_STMT (s)))
> + {
> + tree rhs1 = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
> + if (INTEGRAL_TYPE_P (TREE_TYPE (rhs1))
> + && ((is_gimple_assign (use_stmt)
> + && (gimple_assign_rhs_code (use_stmt)
> + != COMPLEX_EXPR))
> + || gimple_code (use_stmt) == GIMPLE_COND)
> + && (!gimple_store_p (use_stmt)
> + || (is_gimple_assign (use_stmt)
> + && !gimple_has_volatile_ops (use_stmt)
> + && !stmt_ends_bb_p (use_stmt)))
> + && (TREE_CODE (rhs1) != SSA_NAME
> + || !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (rhs1)))
> + {
> + if (TREE_CODE (TREE_TYPE (rhs1)) != BITINT_TYPE
> + || (bitint_precision_kind (TREE_TYPE (rhs1))
> + < bitint_prec_large)
> + || (TYPE_PRECISION (TREE_TYPE (rhs1))
> + >= TYPE_PRECISION (TREE_TYPE (s)))
> + || mergeable_op (SSA_NAME_DEF_STMT (s)))
> + continue;
> + /* Prevent merging a widening non-mergeable cast
> + on result of some narrower mergeable op
> + together with later mergeable operations. E.g.
> + result of _BitInt(223) addition shouldn't be
> + sign-extended to _BitInt(513) and have another
> + _BitInt(513) added to it, as handle_plus_minus
> + with its PHI node handling inside of handle_cast
> + will not work correctly. An exception is if
> + use_stmt is a store, this is handled directly
> + in lower_mergeable_stmt. */
> + if (TREE_CODE (rhs1) != SSA_NAME
> + || !has_single_use (rhs1)
> + || (gimple_bb (SSA_NAME_DEF_STMT (rhs1))
> + != gimple_bb (SSA_NAME_DEF_STMT (s)))
> + || !mergeable_op (SSA_NAME_DEF_STMT (rhs1))
> + || gimple_store_p (use_stmt))
> + continue;
> + if (gimple_assign_cast_p (SSA_NAME_DEF_STMT (rhs1)))
> + {
> + /* Another exception is if the widening cast is
> + from mergeable same precision cast from something
> + not mergeable. */
> + tree rhs2
> + = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (rhs1));
> + if (TREE_CODE (TREE_TYPE (rhs2)) == BITINT_TYPE
> + && (TYPE_PRECISION (TREE_TYPE (rhs1))
> + == TYPE_PRECISION (TREE_TYPE (rhs2))))
> + {
> + if (TREE_CODE (rhs2) != SSA_NAME
> + || !has_single_use (rhs2)
> + || (gimple_bb (SSA_NAME_DEF_STMT (rhs2))
> + != gimple_bb (SSA_NAME_DEF_STMT (s)))
> + || !mergeable_op (SSA_NAME_DEF_STMT (rhs2)))
> + continue;
> + }
> + }
> + }
> + }
> + if (is_gimple_assign (SSA_NAME_DEF_STMT (s)))
> + switch (gimple_assign_rhs_code (SSA_NAME_DEF_STMT (s)))
> + {
> + case IMAGPART_EXPR:
> + {
> + tree rhs1 = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
> + rhs1 = TREE_OPERAND (rhs1, 0);
> + if (TREE_CODE (rhs1) == SSA_NAME)
> + {
> + gimple *g = SSA_NAME_DEF_STMT (rhs1);
> + if (optimizable_arith_overflow (g))
> + continue;
> + }
> + }
> + /* FALLTHRU */
> + case LSHIFT_EXPR:
> + case RSHIFT_EXPR:
> + case MULT_EXPR:
> + case TRUNC_DIV_EXPR:
> + case TRUNC_MOD_EXPR:
> + case FIX_TRUNC_EXPR:
> + case REALPART_EXPR:
> + if (gimple_store_p (use_stmt)
> + && is_gimple_assign (use_stmt)
> + && !gimple_has_volatile_ops (use_stmt)
> + && !stmt_ends_bb_p (use_stmt))
> + continue;
> + default:
> + break;
> + }
> + }
> +
> + /* Also ignore uninitialized uses. */
> + if (SSA_NAME_IS_DEFAULT_DEF (s)
> + && (!SSA_NAME_VAR (s) || VAR_P (SSA_NAME_VAR (s))))
> + continue;
> +
> + if (!large_huge.m_names)
> + large_huge.m_names = BITMAP_ALLOC (NULL);
> + bitmap_set_bit (large_huge.m_names, SSA_NAME_VERSION (s));
> + if (has_single_use (s))
> + {
> + if (!large_huge.m_single_use_names)
> + large_huge.m_single_use_names = BITMAP_ALLOC (NULL);
> + bitmap_set_bit (large_huge.m_single_use_names,
> + SSA_NAME_VERSION (s));
> + }
> + if (SSA_NAME_VAR (s)
> + && ((TREE_CODE (SSA_NAME_VAR (s)) == PARM_DECL
> + && SSA_NAME_IS_DEFAULT_DEF (s))
> + || TREE_CODE (SSA_NAME_VAR (s)) == RESULT_DECL))
> + has_large_huge_parm_result = true;
> + if (optimize
> + && !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
> + && gimple_assign_load_p (SSA_NAME_DEF_STMT (s))
> + && !gimple_has_volatile_ops (SSA_NAME_DEF_STMT (s))
> + && !stmt_ends_bb_p (SSA_NAME_DEF_STMT (s)))
> + {
> + use_operand_p use_p;
> + imm_use_iterator iter;
> + bool optimizable_load = true;
> + FOR_EACH_IMM_USE_FAST (use_p, iter, s)
> + {
> + gimple *use_stmt = USE_STMT (use_p);
> + if (is_gimple_debug (use_stmt))
> + continue;
> + if (gimple_code (use_stmt) == GIMPLE_PHI
> + || is_gimple_call (use_stmt))
> + {
> + optimizable_load = false;
> + break;
> + }
> + }
> +
> + ssa_op_iter oi;
> + FOR_EACH_SSA_USE_OPERAND (use_p, SSA_NAME_DEF_STMT (s),
> + oi, SSA_OP_USE)
> + {
> + tree s2 = USE_FROM_PTR (use_p);
> + if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s2))
> + {
> + optimizable_load = false;
> + break;
> + }
> + }
> +
> + if (optimizable_load && !stmt_ends_bb_p (SSA_NAME_DEF_STMT (s)))
> + {
> + if (!large_huge.m_loads)
> + large_huge.m_loads = BITMAP_ALLOC (NULL);
> + bitmap_set_bit (large_huge.m_loads, SSA_NAME_VERSION (s));
> + }
> + }
> + }
> + /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
> + into memory. Such functions could have no large/huge SSA_NAMEs. */
> + else if (vop && SSA_NAME_VAR (s) == vop)
> + {
> + gimple *g = SSA_NAME_DEF_STMT (s);
> + if (is_gimple_assign (g) && gimple_store_p (g))
> + {
> + tree t = gimple_assign_rhs1 (g);
> + if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE
> + && bitint_precision_kind (TREE_TYPE (t)) >= bitint_prec_large)
> + has_large_huge = true;
> + }
> + }
> + }
> +
> + if (large_huge.m_names || has_large_huge)
> + {
> + ret = TODO_update_ssa_only_virtuals | TODO_cleanup_cfg;
> + calculate_dominance_info (CDI_DOMINATORS);
> + if (optimize)
> + enable_ranger (cfun);
> + if (large_huge.m_loads)
> + {
> + basic_block entry = ENTRY_BLOCK_PTR_FOR_FN (cfun);
> + entry->aux = NULL;
> + bitint_dom_walker (large_huge.m_names,
> + large_huge.m_loads).walk (entry);
> + bitmap_and_compl_into (large_huge.m_names, large_huge.m_loads);
> + clear_aux_for_blocks ();
> + BITMAP_FREE (large_huge.m_loads);
> + }
> + large_huge.m_limb_type = build_nonstandard_integer_type (limb_prec, 1);
> + large_huge.m_limb_size
> + = tree_to_uhwi (TYPE_SIZE_UNIT (large_huge.m_limb_type));
> + }
> + if (large_huge.m_names)
> + {
> + large_huge.m_map
> + = init_var_map (num_ssa_names, NULL, large_huge.m_names);
> + coalesce_ssa_name (large_huge.m_map);
> + partition_view_normal (large_huge.m_map);
> + if (dump_file && (dump_flags & TDF_DETAILS))
> + {
> + fprintf (dump_file, "After Coalescing:\n");
> + dump_var_map (dump_file, large_huge.m_map);
> + }
> + large_huge.m_vars
> + = XCNEWVEC (tree, num_var_partitions (large_huge.m_map));
> + bitmap_iterator bi;
> + if (has_large_huge_parm_result)
> + EXECUTE_IF_SET_IN_BITMAP (large_huge.m_names, 0, i, bi)
> + {
> + tree s = ssa_name (i);
> + if (SSA_NAME_VAR (s)
> + && ((TREE_CODE (SSA_NAME_VAR (s)) == PARM_DECL
> + && SSA_NAME_IS_DEFAULT_DEF (s))
> + || TREE_CODE (SSA_NAME_VAR (s)) == RESULT_DECL))
> + {
> + int p = var_to_partition (large_huge.m_map, s);
> + if (large_huge.m_vars[p] == NULL_TREE)
> + {
> + large_huge.m_vars[p] = SSA_NAME_VAR (s);
> + mark_addressable (SSA_NAME_VAR (s));
> + }
> + }
> + }
> + tree atype = NULL_TREE;
> + EXECUTE_IF_SET_IN_BITMAP (large_huge.m_names, 0, i, bi)
> + {
> + tree s = ssa_name (i);
> + int p = var_to_partition (large_huge.m_map, s);
> + if (large_huge.m_vars[p] != NULL_TREE)
> + continue;
> + if (atype == NULL_TREE
> + || !tree_int_cst_equal (TYPE_SIZE (atype),
> + TYPE_SIZE (TREE_TYPE (s))))
> + {
> + unsigned HOST_WIDE_INT nelts
> + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (s))) / limb_prec;
> + atype = build_array_type_nelts (large_huge.m_limb_type, nelts);
> + }
> + large_huge.m_vars[p] = create_tmp_var (atype, "bitint");
> + mark_addressable (large_huge.m_vars[p]);
> + }
> + }
> +
> + FOR_EACH_BB_REVERSE_FN (bb, cfun)
Is the reverse walk important in any way? (To avoid visiting newly created blocks?)
> + {
> + gimple_stmt_iterator prev;
> + for (gimple_stmt_iterator gsi = gsi_last_bb (bb); !gsi_end_p (gsi);
> + gsi = prev)
> + {
> + prev = gsi;
> + gsi_prev (&prev);
> + ssa_op_iter iter;
> + gimple *stmt = gsi_stmt (gsi);
> + if (is_gimple_debug (stmt))
> + continue;
> + bitint_prec_kind kind = bitint_prec_small;
> + tree t;
> + FOR_EACH_SSA_TREE_OPERAND (t, stmt, iter, SSA_OP_ALL_OPERANDS)
> + if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE)
> + {
> + bitint_prec_kind this_kind
> + = bitint_precision_kind (TREE_TYPE (t));
> + if (this_kind > kind)
> + kind = this_kind;
> + }
> + if (is_gimple_assign (stmt) && gimple_store_p (stmt))
> + {
> + t = gimple_assign_rhs1 (stmt);
> + if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE)
> + {
> + bitint_prec_kind this_kind
> + = bitint_precision_kind (TREE_TYPE (t));
> + if (this_kind > kind)
> + kind = this_kind;
> + }
> + }
> + if (is_gimple_call (stmt))
> + {
> + t = gimple_call_lhs (stmt);
> + if (t
> + && TREE_CODE (TREE_TYPE (t)) == COMPLEX_TYPE
> + && TREE_CODE (TREE_TYPE (TREE_TYPE (t))) == BITINT_TYPE)
> + {
> + bitint_prec_kind this_kind
> + = bitint_precision_kind (TREE_TYPE (TREE_TYPE (t)));
> + if (this_kind > kind)
> + kind = this_kind;
> + }
> + }
> + if (kind == bitint_prec_small)
> + continue;
> + switch (gimple_code (stmt))
> + {
> + case GIMPLE_CALL:
> + /* For now. We'll need to handle some internal functions and
> + perhaps some builtins. */
> + if (kind == bitint_prec_middle)
> + continue;
> + break;
> + case GIMPLE_ASM:
> + if (kind == bitint_prec_middle)
> + continue;
> + break;
> + case GIMPLE_RETURN:
> + continue;
> + case GIMPLE_ASSIGN:
> + if (gimple_clobber_p (stmt))
> + continue;
> + if (kind >= bitint_prec_large)
> + break;
> + if (gimple_assign_single_p (stmt))
> + /* No need to lower copies, loads or stores. */
> + continue;
> + if (gimple_assign_cast_p (stmt))
> + {
> + tree lhs = gimple_assign_lhs (stmt);
> + tree rhs = gimple_assign_rhs1 (stmt);
> + if (INTEGRAL_TYPE_P (TREE_TYPE (lhs))
> + && INTEGRAL_TYPE_P (TREE_TYPE (rhs))
> + && (TYPE_PRECISION (TREE_TYPE (lhs))
> + == TYPE_PRECISION (TREE_TYPE (rhs))))
> + /* No need to lower casts to same precision. */
> + continue;
> + }
> + break;
> + default:
> + break;
> + }
> +
> + if (kind == bitint_prec_middle)
> + {
> + tree type = NULL_TREE;
> + /* Middle _BitInt(N) is rewritten to casts to INTEGER_TYPEs
> + with the same precision and back. */
> + if (tree lhs = gimple_get_lhs (stmt))
> + if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> + && (bitint_precision_kind (TREE_TYPE (lhs))
> + == bitint_prec_middle))
> + {
> + int prec = TYPE_PRECISION (TREE_TYPE (lhs));
> + int uns = TYPE_UNSIGNED (TREE_TYPE (lhs));
> + type = build_nonstandard_integer_type (prec, uns);
> + tree lhs2 = make_ssa_name (type);
> + gimple *g = gimple_build_assign (lhs, NOP_EXPR, lhs2);
> + gsi_insert_after (&gsi, g, GSI_SAME_STMT);
> + gimple_set_lhs (stmt, lhs2);
> + }
> + unsigned int nops = gimple_num_ops (stmt);
> + for (unsigned int i = 0; i < nops; ++i)
> + if (tree op = gimple_op (stmt, i))
> + {
> + tree nop = maybe_cast_middle_bitint (&gsi, op, type);
> + if (nop != op)
> + gimple_set_op (stmt, i, nop);
> + else if (COMPARISON_CLASS_P (op))
> + {
> + TREE_OPERAND (op, 0)
> + = maybe_cast_middle_bitint (&gsi,
> + TREE_OPERAND (op, 0),
> + type);
> + TREE_OPERAND (op, 1)
> + = maybe_cast_middle_bitint (&gsi,
> + TREE_OPERAND (op, 1),
> + type);
> + }
> + else if (TREE_CODE (op) == CASE_LABEL_EXPR)
> + {
> + CASE_LOW (op)
> + = maybe_cast_middle_bitint (&gsi, CASE_LOW (op),
> + type);
> + CASE_HIGH (op)
> + = maybe_cast_middle_bitint (&gsi, CASE_HIGH (op),
> + type);
> + }
> + }
> + update_stmt (stmt);
> + continue;
> + }
> +
> + if (tree lhs = gimple_get_lhs (stmt))
> + if (TREE_CODE (lhs) == SSA_NAME)
> + {
> + tree type = TREE_TYPE (lhs);
> + if (TREE_CODE (type) == COMPLEX_TYPE)
> + type = TREE_TYPE (type);
> + if (TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) >= bitint_prec_large
> + && (large_huge.m_names == NULL
> + || !bitmap_bit_p (large_huge.m_names,
> + SSA_NAME_VERSION (lhs))))
> + continue;
> + }
> +
> + large_huge.lower_stmt (stmt);
> + }
> +
> + tree atype = NULL_TREE;
> + for (gphi_iterator gsi = gsi_start_phis (bb); !gsi_end_p (gsi);
> + gsi_next (&gsi))
> + {
> + gphi *phi = gsi.phi ();
> + tree lhs = gimple_phi_result (phi);
> + if (TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
> + || bitint_precision_kind (TREE_TYPE (lhs)) < bitint_prec_large)
> + continue;
> + int p1 = var_to_partition (large_huge.m_map, lhs);
> + gcc_assert (large_huge.m_vars[p1] != NULL_TREE);
> + tree v1 = large_huge.m_vars[p1];
> + for (unsigned i = 0; i < gimple_phi_num_args (phi); ++i)
> + {
> + tree arg = gimple_phi_arg_def (phi, i);
> + edge e = gimple_phi_arg_edge (phi, i);
> + gimple *g;
> + switch (TREE_CODE (arg))
> + {
> + case INTEGER_CST:
> + if (integer_zerop (arg) && VAR_P (v1))
> + {
> + tree zero = build_zero_cst (TREE_TYPE (v1));
> + g = gimple_build_assign (v1, zero);
> + gsi_insert_on_edge (e, g);
> + edge_insertions = true;
> + break;
> + }
> + int ext;
> + unsigned int min_prec, prec, rem;
> + tree c;
> + prec = TYPE_PRECISION (TREE_TYPE (arg));
> + rem = prec % (2 * limb_prec);
> + min_prec = bitint_min_cst_precision (arg, ext);
> + if (min_prec > prec - rem - 2 * limb_prec
> + && min_prec > (unsigned) limb_prec)
> + /* Constant which has enough significant bits that it
> + isn't worth trying to save .rodata space by extending
> + from a smaller one. */
> + min_prec = prec;
> + else
> + min_prec = CEIL (min_prec, limb_prec) * limb_prec;
> + if (min_prec == 0)
> + c = NULL_TREE;
> + else if (min_prec == prec)
> + c = tree_output_constant_def (arg);
> + else if (min_prec == (unsigned) limb_prec)
> + c = fold_convert (large_huge.m_limb_type, arg);
> + else
> + {
> + tree ctype = build_bitint_type (min_prec, 1);
> + c = tree_output_constant_def (fold_convert (ctype, arg));
> + }
> + if (c)
> + {
> + if (VAR_P (v1) && min_prec == prec)
> + {
> + tree v2 = build1 (VIEW_CONVERT_EXPR,
> + TREE_TYPE (v1), c);
> + g = gimple_build_assign (v1, v2);
> + gsi_insert_on_edge (e, g);
> + edge_insertions = true;
> + break;
> + }
> + if (TREE_CODE (TREE_TYPE (c)) == INTEGER_TYPE)
> + g = gimple_build_assign (build1 (VIEW_CONVERT_EXPR,
> + TREE_TYPE (c), v1),
> + c);
> + else
> + {
> + unsigned HOST_WIDE_INT nelts
> + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (c)))
> + / limb_prec;
> + tree vtype
> + = build_array_type_nelts (large_huge.m_limb_type,
> + nelts);
> + g = gimple_build_assign (build1 (VIEW_CONVERT_EXPR,
> + vtype, v1),
> + build1 (VIEW_CONVERT_EXPR,
> + vtype, c));
> + }
> + gsi_insert_on_edge (e, g);
> + }
> + if (ext == 0)
> + {
> + unsigned HOST_WIDE_INT nelts
> + = (tree_to_uhwi (TYPE_SIZE (TREE_TYPE (v1)))
> + - min_prec) / limb_prec;
> + tree vtype
> + = build_array_type_nelts (large_huge.m_limb_type,
> + nelts);
> + tree ptype = build_pointer_type (TREE_TYPE (v1));
> + tree off = fold_convert (ptype,
> + TYPE_SIZE_UNIT (TREE_TYPE (c)));
> + tree vd = build2 (MEM_REF, vtype,
> + build_fold_addr_expr (v1), off);
> + g = gimple_build_assign (vd, build_zero_cst (vtype));
> + }
> + else
> + {
> + tree vd = v1;
> + if (c)
> + {
> + tree ptype = build_pointer_type (TREE_TYPE (v1));
> + tree off
> + = fold_convert (ptype,
> + TYPE_SIZE_UNIT (TREE_TYPE (c)));
> + vd = build2 (MEM_REF, large_huge.m_limb_type,
> + build_fold_addr_expr (v1), off);
> + }
> + vd = build_fold_addr_expr (vd);
> + unsigned HOST_WIDE_INT nbytes
> + = tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (v1)));
> + if (c)
> + nbytes
> + -= tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (c)));
> + tree fn = builtin_decl_implicit (BUILT_IN_MEMSET);
> + g = gimple_build_call (fn, 3, vd,
> + integer_minus_one_node,
> + build_int_cst (sizetype,
> + nbytes));
> + }
> + gsi_insert_on_edge (e, g);
> + edge_insertions = true;
> + break;
> + default:
> + gcc_unreachable ();
> + case SSA_NAME:
> + if (gimple_code (SSA_NAME_DEF_STMT (arg)) == GIMPLE_NOP)
> + {
> + if (large_huge.m_names == NULL
> + || !bitmap_bit_p (large_huge.m_names,
> + SSA_NAME_VERSION (arg)))
> + continue;
> + }
> + int p2 = var_to_partition (large_huge.m_map, arg);
> + if (p1 == p2)
> + continue;
> + gcc_assert (large_huge.m_vars[p2] != NULL_TREE);
> + tree v2 = large_huge.m_vars[p2];
> + if (VAR_P (v1) && VAR_P (v2))
> + g = gimple_build_assign (v1, v2);
> + else if (VAR_P (v1))
> + g = gimple_build_assign (v1, build1 (VIEW_CONVERT_EXPR,
> + TREE_TYPE (v1), v2));
> + else if (VAR_P (v2))
> + g = gimple_build_assign (build1 (VIEW_CONVERT_EXPR,
> + TREE_TYPE (v2), v1), v2);
> + else
> + {
> + if (atype == NULL_TREE
> + || !tree_int_cst_equal (TYPE_SIZE (atype),
> + TYPE_SIZE (TREE_TYPE (lhs))))
> + {
> + unsigned HOST_WIDE_INT nelts
> + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (lhs)))
> + / limb_prec;
> + atype
> + = build_array_type_nelts (large_huge.m_limb_type,
> + nelts);
> + }
> + g = gimple_build_assign (build1 (VIEW_CONVERT_EXPR,
> + atype, v1),
> + build1 (VIEW_CONVERT_EXPR,
> + atype, v2));
> + }
> + gsi_insert_on_edge (e, g);
> + edge_insertions = true;
> + break;
> + }
> + }
> + }
> + }
> +
> + if (large_huge.m_names || has_large_huge)
> + {
> + gimple *nop = NULL;
> + for (i = 0; i < num_ssa_names; ++i)
> + {
> + tree s = ssa_name (i);
> + if (s == NULL_TREE)
> + continue;
> + tree type = TREE_TYPE (s);
> + if (TREE_CODE (type) == COMPLEX_TYPE)
> + type = TREE_TYPE (type);
> + if (TREE_CODE (type) == BITINT_TYPE
> + && bitint_precision_kind (type) >= bitint_prec_large)
> + {
> + if (large_huge.m_preserved
> + && bitmap_bit_p (large_huge.m_preserved,
> + SSA_NAME_VERSION (s)))
> + continue;
> + gimple *g = SSA_NAME_DEF_STMT (s);
> + if (gimple_code (g) == GIMPLE_NOP)
> + {
> + if (SSA_NAME_VAR (s))
> + set_ssa_default_def (cfun, SSA_NAME_VAR (s), NULL_TREE);
> + release_ssa_name (s);
> + continue;
> + }
> + if (gimple_code (g) != GIMPLE_ASM)
> + {
> + gimple_stmt_iterator gsi = gsi_for_stmt (g);
> + bool save_vta = flag_var_tracking_assignments;
> + flag_var_tracking_assignments = false;
> + gsi_remove (&gsi, true);
> + flag_var_tracking_assignments = save_vta;
> + }
> + if (nop == NULL)
> + nop = gimple_build_nop ();
> + SSA_NAME_DEF_STMT (s) = nop;
> + release_ssa_name (s);
> + }
> + }
> + if (optimize)
> + disable_ranger (cfun);
> + }
> +
> + if (edge_insertions)
> + gsi_commit_edge_inserts ();
> +
> + return ret;
> +}
> +
> +namespace {
> +
> +const pass_data pass_data_lower_bitint =
> +{
> + GIMPLE_PASS, /* type */
> + "bitintlower", /* name */
> + OPTGROUP_NONE, /* optinfo_flags */
> + TV_NONE, /* tv_id */
> + PROP_ssa, /* properties_required */
> + PROP_gimple_lbitint, /* properties_provided */
> + 0, /* properties_destroyed */
> + 0, /* todo_flags_start */
> + 0, /* todo_flags_finish */
> +};
> +
> +class pass_lower_bitint : public gimple_opt_pass
> +{
> +public:
> + pass_lower_bitint (gcc::context *ctxt)
> + : gimple_opt_pass (pass_data_lower_bitint, ctxt)
> + {}
> +
> + /* opt_pass methods: */
> + opt_pass * clone () final override { return new pass_lower_bitint (m_ctxt); }
> + unsigned int execute (function *) final override
> + {
> + return gimple_lower_bitint ();
> + }
> +
> +}; // class pass_lower_bitint
> +
> +} // anon namespace
> +
> +gimple_opt_pass *
> +make_pass_lower_bitint (gcc::context *ctxt)
> +{
> + return new pass_lower_bitint (ctxt);
> +}
> +
> +
> +namespace {
> +
> +const pass_data pass_data_lower_bitint_O0 =
> +{
> + GIMPLE_PASS, /* type */
> + "bitintlower0", /* name */
> + OPTGROUP_NONE, /* optinfo_flags */
> + TV_NONE, /* tv_id */
> + PROP_cfg, /* properties_required */
> + PROP_gimple_lbitint, /* properties_provided */
> + 0, /* properties_destroyed */
> + 0, /* todo_flags_start */
> + 0, /* todo_flags_finish */
> +};
> +
> +class pass_lower_bitint_O0 : public gimple_opt_pass
> +{
> +public:
> + pass_lower_bitint_O0 (gcc::context *ctxt)
> + : gimple_opt_pass (pass_data_lower_bitint_O0, ctxt)
> + {}
> +
> + /* opt_pass methods: */
> + bool gate (function *fun) final override
> + {
> + /* With errors, normal optimization passes are not run. If we don't
> + lower bitint operations at all, rtl expansion will abort. */
> + return !(fun->curr_properties & PROP_gimple_lbitint);
> + }
> +
> + unsigned int execute (function *) final override
> + {
> + return gimple_lower_bitint ();
> + }
> +
> +}; // class pass_lower_bitint_O0
> +
> +} // anon namespace
> +
> +gimple_opt_pass *
> +make_pass_lower_bitint_O0 (gcc::context *ctxt)
> +{
> + return new pass_lower_bitint_O0 (ctxt);
> +}
> --- gcc/gimple-lower-bitint.h.jj 2023-07-27 15:03:24.287233711 +0200
> +++ gcc/gimple-lower-bitint.h 2023-07-27 15:03:24.287233711 +0200
> @@ -0,0 +1,31 @@
> +/* Header file for gimple-lower-bitint.cc exports.
> + Copyright (C) 2023 Free Software Foundation, Inc.
> +
> +This file is part of GCC.
> +
> +GCC is free software; you can redistribute it and/or modify it under
> +the terms of the GNU General Public License as published by the Free
> +Software Foundation; either version 3, or (at your option) any later
> +version.
> +
> +GCC is distributed in the hope that it will be useful, but WITHOUT ANY
> +WARRANTY; without even the implied warranty of MERCHANTABILITY or
> +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
> +for more details.
> +
> +You should have received a copy of the GNU General Public License
> +along with GCC; see the file COPYING3. If not see
> +<http://www.gnu.org/licenses/>. */
> +
> +#ifndef GCC_GIMPLE_LOWER_BITINT_H
> +#define GCC_GIMPLE_LOWER_BITINT_H
> +
> +class live_track;
> +struct ssa_conflicts;
> +extern void build_bitint_stmt_ssa_conflicts (gimple *, live_track *,
> + ssa_conflicts *, bitmap,
> + void (*) (live_track *, tree,
> + ssa_conflicts *),
> + void (*) (live_track *, tree));
> +
> +#endif /* GCC_GIMPLE_LOWER_BITINT_H */
> --- gcc/internal-fn.cc.jj 2023-07-24 17:48:26.494040524 +0200
> +++ gcc/internal-fn.cc 2023-07-27 15:03:24.288233697 +0200
> @@ -981,8 +981,38 @@ expand_arith_overflow_result_store (tree
> /* Helper for expand_*_overflow. Store RES into TARGET. */
>
> static void
> -expand_ubsan_result_store (rtx target, rtx res)
> +expand_ubsan_result_store (tree lhs, rtx target, scalar_int_mode mode,
> + rtx res, rtx_code_label *do_error)
> {
> + if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> + && TYPE_PRECISION (TREE_TYPE (lhs)) < GET_MODE_PRECISION (mode))
> + {
> + int uns = TYPE_UNSIGNED (TREE_TYPE (lhs));
> + int prec = TYPE_PRECISION (TREE_TYPE (lhs));
> + int tgtprec = GET_MODE_PRECISION (mode);
> + rtx resc = gen_reg_rtx (mode), lres;
> + emit_move_insn (resc, res);
> + if (uns)
> + {
> + rtx mask
> + = immed_wide_int_const (wi::shifted_mask (0, prec, false, tgtprec),
> + mode);
> + lres = expand_simple_binop (mode, AND, res, mask, NULL_RTX,
> + true, OPTAB_LIB_WIDEN);
> + }
> + else
> + {
> + lres = expand_shift (LSHIFT_EXPR, mode, res, tgtprec - prec,
> + NULL_RTX, 1);
> + lres = expand_shift (RSHIFT_EXPR, mode, lres, tgtprec - prec,
> + NULL_RTX, 0);
> + }
> + if (lres != res)
> + emit_move_insn (res, lres);
> + do_compare_rtx_and_jump (res, resc,
> + NE, true, mode, NULL_RTX, NULL, do_error,
> + profile_probability::very_unlikely ());
> + }
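For the archive, the new padding-bit check in expand_ubsan_result_store can be sketched in plain C (assuming a 64-bit target mode; the helper name is mine, not from the patch):

```c
#include <stdint.h>

/* Minimal sketch of the precision check above: sign- or zero-extend
   the low PREC bits of RES and compare with the original value; a
   mismatch means the result does not fit the declared _BitInt(PREC)
   and the error path is taken.  */
static int
fits_bitint_p (int64_t res, int prec, int uns)
{
  const int tgtprec = 64;
  int64_t lres;
  if (uns)
    {
      uint64_t mask
	= prec == 64 ? ~(uint64_t) 0 : ((uint64_t) 1 << prec) - 1;
      lres = (int64_t) ((uint64_t) res & mask);
    }
  else
    /* Shift left, then arithmetic shift right to sign-extend bit
       PREC-1 into the upper bits.  */
    lres = (int64_t) ((uint64_t) res << (tgtprec - prec))
	   >> (tgtprec - prec);
  return lres == res;
}
```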
> if (GET_CODE (target) == SUBREG && SUBREG_PROMOTED_VAR_P (target))
> /* If this is a scalar in a register that is stored in a wider mode
> than the declared mode, compute the result into its declared mode
> @@ -1431,7 +1461,7 @@ expand_addsub_overflow (location_t loc,
> if (lhs)
> {
> if (is_ubsan)
> - expand_ubsan_result_store (target, res);
> + expand_ubsan_result_store (lhs, target, mode, res, do_error);
> else
> {
> if (do_xor)
> @@ -1528,7 +1558,7 @@ expand_neg_overflow (location_t loc, tre
> if (lhs)
> {
> if (is_ubsan)
> - expand_ubsan_result_store (target, res);
> + expand_ubsan_result_store (lhs, target, mode, res, do_error);
> else
> expand_arith_overflow_result_store (lhs, target, mode, res);
> }
> @@ -1646,6 +1676,12 @@ expand_mul_overflow (location_t loc, tre
>
> int pos_neg0 = get_range_pos_neg (arg0);
> int pos_neg1 = get_range_pos_neg (arg1);
> + /* Unsigned types with smaller than mode precision, even if they have most
> + significant bit set, are still zero-extended. */
> + if (uns0_p && TYPE_PRECISION (TREE_TYPE (arg0)) < GET_MODE_PRECISION (mode))
> + pos_neg0 = 1;
> + if (uns1_p && TYPE_PRECISION (TREE_TYPE (arg1)) < GET_MODE_PRECISION (mode))
> + pos_neg1 = 1;
>
> /* s1 * u2 -> ur */
> if (!uns0_p && uns1_p && unsr_p)
> @@ -2414,7 +2450,7 @@ expand_mul_overflow (location_t loc, tre
> if (lhs)
> {
> if (is_ubsan)
> - expand_ubsan_result_store (target, res);
> + expand_ubsan_result_store (lhs, target, mode, res, do_error);
> else
> expand_arith_overflow_result_store (lhs, target, mode, res);
> }
> @@ -4899,3 +4935,76 @@ expand_MASK_CALL (internal_fn, gcall *)
> /* This IFN should only exist between ifcvt and vect passes. */
> gcc_unreachable ();
> }
> +
> +void
> +expand_MULBITINT (internal_fn, gcall *stmt)
> +{
> + rtx_mode_t args[6];
> + for (int i = 0; i < 6; i++)
> + args[i] = rtx_mode_t (expand_normal (gimple_call_arg (stmt, i)),
> + (i & 1) ? SImode : ptr_mode);
> + rtx fun = init_one_libfunc ("__mulbitint3");
> + emit_library_call_value_1 (0, fun, NULL_RTX, LCT_NORMAL, VOIDmode, 6, args);
> +}
> +
> +void
> +expand_DIVMODBITINT (internal_fn, gcall *stmt)
> +{
> + rtx_mode_t args[8];
> + for (int i = 0; i < 8; i++)
> + args[i] = rtx_mode_t (expand_normal (gimple_call_arg (stmt, i)),
> + (i & 1) ? SImode : ptr_mode);
> + rtx fun = init_one_libfunc ("__divmodbitint4");
> + emit_library_call_value_1 (0, fun, NULL_RTX, LCT_NORMAL, VOIDmode, 8, args);
> +}
> +
> +void
> +expand_FLOATTOBITINT (internal_fn, gcall *stmt)
> +{
> + machine_mode mode = TYPE_MODE (TREE_TYPE (gimple_call_arg (stmt, 2)));
> + rtx arg0 = expand_normal (gimple_call_arg (stmt, 0));
> + rtx arg1 = expand_normal (gimple_call_arg (stmt, 1));
> + rtx arg2 = expand_normal (gimple_call_arg (stmt, 2));
> + const char *mname = GET_MODE_NAME (mode);
> + unsigned mname_len = strlen (mname);
> + int len = 12 + mname_len;
> + char *libfunc_name = XALLOCAVEC (char, len);
> + char *p = libfunc_name;
> + const char *q;
> + memcpy (p, "__fix", 5);
> + p += 5;
> + for (q = mname; *q; q++)
> + *p++ = TOLOWER (*q);
> + memcpy (p, "bitint", 7);
> + rtx fun = init_one_libfunc (libfunc_name);
> + emit_library_call (fun, LCT_NORMAL, VOIDmode, arg0, ptr_mode, arg1,
> + SImode, arg2, mode);
> +}
> +
> +void
> +expand_BITINTTOFLOAT (internal_fn, gcall *stmt)
> +{
> + tree lhs = gimple_call_lhs (stmt);
> + if (!lhs)
> + return;
> + machine_mode mode = TYPE_MODE (TREE_TYPE (lhs));
> + rtx arg0 = expand_normal (gimple_call_arg (stmt, 0));
> + rtx arg1 = expand_normal (gimple_call_arg (stmt, 1));
> + const char *mname = GET_MODE_NAME (mode);
> + unsigned mname_len = strlen (mname);
> + int len = 14 + mname_len;
> + char *libfunc_name = XALLOCAVEC (char, len);
> + char *p = libfunc_name;
> + const char *q;
> + memcpy (p, "__floatbitint", 13);
> + p += 13;
> + for (q = mname; *q; q++)
> + *p++ = TOLOWER (*q);
> + *p = '\0';
> + rtx fun = init_one_libfunc (libfunc_name);
> + rtx target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
> + rtx val = emit_library_call_value (fun, target, LCT_PURE, mode,
> + arg0, ptr_mode, arg1, SImode);
> + if (val != target)
> + emit_move_insn (target, val);
> +}
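For reference, the libfunc-name mangling used by expand_FLOATTOBITINT and expand_BITINTTOFLOAT above can be sketched in plain C (the helper name and mode strings below are illustrative, not from the patch):

```c
#include <ctype.h>
#include <string.h>

/* Sketch of the name construction above: PREFIX ("__fix" or
   "__floatbitint"), then the lowercased mode name, then SUFFIX
   ("bitint" for the fix case, empty otherwise).  E.g. DFmode
   yields "__fixdfbitint" and "__floatbitintdf".  */
static void
bitint_libfunc_name (char *buf, const char *prefix, const char *mname,
		     const char *suffix)
{
  char *p = buf + strlen (prefix);
  memcpy (buf, prefix, strlen (prefix));
  for (const char *q = mname; *q; q++)
    *p++ = tolower ((unsigned char) *q);
  strcpy (p, suffix);
}
```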
> --- gcc/internal-fn.def.jj 2023-07-24 17:48:26.494040524 +0200
> +++ gcc/internal-fn.def 2023-07-27 15:03:24.259234103 +0200
> @@ -559,6 +559,12 @@ DEF_INTERNAL_FN (ASSUME, ECF_CONST | ECF
> /* For if-conversion of inbranch SIMD clones. */
> DEF_INTERNAL_FN (MASK_CALL, ECF_NOVOPS, NULL)
>
> +/* _BitInt support. */
> +DEF_INTERNAL_FN (MULBITINT, ECF_LEAF | ECF_NOTHROW, ". O . R . R . ")
> +DEF_INTERNAL_FN (DIVMODBITINT, ECF_LEAF, ". O . O . R . R . ")
> +DEF_INTERNAL_FN (FLOATTOBITINT, ECF_LEAF | ECF_NOTHROW, ". O . . ")
> +DEF_INTERNAL_FN (BITINTTOFLOAT, ECF_PURE | ECF_LEAF, ". R . ")
> +
> #undef DEF_INTERNAL_INT_FN
> #undef DEF_INTERNAL_FLT_FN
> #undef DEF_INTERNAL_FLT_FLOATN_FN
> --- gcc/internal-fn.h.jj 2023-07-17 09:07:42.071283977 +0200
> +++ gcc/internal-fn.h 2023-07-27 15:03:24.231234494 +0200
> @@ -256,6 +256,10 @@ extern void expand_SPACESHIP (internal_f
> extern void expand_TRAP (internal_fn, gcall *);
> extern void expand_ASSUME (internal_fn, gcall *);
> extern void expand_MASK_CALL (internal_fn, gcall *);
> +extern void expand_MULBITINT (internal_fn, gcall *);
> +extern void expand_DIVMODBITINT (internal_fn, gcall *);
> +extern void expand_FLOATTOBITINT (internal_fn, gcall *);
> +extern void expand_BITINTTOFLOAT (internal_fn, gcall *);
>
> extern bool vectorized_internal_fn_supported_p (internal_fn, tree);
>
> --- gcc/lto-streamer-in.cc.jj 2023-07-17 09:07:42.078283882 +0200
> +++ gcc/lto-streamer-in.cc 2023-07-27 15:03:24.255234159 +0200
> @@ -1888,7 +1888,7 @@ lto_input_tree_1 (class lto_input_block
>
> for (i = 0; i < len; i++)
> a[i] = streamer_read_hwi (ib);
> - gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
> + gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION);
OK to push separately.
> result = wide_int_to_tree (type, wide_int::from_array
> (a, len, TYPE_PRECISION (type)));
> streamer_tree_cache_append (data_in->reader_cache, result, hash);
> --- gcc/Makefile.in.jj 2023-07-27 15:02:53.744661238 +0200
> +++ gcc/Makefile.in 2023-07-27 15:03:24.281233795 +0200
> @@ -1453,6 +1453,7 @@ OBJS = \
> gimple-loop-jam.o \
> gimple-loop-versioning.o \
> gimple-low.o \
> + gimple-lower-bitint.o \
> gimple-predicate-analysis.o \
> gimple-pretty-print.o \
> gimple-range.o \
> --- gcc/match.pd.jj 2023-07-24 17:49:05.496533445 +0200
> +++ gcc/match.pd 2023-07-27 15:03:24.225234577 +0200
> @@ -6433,6 +6433,7 @@ (define_operator_list SYNC_FETCH_AND_AND
> - 1)); }))))
> (if (wi::to_wide (cst) == signed_max
> && TYPE_UNSIGNED (arg1_type)
> + && TYPE_MODE (arg1_type) != BLKmode
> /* We will flip the signedness of the comparison operator
> associated with the mode of @1, so the sign bit is
> specified by this mode. Check that @1 is the signed
> --- gcc/passes.def.jj 2023-07-17 09:07:42.092283692 +0200
> +++ gcc/passes.def 2023-07-27 15:03:24.287233711 +0200
> @@ -237,6 +237,7 @@ along with GCC; see the file COPYING3.
> NEXT_PASS (pass_tail_recursion);
> NEXT_PASS (pass_ch);
> NEXT_PASS (pass_lower_complex);
> + NEXT_PASS (pass_lower_bitint);
> NEXT_PASS (pass_sra);
> /* The dom pass will also resolve all __builtin_constant_p calls
> that are still there to 0. This has to be done after some
> @@ -386,6 +387,7 @@ along with GCC; see the file COPYING3.
> NEXT_PASS (pass_strip_predict_hints, false /* early_p */);
> /* Lower remaining pieces of GIMPLE. */
> NEXT_PASS (pass_lower_complex);
> + NEXT_PASS (pass_lower_bitint);
> NEXT_PASS (pass_lower_vector_ssa);
> NEXT_PASS (pass_lower_switch);
> /* Perform simple scalar cleanup which is constant/copy propagation. */
> @@ -429,6 +431,7 @@ along with GCC; see the file COPYING3.
> NEXT_PASS (pass_lower_vaarg);
> NEXT_PASS (pass_lower_vector);
> NEXT_PASS (pass_lower_complex_O0);
> + NEXT_PASS (pass_lower_bitint_O0);
> NEXT_PASS (pass_sancov_O0);
> NEXT_PASS (pass_lower_switch_O0);
> NEXT_PASS (pass_asan_O0);
> --- gcc/pretty-print.h.jj 2023-06-26 09:27:04.352366471 +0200
> +++ gcc/pretty-print.h 2023-07-27 15:03:24.281233795 +0200
> @@ -336,8 +336,23 @@ pp_get_prefix (const pretty_printer *pp)
> #define pp_wide_int(PP, W, SGN) \
> do \
> { \
> - print_dec (W, pp_buffer (PP)->digit_buffer, SGN); \
> - pp_string (PP, pp_buffer (PP)->digit_buffer); \
> + const wide_int_ref &pp_wide_int_ref = (W); \
> + unsigned int pp_wide_int_prec \
> + = pp_wide_int_ref.get_precision (); \
> + if ((pp_wide_int_prec + 3) / 4 \
> + > sizeof (pp_buffer (PP)->digit_buffer) - 3) \
> + { \
> + char *pp_wide_int_buf \
> + = XALLOCAVEC (char, (pp_wide_int_prec + 3) / 4 + 3);\
> + print_dec (pp_wide_int_ref, pp_wide_int_buf, SGN); \
> + pp_string (PP, pp_wide_int_buf); \
> + } \
> + else \
> + { \
> + print_dec (pp_wide_int_ref, \
> + pp_buffer (PP)->digit_buffer, SGN); \
> + pp_string (PP, pp_buffer (PP)->digit_buffer); \
> + } \
> } \
> while (0)
> #define pp_vrange(PP, R) \
> --- gcc/stor-layout.cc.jj 2023-05-30 17:52:34.509856813 +0200
> +++ gcc/stor-layout.cc 2023-07-27 15:03:24.295233599 +0200
> @@ -2393,6 +2393,64 @@ layout_type (tree type)
> break;
> }
>
> + case BITINT_TYPE:
> + {
> + struct bitint_info info;
> + int cnt;
> + gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type), &info));
> + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> + if (TYPE_PRECISION (type) <= GET_MODE_PRECISION (limb_mode))
> + {
> + SET_TYPE_MODE (type, limb_mode);
> + cnt = 1;
> + }
> + else
> + {
> + SET_TYPE_MODE (type, BLKmode);
> + cnt = CEIL (TYPE_PRECISION (type), GET_MODE_PRECISION (limb_mode));
> + }
> + TYPE_SIZE (type) = bitsize_int (cnt * GET_MODE_BITSIZE (limb_mode));
> + TYPE_SIZE_UNIT (type) = size_int (cnt * GET_MODE_SIZE (limb_mode));
> + SET_TYPE_ALIGN (type, GET_MODE_ALIGNMENT (limb_mode));
So when a target allows, say, TImode, we don't align to that larger mode?
Might be worth documenting in the target hook that the alignment,
which I think is part of the ABI, is specified by the limb mode.
Are arrays of _BitInt a thing?  _BitInt(8)[10] would have quite some
padding then, which might be unexpected?
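To make the padding concern concrete, this is the sizing rule as the hunk stands, in plain C (assuming a 64-bit limb; the helper name is mine):

```c
/* Precision is rounded up to a whole number of limbs, so with a
   64-bit limb a _BitInt(8) element occupies 64 bits; an array
   _BitInt(8)[10] would then take 640 bits, mostly padding.  */
static unsigned
bitint_size_bits (unsigned prec, unsigned limb_bits)
{
  unsigned cnt = (prec + limb_bits - 1) / limb_bits;	/* CEIL */
  return cnt * limb_bits;
}
```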
> + if (cnt > 1)
> + {
> + /* Use same mode as compute_record_mode would use for a structure
> + containing cnt limb_mode elements. */
> + machine_mode mode = mode_for_size_tree (TYPE_SIZE (type),
> + MODE_INT, 1).else_blk ();
> + if (mode == BLKmode)
> + break;
> + finalize_type_size (type);
> + SET_TYPE_MODE (type, mode);
> + if (STRICT_ALIGNMENT
> + && !(TYPE_ALIGN (type) >= BIGGEST_ALIGNMENT
> + || TYPE_ALIGN (type) >= GET_MODE_ALIGNMENT (mode)))
> + {
> + /* If this is the only reason this type is BLKmode, then
> + don't force containing types to be BLKmode. */
> + TYPE_NO_FORCE_BLK (type) = 1;
> + SET_TYPE_MODE (type, BLKmode);
> + }
> + if (TYPE_NEXT_VARIANT (type) || type != TYPE_MAIN_VARIANT (type))
> + for (tree variant = TYPE_MAIN_VARIANT (type);
> + variant != NULL_TREE;
> + variant = TYPE_NEXT_VARIANT (variant))
> + {
> + SET_TYPE_MODE (variant, mode);
> + if (STRICT_ALIGNMENT
> + && !(TYPE_ALIGN (variant) >= BIGGEST_ALIGNMENT
> + || (TYPE_ALIGN (variant)
> + >= GET_MODE_ALIGNMENT (mode))))
> + {
> + TYPE_NO_FORCE_BLK (variant) = 1;
> + SET_TYPE_MODE (variant, BLKmode);
> + }
> + }
> + return;
> + }
> + break;
> + }
> +
> case REAL_TYPE:
> {
> /* Allow the caller to choose the type mode, which is how decimal
> @@ -2417,6 +2475,18 @@ layout_type (tree type)
>
> case COMPLEX_TYPE:
> TYPE_UNSIGNED (type) = TYPE_UNSIGNED (TREE_TYPE (type));
> + if (TYPE_MODE (TREE_TYPE (type)) == BLKmode)
> + {
> + gcc_checking_assert (TREE_CODE (TREE_TYPE (type)) == BITINT_TYPE);
> + SET_TYPE_MODE (type, BLKmode);
> + TYPE_SIZE (type)
> + = int_const_binop (MULT_EXPR, TYPE_SIZE (TREE_TYPE (type)),
> + bitsize_int (2));
> + TYPE_SIZE_UNIT (type)
> + = int_const_binop (MULT_EXPR, TYPE_SIZE_UNIT (TREE_TYPE (type)),
> + bitsize_int (2));
> + break;
> + }
> SET_TYPE_MODE (type,
> GET_MODE_COMPLEX_MODE (TYPE_MODE (TREE_TYPE (type))));
>
> --- gcc/target.def.jj 2023-05-30 17:52:34.510856799 +0200
> +++ gcc/target.def 2023-07-27 15:03:24.298233557 +0200
> @@ -6241,6 +6241,15 @@ when @var{type} is @code{EXCESS_PRECISIO
> enum flt_eval_method, (enum excess_precision_type type),
> default_excess_precision)
>
> +/* Return true if _BitInt(N) is supported and fill details about it into
> + *INFO. */
> +DEFHOOK
> +(bitint_type_info,
> + "This target hook returns true if _BitInt(N) is supported and provides some\n\
> +details on it.",
> + bool, (int n, struct bitint_info *info),
> + default_bitint_type_info)
> +
> HOOK_VECTOR_END (c)
>
> /* Functions specific to the C++ frontend. */
> --- gcc/target.h.jj 2023-03-13 23:01:42.078959188 +0100
> +++ gcc/target.h 2023-07-27 15:03:24.273233907 +0200
> @@ -68,6 +68,20 @@ union cumulative_args_t { void *p; };
>
> #endif /* !CHECKING_P */
>
> +/* Target properties of _BitInt(N) type. _BitInt(N) is to be represented
> + as series of limb_mode CEIL (N, GET_MODE_PRECISION (limb_mode)) limbs,
> + ordered from least significant to most significant if !big_endian,
> + otherwise from most significant to least significant. If extended is
> + false, the bits above or equal to N are undefined when stored in a register
> + or memory, otherwise they are zero or sign extended depending on if
> + it is unsigned _BitInt(N) or _BitInt(N) / signed _BitInt(N). */
> +
I think this belongs in tm.texi (or should be duplicated there).
> +struct bitint_info {
> + machine_mode limb_mode;
> + bool big_endian;
> + bool extended;
> +};
> +
> /* Types of memory operation understood by the "by_pieces" infrastructure.
> Used by the TARGET_USE_BY_PIECES_INFRASTRUCTURE_P target hook and
> internally by the functions in expr.cc. */
> --- gcc/targhooks.cc.jj 2023-05-01 23:07:05.366414623 +0200
> +++ gcc/targhooks.cc 2023-07-27 15:03:24.280233809 +0200
> @@ -2595,6 +2595,14 @@ default_excess_precision (enum excess_pr
> return FLT_EVAL_METHOD_PROMOTE_TO_FLOAT;
> }
>
> +/* Return true if _BitInt(N) is supported and fill details about it into
> + *INFO. */
> +bool
> +default_bitint_type_info (int, struct bitint_info *)
> +{
> + return false;
> +}
> +
> /* Default implementation for
> TARGET_STACK_CLASH_PROTECTION_ALLOCA_PROBE_RANGE. */
> HOST_WIDE_INT
> --- gcc/targhooks.h.jj 2023-05-01 23:07:05.366414623 +0200
> +++ gcc/targhooks.h 2023-07-27 15:03:24.258234117 +0200
> @@ -284,6 +284,7 @@ extern unsigned int default_min_arithmet
>
> extern enum flt_eval_method
> default_excess_precision (enum excess_precision_type ATTRIBUTE_UNUSED);
> +extern bool default_bitint_type_info (int, struct bitint_info *);
> extern HOST_WIDE_INT default_stack_clash_protection_alloca_probe_range (void);
> extern void default_select_early_remat_modes (sbitmap);
> extern tree default_preferred_else_value (unsigned, tree, unsigned, tree *);
> --- gcc/tree-pass.h.jj 2023-07-17 09:07:42.140283040 +0200
> +++ gcc/tree-pass.h 2023-07-27 15:03:24.218234675 +0200
> @@ -229,6 +229,7 @@ protected:
> have completed. */
> #define PROP_assumptions_done (1 << 19) /* Assume function kept
> around. */
> +#define PROP_gimple_lbitint (1 << 20) /* lowered large _BitInt */
>
> #define PROP_gimple \
> (PROP_gimple_any | PROP_gimple_lcf | PROP_gimple_leh | PROP_gimple_lomp)
> @@ -420,6 +421,8 @@ extern gimple_opt_pass *make_pass_strip_
> extern gimple_opt_pass *make_pass_rebuild_frequencies (gcc::context *ctxt);
> extern gimple_opt_pass *make_pass_lower_complex_O0 (gcc::context *ctxt);
> extern gimple_opt_pass *make_pass_lower_complex (gcc::context *ctxt);
> +extern gimple_opt_pass *make_pass_lower_bitint_O0 (gcc::context *ctxt);
> +extern gimple_opt_pass *make_pass_lower_bitint (gcc::context *ctxt);
> extern gimple_opt_pass *make_pass_lower_switch (gcc::context *ctxt);
> extern gimple_opt_pass *make_pass_lower_switch_O0 (gcc::context *ctxt);
> extern gimple_opt_pass *make_pass_lower_vector (gcc::context *ctxt);
> --- gcc/tree-pretty-print.cc.jj 2023-06-06 20:02:35.676210599 +0200
> +++ gcc/tree-pretty-print.cc 2023-07-27 15:03:24.296233585 +0200
> @@ -1924,6 +1924,7 @@ dump_generic_node (pretty_printer *pp, t
> case VECTOR_TYPE:
> case ENUMERAL_TYPE:
> case BOOLEAN_TYPE:
> + case BITINT_TYPE:
> case OPAQUE_TYPE:
> {
> unsigned int quals = TYPE_QUALS (node);
> @@ -2038,6 +2039,14 @@ dump_generic_node (pretty_printer *pp, t
> pp_decimal_int (pp, TYPE_PRECISION (node));
> pp_greater (pp);
> }
> + else if (TREE_CODE (node) == BITINT_TYPE)
> + {
> + if (TYPE_UNSIGNED (node))
> + pp_string (pp, "unsigned ");
> + pp_string (pp, "_BitInt(");
> + pp_decimal_int (pp, TYPE_PRECISION (node));
> + pp_right_paren (pp);
> + }
> else if (TREE_CODE (node) == VOID_TYPE)
> pp_string (pp, "void");
> else
> @@ -2234,8 +2243,18 @@ dump_generic_node (pretty_printer *pp, t
> pp_minus (pp);
> val = -val;
> }
> - print_hex (val, pp_buffer (pp)->digit_buffer);
> - pp_string (pp, pp_buffer (pp)->digit_buffer);
> + unsigned int prec = val.get_precision ();
> + if ((prec + 3) / 4 > sizeof (pp_buffer (pp)->digit_buffer) - 3)
> + {
> + char *buf = XALLOCAVEC (char, (prec + 3) / 4 + 3);
> + print_hex (val, buf);
> + pp_string (pp, buf);
> + }
> + else
> + {
> + print_hex (val, pp_buffer (pp)->digit_buffer);
> + pp_string (pp, pp_buffer (pp)->digit_buffer);
> + }
> }
> if ((flags & TDF_GIMPLE)
> && ! (POINTER_TYPE_P (TREE_TYPE (node))
> --- gcc/tree-ssa-coalesce.cc.jj 2023-05-20 15:31:09.229661068 +0200
> +++ gcc/tree-ssa-coalesce.cc 2023-07-27 15:03:24.254234173 +0200
> @@ -38,6 +38,7 @@ along with GCC; see the file COPYING3.
> #include "explow.h"
> #include "tree-dfa.h"
> #include "stor-layout.h"
> +#include "gimple-lower-bitint.h"
>
> /* This set of routines implements a coalesce_list. This is an object which
> is used to track pairs of ssa_names which are desirable to coalesce
> @@ -914,6 +915,14 @@ build_ssa_conflict_graph (tree_live_info
> else if (is_gimple_debug (stmt))
> continue;
>
> + if (map->bitint)
> + {
> + build_bitint_stmt_ssa_conflicts (stmt, live, graph, map->bitint,
> + live_track_process_def,
> + live_track_process_use);
> + continue;
> + }
> +
> /* For stmts with more than one SSA_NAME definition pretend all the
> SSA_NAME outputs but the first one are live at this point, so
> that conflicts are added in between all those even when they are
> @@ -1058,6 +1067,8 @@ create_coalesce_list_for_region (var_map
> if (virtual_operand_p (res))
> continue;
> ver = SSA_NAME_VERSION (res);
> + if (map->bitint && !bitmap_bit_p (map->bitint, ver))
> + continue;
>
> /* Register ssa_names and coalesces between the args and the result
> of all PHI. */
> @@ -1106,6 +1117,8 @@ create_coalesce_list_for_region (var_map
> {
> v1 = SSA_NAME_VERSION (lhs);
> v2 = SSA_NAME_VERSION (rhs1);
> + if (map->bitint && !bitmap_bit_p (map->bitint, v1))
> + break;
> cost = coalesce_cost_bb (bb);
> add_coalesce (cl, v1, v2, cost);
> bitmap_set_bit (used_in_copy, v1);
> @@ -1124,12 +1137,16 @@ create_coalesce_list_for_region (var_map
> if (!rhs1)
> break;
> tree lhs = ssa_default_def (cfun, res);
> + if (map->bitint && !lhs)
> + break;
> gcc_assert (lhs);
> if (TREE_CODE (rhs1) == SSA_NAME
> && gimple_can_coalesce_p (lhs, rhs1))
> {
> v1 = SSA_NAME_VERSION (lhs);
> v2 = SSA_NAME_VERSION (rhs1);
> + if (map->bitint && !bitmap_bit_p (map->bitint, v1))
> + break;
> cost = coalesce_cost_bb (bb);
> add_coalesce (cl, v1, v2, cost);
> bitmap_set_bit (used_in_copy, v1);
> @@ -1177,6 +1194,8 @@ create_coalesce_list_for_region (var_map
>
> v1 = SSA_NAME_VERSION (outputs[match]);
> v2 = SSA_NAME_VERSION (input);
> + if (map->bitint && !bitmap_bit_p (map->bitint, v1))
> + continue;
>
> if (gimple_can_coalesce_p (outputs[match], input))
> {
> @@ -1651,6 +1670,33 @@ compute_optimized_partition_bases (var_m
> }
> }
>
> + if (map->bitint
> + && flag_tree_coalesce_vars
> + && (optimize > 1 || parts < 500))
> + for (i = 0; i < (unsigned) parts; ++i)
> + {
> + tree s1 = partition_to_var (map, i);
> + int p1 = partition_find (tentative, i);
> + for (unsigned j = i + 1; j < (unsigned) parts; ++j)
> + {
> + tree s2 = partition_to_var (map, j);
> + if (s1 == s2)
> + continue;
> + if (tree_int_cst_equal (TYPE_SIZE (TREE_TYPE (s1)),
> + TYPE_SIZE (TREE_TYPE (s2))))
> + {
> + int p2 = partition_find (tentative, j);
> +
> + if (p1 == p2)
> + continue;
> +
> + partition_union (tentative, p1, p2);
> + if (partition_find (tentative, i) != p1)
> + break;
> + }
> + }
> + }
> +
> map->partition_to_base_index = XCNEWVEC (int, parts);
> auto_vec<unsigned int> index_map (parts);
> if (parts)
> @@ -1692,6 +1738,101 @@ compute_optimized_partition_bases (var_m
> partition_delete (tentative);
> }
>
> +/* For the bitint lowering pass, try harder. Partitions which contain
> + SSA_NAME default def of a PARM_DECL or have RESULT_DECL need to have
> + compatible types because they will use that RESULT_DECL or PARM_DECL.
> + Other partitions can have even incompatible _BitInt types, as long
> + as they have the same size - those will use VAR_DECLs which are just
> + arrays of the limbs. */
> +
> +static void
> +coalesce_bitint (var_map map, ssa_conflicts *graph)
> +{
> + unsigned n = num_var_partitions (map);
> + if (optimize <= 1 && n > 500)
> + return;
> +
> + bool try_same_size = false;
> + FILE *debug_file = (dump_flags & TDF_DETAILS) ? dump_file : NULL;
> + for (unsigned i = 0; i < n; ++i)
> + {
> + tree s1 = partition_to_var (map, i);
> + if ((unsigned) var_to_partition (map, s1) != i)
> + continue;
> + int v1 = SSA_NAME_VERSION (s1);
> + for (unsigned j = i + 1; j < n; ++j)
> + {
> + tree s2 = partition_to_var (map, j);
> + if (s1 == s2 || (unsigned) var_to_partition (map, s2) != j)
> + continue;
> + if (!types_compatible_p (TREE_TYPE (s1), TREE_TYPE (s2)))
> + {
> + if (!try_same_size
> + && tree_int_cst_equal (TYPE_SIZE (TREE_TYPE (s1)),
> + TYPE_SIZE (TREE_TYPE (s2))))
> + try_same_size = true;
> + continue;
> + }
> + int v2 = SSA_NAME_VERSION (s2);
> + if (attempt_coalesce (map, graph, v1, v2, debug_file)
> + && partition_to_var (map, i) != s1)
> + break;
> + }
> + }
> +
> + if (!try_same_size)
> + return;
> +
> + unsigned i;
> + bitmap_iterator bi;
> + bitmap same_type = NULL;
> +
> + EXECUTE_IF_SET_IN_BITMAP (map->bitint, 0, i, bi)
> + {
> + tree s = ssa_name (i);
> + if (!SSA_NAME_VAR (s))
> + continue;
> + if (TREE_CODE (SSA_NAME_VAR (s)) != RESULT_DECL
> + && (TREE_CODE (SSA_NAME_VAR (s)) != PARM_DECL
> + || !SSA_NAME_IS_DEFAULT_DEF (s)))
> + continue;
> + if (same_type == NULL)
> + same_type = BITMAP_ALLOC (NULL);
> + int p = var_to_partition (map, s);
> + bitmap_set_bit (same_type, p);
> + }
> +
> + for (i = 0; i < n; ++i)
> + {
> + if (same_type && bitmap_bit_p (same_type, i))
> + continue;
> + tree s1 = partition_to_var (map, i);
> + if ((unsigned) var_to_partition (map, s1) != i)
> + continue;
> + int v1 = SSA_NAME_VERSION (s1);
> + for (unsigned j = i + 1; j < n; ++j)
> + {
> + if (same_type && bitmap_bit_p (same_type, j))
> + continue;
> +
> + tree s2 = partition_to_var (map, j);
> + if (s1 == s2 || (unsigned) var_to_partition (map, s2) != j)
> + continue;
> +
> + if (!tree_int_cst_equal (TYPE_SIZE (TREE_TYPE (s1)),
> + TYPE_SIZE (TREE_TYPE (s2))))
> + continue;
> +
> + int v2 = SSA_NAME_VERSION (s2);
> + if (attempt_coalesce (map, graph, v1, v2, debug_file)
> + && partition_to_var (map, i) != s1)
> + break;
> + }
> + }
> +
> + BITMAP_FREE (same_type);
> +}
> +
> /* Given an initial var_map MAP, coalesce variables and return a partition map
> with the resulting coalesce. Note that this function is called in either
> live range computation context or out-of-ssa context, indicated by MAP. */
> @@ -1709,6 +1850,8 @@ coalesce_ssa_name (var_map map)
> if (map->outofssa_p)
> populate_coalesce_list_for_outofssa (cl, used_in_copies);
> bitmap_list_view (used_in_copies);
> + if (map->bitint)
> + bitmap_ior_into (used_in_copies, map->bitint);
>
> if (dump_file && (dump_flags & TDF_DETAILS))
> dump_var_map (dump_file, map);
> @@ -1756,6 +1899,9 @@ coalesce_ssa_name (var_map map)
> ((dump_flags & TDF_DETAILS) ? dump_file : NULL));
>
> delete_coalesce_list (cl);
> +
> + if (map->bitint && flag_tree_coalesce_vars)
> + coalesce_bitint (map, graph);
> +
> ssa_conflicts_delete (graph);
> }
> -
> --- gcc/tree-ssa-live.cc.jj 2023-03-16 22:01:02.376089791 +0100
> +++ gcc/tree-ssa-live.cc 2023-07-27 15:03:24.289233683 +0200
> @@ -76,10 +76,11 @@ var_map_base_fini (var_map map)
> }
> /* Create a variable partition map of SIZE for region, initialize and return
> it. Region is a loop if LOOP is non-NULL, otherwise is the current
> - function. */
> + function. If BITINT is non-NULL, only SSA_NAMEs from that bitmap
> + will be coalesced. */
>
> var_map
> -init_var_map (int size, class loop *loop)
> +init_var_map (int size, class loop *loop, bitmap bitint)
> {
> var_map map;
>
> @@ -108,7 +109,8 @@ init_var_map (int size, class loop *loop
> else
> {
> map->bmp_bbs = NULL;
> - map->outofssa_p = true;
> + map->outofssa_p = bitint == NULL;
> + map->bitint = bitint;
> basic_block bb;
> FOR_EACH_BB_FN (bb, cfun)
> map->vec_bbs.safe_push (bb);
> --- gcc/tree-ssa-live.h.jj 2023-02-17 22:20:14.986011041 +0100
> +++ gcc/tree-ssa-live.h 2023-07-27 15:03:24.231234494 +0200
> @@ -70,6 +70,10 @@ typedef struct _var_map
> /* Vector of basic block in the region. */
> vec<basic_block> vec_bbs;
>
> + /* If non-NULL, only coalesce SSA_NAMEs from this bitmap, and try harder
> + for those (for bitint lowering pass). */
> + bitmap bitint;
> +
> /* True if this map is for out-of-ssa, otherwise for live range
> computation. When for out-of-ssa, it also means the var map is computed
> for whole current function. */
> @@ -80,7 +84,7 @@ typedef struct _var_map
> /* Value used to represent no partition number. */
> #define NO_PARTITION -1
>
> -extern var_map init_var_map (int, class loop* = NULL);
> +extern var_map init_var_map (int, class loop * = NULL, bitmap = NULL);
> extern void delete_var_map (var_map);
> extern int var_union (var_map, tree, tree);
> extern void partition_view_normal (var_map);
> @@ -100,7 +104,7 @@ inline bool
> region_contains_p (var_map map, basic_block bb)
> {
> /* It's possible that the function is called with ENTRY_BLOCK/EXIT_BLOCK. */
> - if (map->outofssa_p)
> + if (map->outofssa_p || map->bitint)
> return (bb->index != ENTRY_BLOCK && bb->index != EXIT_BLOCK);
>
> return bitmap_bit_p (map->bmp_bbs, bb->index);
> --- gcc/tree-ssa-sccvn.cc.jj 2023-07-24 17:48:26.536039977 +0200
> +++ gcc/tree-ssa-sccvn.cc 2023-07-27 15:03:24.289233683 +0200
> @@ -74,6 +74,7 @@ along with GCC; see the file COPYING3.
> #include "ipa-modref-tree.h"
> #include "ipa-modref.h"
> #include "tree-ssa-sccvn.h"
> +#include "target.h"
>
> /* This algorithm is based on the SCC algorithm presented by Keith
> Cooper and L. Taylor Simpson in "SCC-Based Value numbering"
> @@ -6969,8 +6970,14 @@ eliminate_dom_walker::eliminate_stmt (ba
> || !DECL_BIT_FIELD_TYPE (TREE_OPERAND (lhs, 1)))
> && !type_has_mode_precision_p (TREE_TYPE (lhs)))
> {
> - if (TREE_CODE (lhs) == COMPONENT_REF
> - || TREE_CODE (lhs) == MEM_REF)
> + if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> + && (TYPE_PRECISION (TREE_TYPE (lhs))
> + > (targetm.scalar_mode_supported_p (TImode)
> + ? GET_MODE_PRECISION (TImode)
> + : GET_MODE_PRECISION (DImode))))
> + lookup_lhs = NULL_TREE;
What's the reason for this?  You allow non-mode-precision stores;
if you wanted to disallow BLKmode, I think the better way would be
to add a != BLKmode check above, or alternatively to build a
limb-sized _BitInt type (instead of using
build_nonstandard_integer_type)?
> + else if (TREE_CODE (lhs) == COMPONENT_REF
> + || TREE_CODE (lhs) == MEM_REF)
> {
> tree ltype = build_nonstandard_integer_type
> (TREE_INT_CST_LOW (TYPE_SIZE (TREE_TYPE (lhs))),
> --- gcc/tree-switch-conversion.cc.jj 2023-04-27 11:33:13.477770933 +0200
> +++ gcc/tree-switch-conversion.cc 2023-07-27 15:03:24.241234354 +0200
> @@ -1143,32 +1143,93 @@ jump_table_cluster::emit (tree index_exp
> tree default_label_expr, basic_block default_bb,
> location_t loc)
> {
> - unsigned HOST_WIDE_INT range = get_range (get_low (), get_high ());
> + tree low = get_low ();
> + unsigned HOST_WIDE_INT range = get_range (low, get_high ());
> unsigned HOST_WIDE_INT nondefault_range = 0;
> + bool bitint = false;
> + gimple_stmt_iterator gsi = gsi_start_bb (m_case_bb);
> +
> + /* For large/huge _BitInt, subtract low from index_expr, cast to unsigned
> + DImode type (get_range doesn't support ranges larger than 64-bits)
> + and subtract low from all case values as well. */
> + if (TREE_CODE (TREE_TYPE (index_expr)) == BITINT_TYPE
> + && TYPE_PRECISION (TREE_TYPE (index_expr)) > GET_MODE_PRECISION (DImode))
> + {
> + bitint = true;
> + tree this_low = low, type;
> + gimple *g;
> + if (!TYPE_OVERFLOW_WRAPS (TREE_TYPE (index_expr)))
> + {
> + type = unsigned_type_for (TREE_TYPE (index_expr));
> + g = gimple_build_assign (make_ssa_name (type), NOP_EXPR, index_expr);
> + gimple_set_location (g, loc);
> + gsi_insert_after (&gsi, g, GSI_NEW_STMT);
> + index_expr = gimple_assign_lhs (g);
> + this_low = fold_convert (type, this_low);
> + }
> + this_low = const_unop (NEGATE_EXPR, TREE_TYPE (this_low), this_low);
> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (index_expr)),
> + PLUS_EXPR, index_expr, this_low);
> + gimple_set_location (g, loc);
> + gsi_insert_after (&gsi, g, GSI_NEW_STMT);
> + index_expr = gimple_assign_lhs (g);
I suppose using gimple_convert/gimple_build with a sequence would be
easier to follow.
> + type = build_nonstandard_integer_type (GET_MODE_PRECISION (DImode), 1);
> + g = gimple_build_cond (GT_EXPR, index_expr,
> + fold_convert (TREE_TYPE (index_expr),
> + TYPE_MAX_VALUE (type)),
> + NULL_TREE, NULL_TREE);
> + gimple_set_location (g, loc);
> + gsi_insert_after (&gsi, g, GSI_NEW_STMT);
> + edge e1 = split_block (m_case_bb, g);
> + e1->flags = EDGE_FALSE_VALUE;
> + e1->probability = profile_probability::likely ();
> + edge e2 = make_edge (e1->src, default_bb, EDGE_TRUE_VALUE);
> + e2->probability = e1->probability.invert ();
> + gsi = gsi_start_bb (e1->dest);
> + g = gimple_build_assign (make_ssa_name (type), NOP_EXPR, index_expr);
> + gimple_set_location (g, loc);
> + gsi_insert_after (&gsi, g, GSI_NEW_STMT);
> + index_expr = gimple_assign_lhs (g);
> + }
>
> /* For jump table we just emit a new gswitch statement that will
> be latter lowered to jump table. */
> auto_vec <tree> labels;
> labels.create (m_cases.length ());
>
> - make_edge (m_case_bb, default_bb, 0);
> + basic_block case_bb = gsi_bb (gsi);
> + make_edge (case_bb, default_bb, 0);
> for (unsigned i = 0; i < m_cases.length (); i++)
> {
> - labels.quick_push (unshare_expr (m_cases[i]->m_case_label_expr));
> - make_edge (m_case_bb, m_cases[i]->m_case_bb, 0);
> + tree lab = unshare_expr (m_cases[i]->m_case_label_expr);
> + if (bitint)
> + {
> + CASE_LOW (lab)
> + = fold_convert (TREE_TYPE (index_expr),
> + const_binop (MINUS_EXPR,
> + TREE_TYPE (CASE_LOW (lab)),
> + CASE_LOW (lab), low));
> + if (CASE_HIGH (lab))
> + CASE_HIGH (lab)
> + = fold_convert (TREE_TYPE (index_expr),
> + const_binop (MINUS_EXPR,
> + TREE_TYPE (CASE_HIGH (lab)),
> + CASE_HIGH (lab), low));
> + }
> + labels.quick_push (lab);
> + make_edge (case_bb, m_cases[i]->m_case_bb, 0);
> }
>
> gswitch *s = gimple_build_switch (index_expr,
> unshare_expr (default_label_expr), labels);
> gimple_set_location (s, loc);
> - gimple_stmt_iterator gsi = gsi_start_bb (m_case_bb);
> gsi_insert_after (&gsi, s, GSI_NEW_STMT);
>
> /* Set up even probabilities for all cases. */
> for (unsigned i = 0; i < m_cases.length (); i++)
> {
> simple_cluster *sc = static_cast<simple_cluster *> (m_cases[i]);
> - edge case_edge = find_edge (m_case_bb, sc->m_case_bb);
> + edge case_edge = find_edge (case_bb, sc->m_case_bb);
> unsigned HOST_WIDE_INT case_range
> = sc->get_range (sc->get_low (), sc->get_high ());
> nondefault_range += case_range;
> @@ -1184,7 +1245,7 @@ jump_table_cluster::emit (tree index_exp
> for (unsigned i = 0; i < m_cases.length (); i++)
> {
> simple_cluster *sc = static_cast<simple_cluster *> (m_cases[i]);
> - edge case_edge = find_edge (m_case_bb, sc->m_case_bb);
> + edge case_edge = find_edge (case_bb, sc->m_case_bb);
> case_edge->probability
> = profile_probability::always ().apply_scale ((intptr_t)case_edge->aux,
> range);
> --- gcc/typeclass.h.jj 2023-01-03 00:20:35.218084730 +0100
> +++ gcc/typeclass.h 2023-07-27 15:03:24.273233907 +0200
> @@ -37,7 +37,8 @@ enum type_class
> function_type_class, method_type_class,
> record_type_class, union_type_class,
> array_type_class, string_type_class,
> - lang_type_class, opaque_type_class
> + lang_type_class, opaque_type_class,
> + bitint_type_class
> };
>
> #endif /* GCC_TYPECLASS_H */
> --- gcc/ubsan.cc.jj 2023-05-20 15:31:09.240660915 +0200
> +++ gcc/ubsan.cc 2023-07-27 15:03:24.260234089 +0200
> @@ -50,6 +50,8 @@ along with GCC; see the file COPYING3.
> #include "gimple-fold.h"
> #include "varasm.h"
> #include "realmpfr.h"
> +#include "target.h"
> +#include "langhooks.h"
Sanitizer support into a separate patch?
> /* Map from a tree to a VAR_DECL tree. */
>
> @@ -125,6 +127,25 @@ tree
> ubsan_encode_value (tree t, enum ubsan_encode_value_phase phase)
> {
> tree type = TREE_TYPE (t);
> + if (TREE_CODE (type) == BITINT_TYPE)
> + {
> + if (TYPE_PRECISION (type) <= POINTER_SIZE)
> + {
> + type = pointer_sized_int_node;
> + t = fold_build1 (NOP_EXPR, type, t);
> + }
> + else
> + {
> + scalar_int_mode arith_mode
> + = (targetm.scalar_mode_supported_p (TImode) ? TImode : DImode);
> + if (TYPE_PRECISION (type) > GET_MODE_PRECISION (arith_mode))
> + return build_zero_cst (pointer_sized_int_node);
> + type
> + = build_nonstandard_integer_type (GET_MODE_PRECISION (arith_mode),
> + TYPE_UNSIGNED (type));
> + t = fold_build1 (NOP_EXPR, type, t);
> + }
> + }
> scalar_mode mode = SCALAR_TYPE_MODE (type);
> const unsigned int bitsize = GET_MODE_BITSIZE (mode);
> if (bitsize <= POINTER_SIZE)
> @@ -355,14 +376,32 @@ ubsan_type_descriptor (tree type, enum u
> {
> /* See through any typedefs. */
> type = TYPE_MAIN_VARIANT (type);
> + tree type3 = type;
> + if (pstyle == UBSAN_PRINT_FORCE_INT)
> + {
> + /* Temporary hack for -fsanitize=shift with _BitInt(129) and more.
> + libubsan crashes if it is not TK_Integer type. */
> + if (TREE_CODE (type) == BITINT_TYPE)
> + {
> + scalar_int_mode arith_mode
> + = (targetm.scalar_mode_supported_p (TImode)
> + ? TImode : DImode);
> + if (TYPE_PRECISION (type) > GET_MODE_PRECISION (arith_mode))
> + type3 = build_qualified_type (type, TYPE_QUAL_CONST);
> + }
> + if (type3 == type)
> + pstyle = UBSAN_PRINT_NORMAL;
> + }
>
> - tree decl = decl_for_type_lookup (type);
> + tree decl = decl_for_type_lookup (type3);
> /* It is possible that some of the earlier created DECLs were found
> unused, in that case they weren't emitted and varpool_node::get
> returns NULL node on them. But now we really need them. Thus,
> renew them here. */
> if (decl != NULL_TREE && varpool_node::get (decl))
> - return build_fold_addr_expr (decl);
> + {
> + return build_fold_addr_expr (decl);
> + }
>
> tree dtype = ubsan_get_type_descriptor_type ();
> tree type2 = type;
> @@ -370,6 +409,7 @@ ubsan_type_descriptor (tree type, enum u
> pretty_printer pretty_name;
> unsigned char deref_depth = 0;
> unsigned short tkind, tinfo;
> + char tname_bitint[sizeof ("unsigned _BitInt(2147483647)")];
>
> /* Get the name of the type, or the name of the pointer type. */
> if (pstyle == UBSAN_PRINT_POINTER)
> @@ -403,8 +443,18 @@ ubsan_type_descriptor (tree type, enum u
> }
>
> if (tname == NULL)
> - /* We weren't able to determine the type name. */
> - tname = "<unknown>";
> + {
> + if (TREE_CODE (type2) == BITINT_TYPE)
> + {
> + snprintf (tname_bitint, sizeof (tname_bitint),
> + "%s_BitInt(%d)", TYPE_UNSIGNED (type2) ? "unsigned " : "",
> + TYPE_PRECISION (type2));
> + tname = tname_bitint;
> + }
> + else
> + /* We weren't able to determine the type name. */
> + tname = "<unknown>";
> + }
>
> pp_quote (&pretty_name);
>
> @@ -472,6 +522,18 @@ ubsan_type_descriptor (tree type, enum u
> case INTEGER_TYPE:
> tkind = 0x0000;
> break;
> + case BITINT_TYPE:
> + {
> + /* FIXME: libubsan right now only supports _BitInts which
> + fit into DImode or TImode. */
> + scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
> + ? TImode : DImode);
> + if (TYPE_PRECISION (eltype) <= GET_MODE_PRECISION (arith_mode))
> + tkind = 0x0000;
> + else
> + tkind = 0xffff;
> + }
> + break;
> case REAL_TYPE:
> /* FIXME: libubsan right now only supports float, double and
> long double type formats. */
> @@ -486,7 +548,17 @@ ubsan_type_descriptor (tree type, enum u
> tkind = 0xffff;
> break;
> }
> - tinfo = get_ubsan_type_info_for_type (eltype);
> + tinfo = tkind == 0xffff ? 0 : get_ubsan_type_info_for_type (eltype);
> +
> + if (pstyle == UBSAN_PRINT_FORCE_INT)
> + {
> + tkind = 0x0000;
> + scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
> + ? TImode : DImode);
> + tree t = lang_hooks.types.type_for_mode (arith_mode,
> + TYPE_UNSIGNED (eltype));
> + tinfo = get_ubsan_type_info_for_type (t);
> + }
>
> /* Create a new VAR_DECL of type descriptor. */
> const char *tmp = pp_formatted_text (&pretty_name);
> @@ -522,7 +594,7 @@ ubsan_type_descriptor (tree type, enum u
> varpool_node::finalize_decl (decl);
>
> /* Save the VAR_DECL into the hash table. */
> - decl_for_type_insert (type, decl);
> + decl_for_type_insert (type3, decl);
>
> return build_fold_addr_expr (decl);
> }
> @@ -1604,8 +1676,9 @@ instrument_si_overflow (gimple_stmt_iter
> Also punt on bit-fields. */
> if (!INTEGRAL_TYPE_P (lhsinner)
> || TYPE_OVERFLOW_WRAPS (lhsinner)
> - || maybe_ne (GET_MODE_BITSIZE (TYPE_MODE (lhsinner)),
> - TYPE_PRECISION (lhsinner)))
> + || (TREE_CODE (lhsinner) != BITINT_TYPE
> + && maybe_ne (GET_MODE_BITSIZE (TYPE_MODE (lhsinner)),
> + TYPE_PRECISION (lhsinner))))
> return;
>
> switch (code)
> --- gcc/ubsan.h.jj 2023-01-03 00:20:35.219084715 +0100
> +++ gcc/ubsan.h 2023-07-27 15:03:24.222234619 +0200
> @@ -39,7 +39,8 @@ enum ubsan_null_ckind {
> enum ubsan_print_style {
> UBSAN_PRINT_NORMAL,
> UBSAN_PRINT_POINTER,
> - UBSAN_PRINT_ARRAY
> + UBSAN_PRINT_ARRAY,
> + UBSAN_PRINT_FORCE_INT
> };
>
> /* This controls ubsan_encode_value behavior. */
> --- gcc/varasm.cc.jj 2023-07-17 09:07:42.158282795 +0200
> +++ gcc/varasm.cc 2023-07-27 15:03:24.291233655 +0200
> @@ -5281,6 +5281,61 @@ output_constant (tree exp, unsigned HOST
> reverse, false);
> break;
>
> + case BITINT_TYPE:
> + if (TREE_CODE (exp) != INTEGER_CST)
> + error ("initializer for %<_BitInt(%d)%> value is not an integer "
> + "constant", TYPE_PRECISION (TREE_TYPE (exp)));
> + else
> + {
> + struct bitint_info info;
> + tree type = TREE_TYPE (exp);
> + gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
> + &info));
> + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> + if (TYPE_PRECISION (type) <= GET_MODE_PRECISION (limb_mode))
> + {
> + cst = expand_expr (exp, NULL_RTX, VOIDmode, EXPAND_INITIALIZER);
> + if (reverse)
> + cst = flip_storage_order (TYPE_MODE (TREE_TYPE (exp)), cst);
> + if (!assemble_integer (cst, MIN (size, thissize), align, 0))
> + error ("initializer for integer/fixed-point value is too "
> + "complicated");
> + break;
> + }
> + int prec = GET_MODE_PRECISION (limb_mode);
> + int cnt = CEIL (TYPE_PRECISION (type), prec);
> + tree limb_type = build_nonstandard_integer_type (prec, 1);
> + int elt_size = GET_MODE_SIZE (limb_mode);
> + unsigned int nalign = MIN (align, GET_MODE_ALIGNMENT (limb_mode));
> + thissize = 0;
> + if (prec == HOST_BITS_PER_WIDE_INT)
> + for (int i = 0; i < cnt; i++)
> + {
> + int idx = (info.big_endian ^ reverse) ? cnt - 1 - i : i;
> + tree c;
> + if (idx >= TREE_INT_CST_EXT_NUNITS (exp))
> + c = build_int_cst (limb_type,
> + tree_int_cst_sgn (exp) < 0 ? -1 : 0);
> + else
> + c = build_int_cst (limb_type,
> + TREE_INT_CST_ELT (exp, idx));
> + output_constant (c, elt_size, nalign, reverse, false);
> + thissize += elt_size;
> + }
> + else
> + for (int i = 0; i < cnt; i++)
> + {
> + int idx = (info.big_endian ^ reverse) ? cnt - 1 - i : i;
> + wide_int w = wi::rshift (wi::to_wide (exp), idx * prec,
> + TYPE_SIGN (TREE_TYPE (exp)));
> + tree c = wide_int_to_tree (limb_type,
> + wide_int::from (w, prec, UNSIGNED));
> + output_constant (c, elt_size, nalign, reverse, false);
> + thissize += elt_size;
> + }
> + }
> + break;
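The limb-splitting that the output_constant hunk above performs can be illustrated in isolation; a sketch assuming 64-bit little-endian limbs (info.big_endian and reverse both false), with `unsigned __int128` standing in for the arbitrary-precision INTEGER_CST:

```cpp
#include <cstdint>
#include <vector>

// Sketch of how a wide _BitInt constant is emitted limb by limb:
// shift the value right by idx * limb_prec and truncate each piece
// to an unsigned limb, least significant limb first.
std::vector<uint64_t> split_into_limbs (unsigned __int128 value, int prec)
{
  const int limb_prec = 64;                     // assumed limb size
  int cnt = (prec + limb_prec - 1) / limb_prec; // CEIL (prec, limb_prec)
  std::vector<uint64_t> limbs;
  for (int i = 0; i < cnt; i++)
    limbs.push_back ((uint64_t) (value >> (i * limb_prec)));
  return limbs;
}
```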
> +
> case ARRAY_TYPE:
> case VECTOR_TYPE:
> switch (TREE_CODE (exp))
> --- gcc/vr-values.cc.jj 2023-06-12 18:52:23.237435819 +0200
> +++ gcc/vr-values.cc 2023-07-27 15:03:24.274233893 +0200
> @@ -111,21 +111,21 @@ check_for_binary_op_overflow (range_quer
> {
> /* So far we found that there is an overflow on the boundaries.
> That doesn't prove that there is an overflow even for all values
> - in between the boundaries. For that compute widest_int range
> + in between the boundaries. For that compute widest2_int range
> of the result and see if it doesn't overlap the range of
> type. */
> - widest_int wmin, wmax;
> - widest_int w[4];
> + widest2_int wmin, wmax;
> + widest2_int w[4];
> int i;
> signop sign0 = TYPE_SIGN (TREE_TYPE (op0));
> signop sign1 = TYPE_SIGN (TREE_TYPE (op1));
> - w[0] = widest_int::from (vr0.lower_bound (), sign0);
> - w[1] = widest_int::from (vr0.upper_bound (), sign0);
> - w[2] = widest_int::from (vr1.lower_bound (), sign1);
> - w[3] = widest_int::from (vr1.upper_bound (), sign1);
> + w[0] = widest2_int::from (vr0.lower_bound (), sign0);
> + w[1] = widest2_int::from (vr0.upper_bound (), sign0);
> + w[2] = widest2_int::from (vr1.lower_bound (), sign1);
> + w[3] = widest2_int::from (vr1.upper_bound (), sign1);
> for (i = 0; i < 4; i++)
> {
> - widest_int wt;
> + widest2_int wt;
> switch (subcode)
> {
> case PLUS_EXPR:
> @@ -153,10 +153,10 @@ check_for_binary_op_overflow (range_quer
> }
> /* The result of op0 CODE op1 is known to be in range
> [wmin, wmax]. */
> - widest_int wtmin
> - = widest_int::from (irange_val_min (type), TYPE_SIGN (type));
> - widest_int wtmax
> - = widest_int::from (irange_val_max (type), TYPE_SIGN (type));
> + widest2_int wtmin
> + = widest2_int::from (irange_val_min (type), TYPE_SIGN (type));
> + widest2_int wtmax
> + = widest2_int::from (irange_val_max (type), TYPE_SIGN (type));
> /* If all values in [wmin, wmax] are smaller than
> [wtmin, wtmax] or all are larger than [wtmin, wtmax],
> the arithmetic operation will always overflow. */
> @@ -1717,12 +1717,11 @@ simplify_using_ranges::simplify_internal
> g = gimple_build_assign (gimple_call_lhs (stmt), subcode, op0, op1);
> else
> {
> - int prec = TYPE_PRECISION (type);
> tree utype = type;
> if (ovf
> || !useless_type_conversion_p (type, TREE_TYPE (op0))
> || !useless_type_conversion_p (type, TREE_TYPE (op1)))
> - utype = build_nonstandard_integer_type (prec, 1);
> + utype = unsigned_type_for (type);
> if (TREE_CODE (op0) == INTEGER_CST)
> op0 = fold_convert (utype, op0);
> else if (!useless_type_conversion_p (utype, TREE_TYPE (op0)))
Phew.  That was big.
A lot of it looks OK (I guess nearly all of it).  For the overall
picture I'm unsure, especially about how/whether we need to keep the
distinction for small _BitInts and whether we maybe want to lower
them even earlier.
I think we can iterate on most of this when this is already in-tree.
In case you want to address some of the issues first please chop
the patch into smaller pieces.
Thanks,
Richard.
On Fri, Aug 04, 2023 at 01:25:07PM +0000, Richard Biener wrote:
> > @@ -144,6 +144,9 @@ DEFTREECODE (BOOLEAN_TYPE, "boolean_type
> > and TYPE_PRECISION (number of bits used by this type). */
> > DEFTREECODE (INTEGER_TYPE, "integer_type", tcc_type, 0)
> > +/* Bit-precise integer type. */
> > +DEFTREECODE (BITINT_TYPE, "bitint_type", tcc_type, 0)
> > +
>
> So what was the main reason to not make BITINT_TYPE equal to INTEGER_TYPE?
The fact that they do or can have different calling conventions from normal
integers; e.g. they don't promote to integers, so IFN_VA_ARG handling is
affected (lowered only during the stdarg pass after IPA), and the calling
conventions differ (with a single finalized target it is premature to
hardcode how it will behave for all the others; and while on x86_64 the
up to 128-bit _BitInts pass/return mostly the same way, _BitInt(128) has
the alignment of long long, while __int128 has twice as large an alignment).
So, the above was the main reason to make BITINT_TYPE <-> non-BITINT_TYPE
conversions non-useless such that calls have the right type of arguments.
I'll try to adjust the comments and mention it in generic.texi.
> Maybe note that in the comment as
>
> "While bit-precise integer types share the same properties as
> INTEGER_TYPE ..."
>
> ?
>
> Note INTEGER_TYPE is documeted in generic.texi but unless I missed
> it the changelog above doesn't mention documentation for BITINT_TYPE
> added there.
> > + if (bitint_type_cache == NULL)
> > + vec_safe_grow_cleared (bitint_type_cache, 2 * MAX_INT_CACHED_PREC + 2);
> > +
> > + if (precision <= MAX_INT_CACHED_PREC)
> > + {
> > + itype = (*bitint_type_cache)[precision + unsignedp];
> > + if (itype)
> > + return itype;
>
> I think we added this kind of cache for standard INTEGER_TYPE because
> the middle-end builds those all over the place and going through
> the type_hash is expensive. Is that true for _BitInt as well? If
> not it doesn't seem worth the extra caching.
As even the very large _BitInts are used in the pre-IPA passes, IPA passes
and a few post-IPA passes similarly to other integral types, I think the
caching is very useful.  But if you want, I could gather some statistics
on those.  Most importantly, (almost) no price is paid if one doesn't use
those types in the source.
> In fact, I wonder whether the middle-end does/should treat
> _BitInt<N> and an INTEGER_TYPE with precision N any different?
See above.
> Aka, should we build an INTEGER_TYPE whenever N is say less than
> the number of bits in word_mode?
>
> > + if (TREE_CODE (pval) == INTEGER_CST
> > + && TREE_CODE (TREE_TYPE (pval)) == BITINT_TYPE)
> > + {
> > + unsigned int prec = TYPE_PRECISION (TREE_TYPE (pval));
> > + struct bitint_info info;
> > + gcc_assert (targetm.c.bitint_type_info (prec, &info));
> > + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> > + unsigned int limb_prec = GET_MODE_PRECISION (limb_mode);
> > + if (prec > limb_prec)
> > + {
> > + scalar_int_mode arith_mode
> > + = (targetm.scalar_mode_supported_p (TImode)
> > + ? TImode : DImode);
> > + if (prec > GET_MODE_PRECISION (arith_mode))
> > + pval = tree_output_constant_def (pval);
> > + }
>
> A comment would be helpful to understand what we are doing here.
Ok, will add that.  Note, this particular spot is an area for future
improvement; I've spent half a day on it but then gave up for now.
In the lowering pass I'm trying to optimize the common case where a lot
of constants don't need all the limbs and can be represented as one limb
or several limbs in memory, with all the higher limbs then filled with 0s
or -1s.  For argument passing, it would even be useful to pass smaller
_BitInt constants without putting them in memory at all, just pushing a
couple of constants (i.e. the store_by_pieces way).  But trying to
do that in emit_push_insn wasn't really easy...
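The "low limbs plus implied extension" representation described here can be sketched as follows, assuming 64-bit limbs; the helper name is made up for illustration:

```cpp
#include <cstdint>
#include <vector>

// A large _BitInt constant often needs only its low limbs stored
// explicitly, with every higher limb implied to be 0 (non-negative)
// or ~0ULL (negative).  This reconstructs limb IDX from such a
// compact form.
uint64_t limb_at (const std::vector<uint64_t> &low_limbs, uint64_t ext,
                  unsigned idx)
{
  if (idx < low_limbs.size ())
    return low_limbs[idx];
  return ext;        // all higher limbs are the sign-extension pattern
}
```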
> > --- gcc/config/i386/i386.cc.jj 2023-07-19 10:01:17.380467993 +0200
> > +++ gcc/config/i386/i386.cc 2023-07-27 15:03:24.230234508 +0200
> > @@ -2121,7 +2121,8 @@ classify_argument (machine_mode mode, co
> > return 0;
> > }
>
> splitting out target support to a separate patch might be helpful
Ok.
> > --- gcc/doc/tm.texi.jj 2023-05-30 17:52:34.474857301 +0200
> > +++ gcc/doc/tm.texi 2023-07-27 15:03:24.284233753 +0200
> > @@ -1020,6 +1020,11 @@ Return a value, with the same meaning as
> > @code{FLT_EVAL_METHOD} that describes which excess precision should be
> > applied.
> >
> > +@deftypefn {Target Hook} bool TARGET_C_BITINT_TYPE_INFO (int @var{n}, struct bitint_info *@var{info})
> > +This target hook returns true if _BitInt(N) is supported and provides some
> > +details on it.
> > +@end deftypefn
> > +
>
> document the "details" here please?
Will do.
> > @@ -20523,6 +20546,22 @@ rtl_for_decl_init (tree init, tree type)
> > return NULL;
> > }
> >
> > + /* RTL can't deal with BLKmode INTEGER_CSTs. */
> > + if (TREE_CODE (init) == INTEGER_CST
> > + && TREE_CODE (TREE_TYPE (init)) == BITINT_TYPE
> > + && TYPE_MODE (TREE_TYPE (init)) == BLKmode)
> > + {
> > + if (tree_fits_shwi_p (init))
> > + {
> > + bool uns = TYPE_UNSIGNED (TREE_TYPE (init));
> > + tree type
> > + = build_nonstandard_integer_type (HOST_BITS_PER_WIDE_INT, uns);
> > + init = fold_convert (type, init);
> > + }
> > + else
> > + return NULL;
> > + }
> > +
>
> it feels like we should avoid the above and fix expand_expr instead.
> The assert immediately following seems to "support" a NULL_RTX return
> value so the above trick should work there, too, and we can possibly
> avoid creating a new INTEGER_TYPE and INTEGER_CST? Another option
> would be to simply use immed_wide_int_const or simply
> build a VOIDmode CONST_INT directly here?
Not really sure in this case.  I guess I could instead deal with BLKmode
BITINT_TYPE INTEGER_CSTs in expand_expr* and emit those into memory, but
I think dwarf2out would be upset that a constant was forced into memory;
it really wants some DWARF constant.
Sure, I could create a CONST_INT directly.  What to do for larger ones
is, I'm afraid, an area for future DWARF improvements.
> > --- gcc/expr.cc.jj 2023-07-02 12:07:08.455164393 +0200
> > +++ gcc/expr.cc 2023-07-27 15:03:24.253234187 +0200
> > @@ -10828,6 +10828,8 @@ expand_expr_real_1 (tree exp, rtx target
> > ssa_name = exp;
> > decl_rtl = get_rtx_for_ssa_name (ssa_name);
> > exp = SSA_NAME_VAR (ssa_name);
> > + if (!exp || VAR_P (exp))
> > + reduce_bit_field = false;
>
> That needs an explanation. Can we do this and related changes
> as prerequesite instead?
I can add a comment, but those 2 lines are an optimization for the other
hunks in the same function.  The intent is to do the zero/sign extensions
of _BitInt objects with less than mode precision (note, this is about the
small/middle ones which aren't, or aren't much, lowered in the lowering
pass) when reading from memory or function arguments (or RESULT_DECL?),
because the ABI says those bits are undefined there, but not to do that
for temporaries (SSA_NAMEs other than the parameters/RESULT_DECLs),
because RTL expansion has done those extensions already when storing
them into the pseudos.
> > goto expand_decl_rtl;
> >
> > case VAR_DECL:
> > @@ -10961,6 +10963,13 @@ expand_expr_real_1 (tree exp, rtx target
> > temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
> > MEM_ALIGN (temp), NULL_RTX, NULL);
> >
> > + if (TREE_CODE (type) == BITINT_TYPE
> > + && reduce_bit_field
> > + && mode != BLKmode
> > + && modifier != EXPAND_MEMORY
> > + && modifier != EXPAND_WRITE
> > + && modifier != EXPAND_CONST_ADDRESS)
> > + return reduce_to_bit_field_precision (temp, NULL_RTX, type);
>
> I wonder how much work it would be to "lower" 'reduce_bit_field' earlier
> on GIMPLE...
I know that the expr.cc hacks aren't nice, but I'm afraid it would be a lot
of work and a lot of code.  And I'm not really sure how to ensure further
GIMPLE passes wouldn't optimize that away.
>
> > @@ -11192,6 +11215,13 @@ expand_expr_real_1 (tree exp, rtx target
> > && align < GET_MODE_ALIGNMENT (mode))
> > temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
> > align, NULL_RTX, NULL);
> > + if (TREE_CODE (type) == BITINT_TYPE
> > + && reduce_bit_field
> > + && mode != BLKmode
> > + && modifier != EXPAND_WRITE
> > + && modifier != EXPAND_MEMORY
> > + && modifier != EXPAND_CONST_ADDRESS)
> > + return reduce_to_bit_field_precision (temp, NULL_RTX, type);
>
> so this is quite repetitive, I suppose the checks ensure we apply
> it to rvalues only, but I don't really get why we only reduce
> BITINT_TYPE, esp. as we are not considering BLKmode here?
There could be a macro for that, or something to avoid the repetition.
The reason to do it for BITINT_TYPE only is that for everything else
RTL unfortunately does it completely differently.  There is separate
code when reading from bit-fields (which does those extensions), but for
anything else RTL assumes that sub-mode integers are always extended to
the corresponding mode.  Say in the case where non-mode integers leak
into code (C long long/__int128 bit-fields larger than 32 bits) and
where, say, FRE/SRA optimizes them into SSA_NAMEs, everything assumes
that when such a value is spilled to memory it is always extended, and
re-extends after every binary/unary operation.
Unfortunately, the x86-64 psABI (and the plans in other psABIs) says the
padding bits are undefined, and so for ABI compatibility we can't rely
on those bits.  Now, for the large/huge ones where lowering occurs I
believe this shouldn't be a problem; those are VCEd to full limbs and
then explicitly extended from the smaller number of bits on reads.
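The extension a consumer must perform because of the undefined padding bits can be sketched as follows, assuming a small signed _BitInt held in a 64-bit container (function name illustrative):

```cpp
#include <cstdint>

// Because the x86-64 psABI leaves a _BitInt's padding bits undefined,
// a reader must sign-extend from the declared precision after loading;
// the shift pair below discards whatever garbage sits above bit
// PREC-1 and replicates the sign bit into the upper bits.
int64_t load_signed_bitint (uint64_t raw, int prec)
{
  int shift = 64 - prec;
  return (int64_t) (raw << shift) >> shift;   // arithmetic right shift
}
```

Note the arithmetic right shift on a signed operand is implementation-defined before C++20, but behaves as expected on GCC.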
> > @@ -11253,18 +11283,21 @@ expand_expr_real_1 (tree exp, rtx target
> > set_mem_addr_space (temp, as);
> > if (TREE_THIS_VOLATILE (exp))
> > MEM_VOLATILE_P (temp) = 1;
> > - if (modifier != EXPAND_WRITE
> > - && modifier != EXPAND_MEMORY
> > - && !inner_reference_p
> > + if (modifier == EXPAND_WRITE || modifier == EXPAND_MEMORY)
> > + return temp;
> > + if (!inner_reference_p
> > && mode != BLKmode
> > && align < GET_MODE_ALIGNMENT (mode))
> > temp = expand_misaligned_mem_ref (temp, mode, unsignedp, align,
> > modifier == EXPAND_STACK_PARM
> > ? NULL_RTX : target, alt_rtl);
> > - if (reverse
> > - && modifier != EXPAND_MEMORY
> > - && modifier != EXPAND_WRITE)
> > + if (reverse)
>
> the above two look like a useful prerequesite, OK to push separately.
Ok, will do.
> > +enum bitint_prec_kind {
> > + bitint_prec_small,
> > + bitint_prec_middle,
> > + bitint_prec_large,
> > + bitint_prec_huge
> > +};
> > +
> > +/* Caches to speed up bitint_precision_kind. */
> > +
> > +static int small_max_prec, mid_min_prec, large_min_prec, huge_min_prec;
> > +static int limb_prec;
>
> I would appreciate the lowering pass to be in a separate patch in
> case we need to iterate on it.
I guess that is possible; as long as the C + testcases patches go last,
nothing will really create those types.
>
> > +/* Categorize _BitInt(PREC) as small, middle, large or huge. */
> > +
> > +static bitint_prec_kind
> > +bitint_precision_kind (int prec)
> > +{
> > + if (prec <= small_max_prec)
> > + return bitint_prec_small;
> > + if (huge_min_prec && prec >= huge_min_prec)
> > + return bitint_prec_huge;
> > + if (large_min_prec && prec >= large_min_prec)
> > + return bitint_prec_large;
> > + if (mid_min_prec && prec >= mid_min_prec)
> > + return bitint_prec_middle;
> > +
> > + struct bitint_info info;
> > + gcc_assert (targetm.c.bitint_type_info (prec, &info));
> > + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> > + if (prec <= GET_MODE_PRECISION (limb_mode))
> > + {
> > + small_max_prec = prec;
> > + return bitint_prec_small;
> > + }
> > + scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
> > + ? TImode : DImode);
> > + if (!large_min_prec
> > + && GET_MODE_PRECISION (arith_mode) > GET_MODE_PRECISION (limb_mode))
> > + large_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
> > + if (!limb_prec)
> > + limb_prec = GET_MODE_PRECISION (limb_mode);
> > + if (!huge_min_prec)
> > + {
> > + if (4 * limb_prec >= GET_MODE_PRECISION (arith_mode))
> > + huge_min_prec = 4 * limb_prec;
> > + else
> > + huge_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
> > + }
> > + if (prec <= GET_MODE_PRECISION (arith_mode))
> > + {
> > + if (!mid_min_prec || prec < mid_min_prec)
> > + mid_min_prec = prec;
> > + return bitint_prec_middle;
> > + }
> > + if (large_min_prec && prec <= large_min_prec)
> > + return bitint_prec_large;
> > + return bitint_prec_huge;
> > +}
> > +
> > +/* Same for a TYPE. */
> > +
> > +static bitint_prec_kind
> > +bitint_precision_kind (tree type)
> > +{
> > + return bitint_precision_kind (TYPE_PRECISION (type));
> > +}
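With x86_64-like parameters (64-bit limbs, TImode arithmetic), the categorization above reduces to the following sketch; the thresholds are assumptions mirroring the patch's description, not the exact GCC code:

```cpp
enum bitint_kind { bitint_small, bitint_middle, bitint_large, bitint_huge };

// Sketch of bitint_precision_kind for an assumed target with 64-bit
// limbs and 128-bit (TImode) arithmetic: small fits a single limb,
// middle fits the widest supported integer mode, large is lowered to
// straight-line multi-limb code, huge (>= 4 limbs) gets a loop.
bitint_kind classify_precision (int prec)
{
  const int limb_prec = 64, arith_prec = 128;  // assumed target values
  if (prec <= limb_prec)
    return bitint_small;
  if (prec <= arith_prec)
    return bitint_middle;
  if (prec < 4 * limb_prec)
    return bitint_large;
  return bitint_huge;
}
```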
> > +
> > +/* Return minimum precision needed to describe INTEGER_CST
> > + CST. All bits above that precision up to precision of
> > + TREE_TYPE (CST) are cleared if EXT is set to 0, or set
> > + if EXT is set to -1. */
> > +
> > +static unsigned
> > +bitint_min_cst_precision (tree cst, int &ext)
> > +{
> > + ext = tree_int_cst_sgn (cst) < 0 ? -1 : 0;
> > + wide_int w = wi::to_wide (cst);
> > + unsigned min_prec = wi::min_precision (w, TYPE_SIGN (TREE_TYPE (cst)));
> > + /* For signed values, we don't need to count the sign bit,
> > + we'll use constant 0 or -1 for the upper bits. */
> > + if (!TYPE_UNSIGNED (TREE_TYPE (cst)))
> > + --min_prec;
> > + else
> > + {
> > + /* For unsigned values, also try signed min_precision
> > + in case the constant has lots of most significant bits set. */
> > + unsigned min_prec2 = wi::min_precision (w, SIGNED) - 1;
> > + if (min_prec2 < min_prec)
> > + {
> > + ext = -1;
> > + return min_prec2;
> > + }
> > + }
> > + return min_prec;
> > +}
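[Not part of the patch: an illustrative Python model of bitint_min_cst_precision, with Python ints standing in for wide_int; the type's precision and signedness are passed explicitly here, whereas the real function reads them from TREE_TYPE (cst).]

```python
def signed_min_precision(v):
    # Bits needed to represent v in two's complement
    # (models wi::min_precision with SIGNED).
    return v.bit_length() + 1 if v >= 0 else (~v).bit_length() + 1

def bitint_min_cst_precision(v, type_prec, type_unsigned):
    # v is the value of the INTEGER_CST; for unsigned types
    # 0 <= v < 2**type_prec.  Returns (min_prec, ext).
    ext = -1 if v < 0 else 0
    if not type_unsigned:
        # The sign bit isn't counted; upper limbs become constant 0 or -1.
        return signed_min_precision(v) - 1, ext
    min_prec = v.bit_length()          # wi::min_precision with UNSIGNED
    # Also try the signed interpretation of the bit pattern, in case
    # lots of most significant bits are set.
    sv = v - (1 << type_prec) if v >> (type_prec - 1) else v
    min_prec2 = signed_min_precision(sv) - 1
    if min_prec2 < min_prec:
        return min_prec2, -1
    return min_prec, 0

print(bitint_min_cst_precision((1 << 256) - 1, 256, True))  # -> (0, -1)
```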
> > +
> > +namespace {
> > +
> > +/* If OP is middle _BitInt, cast it to corresponding INTEGER_TYPE
> > + cached in TYPE and return it. */
> > +
> > +tree
> > +maybe_cast_middle_bitint (gimple_stmt_iterator *gsi, tree op, tree &type)
> > +{
> > + if (op == NULL_TREE
> > + || TREE_CODE (TREE_TYPE (op)) != BITINT_TYPE
> > + || bitint_precision_kind (TREE_TYPE (op)) != bitint_prec_middle)
> > + return op;
> > +
> > + int prec = TYPE_PRECISION (TREE_TYPE (op));
> > + int uns = TYPE_UNSIGNED (TREE_TYPE (op));
> > + if (type == NULL_TREE
> > + || TYPE_PRECISION (type) != prec
> > + || TYPE_UNSIGNED (type) != uns)
> > + type = build_nonstandard_integer_type (prec, uns);
> > +
> > + if (TREE_CODE (op) != SSA_NAME)
> > + {
> > + tree nop = fold_convert (type, op);
> > + if (is_gimple_val (nop))
> > + return nop;
> > + }
> > +
> > + tree nop = make_ssa_name (type);
> > + gimple *g = gimple_build_assign (nop, NOP_EXPR, op);
> > + gsi_insert_before (gsi, g, GSI_SAME_STMT);
> > + return nop;
> > +}
> > +
> > +/* Return true if STMT can be handled in a loop from least to most
> > + significant limb together with its dependencies. */
> > +
> > +bool
> > +mergeable_op (gimple *stmt)
> > +{
> > + if (!is_gimple_assign (stmt))
> > + return false;
> > + switch (gimple_assign_rhs_code (stmt))
> > + {
> > + case PLUS_EXPR:
> > + case MINUS_EXPR:
> > + case NEGATE_EXPR:
> > + case BIT_AND_EXPR:
> > + case BIT_IOR_EXPR:
> > + case BIT_XOR_EXPR:
> > + case BIT_NOT_EXPR:
> > + case SSA_NAME:
> > + case INTEGER_CST:
> > + return true;
> > + case LSHIFT_EXPR:
> > + {
> > + tree cnt = gimple_assign_rhs2 (stmt);
> > + if (tree_fits_uhwi_p (cnt)
> > + && tree_to_uhwi (cnt) < (unsigned HOST_WIDE_INT) limb_prec)
> > + return true;
> > + }
> > + break;
> > + CASE_CONVERT:
> > + case VIEW_CONVERT_EXPR:
> > + {
> > + tree lhs_type = TREE_TYPE (gimple_assign_lhs (stmt));
> > + tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
> > + if (TREE_CODE (gimple_assign_rhs1 (stmt)) == SSA_NAME
> > + && TREE_CODE (lhs_type) == BITINT_TYPE
> > + && TREE_CODE (rhs_type) == BITINT_TYPE
> > + && bitint_precision_kind (lhs_type) >= bitint_prec_large
> > + && bitint_precision_kind (rhs_type) >= bitint_prec_large
> > + && tree_int_cst_equal (TYPE_SIZE (lhs_type), TYPE_SIZE (rhs_type)))
> > + {
> > + if (TYPE_PRECISION (rhs_type) >= TYPE_PRECISION (lhs_type))
> > + return true;
> > + if ((unsigned) TYPE_PRECISION (lhs_type) % (2 * limb_prec) != 0)
> > + return true;
> > + if (bitint_precision_kind (lhs_type) == bitint_prec_large)
> > + return true;
> > + }
> > + break;
> > + }
> > + default:
> > + break;
> > + }
> > + return false;
> > +}
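[Not part of the patch: the codes accepted above are those computable limb by limb from least to most significant while carrying only a small amount of state between iterations. A minimal Python sketch of that idea for PLUS_EXPR (the names are illustrative, not the pass's actual code):]

```python
LIMB_PREC = 64
LIMB_MASK = (1 << LIMB_PREC) - 1

def add_limbwise(a_limbs, b_limbs):
    # Process limbs upwards from least significant; the only state
    # carried between iterations is the 1-bit carry, which is what
    # makes PLUS_EXPR mergeable into a single pass over the limbs.
    result, carry = [], 0
    for a, b in zip(a_limbs, b_limbs):
        s = a + b + carry
        result.append(s & LIMB_MASK)
        carry = s >> LIMB_PREC
    return result

a = [LIMB_MASK, 0, 0]        # 2**64 - 1 as three 64-bit limbs
b = [1, 0, 0]
print(add_limbwise(a, b))    # -> [0, 1, 0], i.e. 2**64
```

By contrast, a right shift or an ordered comparison needs more significant limbs first, which is why those are excluded here and lowered separately.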
> > +
> > +/* Return non-zero if stmt is .{ADD,SUB,MUL}_OVERFLOW call with
> > + _Complex large/huge _BitInt lhs which has at most two immediate uses,
> > + at most one use in REALPART_EXPR stmt in the same bb and exactly one
> > + IMAGPART_EXPR use in the same bb with a single use which casts it to
> > + non-BITINT_TYPE integral type. If there is a REALPART_EXPR use,
> > + return 2. Such cases (most common uses of those builtins) can be
> > + optimized by marking their lhs and lhs of IMAGPART_EXPR and maybe lhs
> > + of REALPART_EXPR as not needed to be backed up by a stack variable.
> > + For .UBSAN_CHECK_{ADD,SUB,MUL} return 3. */
> > +
> > +int
> > +optimizable_arith_overflow (gimple *stmt)
> > +{
> > + bool is_ubsan = false;
> > + if (!is_gimple_call (stmt) || !gimple_call_internal_p (stmt))
> > + return 0;
> > + switch (gimple_call_internal_fn (stmt))
> > + {
> > + case IFN_ADD_OVERFLOW:
> > + case IFN_SUB_OVERFLOW:
> > + case IFN_MUL_OVERFLOW:
> > + break;
> > + case IFN_UBSAN_CHECK_ADD:
> > + case IFN_UBSAN_CHECK_SUB:
> > + case IFN_UBSAN_CHECK_MUL:
> > + is_ubsan = true;
> > + break;
> > + default:
> > + return 0;
> > + }
> > + tree lhs = gimple_call_lhs (stmt);
> > + if (!lhs)
> > + return 0;
> > + if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs))
> > + return 0;
> > + tree type = is_ubsan ? TREE_TYPE (lhs) : TREE_TYPE (TREE_TYPE (lhs));
> > + if (TREE_CODE (type) != BITINT_TYPE
> > + || bitint_precision_kind (type) < bitint_prec_large)
> > + return 0;
> > +
> > + if (is_ubsan)
> > + {
> > + use_operand_p use_p;
> > + gimple *use_stmt;
> > + if (!single_imm_use (lhs, &use_p, &use_stmt)
> > + || gimple_bb (use_stmt) != gimple_bb (stmt)
> > + || !gimple_store_p (use_stmt)
> > + || !is_gimple_assign (use_stmt)
> > + || gimple_has_volatile_ops (use_stmt)
> > + || stmt_ends_bb_p (use_stmt))
> > + return 0;
> > + return 3;
> > + }
> > +
> > + imm_use_iterator ui;
> > + use_operand_p use_p;
> > + int seen = 0;
> > + FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
> > + {
> > + gimple *g = USE_STMT (use_p);
> > + if (is_gimple_debug (g))
> > + continue;
> > + if (!is_gimple_assign (g) || gimple_bb (g) != gimple_bb (stmt))
> > + return 0;
> > + if (gimple_assign_rhs_code (g) == REALPART_EXPR)
> > + {
> > + if ((seen & 1) != 0)
> > + return 0;
> > + seen |= 1;
> > + }
> > + else if (gimple_assign_rhs_code (g) == IMAGPART_EXPR)
> > + {
> > + if ((seen & 2) != 0)
> > + return 0;
> > + seen |= 2;
> > +
> > + use_operand_p use2_p;
> > + gimple *use_stmt;
> > + tree lhs2 = gimple_assign_lhs (g);
> > + if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs2))
> > + return 0;
> > + if (!single_imm_use (lhs2, &use2_p, &use_stmt)
> > + || gimple_bb (use_stmt) != gimple_bb (stmt)
> > + || !gimple_assign_cast_p (use_stmt))
> > + return 0;
> > +
> > + lhs2 = gimple_assign_lhs (use_stmt);
> > + if (!INTEGRAL_TYPE_P (TREE_TYPE (lhs2))
> > + || TREE_CODE (TREE_TYPE (lhs2)) == BITINT_TYPE)
> > + return 0;
> > + }
> > + else
> > + return 0;
> > + }
> > + if ((seen & 2) == 0)
> > + return 0;
> > + return seen == 3 ? 2 : 1;
> > +}
> > +
> > +/* If STMT is some kind of comparison (GIMPLE_COND, comparison
> > + assignment or COND_EXPR) comparing large/huge _BitInt types,
> > + return the comparison code and if non-NULL fill in the comparison
> > + operands to *POP1 and *POP2. */
> > +
> > +tree_code
> > +comparison_op (gimple *stmt, tree *pop1, tree *pop2)
> > +{
> > + tree op1 = NULL_TREE, op2 = NULL_TREE;
> > + tree_code code = ERROR_MARK;
> > + if (gimple_code (stmt) == GIMPLE_COND)
> > + {
> > + code = gimple_cond_code (stmt);
> > + op1 = gimple_cond_lhs (stmt);
> > + op2 = gimple_cond_rhs (stmt);
> > + }
> > + else if (is_gimple_assign (stmt))
> > + {
> > + code = gimple_assign_rhs_code (stmt);
> > + op1 = gimple_assign_rhs1 (stmt);
> > + if (TREE_CODE_CLASS (code) == tcc_comparison
> > + || TREE_CODE_CLASS (code) == tcc_binary)
> > + op2 = gimple_assign_rhs2 (stmt);
> > + switch (code)
> > + {
> > + default:
> > + break;
> > + case COND_EXPR:
> > + tree cond = gimple_assign_rhs1 (stmt);
> > + code = TREE_CODE (cond);
> > + op1 = TREE_OPERAND (cond, 0);
> > + op2 = TREE_OPERAND (cond, 1);
>
> this should ICE, COND_EXPRs now have is_gimple_reg conditions.
COND_EXPR was a case I haven't managed to reproduce (I think that if it
is created at all, it is usually created later).
I see tree-cfg.cc was changed for this in GCC 13, but I still see tons
of spots which try to handle a COMPARISON_CLASS_P rhs1 of COND_EXPR
(e.g. in tree-ssa-math-opts.cc). Does the rhs1 have to be boolean,
or could it be any integral type (i.e., would I need to be prepared
e.g. for a BITINT_TYPE rhs1, which would need a lowered != 0 comparison)?
> > +/* Return a tree how to access limb IDX of VAR corresponding to BITINT_TYPE
> > + TYPE. If WRITE_P is true, it will be a store, otherwise a read. */
> > +
> > +tree
> > +bitint_large_huge::limb_access (tree type, tree var, tree idx, bool write_p)
> > +{
> > + tree atype = (tree_fits_uhwi_p (idx)
> > + ? limb_access_type (type, idx) : m_limb_type);
> > + tree ret;
> > + if (DECL_P (var) && tree_fits_uhwi_p (idx))
> > + {
> > + tree ptype = build_pointer_type (strip_array_types (TREE_TYPE (var)));
> > + unsigned HOST_WIDE_INT off = tree_to_uhwi (idx) * m_limb_size;
> > + ret = build2 (MEM_REF, m_limb_type,
> > + build_fold_addr_expr (var),
> > + build_int_cst (ptype, off));
> > + if (TREE_THIS_VOLATILE (var) || TREE_THIS_VOLATILE (TREE_TYPE (var)))
> > + TREE_THIS_VOLATILE (ret) = 1;
>
> Note if we have
>
> volatile int i;
> x = *(int *)&i;
>
> we get a non-volatile load from 'i', likewise in the reverse case
> where we get a volatile load from a non-volatile decl. The above
> gets this wrong - the volatileness should be derived from the
> original reference with just TREE_THIS_VOLATILE checking
> (and not on the type).
>
> You possibly also want to set TREE_SIDE_EFFECTS (not sure when
> that was exactly set), forwprop for example makes sure to copy
> that (and also TREE_THIS_NOTRAP in some cases).
Ok.
> How do "volatile" _BitInt(n) work? People expect 'volatile'
> objects to be operated on in whole, thus a 'volatile int'
> load not split into two, etc. I guess if we split a volatile
> _BitInt access it's reasonable to remove the 'volatile'?
They work like volatile bit-fields or like volatile __int128 or long long
on 32-bit arches: we don't really guarantee a single load or store there
(unless one uses the __atomic* APIs, which are lock-free).
The intent for volatile, and what I've checked e.g. by eyeballing dumps,
was that volatile _BitInt loads or stores aren't merged with other
operations (if they were merged and we e.g. had z = x + y where all 3
vars were volatile, we'd first read the LSB limb of all of them, store the
result, etc.; when not merged, each "load" or "store" isn't interleaved
with the others), and that e.g. _BitInt bit-field loads/stores don't read
the same memory multiple times (which can otherwise happen e.g. for shifts
or </<=/>/>= comparisons when they aren't iterating over the limbs strictly
upwards from least significant to most).
> > + else
> > + {
> > + var = unshare_expr (var);
> > + if (TREE_CODE (TREE_TYPE (var)) != ARRAY_TYPE
> > + || !useless_type_conversion_p (m_limb_type,
> > + TREE_TYPE (TREE_TYPE (var))))
> > + {
> > + unsigned HOST_WIDE_INT nelts
> > + = tree_to_uhwi (TYPE_SIZE (type)) / limb_prec;
> > + tree atype = build_array_type_nelts (m_limb_type, nelts);
> > + var = build1 (VIEW_CONVERT_EXPR, atype, var);
> > + }
> > + ret = build4 (ARRAY_REF, m_limb_type, var, idx, NULL_TREE, NULL_TREE);
> > + }
>
> maybe the volatile handling can be commonized here?
In my experience, the volatile handling didn't have to be added in this
case because it is inherited through the VIEW_CONVERT_EXPRs.
It was just the optimized paths for decls and MEM_REFs with constant
indexes where I had to do something about volatile.
> > + case SSA_NAME:
> > + if (m_names == NULL
> > + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (op)))
> > + {
> > + if (gimple_code (SSA_NAME_DEF_STMT (op)) == GIMPLE_NOP)
>
> SSA_NAME_IS_DEFAULT_DEF
Ok.
>
> > + {
> > + if (m_first)
> > + {
> > + tree v = create_tmp_var (m_limb_type);
>
> create_tmp_reg?
I see create_tmp_reg just calls create_tmp_var, but sure, if you prefer
it; it isn't an addressable var, so either one is fine.
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + edge e2 = split_block (e1->dest, (gimple *) NULL);
> > + edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
> > + e3->probability = profile_probability::likely ();
> > + if (min_prec >= (prec - rem) / 2)
> > + e3->probability = e3->probability.invert ();
> > + e1->flags = EDGE_FALSE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + if (min_prec > (unsigned) limb_prec)
> > + {
> > + c = limb_access (TREE_TYPE (op), c, idx, false);
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (c)), c);
> > + insert_before (g);
> > + c = gimple_assign_lhs (g);
> > + }
> > + tree c2 = build_int_cst (m_limb_type, ext);
> > + m_gsi = gsi_after_labels (e2->dest);
> > + t = make_ssa_name (m_limb_type);
> > + gphi *phi = create_phi_node (t, e2->dest);
> > + add_phi_arg (phi, c, e2, UNKNOWN_LOCATION);
> > + add_phi_arg (phi, c2, e3, UNKNOWN_LOCATION);
>
> Not sure if I get to see more than the two cases above but maybe
> a helper to emit a (half-)diamond for N values (PHI results) would be
> helpful (possibly indicating the fallthru edge truth value if any)?
I've added a helper to create a loop, but indeed doing this for the
ifs might be a good idea too, just quite a lot of work to get it right
because it is now used in many places.
I think the code uses 3 cases, one is to create
C1
|\
|B1
|/
+
another
C1
/ \
B1 B2
\ /
+
and another
C1
/ \
| C2
| |\
| | \
|B1 B2
\ | /
\|/
+
and needs to remember for later the edges to create phis if needed.
And sometimes the B1 or B2 bbs are split to deal with EH edges, so I will
need to think about the best interface for these. Could this be done
incrementally when/if it is committed to trunk?
> > + tree in = add_cast (rhs1_type, data_in);
> > + lhs = make_ssa_name (rhs1_type);
> > + g = gimple_build_assign (lhs, code, rhs1, rhs2);
> > + insert_before (g);
> > + rhs1 = make_ssa_name (rhs1_type);
> > + g = gimple_build_assign (rhs1, code, lhs, in);
> > + insert_before (g);
>
> I'll just note there's now gimple_build overloads inserting at an
> iterator:
>
> extern tree gimple_build (gimple_stmt_iterator *, bool,
> enum gsi_iterator_update,
> location_t, code_helper, tree, tree, tree);
>
> I guess there's not much folding possibilities during the building,
> but it would allow to write
Changing that would mean rewriting everything, I'm afraid. Indeed, as you
wrote, it is very rare that something could be folded during the lowering.
>
> rhs1 = gimple_build (&gsi, true, GSI_SAME_STMT, m_loc, code, rhs1_type,
> lhs, in);
>
> instead of
>
> > + rhs1 = make_ssa_name (rhs1_type);
> > + g = gimple_build_assign (rhs1, code, lhs, in);
> > + insert_before (g);
>
> just in case you forgot about those. I think we're missing some
> gimple-build "state" class to keep track of common arguments, like
>
> gimple_build gb (&gsi, true, GSI_SAME_STMT, m_loc);
> rhs1 = gb.build (code, rhs1_type, lhs, in);
> ...
>
> anyway, just wanted to note this - no need to change the patch.
> > + switch (gimple_code (stmt))
> > + {
> > + case GIMPLE_ASSIGN:
> > + if (gimple_assign_load_p (stmt))
> > + {
> > + rhs1 = gimple_assign_rhs1 (stmt);
>
> so TREE_THIS_VOLATILE/TREE_SIDE_EFFECTS (rhs1) would be the thing
> to eventually preserve
limb_access should do that.
> > +tree
> > +bitint_large_huge::create_loop (tree init, tree *idx_next)
> > +{
> > + if (!gsi_end_p (m_gsi))
> > + gsi_prev (&m_gsi);
> > + else
> > + m_gsi = gsi_last_bb (gsi_bb (m_gsi));
> > + edge e1 = split_block (gsi_bb (m_gsi), gsi_stmt (m_gsi));
> > + edge e2 = split_block (e1->dest, (gimple *) NULL);
> > + edge e3 = make_edge (e1->dest, e1->dest, EDGE_TRUE_VALUE);
> > + e3->probability = profile_probability::very_unlikely ();
> > + e2->flags = EDGE_FALSE_VALUE;
> > + e2->probability = e3->probability.invert ();
> > + tree idx = make_ssa_name (sizetype);
>
> maybe you want integer_type_node instead?
The indexes are certainly unsigned, and given that they are used
as array indexes, I thought sizetype would avoid zero or sign extensions
in lots of places.
> > + gphi *phi = create_phi_node (idx, e1->dest);
> > + add_phi_arg (phi, init, e1, UNKNOWN_LOCATION);
> > + *idx_next = make_ssa_name (sizetype);
> > + add_phi_arg (phi, *idx_next, e3, UNKNOWN_LOCATION);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + m_bb = e1->dest;
> > + m_preheader_bb = e1->src;
> > + class loop *loop = alloc_loop ();
> > + loop->header = e1->dest;
> > + add_loop (loop, e1->src->loop_father);
>
> There is create_empty_loop_on_edge, it does a little bit more
> than the above though.
That looks much larger than what I need.
>
> > + return idx;
> > +}
> > +
> > +/* Lower large/huge _BitInt statement mergeable or similar STMT which can be
> > + lowered using iteration from the least significant limb up to the most
> > + significant limb. For large _BitInt it is emitted as straight line code
> > + before current location, for huge _BitInt as a loop handling two limbs
> > + at once, followed by handling up to 2 limbs in straight line code (at most
> > + one full and one partial limb). It can also handle EQ_EXPR/NE_EXPR
> > + comparisons, in that case CMP_CODE should be the comparison code and
> > + CMP_OP1/CMP_OP2 the comparison operands. */
> > +
> > +tree
> > +bitint_large_huge::lower_mergeable_stmt (gimple *stmt, tree_code &cmp_code,
> > + tree cmp_op1, tree cmp_op2)
> > +{
> > + bool eq_p = cmp_code != ERROR_MARK;
> > + tree type;
> > + if (eq_p)
> > + type = TREE_TYPE (cmp_op1);
> > + else
> > + type = TREE_TYPE (gimple_assign_lhs (stmt));
> > + gcc_assert (TREE_CODE (type) == BITINT_TYPE);
> > + bitint_prec_kind kind = bitint_precision_kind (type);
> > + gcc_assert (kind >= bitint_prec_large);
> > + gimple *g;
> > + tree lhs = gimple_get_lhs (stmt);
> > + tree rhs1, lhs_type = lhs ? TREE_TYPE (lhs) : NULL_TREE;
> > + if (lhs
> > + && TREE_CODE (lhs) == SSA_NAME
> > + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> > + {
> > + int p = var_to_partition (m_map, lhs);
> > + gcc_assert (m_vars[p] != NULL_TREE);
> > + m_lhs = lhs = m_vars[p];
> > + }
> > + unsigned cnt, rem = 0, end = 0, prec = TYPE_PRECISION (type);
> > + bool sext = false;
> > + tree ext = NULL_TREE, store_operand = NULL_TREE;
> > + bool eh = false;
> > + basic_block eh_pad = NULL;
> > + if (gimple_store_p (stmt))
> > + {
> > + store_operand = gimple_assign_rhs1 (stmt);
> > + eh = stmt_ends_bb_p (stmt);
> > + if (eh)
> > + {
> > + edge e;
> > + edge_iterator ei;
> > + basic_block bb = gimple_bb (stmt);
> > +
> > + FOR_EACH_EDGE (e, ei, bb->succs)
> > + if (e->flags & EDGE_EH)
> > + {
> > + eh_pad = e->dest;
> > + break;
> > + }
> > + }
> > + }
> > + if ((store_operand
> > + && TREE_CODE (store_operand) == SSA_NAME
> > + && (m_names == NULL
> > + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (store_operand)))
> > + && gimple_assign_cast_p (SSA_NAME_DEF_STMT (store_operand)))
> > + || gimple_assign_cast_p (stmt))
> > + {
> > + rhs1 = gimple_assign_rhs1 (store_operand
> > + ? SSA_NAME_DEF_STMT (store_operand)
> > + : stmt);
> > + /* Optimize mergeable ops ending with widening cast to _BitInt
> > + (or followed by store). We can lower just the limbs of the
> > + cast operand and widen afterwards. */
> > + if (TREE_CODE (rhs1) == SSA_NAME
> > + && (m_names == NULL
> > + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1)))
> > + && TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
> > + && (CEIL ((unsigned) TYPE_PRECISION (TREE_TYPE (rhs1)),
> > + limb_prec) < CEIL (prec, limb_prec)
> > + || (kind == bitint_prec_huge
> > + && TYPE_PRECISION (TREE_TYPE (rhs1)) < prec)))
> > + {
> > + store_operand = rhs1;
> > + prec = TYPE_PRECISION (TREE_TYPE (rhs1));
> > + kind = bitint_precision_kind (TREE_TYPE (rhs1));
> > + if (!TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> > + sext = true;
> > + }
> > + }
> > + tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
> > + if (kind == bitint_prec_large)
> > + cnt = CEIL (prec, limb_prec);
> > + else
> > + {
> > + rem = (prec % (2 * limb_prec));
> > + end = (prec - rem) / limb_prec;
> > + cnt = 2 + CEIL (rem, limb_prec);
> > + idx = idx_first = create_loop (size_zero_node, &idx_next);
> > + }
> > +
> > + basic_block edge_bb = NULL;
> > + if (eq_p)
> > + {
> > + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > + gsi_prev (&gsi);
> > + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> > + edge_bb = e->src;
> > + if (kind == bitint_prec_large)
> > + {
> > + m_gsi = gsi_last_bb (edge_bb);
> > + if (!gsi_end_p (m_gsi))
> > + gsi_next (&m_gsi);
> > + }
> > + }
> > + else
> > + m_after_stmt = stmt;
> > + if (kind != bitint_prec_large)
> > + m_upwards_2limb = end;
> > +
> > + for (unsigned i = 0; i < cnt; i++)
> > + {
> > + m_data_cnt = 0;
> > + if (kind == bitint_prec_large)
> > + idx = size_int (i);
> > + else if (i >= 2)
> > + idx = size_int (end + (i > 2));
> > + if (eq_p)
> > + {
> > + rhs1 = handle_operand (cmp_op1, idx);
> > + tree rhs2 = handle_operand (cmp_op2, idx);
> > + g = gimple_build_cond (NE_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + e1->flags = EDGE_FALSE_VALUE;
> > + edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> > + e1->probability = profile_probability::unlikely ();
> > + e2->probability = e1->probability.invert ();
> > + if (i == 0)
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + }
> > + else
> > + {
> > + if (store_operand)
> > + rhs1 = handle_operand (store_operand, idx);
> > + else
> > + rhs1 = handle_stmt (stmt, idx);
> > + tree l = limb_access (lhs_type, lhs, idx, true);
> > + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
> > + rhs1 = add_cast (TREE_TYPE (l), rhs1);
> > + if (sext && i == cnt - 1)
> > + ext = rhs1;
> > + g = gimple_build_assign (l, rhs1);
> > + insert_before (g);
> > + if (eh)
> > + {
> > + maybe_duplicate_eh_stmt (g, stmt);
> > + if (eh_pad)
> > + {
> > + edge e = split_block (gsi_bb (m_gsi), g);
> > + m_gsi = gsi_after_labels (e->dest);
> > + make_edge (e->src, eh_pad, EDGE_EH)->probability
> > + = profile_probability::very_unlikely ();
> > + }
> > + }
> > + }
> > + m_first = false;
> > + if (kind == bitint_prec_huge && i <= 1)
> > + {
> > + if (i == 0)
> > + {
> > + idx = make_ssa_name (sizetype);
> > + g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
> > + size_one_node);
> > + insert_before (g);
> > + }
> > + else
> > + {
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
> > + size_int (2));
> > + insert_before (g);
> > + g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + if (eq_p)
> > + m_gsi = gsi_after_labels (edge_bb);
> > + else
> > + m_gsi = gsi_for_stmt (stmt);
> > + }
> > + }
> > + }
> > +
> > + if (prec != (unsigned) TYPE_PRECISION (type)
> > + && (CEIL ((unsigned) TYPE_PRECISION (type), limb_prec)
> > + > CEIL (prec, limb_prec)))
> > + {
> > + if (sext)
> > + {
> > + ext = add_cast (signed_type_for (m_limb_type), ext);
> > + tree lpm1 = build_int_cst (unsigned_type_node,
> > + limb_prec - 1);
> > + tree n = make_ssa_name (TREE_TYPE (ext));
> > + g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
> > + insert_before (g);
> > + ext = add_cast (m_limb_type, n);
> > + }
> > + else
> > + ext = build_zero_cst (m_limb_type);
> > + kind = bitint_precision_kind (type);
> > + unsigned start = CEIL (prec, limb_prec);
> > + prec = TYPE_PRECISION (type);
> > + idx = idx_first = idx_next = NULL_TREE;
> > + if (prec <= (start + 2) * limb_prec)
> > + kind = bitint_prec_large;
> > + if (kind == bitint_prec_large)
> > + cnt = CEIL (prec, limb_prec) - start;
> > + else
> > + {
> > + rem = prec % limb_prec;
> > + end = (prec - rem) / limb_prec;
> > + cnt = 1 + (rem != 0);
> > + idx = create_loop (size_int (start), &idx_next);
> > + }
> > + for (unsigned i = 0; i < cnt; i++)
> > + {
> > + if (kind == bitint_prec_large)
> > + idx = size_int (start + i);
> > + else if (i == 1)
> > + idx = size_int (end);
> > + rhs1 = ext;
> > + tree l = limb_access (lhs_type, lhs, idx, true);
> > + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
> > + rhs1 = add_cast (TREE_TYPE (l), rhs1);
> > + g = gimple_build_assign (l, rhs1);
> > + insert_before (g);
> > + if (kind == bitint_prec_huge && i == 0)
> > + {
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
> > + size_one_node);
> > + insert_before (g);
> > + g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + m_gsi = gsi_for_stmt (stmt);
> > + }
> > + }
> > + }
> > +
> > + if (gimple_store_p (stmt))
> > + {
> > + unlink_stmt_vdef (stmt);
> > + release_ssa_name (gimple_vdef (stmt));
> > + gsi_remove (&m_gsi, true);
> > + }
> > + if (eq_p)
> > + {
> > + lhs = make_ssa_name (boolean_type_node);
> > + basic_block bb = gimple_bb (stmt);
> > + gphi *phi = create_phi_node (lhs, bb);
> > + edge e = find_edge (gsi_bb (m_gsi), bb);
> > + unsigned int n = EDGE_COUNT (bb->preds);
> > + for (unsigned int i = 0; i < n; i++)
> > + {
> > + edge e2 = EDGE_PRED (bb, i);
> > + add_phi_arg (phi, e == e2 ? boolean_true_node : boolean_false_node,
> > + e2, UNKNOWN_LOCATION);
> > + }
> > + cmp_code = cmp_code == EQ_EXPR ? NE_EXPR : EQ_EXPR;
> > + return lhs;
> > + }
> > + else
> > + return NULL_TREE;
> > +}
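[Not part of the patch: the cnt/rem/end computation above partitions the limbs between the 2-limbs-per-iteration loop and the trailing straight-line code. An illustrative Python model, with assumed names and a dict result for readability:]

```python
def ceil_div(a, b):
    return -(-a // b)

def mergeable_schedule(prec, limb_prec=64, kind="huge"):
    # Mirrors the cnt/rem/end computation in lower_mergeable_stmt:
    # large _BitInt: every limb handled in straight-line code;
    # huge _BitInt: a loop over pairs of limbs, then the remainder
    # (at most one full and one partial limb) in straight-line code.
    if kind == "large":
        return {"loop_limbs": 0, "straight_line": ceil_div(prec, limb_prec)}
    rem = prec % (2 * limb_prec)
    end = (prec - rem) // limb_prec      # limbs covered by the 2-at-a-time loop
    cnt = 2 + ceil_div(rem, limb_prec)   # 2 loop-body emissions + leftovers
    return {"loop_limbs": end, "straight_line": cnt - 2}
```

For example, with 64-bit limbs a huge _BitInt(320) gets a loop over 4 limbs plus one straight-line limb, while _BitInt(256) needs no straight-line tail at all.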
> > +
> > +/* Handle a large/huge _BitInt comparison statement STMT other than
> > + EQ_EXPR/NE_EXPR. CMP_CODE, CMP_OP1 and CMP_OP2 meaning is like in
> > + lower_mergeable_stmt. The {GT,GE,LT,LE}_EXPR comparisons are
> > + lowered by iteration from the most significant limb downwards to
> > + the least significant one, for large _BitInt in straight line code,
> > + otherwise with most significant limb handled in
> > + straight line code followed by a loop handling one limb at a time.
> > + Comparisons with unsigned huge _BitInt with precisions which are
> > + multiples of limb precision can use just the loop and don't need to
> > + handle most significant limb before the loop. The loop or straight
> > + line code jumps to final basic block if a particular pair of limbs
> > + is not equal. */
> > +
> > +tree
> > +bitint_large_huge::lower_comparison_stmt (gimple *stmt, tree_code &cmp_code,
> > + tree cmp_op1, tree cmp_op2)
> > +{
> > + tree type = TREE_TYPE (cmp_op1);
> > + gcc_assert (TREE_CODE (type) == BITINT_TYPE);
> > + bitint_prec_kind kind = bitint_precision_kind (type);
> > + gcc_assert (kind >= bitint_prec_large);
> > + gimple *g;
> > + if (!TYPE_UNSIGNED (type)
> > + && integer_zerop (cmp_op2)
> > + && (cmp_code == GE_EXPR || cmp_code == LT_EXPR))
> > + {
> > + unsigned end = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec) - 1;
> > + tree idx = size_int (end);
> > + m_data_cnt = 0;
> > + tree rhs1 = handle_operand (cmp_op1, idx);
> > + if (TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> > + {
> > + tree stype = signed_type_for (TREE_TYPE (rhs1));
> > + rhs1 = add_cast (stype, rhs1);
> > + }
> > + tree lhs = make_ssa_name (boolean_type_node);
> > + g = gimple_build_assign (lhs, cmp_code, rhs1,
> > + build_zero_cst (TREE_TYPE (rhs1)));
> > + insert_before (g);
> > + cmp_code = NE_EXPR;
> > + return lhs;
> > + }
> > +
> > + unsigned cnt, rem = 0, end = 0;
> > + tree idx = NULL_TREE, idx_next = NULL_TREE;
> > + if (kind == bitint_prec_large)
> > + cnt = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec);
> > + else
> > + {
> > + rem = ((unsigned) TYPE_PRECISION (type) % limb_prec);
> > + if (rem == 0 && !TYPE_UNSIGNED (type))
> > + rem = limb_prec;
> > + end = ((unsigned) TYPE_PRECISION (type) - rem) / limb_prec;
> > + cnt = 1 + (rem != 0);
> > + }
> > +
> > + basic_block edge_bb = NULL;
> > + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > + gsi_prev (&gsi);
> > + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> > + edge_bb = e->src;
> > + m_gsi = gsi_last_bb (edge_bb);
> > + if (!gsi_end_p (m_gsi))
> > + gsi_next (&m_gsi);
> > +
> > + edge *edges = XALLOCAVEC (edge, cnt * 2);
> > + for (unsigned i = 0; i < cnt; i++)
> > + {
> > + m_data_cnt = 0;
> > + if (kind == bitint_prec_large)
> > + idx = size_int (cnt - i - 1);
> > + else if (i == cnt - 1)
> > + idx = create_loop (size_int (end - 1), &idx_next);
> > + else
> > + idx = size_int (end);
> > + tree rhs1 = handle_operand (cmp_op1, idx);
> > + tree rhs2 = handle_operand (cmp_op2, idx);
> > + if (i == 0
> > + && !TYPE_UNSIGNED (type)
> > + && TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> > + {
> > + tree stype = signed_type_for (TREE_TYPE (rhs1));
> > + rhs1 = add_cast (stype, rhs1);
> > + rhs2 = add_cast (stype, rhs2);
> > + }
> > + g = gimple_build_cond (GT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + e1->flags = EDGE_FALSE_VALUE;
> > + edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> > + e1->probability = profile_probability::likely ();
> > + e2->probability = e1->probability.invert ();
> > + if (i == 0)
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + edges[2 * i] = e2;
> > + g = gimple_build_cond (LT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + e1 = split_block (gsi_bb (m_gsi), g);
> > + e1->flags = EDGE_FALSE_VALUE;
> > + e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> > + e1->probability = profile_probability::unlikely ();
> > + e2->probability = e1->probability.invert ();
> > + m_gsi = gsi_after_labels (e1->dest);
> > + edges[2 * i + 1] = e2;
> > + m_first = false;
> > + if (kind == bitint_prec_huge && i == cnt - 1)
> > + {
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> > + insert_before (g);
> > + g = gimple_build_cond (NE_EXPR, idx, size_zero_node,
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge true_edge, false_edge;
> > + extract_true_false_edges_from_block (gsi_bb (m_gsi),
> > + &true_edge, &false_edge);
> > + m_gsi = gsi_after_labels (false_edge->dest);
> > + }
> > + }
> > +
> > + tree lhs = make_ssa_name (boolean_type_node);
> > + basic_block bb = gimple_bb (stmt);
> > + gphi *phi = create_phi_node (lhs, bb);
> > + for (unsigned int i = 0; i < cnt * 2; i++)
> > + {
> > + tree val = ((cmp_code == GT_EXPR || cmp_code == GE_EXPR)
> > + ^ (i & 1)) ? boolean_true_node : boolean_false_node;
> > + add_phi_arg (phi, val, edges[i], UNKNOWN_LOCATION);
> > + }
> > + add_phi_arg (phi, (cmp_code == GE_EXPR || cmp_code == LE_EXPR)
> > + ? boolean_true_node : boolean_false_node,
> > + find_edge (gsi_bb (m_gsi), bb), UNKNOWN_LOCATION);
> > + cmp_code = NE_EXPR;
> > + return lhs;
> > +}
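[Not part of the patch: a Python model of the comparison strategy above. Limbs are walked from most significant downwards and the first unequal pair decides; only the top limb carries the sign, so only it is compared signed. The helper names are illustrative.]

```python
LIMB_PREC = 64

def as_signed(limb):
    # Reinterpret a limb's bit pattern as a signed value.
    return limb - (1 << LIMB_PREC) if limb >> (LIMB_PREC - 1) else limb

def bitint_less_than(a_limbs, b_limbs, type_signed):
    # Limbs are stored least significant first; iterate downwards from
    # the most significant limb, as lower_comparison_stmt does.
    for i in reversed(range(len(a_limbs))):
        a, b = a_limbs[i], b_limbs[i]
        if type_signed and i == len(a_limbs) - 1:
            a, b = as_signed(a), as_signed(b)
        if a != b:
            return a < b
    return False   # all limbs equal: LT is false (GE/LE would be true)
```

The `x < 0` / `x >= 0` special case earlier in the function is the degenerate form of this: for signed types it reduces to testing just the sign of the most significant limb.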
> > +
> > +/* Lower large/huge _BitInt left and right shift except for left
> > + shift by < limb_prec constant. */
> > +
> > +void
> > +bitint_large_huge::lower_shift_stmt (tree obj, gimple *stmt)
> > +{
> > + tree rhs1 = gimple_assign_rhs1 (stmt);
> > + tree lhs = gimple_assign_lhs (stmt);
> > + tree_code rhs_code = gimple_assign_rhs_code (stmt);
> > + tree type = TREE_TYPE (rhs1);
> > + gimple *final_stmt = gsi_stmt (m_gsi);
> > + gcc_assert (TREE_CODE (type) == BITINT_TYPE
> > + && bitint_precision_kind (type) >= bitint_prec_large);
> > + int prec = TYPE_PRECISION (type);
> > + tree n = gimple_assign_rhs2 (stmt), n1, n2, n3, n4;
> > + gimple *g;
> > + if (obj == NULL_TREE)
> > + {
> > + int part = var_to_partition (m_map, lhs);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + obj = m_vars[part];
> > + }
> > + /* Preparation code common for both left and right shifts.
> > + unsigned n1 = n % limb_prec;
> > + size_t n2 = n / limb_prec;
> > + size_t n3 = n1 != 0;
> > + unsigned n4 = (limb_prec - n1) % limb_prec;
> > + (for power of 2 limb_prec n4 can be -n1 & (limb_prec - 1)). */
> > + if (TREE_CODE (n) == INTEGER_CST)
> > + {
> > + tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
> > + n1 = int_const_binop (TRUNC_MOD_EXPR, n, lp);
> > + n2 = fold_convert (sizetype, int_const_binop (TRUNC_DIV_EXPR, n, lp));
> > + n3 = size_int (!integer_zerop (n1));
> > + n4 = int_const_binop (TRUNC_MOD_EXPR,
> > + int_const_binop (MINUS_EXPR, lp, n1), lp);
> > + }
> > + else
> > + {
> > + n1 = make_ssa_name (TREE_TYPE (n));
> > + n2 = make_ssa_name (sizetype);
> > + n3 = make_ssa_name (sizetype);
> > + n4 = make_ssa_name (TREE_TYPE (n));
> > + if (pow2p_hwi (limb_prec))
> > + {
> > + tree lpm1 = build_int_cst (TREE_TYPE (n), limb_prec - 1);
> > + g = gimple_build_assign (n1, BIT_AND_EXPR, n, lpm1);
> > + insert_before (g);
> > + g = gimple_build_assign (useless_type_conversion_p (sizetype,
> > + TREE_TYPE (n))
> > + ? n2 : make_ssa_name (TREE_TYPE (n)),
> > + RSHIFT_EXPR, n,
> > + build_int_cst (TREE_TYPE (n),
> > + exact_log2 (limb_prec)));
> > + insert_before (g);
> > + if (gimple_assign_lhs (g) != n2)
> > + {
> > + g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
> > + insert_before (g);
> > + }
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
> > + NEGATE_EXPR, n1);
> > + insert_before (g);
> > + g = gimple_build_assign (n4, BIT_AND_EXPR, gimple_assign_lhs (g),
> > + lpm1);
> > + insert_before (g);
> > + }
> > + else
> > + {
> > + tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
> > + g = gimple_build_assign (n1, TRUNC_MOD_EXPR, n, lp);
> > + insert_before (g);
> > + g = gimple_build_assign (useless_type_conversion_p (sizetype,
> > + TREE_TYPE (n))
> > + ? n2 : make_ssa_name (TREE_TYPE (n)),
> > + TRUNC_DIV_EXPR, n, lp);
> > + insert_before (g);
> > + if (gimple_assign_lhs (g) != n2)
> > + {
> > + g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
> > + insert_before (g);
> > + }
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
> > + MINUS_EXPR, lp, n1);
> > + insert_before (g);
> > + g = gimple_build_assign (n4, TRUNC_MOD_EXPR, gimple_assign_lhs (g),
> > + lp);
> > + insert_before (g);
> > + }
> > + g = gimple_build_assign (make_ssa_name (boolean_type_node), NE_EXPR, n1,
> > + build_zero_cst (TREE_TYPE (n)));
> > + insert_before (g);
> > + g = gimple_build_assign (n3, NOP_EXPR, gimple_assign_lhs (g));
> > + insert_before (g);
> > + }
> > + tree p = build_int_cst (sizetype,
> > + prec / limb_prec - (prec % limb_prec == 0));
> > + if (rhs_code == RSHIFT_EXPR)
> > + {
> > + /* Lower
> > + dst = src >> n;
> > + as
> > + unsigned n1 = n % limb_prec;
> > + size_t n2 = n / limb_prec;
> > + size_t n3 = n1 != 0;
> > + unsigned n4 = (limb_prec - n1) % limb_prec;
> > + size_t idx;
> > + size_t p = prec / limb_prec - (prec % limb_prec == 0);
> > + int signed_p = (typeof (src) -1) < 0;
> > + for (idx = n2; idx < ((!signed_p && (prec % limb_prec == 0))
> > + ? p : p - n3); ++idx)
> > + dst[idx - n2] = (src[idx] >> n1) | (src[idx + n3] << n4);
> > + limb_type ext;
> > + if (prec % limb_prec == 0)
> > + ext = src[p];
> > + else if (signed_p)
> > + ext = ((signed limb_type) (src[p] << (limb_prec
> > + - (prec % limb_prec))))
> > + >> (limb_prec - (prec % limb_prec));
> > + else
> > + ext = src[p] & (((limb_type) 1 << (prec % limb_prec)) - 1);
> > + if (!signed_p && (prec % limb_prec == 0))
> > + ;
> > + else if (idx < prec / limb_prec)
> > + {
> > + dst[idx - n2] = (src[idx] >> n1) | (ext << n4);
> > + ++idx;
> > + }
> > + idx -= n2;
> > + if (signed_p)
> > + {
> > + dst[idx] = ((signed limb_type) ext) >> n1;
> > + ext = ((signed limb_type) ext) >> (limb_prec - 1);
> > + }
> > + else
> > + {
> > + dst[idx] = ext >> n1;
> > + ext = 0;
> > + }
> > + for (++idx; idx <= p; ++idx)
> > + dst[idx] = ext; */
> > + tree pmn3;
> > + if (TYPE_UNSIGNED (type) && prec % limb_prec == 0)
> > + pmn3 = p;
> > + else if (TREE_CODE (n3) == INTEGER_CST)
> > + pmn3 = int_const_binop (MINUS_EXPR, p, n3);
> > + else
> > + {
> > + pmn3 = make_ssa_name (sizetype);
> > + g = gimple_build_assign (pmn3, MINUS_EXPR, p, n3);
> > + insert_before (g);
> > + }
> > + g = gimple_build_cond (LT_EXPR, n2, pmn3, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + edge e2 = split_block (e1->dest, (gimple *) NULL);
> > + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + e3->probability = profile_probability::unlikely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + tree idx_next;
> > + tree idx = create_loop (n2, &idx_next);
> > + tree idxmn2 = make_ssa_name (sizetype);
> > + tree idxpn3 = make_ssa_name (sizetype);
> > + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> > + insert_before (g);
> > + g = gimple_build_assign (idxpn3, PLUS_EXPR, idx, n3);
> > + insert_before (g);
> > + m_data_cnt = 0;
> > + tree t1 = handle_operand (rhs1, idx);
> > + m_first = false;
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + RSHIFT_EXPR, t1, n1);
> > + insert_before (g);
> > + t1 = gimple_assign_lhs (g);
> > + if (!integer_zerop (n3))
> > + {
> > + m_data_cnt = 0;
> > + tree t2 = handle_operand (rhs1, idxpn3);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + LSHIFT_EXPR, t2, n4);
> > + insert_before (g);
> > + t2 = gimple_assign_lhs (g);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + BIT_IOR_EXPR, t1, t2);
> > + insert_before (g);
> > + t1 = gimple_assign_lhs (g);
> > + }
> > + tree l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
> > + g = gimple_build_assign (l, t1);
> > + insert_before (g);
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> > + insert_before (g);
> > + g = gimple_build_cond (LT_EXPR, idx_next, pmn3, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + idx = make_ssa_name (sizetype);
> > + m_gsi = gsi_for_stmt (final_stmt);
> > + gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
> > + e1 = find_edge (e1->src, gsi_bb (m_gsi));
> > + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> > + add_phi_arg (phi, n2, e1, UNKNOWN_LOCATION);
> > + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> > + m_data_cnt = 0;
> > + tree ms = handle_operand (rhs1, p);
> > + tree ext = ms;
> > + if (!types_compatible_p (TREE_TYPE (ms), m_limb_type))
> > + ext = add_cast (m_limb_type, ms);
> > + if (!(TYPE_UNSIGNED (type) && prec % limb_prec == 0)
> > + && !integer_zerop (n3))
> > + {
> > + g = gimple_build_cond (LT_EXPR, idx, p, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + e1 = split_block (gsi_bb (m_gsi), g);
> > + e2 = split_block (e1->dest, (gimple *) NULL);
> > + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + e3->probability = profile_probability::unlikely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + m_data_cnt = 0;
> > + t1 = handle_operand (rhs1, idx);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + RSHIFT_EXPR, t1, n1);
> > + insert_before (g);
> > + t1 = gimple_assign_lhs (g);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + LSHIFT_EXPR, ext, n4);
> > + insert_before (g);
> > + tree t2 = gimple_assign_lhs (g);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + BIT_IOR_EXPR, t1, t2);
> > + insert_before (g);
> > + t1 = gimple_assign_lhs (g);
> > + idxmn2 = make_ssa_name (sizetype);
> > + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> > + insert_before (g);
> > + l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
> > + g = gimple_build_assign (l, t1);
> > + insert_before (g);
> > + idx_next = make_ssa_name (sizetype);
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> > + insert_before (g);
> > + m_gsi = gsi_for_stmt (final_stmt);
> > + tree nidx = make_ssa_name (sizetype);
> > + phi = create_phi_node (nidx, gsi_bb (m_gsi));
> > + e1 = find_edge (e1->src, gsi_bb (m_gsi));
> > + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> > + add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
> > + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> > + idx = nidx;
> > + }
> > + g = gimple_build_assign (make_ssa_name (sizetype), MINUS_EXPR, idx, n2);
> > + insert_before (g);
> > + idx = gimple_assign_lhs (g);
> > + tree sext = ext;
> > + if (!TYPE_UNSIGNED (type))
> > + sext = add_cast (signed_type_for (m_limb_type), ext);
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
> > + RSHIFT_EXPR, sext, n1);
> > + insert_before (g);
> > + t1 = gimple_assign_lhs (g);
> > + if (!TYPE_UNSIGNED (type))
> > + {
> > + t1 = add_cast (m_limb_type, t1);
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
> > + RSHIFT_EXPR, sext,
> > + build_int_cst (TREE_TYPE (n),
> > + limb_prec - 1));
> > + insert_before (g);
> > + ext = add_cast (m_limb_type, gimple_assign_lhs (g));
> > + }
> > + else
> > + ext = build_zero_cst (m_limb_type);
> > + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > + g = gimple_build_assign (l, t1);
> > + insert_before (g);
> > + g = gimple_build_assign (make_ssa_name (sizetype), PLUS_EXPR, idx,
> > + size_one_node);
> > + insert_before (g);
> > + idx = gimple_assign_lhs (g);
> > + g = gimple_build_cond (LE_EXPR, idx, p, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + e1 = split_block (gsi_bb (m_gsi), g);
> > + e2 = split_block (e1->dest, (gimple *) NULL);
> > + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + e3->probability = profile_probability::unlikely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + idx = create_loop (idx, &idx_next);
> > + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > + g = gimple_build_assign (l, ext);
> > + insert_before (g);
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> > + insert_before (g);
> > + g = gimple_build_cond (LE_EXPR, idx_next, p, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + }
> > + else
> > + {
> > + /* Lower
> > + dst = src << n;
> > + as
> > + unsigned n1 = n % limb_prec;
> > + size_t n2 = n / limb_prec;
> > + size_t n3 = n1 != 0;
> > + unsigned n4 = (limb_prec - n1) % limb_prec;
> > + size_t idx;
> > + size_t p = prec / limb_prec - (prec % limb_prec == 0);
> > + for (idx = p; (ssize_t) idx >= (ssize_t) (n2 + n3); --idx)
> > + dst[idx] = (src[idx - n2] << n1) | (src[idx - n2 - n3] >> n4);
> > + if (n1)
> > + {
> > + dst[idx] = src[idx - n2] << n1;
> > + --idx;
> > + }
> > + for (; (ssize_t) idx >= 0; --idx)
> > + dst[idx] = 0; */
> > + tree n2pn3;
> > + if (TREE_CODE (n2) == INTEGER_CST && TREE_CODE (n3) == INTEGER_CST)
> > + n2pn3 = int_const_binop (PLUS_EXPR, n2, n3);
> > + else
> > + {
> > + n2pn3 = make_ssa_name (sizetype);
> > + g = gimple_build_assign (n2pn3, PLUS_EXPR, n2, n3);
> > + insert_before (g);
> > + }
> > + /* For LSHIFT_EXPR, we can use handle_operand with non-INTEGER_CST
> > + idx even to access the most significant partial limb. */
> > + m_var_msb = true;
> > + if (integer_zerop (n3))
> > + /* For n3 == 0, p >= n2 + n3 is always true for all valid shift
> > + counts. Emit an if (true) condition that can be optimized later. */
> > + g = gimple_build_cond (NE_EXPR, boolean_true_node, boolean_false_node,
> > + NULL_TREE, NULL_TREE);
> > + else
> > + g = gimple_build_cond (LE_EXPR, n2pn3, p, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + edge e2 = split_block (e1->dest, (gimple *) NULL);
> > + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + e3->probability = profile_probability::unlikely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + tree idx_next;
> > + tree idx = create_loop (p, &idx_next);
> > + tree idxmn2 = make_ssa_name (sizetype);
> > + tree idxmn2mn3 = make_ssa_name (sizetype);
> > + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> > + insert_before (g);
> > + g = gimple_build_assign (idxmn2mn3, MINUS_EXPR, idxmn2, n3);
> > + insert_before (g);
> > + m_data_cnt = 0;
> > + tree t1 = handle_operand (rhs1, idxmn2);
> > + m_first = false;
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + LSHIFT_EXPR, t1, n1);
> > + insert_before (g);
> > + t1 = gimple_assign_lhs (g);
> > + if (!integer_zerop (n3))
> > + {
> > + m_data_cnt = 0;
> > + tree t2 = handle_operand (rhs1, idxmn2mn3);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + RSHIFT_EXPR, t2, n4);
> > + insert_before (g);
> > + t2 = gimple_assign_lhs (g);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + BIT_IOR_EXPR, t1, t2);
> > + insert_before (g);
> > + t1 = gimple_assign_lhs (g);
> > + }
> > + tree l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > + g = gimple_build_assign (l, t1);
> > + insert_before (g);
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> > + insert_before (g);
> > + tree sn2pn3 = add_cast (ssizetype, n2pn3);
> > + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next), sn2pn3,
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + idx = make_ssa_name (sizetype);
> > + m_gsi = gsi_for_stmt (final_stmt);
> > + gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
> > + e1 = find_edge (e1->src, gsi_bb (m_gsi));
> > + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> > + add_phi_arg (phi, p, e1, UNKNOWN_LOCATION);
> > + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> > + m_data_cnt = 0;
> > + if (!integer_zerop (n3))
> > + {
> > + g = gimple_build_cond (NE_EXPR, n3, size_zero_node,
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + e1 = split_block (gsi_bb (m_gsi), g);
> > + e2 = split_block (e1->dest, (gimple *) NULL);
> > + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + e3->probability = profile_probability::unlikely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + idxmn2 = make_ssa_name (sizetype);
> > + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> > + insert_before (g);
> > + m_data_cnt = 0;
> > + t1 = handle_operand (rhs1, idxmn2);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + LSHIFT_EXPR, t1, n1);
> > + insert_before (g);
> > + t1 = gimple_assign_lhs (g);
> > + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > + g = gimple_build_assign (l, t1);
> > + insert_before (g);
> > + idx_next = make_ssa_name (sizetype);
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> > + insert_before (g);
> > + m_gsi = gsi_for_stmt (final_stmt);
> > + tree nidx = make_ssa_name (sizetype);
> > + phi = create_phi_node (nidx, gsi_bb (m_gsi));
> > + e1 = find_edge (e1->src, gsi_bb (m_gsi));
> > + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> > + add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
> > + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> > + idx = nidx;
> > + }
> > + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx),
> > + ssize_int (0), NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + e1 = split_block (gsi_bb (m_gsi), g);
> > + e2 = split_block (e1->dest, (gimple *) NULL);
> > + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + e3->probability = profile_probability::unlikely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + idx = create_loop (idx, &idx_next);
> > + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > + g = gimple_build_assign (l, build_zero_cst (m_limb_type));
> > + insert_before (g);
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> > + insert_before (g);
> > + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next),
> > + ssize_int (0), NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + }
> > +}
> > +
> > +/* Lower large/huge _BitInt multiplication or division. */
> > +
> > +void
> > +bitint_large_huge::lower_muldiv_stmt (tree obj, gimple *stmt)
> > +{
> > + tree rhs1 = gimple_assign_rhs1 (stmt);
> > + tree rhs2 = gimple_assign_rhs2 (stmt);
> > + tree lhs = gimple_assign_lhs (stmt);
> > + tree_code rhs_code = gimple_assign_rhs_code (stmt);
> > + tree type = TREE_TYPE (rhs1);
> > + gcc_assert (TREE_CODE (type) == BITINT_TYPE
> > + && bitint_precision_kind (type) >= bitint_prec_large);
> > + int prec = TYPE_PRECISION (type), prec1, prec2;
> > + rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec1);
> > + rhs2 = handle_operand_addr (rhs2, stmt, NULL, &prec2);
> > + if (obj == NULL_TREE)
> > + {
> > + int part = var_to_partition (m_map, lhs);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + obj = m_vars[part];
> > + lhs = build_fold_addr_expr (obj);
> > + }
> > + else
> > + {
> > + lhs = build_fold_addr_expr (obj);
> > + lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
> > + NULL_TREE, true, GSI_SAME_STMT);
> > + }
> > + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> > + gimple *g;
> > + switch (rhs_code)
> > + {
> > + case MULT_EXPR:
> > + g = gimple_build_call_internal (IFN_MULBITINT, 6,
> > + lhs, build_int_cst (sitype, prec),
> > + rhs1, build_int_cst (sitype, prec1),
> > + rhs2, build_int_cst (sitype, prec2));
> > + insert_before (g);
> > + break;
> > + case TRUNC_DIV_EXPR:
> > + g = gimple_build_call_internal (IFN_DIVMODBITINT, 8,
> > + lhs, build_int_cst (sitype, prec),
> > + null_pointer_node,
> > + build_int_cst (sitype, 0),
> > + rhs1, build_int_cst (sitype, prec1),
> > + rhs2, build_int_cst (sitype, prec2));
> > + if (!stmt_ends_bb_p (stmt))
> > + gimple_call_set_nothrow (as_a <gcall *> (g), true);
> > + insert_before (g);
> > + break;
> > + case TRUNC_MOD_EXPR:
> > + g = gimple_build_call_internal (IFN_DIVMODBITINT, 8, null_pointer_node,
> > + build_int_cst (sitype, 0),
> > + lhs, build_int_cst (sitype, prec),
> > + rhs1, build_int_cst (sitype, prec1),
> > + rhs2, build_int_cst (sitype, prec2));
> > + if (!stmt_ends_bb_p (stmt))
> > + gimple_call_set_nothrow (as_a <gcall *> (g), true);
> > + insert_before (g);
> > + break;
> > + default:
> > + gcc_unreachable ();
> > + }
> > + if (stmt_ends_bb_p (stmt))
> > + {
> > + maybe_duplicate_eh_stmt (g, stmt);
> > + edge e1;
> > + edge_iterator ei;
> > + basic_block bb = gimple_bb (stmt);
> > +
> > + FOR_EACH_EDGE (e1, ei, bb->succs)
> > + if (e1->flags & EDGE_EH)
> > + break;
> > + if (e1)
> > + {
> > + edge e2 = split_block (gsi_bb (m_gsi), g);
> > + m_gsi = gsi_after_labels (e2->dest);
> > + make_edge (e2->src, e1->dest, EDGE_EH)->probability
> > + = profile_probability::very_unlikely ();
> > + }
> > + }
> > +}
> > +
> > +/* Lower large/huge _BitInt conversion to/from floating point. */
> > +
> > +void
> > +bitint_large_huge::lower_float_conv_stmt (tree obj, gimple *stmt)
> > +{
> > + tree rhs1 = gimple_assign_rhs1 (stmt);
> > + tree lhs = gimple_assign_lhs (stmt);
> > + tree_code rhs_code = gimple_assign_rhs_code (stmt);
> > + if (DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (rhs1)))
> > + || DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (lhs))))
> > + {
> > + sorry_at (gimple_location (stmt),
> > + "unsupported conversion between %<_BitInt(%d)%> and %qT",
> > + rhs_code == FIX_TRUNC_EXPR
> > + ? TYPE_PRECISION (TREE_TYPE (lhs))
> > + : TYPE_PRECISION (TREE_TYPE (rhs1)),
> > + rhs_code == FIX_TRUNC_EXPR
> > + ? TREE_TYPE (rhs1) : TREE_TYPE (lhs));
> > + if (rhs_code == FLOAT_EXPR)
> > + {
> > + gimple *g
> > + = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
> > + gsi_replace (&m_gsi, g, true);
> > + }
> > + return;
> > + }
> > + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> > + gimple *g;
> > + if (rhs_code == FIX_TRUNC_EXPR)
> > + {
> > + int prec = TYPE_PRECISION (TREE_TYPE (lhs));
> > + if (!TYPE_UNSIGNED (TREE_TYPE (lhs)))
> > + prec = -prec;
> > + if (obj == NULL_TREE)
> > + {
> > + int part = var_to_partition (m_map, lhs);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + obj = m_vars[part];
> > + lhs = build_fold_addr_expr (obj);
> > + }
> > + else
> > + {
> > + lhs = build_fold_addr_expr (obj);
> > + lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
> > + NULL_TREE, true, GSI_SAME_STMT);
> > + }
> > + scalar_mode from_mode
> > + = as_a <scalar_mode> (TYPE_MODE (TREE_TYPE (rhs1)));
> > +#ifdef HAVE_SFmode
> > + /* IEEE single is a full superset of both IEEE half and
> > + bfloat formats; convert to float first and then to _BitInt
> > + to avoid the need for another 2 library routines. */
> > + if ((REAL_MODE_FORMAT (from_mode) == &arm_bfloat_half_format
> > + || REAL_MODE_FORMAT (from_mode) == &ieee_half_format)
> > + && REAL_MODE_FORMAT (SFmode) == &ieee_single_format)
> > + {
> > + tree type = lang_hooks.types.type_for_mode (SFmode, 0);
> > + if (type)
> > + rhs1 = add_cast (type, rhs1);
> > + }
> > +#endif
> > + g = gimple_build_call_internal (IFN_FLOATTOBITINT, 3,
> > + lhs, build_int_cst (sitype, prec),
> > + rhs1);
> > + insert_before (g);
> > + }
> > + else
> > + {
> > + int prec;
> > + rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec);
> > + g = gimple_build_call_internal (IFN_BITINTTOFLOAT, 2,
> > + rhs1, build_int_cst (sitype, prec));
> > + gimple_call_set_lhs (g, lhs);
> > + if (!stmt_ends_bb_p (stmt))
> > + gimple_call_set_nothrow (as_a <gcall *> (g), true);
> > + gsi_replace (&m_gsi, g, true);
> > + }
> > +}
> > +
> > +/* Helper method for lower_addsub_overflow and lower_mul_overflow.
> > + If CHECK_ZERO is true, the caller wants to check whether all bits
> > + in [START, END) are zero; otherwise, whether the bits in
> > + [START, END) are either all zeros or all ones. L is the limb with
> > + index LIMB; START and END are measured in bits. */
> > +
> > +tree
> > +bitint_large_huge::arith_overflow_extract_bits (unsigned int start,
> > + unsigned int end, tree l,
> > + unsigned int limb,
> > + bool check_zero)
> > +{
> > + unsigned startlimb = start / limb_prec;
> > + unsigned endlimb = (end - 1) / limb_prec;
> > + gimple *g;
> > +
> > + if ((start % limb_prec) == 0 && (end % limb_prec) == 0)
> > + return l;
> > + if (startlimb == endlimb && limb == startlimb)
> > + {
> > + if (check_zero)
> > + {
> > + wide_int w = wi::shifted_mask (start % limb_prec,
> > + end - start, false, limb_prec);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + BIT_AND_EXPR, l,
> > + wide_int_to_tree (m_limb_type, w));
> > + insert_before (g);
> > + return gimple_assign_lhs (g);
> > + }
> > + unsigned int shift = start % limb_prec;
> > + if ((end % limb_prec) != 0)
> > + {
> > + unsigned int lshift = (-end) % limb_prec;
> > + shift += lshift;
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + LSHIFT_EXPR, l,
> > + build_int_cst (unsigned_type_node,
> > + lshift));
> > + insert_before (g);
> > + l = gimple_assign_lhs (g);
> > + }
> > + l = add_cast (signed_type_for (m_limb_type), l);
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> > + RSHIFT_EXPR, l,
> > + build_int_cst (unsigned_type_node, shift));
> > + insert_before (g);
> > + return add_cast (m_limb_type, gimple_assign_lhs (g));
> > + }
> > + else if (limb == startlimb)
> > + {
> > + if ((start % limb_prec) == 0)
> > + return l;
> > + if (!check_zero)
> > + l = add_cast (signed_type_for (m_limb_type), l);
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> > + RSHIFT_EXPR, l,
> > + build_int_cst (unsigned_type_node,
> > + start % limb_prec));
> > + insert_before (g);
> > + l = gimple_assign_lhs (g);
> > + if (!check_zero)
> > + l = add_cast (m_limb_type, l);
> > + return l;
> > + }
> > + else if (limb == endlimb)
> > + {
> > + if ((end % limb_prec) == 0)
> > + return l;
> > + if (check_zero)
> > + {
> > + wide_int w = wi::mask (end % limb_prec, false, limb_prec);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + BIT_AND_EXPR, l,
> > + wide_int_to_tree (m_limb_type, w));
> > + insert_before (g);
> > + return gimple_assign_lhs (g);
> > + }
> > + unsigned int shift = (-end) % limb_prec;
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + LSHIFT_EXPR, l,
> > + build_int_cst (unsigned_type_node, shift));
> > + insert_before (g);
> > + l = add_cast (signed_type_for (m_limb_type), gimple_assign_lhs (g));
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> > + RSHIFT_EXPR, l,
> > + build_int_cst (unsigned_type_node, shift));
> > + insert_before (g);
> > + return add_cast (m_limb_type, gimple_assign_lhs (g));
> > + }
> > + return l;
> > +}
> > +
> > +/* Helper method for lower_addsub_overflow and lower_mul_overflow. Store
> > + result including overflow flag into the right locations. */
> > +
> > +void
> > +bitint_large_huge::finish_arith_overflow (tree var, tree obj, tree type,
> > + tree ovf, tree lhs, tree orig_obj,
> > + gimple *stmt, tree_code code)
> > +{
> > + gimple *g;
> > +
> > + if (obj == NULL_TREE
> > + && (TREE_CODE (type) != BITINT_TYPE
> > + || bitint_precision_kind (type) < bitint_prec_large))
> > + {
> > + /* Add support for 3 or more limbs filled in from a normal integral
> > + type if this assert fails. If no target chooses a limb mode smaller
> > + than half of the largest supported normal integral type, this will
> > + not be needed. */
> > + gcc_assert (TYPE_PRECISION (type) <= 2 * limb_prec);
> > + tree lhs_type = type;
> > + if (TREE_CODE (type) == BITINT_TYPE
> > + && bitint_precision_kind (type) == bitint_prec_middle)
> > + lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (type),
> > + TYPE_UNSIGNED (type));
> > + tree r1 = limb_access (NULL_TREE, var, size_int (0), true);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type), r1);
> > + insert_before (g);
> > + r1 = gimple_assign_lhs (g);
> > + if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
> > + r1 = add_cast (lhs_type, r1);
> > + if (TYPE_PRECISION (lhs_type) > limb_prec)
> > + {
> > + tree r2 = limb_access (NULL_TREE, var, size_int (1), true);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type), r2);
> > + insert_before (g);
> > + r2 = gimple_assign_lhs (g);
> > + r2 = add_cast (lhs_type, r2);
> > + g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
> > + build_int_cst (unsigned_type_node,
> > + limb_prec));
> > + insert_before (g);
> > + g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
> > + gimple_assign_lhs (g));
> > + insert_before (g);
> > + r1 = gimple_assign_lhs (g);
> > + }
> > + if (lhs_type != type)
> > + r1 = add_cast (type, r1);
> > + ovf = add_cast (lhs_type, ovf);
> > + if (lhs_type != type)
> > + ovf = add_cast (type, ovf);
> > + g = gimple_build_assign (lhs, COMPLEX_EXPR, r1, ovf);
> > + m_gsi = gsi_for_stmt (stmt);
> > + gsi_replace (&m_gsi, g, true);
> > + }
> > + else
> > + {
> > + unsigned HOST_WIDE_INT nelts = 0;
> > + tree atype = NULL_TREE;
> > + if (obj)
> > + {
> > + nelts = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
> > + if (orig_obj == NULL_TREE)
> > + nelts >>= 1;
> > + atype = build_array_type_nelts (m_limb_type, nelts);
> > + }
> > + if (var && obj)
> > + {
> > + tree v1, v2;
> > + tree zero;
> > + if (orig_obj == NULL_TREE)
> > + {
> > + zero = build_zero_cst (build_pointer_type (TREE_TYPE (obj)));
> > + v1 = build2 (MEM_REF, atype,
> > + build_fold_addr_expr (unshare_expr (obj)), zero);
> > + }
> > + else if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
> > + v1 = build1 (VIEW_CONVERT_EXPR, atype, unshare_expr (obj));
> > + else
> > + v1 = unshare_expr (obj);
> > + zero = build_zero_cst (build_pointer_type (TREE_TYPE (var)));
> > + v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), zero);
> > + g = gimple_build_assign (v1, v2);
> > + insert_before (g);
> > + }
> > + if (orig_obj == NULL_TREE && obj)
> > + {
> > + ovf = add_cast (m_limb_type, ovf);
> > + tree l = limb_access (NULL_TREE, obj, size_int (nelts), true);
> > + g = gimple_build_assign (l, ovf);
> > + insert_before (g);
> > + if (nelts > 1)
> > + {
> > + atype = build_array_type_nelts (m_limb_type, nelts - 1);
> > + tree off = build_int_cst (build_pointer_type (TREE_TYPE (obj)),
> > + (nelts + 1) * m_limb_size);
> > + tree v1 = build2 (MEM_REF, atype,
> > + build_fold_addr_expr (unshare_expr (obj)),
> > + off);
> > + g = gimple_build_assign (v1, build_zero_cst (atype));
> > + insert_before (g);
> > + }
> > + }
> > + else if (TREE_CODE (TREE_TYPE (lhs)) == COMPLEX_TYPE)
> > + {
> > + imm_use_iterator ui;
> > + use_operand_p use_p;
> > + FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
> > + {
> > + g = USE_STMT (use_p);
> > + if (!is_gimple_assign (g)
> > + || gimple_assign_rhs_code (g) != IMAGPART_EXPR)
> > + continue;
> > + tree lhs2 = gimple_assign_lhs (g);
> > + gimple *use_stmt;
> > + single_imm_use (lhs2, &use_p, &use_stmt);
> > + lhs2 = gimple_assign_lhs (use_stmt);
> > + gimple_stmt_iterator gsi = gsi_for_stmt (use_stmt);
> > + if (useless_type_conversion_p (TREE_TYPE (lhs2), TREE_TYPE (ovf)))
> > + g = gimple_build_assign (lhs2, ovf);
> > + else
> > + g = gimple_build_assign (lhs2, NOP_EXPR, ovf);
> > + gsi_replace (&gsi, g, true);
> > + break;
> > + }
> > + }
> > + else if (ovf != boolean_false_node)
> > + {
> > + g = gimple_build_cond (NE_EXPR, ovf, boolean_false_node,
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + edge e2 = split_block (e1->dest, (gimple *) NULL);
> > + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + e3->probability = profile_probability::very_likely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + tree zero = build_zero_cst (TREE_TYPE (lhs));
> > + tree fn = ubsan_build_overflow_builtin (code, m_loc,
> > + TREE_TYPE (lhs),
> > + zero, zero, NULL);
> > + force_gimple_operand_gsi (&m_gsi, fn, true, NULL_TREE,
> > + true, GSI_SAME_STMT);
> > + m_gsi = gsi_after_labels (e2->dest);
> > + }
> > + }
> > + if (var)
> > + {
> > + tree clobber = build_clobber (TREE_TYPE (var), CLOBBER_EOL);
> > + g = gimple_build_assign (var, clobber);
> > + gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
> > + }
> > +}
> > +
> > +/* Helper function for lower_addsub_overflow and lower_mul_overflow.
> > + Given precisions of result TYPE (PREC), argument 0 precision PREC0,
> > + argument 1 precision PREC1 and minimum precision for the result
> > + PREC2, compute *START, *END, *CHECK_ZERO and return OVF. */
> > +
> > +static tree
> > +arith_overflow (tree_code code, tree type, int prec, int prec0, int prec1,
> > + int prec2, unsigned *start, unsigned *end, bool *check_zero)
> > +{
> > + *start = 0;
> > + *end = 0;
> > + *check_zero = true;
> > + /* Ignore this special rule for subtraction: even if both
> > + prec0 >= 0 and prec1 >= 0, their difference can be negative
> > + in infinite precision. */
> > + if (code != MINUS_EXPR && prec0 >= 0 && prec1 >= 0)
> > + {
> > + /* Result in [0, prec2) is unsigned; if prec > prec2,
> > + all bits above it will be zero. */
> > + if ((prec - !TYPE_UNSIGNED (type)) >= prec2)
> > + return boolean_false_node;
> > + else
> > + {
> > + /* ovf if any of bits in [start, end) is non-zero. */
> > + *start = prec - !TYPE_UNSIGNED (type);
> > + *end = prec2;
> > + }
> > + }
> > + else if (TYPE_UNSIGNED (type))
> > + {
> > + /* If result in [0, prec2) is signed and if prec > prec2,
> > + all bits above it will be sign bit copies. */
> > + if (prec >= prec2)
> > + {
> > + /* ovf if bit prec - 1 is non-zero. */
> > + *start = prec - 1;
> > + *end = prec;
> > + }
> > + else
> > + {
> > + /* ovf if any of bits in [start, end) is non-zero. */
> > + *start = prec;
> > + *end = prec2;
> > + }
> > + }
> > + else if (prec >= prec2)
> > + return boolean_false_node;
> > + else
> > + {
> > + /* ovf if [start, end) bits aren't all zeros or all ones. */
> > + *start = prec - 1;
> > + *end = prec2;
> > + *check_zero = false;
> > + }
> > + return NULL_TREE;
> > +}
> > +
> > +/* Lower a .{ADD,SUB}_OVERFLOW call with at least one large/huge _BitInt
> > + argument or return type _Complex large/huge _BitInt. */
> > +
> > +void
> > +bitint_large_huge::lower_addsub_overflow (tree obj, gimple *stmt)
> > +{
> > + tree arg0 = gimple_call_arg (stmt, 0);
> > + tree arg1 = gimple_call_arg (stmt, 1);
> > + tree lhs = gimple_call_lhs (stmt);
> > + gimple *g;
> > +
> > + if (!lhs)
> > + {
> > + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > + gsi_remove (&gsi, true);
> > + return;
> > + }
> > + gimple *final_stmt = gsi_stmt (m_gsi);
> > + tree type = TREE_TYPE (lhs);
> > + if (TREE_CODE (type) == COMPLEX_TYPE)
> > + type = TREE_TYPE (type);
> > + int prec = TYPE_PRECISION (type);
> > + int prec0 = range_to_prec (arg0, stmt);
> > + int prec1 = range_to_prec (arg1, stmt);
> > + int prec2 = ((prec0 < 0) == (prec1 < 0)
> > + ? MAX (prec0 < 0 ? -prec0 : prec0,
> > + prec1 < 0 ? -prec1 : prec1) + 1
> > + : MAX (prec0 < 0 ? -prec0 : prec0 + 1,
> > + prec1 < 0 ? -prec1 : prec1 + 1) + 1);
> > + int prec3 = MAX (prec0 < 0 ? -prec0 : prec0,
> > + prec1 < 0 ? -prec1 : prec1);
> > + prec3 = MAX (prec3, prec);
> > + tree var = NULL_TREE;
> > + tree orig_obj = obj;
> > + if (obj == NULL_TREE
> > + && TREE_CODE (type) == BITINT_TYPE
> > + && bitint_precision_kind (type) >= bitint_prec_large
> > + && m_names
> > + && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> > + {
> > + int part = var_to_partition (m_map, lhs);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + obj = m_vars[part];
> > + if (TREE_TYPE (lhs) == type)
> > + orig_obj = obj;
> > + }
> > + if (TREE_CODE (type) != BITINT_TYPE
> > + || bitint_precision_kind (type) < bitint_prec_large)
> > + {
> > + unsigned HOST_WIDE_INT nelts = CEIL (prec, limb_prec);
> > + tree atype = build_array_type_nelts (m_limb_type, nelts);
> > + var = create_tmp_var (atype);
> > + }
> > +
> > + enum tree_code code;
> > + switch (gimple_call_internal_fn (stmt))
> > + {
> > + case IFN_ADD_OVERFLOW:
> > + case IFN_UBSAN_CHECK_ADD:
> > + code = PLUS_EXPR;
> > + break;
> > + case IFN_SUB_OVERFLOW:
> > + case IFN_UBSAN_CHECK_SUB:
> > + code = MINUS_EXPR;
> > + break;
> > + default:
> > + gcc_unreachable ();
> > + }
> > + unsigned start, end;
> > + bool check_zero;
> > + tree ovf = arith_overflow (code, type, prec, prec0, prec1, prec2,
> > + &start, &end, &check_zero);
> > +
> > + unsigned startlimb, endlimb;
> > + if (ovf)
> > + {
> > + startlimb = ~0U;
> > + endlimb = ~0U;
> > + }
> > + else
> > + {
> > + startlimb = start / limb_prec;
> > + endlimb = (end - 1) / limb_prec;
> > + }
> > +
> > + int prec4 = ovf != NULL_TREE ? prec : prec3;
> > + bitint_prec_kind kind = bitint_precision_kind (prec4);
> > + unsigned cnt, rem = 0, fin = 0;
> > + tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
> > + bool last_ovf = (ovf == NULL_TREE
> > + && CEIL (prec2, limb_prec) > CEIL (prec3, limb_prec));
> > + if (kind != bitint_prec_huge)
> > + cnt = CEIL (prec4, limb_prec) + last_ovf;
> > + else
> > + {
> > + rem = (prec4 % (2 * limb_prec));
> > + fin = (prec4 - rem) / limb_prec;
> > + cnt = 2 + CEIL (rem, limb_prec) + last_ovf;
> > + idx = idx_first = create_loop (size_zero_node, &idx_next);
> > + }
> > +
> > + if (kind == bitint_prec_huge)
> > + m_upwards_2limb = fin;
> > +
> > + tree type0 = TREE_TYPE (arg0);
> > + tree type1 = TREE_TYPE (arg1);
> > + if (TYPE_PRECISION (type0) < prec3)
> > + {
> > + type0 = build_bitint_type (prec3, TYPE_UNSIGNED (type0));
> > + if (TREE_CODE (arg0) == INTEGER_CST)
> > + arg0 = fold_convert (type0, arg0);
> > + }
> > + if (TYPE_PRECISION (type1) < prec3)
> > + {
> > + type1 = build_bitint_type (prec3, TYPE_UNSIGNED (type1));
> > + if (TREE_CODE (arg1) == INTEGER_CST)
> > + arg1 = fold_convert (type1, arg1);
> > + }
> > + unsigned int data_cnt = 0;
> > + tree last_rhs1 = NULL_TREE, last_rhs2 = NULL_TREE;
> > + tree cmp = build_zero_cst (m_limb_type);
> > + unsigned prec_limbs = CEIL ((unsigned) prec, limb_prec);
> > + tree ovf_out = NULL_TREE, cmp_out = NULL_TREE;
> > + for (unsigned i = 0; i < cnt; i++)
> > + {
> > + m_data_cnt = 0;
> > + tree rhs1, rhs2;
> > + if (kind != bitint_prec_huge)
> > + idx = size_int (i);
> > + else if (i >= 2)
> > + idx = size_int (fin + (i > 2));
> > + if (!last_ovf || i < cnt - 1)
> > + {
> > + if (type0 != TREE_TYPE (arg0))
> > + rhs1 = handle_cast (type0, arg0, idx);
> > + else
> > + rhs1 = handle_operand (arg0, idx);
> > + if (type1 != TREE_TYPE (arg1))
> > + rhs2 = handle_cast (type1, arg1, idx);
> > + else
> > + rhs2 = handle_operand (arg1, idx);
> > + if (i == 0)
> > + data_cnt = m_data_cnt;
> > + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs1)))
> > + rhs1 = add_cast (m_limb_type, rhs1);
> > + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs2)))
> > + rhs2 = add_cast (m_limb_type, rhs2);
> > + last_rhs1 = rhs1;
> > + last_rhs2 = rhs2;
> > + }
> > + else
> > + {
> > + m_data_cnt = data_cnt;
> > + if (TYPE_UNSIGNED (type0))
> > + rhs1 = build_zero_cst (m_limb_type);
> > + else
> > + {
> > + rhs1 = add_cast (signed_type_for (m_limb_type), last_rhs1);
> > + if (TREE_CODE (rhs1) == INTEGER_CST)
> > + rhs1 = build_int_cst (m_limb_type,
> > + tree_int_cst_sgn (rhs1) < 0 ? -1 : 0);
> > + else
> > + {
> > + tree lpm1 = build_int_cst (unsigned_type_node,
> > + limb_prec - 1);
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
> > + RSHIFT_EXPR, rhs1, lpm1);
> > + insert_before (g);
> > + rhs1 = add_cast (m_limb_type, gimple_assign_lhs (g));
> > + }
> > + }
> > + if (TYPE_UNSIGNED (type1))
> > + rhs2 = build_zero_cst (m_limb_type);
> > + else
> > + {
> > + rhs2 = add_cast (signed_type_for (m_limb_type), last_rhs2);
> > + if (TREE_CODE (rhs2) == INTEGER_CST)
> > + rhs2 = build_int_cst (m_limb_type,
> > + tree_int_cst_sgn (rhs2) < 0 ? -1 : 0);
> > + else
> > + {
> > + tree lpm1 = build_int_cst (unsigned_type_node,
> > + limb_prec - 1);
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs2)),
> > + RSHIFT_EXPR, rhs2, lpm1);
> > + insert_before (g);
> > + rhs2 = add_cast (m_limb_type, gimple_assign_lhs (g));
> > + }
> > + }
> > + }
> > + tree rhs = handle_plus_minus (code, rhs1, rhs2, idx);
> > + if (ovf != boolean_false_node)
> > + {
> > + if (tree_fits_uhwi_p (idx))
> > + {
> > + unsigned limb = tree_to_uhwi (idx);
> > + if (limb >= startlimb && limb <= endlimb)
> > + {
> > + tree l = arith_overflow_extract_bits (start, end, rhs,
> > + limb, check_zero);
> > + tree this_ovf = make_ssa_name (boolean_type_node);
> > + if (ovf == NULL_TREE && !check_zero)
> > + {
> > + cmp = l;
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + PLUS_EXPR, l,
> > + build_int_cst (m_limb_type, 1));
> > + insert_before (g);
> > + g = gimple_build_assign (this_ovf, GT_EXPR,
> > + gimple_assign_lhs (g),
> > + build_int_cst (m_limb_type, 1));
> > + }
> > + else
> > + g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
> > + insert_before (g);
> > + if (ovf == NULL_TREE)
> > + ovf = this_ovf;
> > + else
> > + {
> > + tree b = make_ssa_name (boolean_type_node);
> > + g = gimple_build_assign (b, BIT_IOR_EXPR, ovf, this_ovf);
> > + insert_before (g);
> > + ovf = b;
> > + }
> > + }
> > + }
> > + else if (startlimb < fin)
> > + {
> > + if (m_first && startlimb + 2 < fin)
> > + {
> > + tree data_out;
> > + ovf = prepare_data_in_out (boolean_false_node, idx, &data_out);
> > + ovf_out = m_data.pop ();
> > + m_data.pop ();
> > + if (!check_zero)
> > + {
> > + cmp = prepare_data_in_out (cmp, idx, &data_out);
> > + cmp_out = m_data.pop ();
> > + m_data.pop ();
> > + }
> > + }
> > + if (i != 0 || startlimb != fin - 1)
> > + {
> > + tree_code cmp_code;
> > + bool single_comparison
> > + = (startlimb + 2 >= fin || (startlimb & 1) != (i & 1));
> > + if (!single_comparison)
> > + {
> > + cmp_code = GE_EXPR;
> > + if (!check_zero && (start % limb_prec) == 0)
> > + single_comparison = true;
> > + }
> > + else if ((startlimb & 1) == (i & 1))
> > + cmp_code = EQ_EXPR;
> > + else
> > + cmp_code = GT_EXPR;
> > + g = gimple_build_cond (cmp_code, idx, size_int (startlimb),
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + edge e2 = split_block (e1->dest, (gimple *) NULL);
> > + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + edge e4 = NULL;
> > + e3->probability = profile_probability::unlikely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + if (!single_comparison)
> > + {
> > + m_gsi = gsi_after_labels (e1->dest);
> > + g = gimple_build_cond (EQ_EXPR, idx,
> > + size_int (startlimb), NULL_TREE,
> > + NULL_TREE);
> > + insert_before (g);
> > + e2 = split_block (gsi_bb (m_gsi), g);
> > + basic_block bb = create_empty_bb (e2->dest);
> > + add_bb_to_loop (bb, e2->dest->loop_father);
> > + e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
> > + set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
> > + e4->probability = profile_probability::unlikely ();
> > + e2->flags = EDGE_FALSE_VALUE;
> > + e2->probability = e4->probability.invert ();
> > + e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
> > + e2 = find_edge (e2->dest, e3->dest);
> > + }
> > + m_gsi = gsi_after_labels (e2->src);
> > + unsigned tidx = startlimb + (cmp_code == GT_EXPR);
> > + tree l = arith_overflow_extract_bits (start, end, rhs, tidx,
> > + check_zero);
> > + tree this_ovf = make_ssa_name (boolean_type_node);
> > + if (cmp_code != GT_EXPR && !check_zero)
> > + {
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + PLUS_EXPR, l,
> > + build_int_cst (m_limb_type, 1));
> > + insert_before (g);
> > + g = gimple_build_assign (this_ovf, GT_EXPR,
> > + gimple_assign_lhs (g),
> > + build_int_cst (m_limb_type, 1));
> > + }
> > + else
> > + g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
> > + insert_before (g);
> > + if (cmp_code == GT_EXPR)
> > + {
> > + tree t = make_ssa_name (boolean_type_node);
> > + g = gimple_build_assign (t, BIT_IOR_EXPR, ovf, this_ovf);
> > + insert_before (g);
> > + this_ovf = t;
> > + }
> > + tree this_ovf2 = NULL_TREE;
> > + if (!single_comparison)
> > + {
> > + m_gsi = gsi_after_labels (e4->src);
> > + tree t = make_ssa_name (boolean_type_node);
> > + g = gimple_build_assign (t, NE_EXPR, rhs, cmp);
> > + insert_before (g);
> > + this_ovf2 = make_ssa_name (boolean_type_node);
> > + g = gimple_build_assign (this_ovf2, BIT_IOR_EXPR,
> > + ovf, t);
> > + insert_before (g);
> > + }
> > + m_gsi = gsi_after_labels (e2->dest);
> > + tree t;
> > + if (i == 1 && ovf_out)
> > + t = ovf_out;
> > + else
> > + t = make_ssa_name (boolean_type_node);
> > + gphi *phi = create_phi_node (t, e2->dest);
> > + add_phi_arg (phi, this_ovf, e2, UNKNOWN_LOCATION);
> > + add_phi_arg (phi, ovf ? ovf
> > + : boolean_false_node, e3,
> > + UNKNOWN_LOCATION);
> > + if (e4)
> > + add_phi_arg (phi, this_ovf2, e4, UNKNOWN_LOCATION);
> > + ovf = t;
> > + if (!check_zero && cmp_code != GT_EXPR)
> > + {
> > + t = cmp_out ? cmp_out : make_ssa_name (m_limb_type);
> > + phi = create_phi_node (t, e2->dest);
> > + add_phi_arg (phi, l, e2, UNKNOWN_LOCATION);
> > + add_phi_arg (phi, cmp, e3, UNKNOWN_LOCATION);
> > + if (e4)
> > + add_phi_arg (phi, cmp, e4, UNKNOWN_LOCATION);
> > + cmp = t;
> > + }
> > + }
> > + }
> > + }
> > +
> > + if (var || obj)
> > + {
> > + if (tree_fits_uhwi_p (idx) && tree_to_uhwi (idx) >= prec_limbs)
> > + ;
> > + else if (!tree_fits_uhwi_p (idx)
> > + && (unsigned) prec < (fin - (i == 0)) * limb_prec)
> > + {
> > + bool single_comparison
> > + = (((unsigned) prec % limb_prec) == 0
> > + || prec_limbs + 1 >= fin
> > + || (prec_limbs & 1) == (i & 1));
> > + g = gimple_build_cond (LE_EXPR, idx, size_int (prec_limbs - 1),
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + edge e2 = split_block (e1->dest, (gimple *) NULL);
> > + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > + edge e4 = NULL;
> > + e3->probability = profile_probability::unlikely ();
> > + e1->flags = EDGE_TRUE_VALUE;
> > + e1->probability = e3->probability.invert ();
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > + if (!single_comparison)
> > + {
> > + m_gsi = gsi_after_labels (e1->dest);
> > + g = gimple_build_cond (LT_EXPR, idx,
> > + size_int (prec_limbs - 1),
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + e2 = split_block (gsi_bb (m_gsi), g);
> > + basic_block bb = create_empty_bb (e2->dest);
> > + add_bb_to_loop (bb, e2->dest->loop_father);
> > + e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
> > + set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
> > + e4->probability = profile_probability::unlikely ();
> > + e2->flags = EDGE_FALSE_VALUE;
> > + e2->probability = e4->probability.invert ();
> > + e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
> > + e2 = find_edge (e2->dest, e3->dest);
> > + }
> > + m_gsi = gsi_after_labels (e2->src);
> > + tree l = limb_access (type, var ? var : obj, idx, true);
> > + g = gimple_build_assign (l, rhs);
> > + insert_before (g);
> > + if (!single_comparison)
> > + {
> > + m_gsi = gsi_after_labels (e4->src);
> > + l = limb_access (type, var ? var : obj,
> > + size_int (prec_limbs - 1), true);
> > + if (!useless_type_conversion_p (TREE_TYPE (l),
> > + TREE_TYPE (rhs)))
> > + rhs = add_cast (TREE_TYPE (l), rhs);
> > + g = gimple_build_assign (l, rhs);
> > + insert_before (g);
> > + }
> > + m_gsi = gsi_after_labels (e2->dest);
> > + }
> > + else
> > + {
> > + tree l = limb_access (type, var ? var : obj, idx, true);
> > + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs)))
> > + rhs = add_cast (TREE_TYPE (l), rhs);
> > + g = gimple_build_assign (l, rhs);
> > + insert_before (g);
> > + }
> > + }
> > + m_first = false;
> > + if (kind == bitint_prec_huge && i <= 1)
> > + {
> > + if (i == 0)
> > + {
> > + idx = make_ssa_name (sizetype);
> > + g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
> > + size_one_node);
> > + insert_before (g);
> > + }
> > + else
> > + {
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
> > + size_int (2));
> > + insert_before (g);
> > + g = gimple_build_cond (NE_EXPR, idx_next, size_int (fin),
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + m_gsi = gsi_for_stmt (final_stmt);
> > + }
> > + }
> > + }
> > +
> > + finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, code);
> > +}
> > +
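For reference, the loop body this lowering emits for the addition itself amounts to limb-by-limb addition with carry propagation. A minimal C model (the `add_limbs` helper is hypothetical; fixed little-endian 64-bit limbs, whereas the pass emits GIMPLE operating on the target's limb type):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Add the N-limb little-endian numbers A and B into RES, returning
   the carry out of the most significant limb.  Each iteration is the
   moral equivalent of one handle_plus_minus step above.  */
static bool
add_limbs (uint64_t *res, const uint64_t *a, const uint64_t *b, unsigned n)
{
  bool carry = false;
  for (unsigned i = 0; i < n; i++)
    {
      uint64_t t;
      bool c1 = __builtin_add_overflow (a[i], b[i], &t);
      bool c2 = __builtin_add_overflow (t, (uint64_t) carry, &res[i]);
      carry = c1 | c2;	/* At most one of the two additions carries.  */
    }
  return carry;
}
```

The last_ovf handling above corresponds to one extra iteration where the missing operand limbs are materialized as 0 or as sign extensions of the last fetched limbs.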
> > +/* Lower a .MUL_OVERFLOW call with at least one large/huge _BitInt
> > + argument or with a _Complex large/huge _BitInt return type. */
> > +
> > +void
> > +bitint_large_huge::lower_mul_overflow (tree obj, gimple *stmt)
> > +{
> > + tree arg0 = gimple_call_arg (stmt, 0);
> > + tree arg1 = gimple_call_arg (stmt, 1);
> > + tree lhs = gimple_call_lhs (stmt);
> > + if (!lhs)
> > + {
> > + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > + gsi_remove (&gsi, true);
> > + return;
> > + }
> > + gimple *final_stmt = gsi_stmt (m_gsi);
> > + tree type = TREE_TYPE (lhs);
> > + if (TREE_CODE (type) == COMPLEX_TYPE)
> > + type = TREE_TYPE (type);
> > + int prec = TYPE_PRECISION (type), prec0, prec1;
> > + arg0 = handle_operand_addr (arg0, stmt, NULL, &prec0);
> > + arg1 = handle_operand_addr (arg1, stmt, NULL, &prec1);
> > + int prec2 = ((prec0 < 0 ? -prec0 : prec0)
> > + + (prec1 < 0 ? -prec1 : prec1)
> > + + ((prec0 < 0) != (prec1 < 0)));
> > + tree var = NULL_TREE;
> > + tree orig_obj = obj;
> > + bool force_var = false;
> > + if (obj == NULL_TREE
> > + && TREE_CODE (type) == BITINT_TYPE
> > + && bitint_precision_kind (type) >= bitint_prec_large
> > + && m_names
> > + && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> > + {
> > + int part = var_to_partition (m_map, lhs);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + obj = m_vars[part];
> > + if (TREE_TYPE (lhs) == type)
> > + orig_obj = obj;
> > + }
> > + else if (obj != NULL_TREE && DECL_P (obj))
> > + {
> > + for (int i = 0; i < 2; ++i)
> > + {
> > + tree arg = i ? arg1 : arg0;
> > + if (TREE_CODE (arg) == ADDR_EXPR)
> > + arg = TREE_OPERAND (arg, 0);
> > + if (get_base_address (arg) == obj)
> > + {
> > + force_var = true;
> > + break;
> > + }
> > + }
> > + }
> > + if (obj == NULL_TREE
> > + || force_var
> > + || TREE_CODE (type) != BITINT_TYPE
> > + || bitint_precision_kind (type) < bitint_prec_large
> > + || prec2 > (CEIL (prec, limb_prec) * limb_prec * (orig_obj ? 1 : 2)))
> > + {
> > + unsigned HOST_WIDE_INT nelts = CEIL (MAX (prec, prec2), limb_prec);
> > + tree atype = build_array_type_nelts (m_limb_type, nelts);
> > + var = create_tmp_var (atype);
> > + }
> > + tree addr = build_fold_addr_expr (var ? var : obj);
> > + addr = force_gimple_operand_gsi (&m_gsi, addr, true,
> > + NULL_TREE, true, GSI_SAME_STMT);
> > + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> > + gimple *g
> > + = gimple_build_call_internal (IFN_MULBITINT, 6,
> > + addr, build_int_cst (sitype,
> > + MAX (prec2, prec)),
> > + arg0, build_int_cst (sitype, prec0),
> > + arg1, build_int_cst (sitype, prec1));
> > + insert_before (g);
> > +
> > + unsigned start, end;
> > + bool check_zero;
> > + tree ovf = arith_overflow (MULT_EXPR, type, prec, prec0, prec1, prec2,
> > + &start, &end, &check_zero);
> > + if (ovf == NULL_TREE)
> > + {
> > + unsigned startlimb = start / limb_prec;
> > + unsigned endlimb = (end - 1) / limb_prec;
> > + unsigned cnt;
> > + bool use_loop = false;
> > + if (startlimb == endlimb)
> > + cnt = 1;
> > + else if (startlimb + 1 == endlimb)
> > + cnt = 2;
> > + else if ((end % limb_prec) == 0)
> > + {
> > + cnt = 2;
> > + use_loop = true;
> > + }
> > + else
> > + {
> > + cnt = 3;
> > + use_loop = startlimb + 2 < endlimb;
> > + }
> > + if (cnt == 1)
> > + {
> > + tree l = limb_access (NULL_TREE, var ? var : obj,
> > + size_int (startlimb), true);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type), l);
> > + insert_before (g);
> > + l = arith_overflow_extract_bits (start, end, gimple_assign_lhs (g),
> > + startlimb, check_zero);
> > + ovf = make_ssa_name (boolean_type_node);
> > + if (check_zero)
> > + g = gimple_build_assign (ovf, NE_EXPR, l,
> > + build_zero_cst (m_limb_type));
> > + else
> > + {
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + PLUS_EXPR, l,
> > + build_int_cst (m_limb_type, 1));
> > + insert_before (g);
> > + g = gimple_build_assign (ovf, GT_EXPR, gimple_assign_lhs (g),
> > + build_int_cst (m_limb_type, 1));
> > + }
> > + insert_before (g);
> > + }
> > + else
> > + {
> > + basic_block edge_bb = NULL;
> > + gimple_stmt_iterator gsi = m_gsi;
> > + gsi_prev (&gsi);
> > + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> > + edge_bb = e->src;
> > + m_gsi = gsi_last_bb (edge_bb);
> > + if (!gsi_end_p (m_gsi))
> > + gsi_next (&m_gsi);
> > +
> > + tree cmp = build_zero_cst (m_limb_type);
> > + for (unsigned i = 0; i < cnt; i++)
> > + {
> > + tree idx, idx_next = NULL_TREE;
> > + if (i == 0)
> > + idx = size_int (startlimb);
> > + else if (i == 2)
> > + idx = size_int (endlimb);
> > + else if (use_loop)
> > + idx = create_loop (size_int (startlimb + 1), &idx_next);
> > + else
> > + idx = size_int (startlimb + 1);
> > + tree l = limb_access (NULL_TREE, var ? var : obj, idx, true);
> > + g = gimple_build_assign (make_ssa_name (m_limb_type), l);
> > + insert_before (g);
> > + l = gimple_assign_lhs (g);
> > + if (i == 0 || i == 2)
> > + l = arith_overflow_extract_bits (start, end, l,
> > + tree_to_uhwi (idx),
> > + check_zero);
> > + if (i == 0 && !check_zero)
> > + {
> > + cmp = l;
> > + g = gimple_build_assign (make_ssa_name (m_limb_type),
> > + PLUS_EXPR, l,
> > + build_int_cst (m_limb_type, 1));
> > + insert_before (g);
> > + g = gimple_build_cond (GT_EXPR, gimple_assign_lhs (g),
> > + build_int_cst (m_limb_type, 1),
> > + NULL_TREE, NULL_TREE);
> > + }
> > + else
> > + g = gimple_build_cond (NE_EXPR, l, cmp, NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge e1 = split_block (gsi_bb (m_gsi), g);
> > + e1->flags = EDGE_FALSE_VALUE;
> > + edge e2 = make_edge (e1->src, gimple_bb (final_stmt),
> > + EDGE_TRUE_VALUE);
> > + e1->probability = profile_probability::likely ();
> > + e2->probability = e1->probability.invert ();
> > + if (i == 0)
> > + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> > + m_gsi = gsi_after_labels (e1->dest);
> > + if (i == 1 && use_loop)
> > + {
> > + g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
> > + size_one_node);
> > + insert_before (g);
> > + g = gimple_build_cond (NE_EXPR, idx_next,
> > + size_int (endlimb + (cnt == 1)),
> > + NULL_TREE, NULL_TREE);
> > + insert_before (g);
> > + edge true_edge, false_edge;
> > + extract_true_false_edges_from_block (gsi_bb (m_gsi),
> > + &true_edge,
> > + &false_edge);
> > + m_gsi = gsi_after_labels (false_edge->dest);
> > + }
> > + }
> > +
> > + ovf = make_ssa_name (boolean_type_node);
> > + basic_block bb = gimple_bb (final_stmt);
> > + gphi *phi = create_phi_node (ovf, bb);
> > + edge e1 = find_edge (gsi_bb (m_gsi), bb);
> > + edge_iterator ei;
> > + FOR_EACH_EDGE (e, ei, bb->preds)
> > + {
> > + tree val = e == e1 ? boolean_false_node : boolean_true_node;
> > + add_phi_arg (phi, val, e, UNKNOWN_LOCATION);
> > + }
> > + m_gsi = gsi_for_stmt (final_stmt);
> > + }
> > + }
> > +
> > + finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, MULT_EXPR);
> > +}
> > +
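The overflow check after the IFN_MULBITINT call boils down to testing that every bit in [start, end) of the stored product is zero (check_zero) or that those bits are all zeros or all ones, i.e. all equal to the first bit of the range. A standalone C sketch of that predicate (hypothetical helper over 64-bit limbs, not the emitted GIMPLE, which splits the range into whole-limb and partial-limb accesses):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Return true if the bits in [START, END) of the little-endian
   64-bit LIMBS indicate overflow: for CHECK_ZERO they must all be
   zero, otherwise they must all match the first bit in the range
   (all sign bit copies).  */
static bool
high_bits_overflow (const uint64_t *limbs, unsigned start, unsigned end,
		    bool check_zero)
{
  uint64_t ref = check_zero ? 0
		 : (limbs[start / 64] >> (start % 64)) & 1;
  for (unsigned bit = start; bit < end; bit++)
    if (((limbs[bit / 64] >> (bit % 64)) & 1) != ref)
      return true;
  return false;
}
```

The cnt == 1 / cnt == 2 / loop shapes above are just this predicate unrolled or looped over the limbs that [start, end) spans.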
> > +/* Lower a REALPART_EXPR or IMAGPART_EXPR statement that extracts part of
> > + the result of a .{ADD,SUB,MUL}_OVERFLOW call. */
> > +
> > +void
> > +bitint_large_huge::lower_cplxpart_stmt (tree obj, gimple *stmt)
> > +{
> > + tree rhs1 = gimple_assign_rhs1 (stmt);
> > + rhs1 = TREE_OPERAND (rhs1, 0);
> > + if (obj == NULL_TREE)
> > + {
> > + int part = var_to_partition (m_map, gimple_assign_lhs (stmt));
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + obj = m_vars[part];
> > + }
> > + if (TREE_CODE (rhs1) == SSA_NAME
> > + && (m_names == NULL
> > + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> > + {
> > + lower_call (obj, SSA_NAME_DEF_STMT (rhs1));
> > + return;
> > + }
> > + int part = var_to_partition (m_map, rhs1);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + tree var = m_vars[part];
> > + unsigned HOST_WIDE_INT nelts
> > + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
> > + tree atype = build_array_type_nelts (m_limb_type, nelts);
> > + if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
> > + obj = build1 (VIEW_CONVERT_EXPR, atype, obj);
> > + tree off = build_int_cst (build_pointer_type (TREE_TYPE (var)),
> > + gimple_assign_rhs_code (stmt) == REALPART_EXPR
> > + ? 0 : nelts * m_limb_size);
> > + tree v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), off);
> > + gimple *g = gimple_build_assign (obj, v2);
> > + insert_before (g);
> > +}
> > +
> > +/* Lower COMPLEX_EXPR stmt. */
> > +
> > +void
> > +bitint_large_huge::lower_complexexpr_stmt (gimple *stmt)
> > +{
> > + tree lhs = gimple_assign_lhs (stmt);
> > + tree rhs1 = gimple_assign_rhs1 (stmt);
> > + tree rhs2 = gimple_assign_rhs2 (stmt);
> > + int part = var_to_partition (m_map, lhs);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + lhs = m_vars[part];
> > + unsigned HOST_WIDE_INT nelts
> > + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (rhs1))) / limb_prec;
> > + tree atype = build_array_type_nelts (m_limb_type, nelts);
> > + tree zero = build_zero_cst (build_pointer_type (TREE_TYPE (lhs)));
> > + tree v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), zero);
> > + tree v2;
> > + if (TREE_CODE (rhs1) == SSA_NAME)
> > + {
> > + part = var_to_partition (m_map, rhs1);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + v2 = m_vars[part];
> > + }
> > + else if (integer_zerop (rhs1))
> > + v2 = build_zero_cst (atype);
> > + else
> > + v2 = tree_output_constant_def (rhs1);
> > + if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
> > + v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
> > + gimple *g = gimple_build_assign (v1, v2);
> > + insert_before (g);
> > + tree off = fold_convert (build_pointer_type (TREE_TYPE (lhs)),
> > + TYPE_SIZE_UNIT (atype));
> > + v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), off);
> > + if (TREE_CODE (rhs2) == SSA_NAME)
> > + {
> > + part = var_to_partition (m_map, rhs2);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + v2 = m_vars[part];
> > + }
> > + else if (integer_zerop (rhs2))
> > + v2 = build_zero_cst (atype);
> > + else
> > + v2 = tree_output_constant_def (rhs2);
> > + if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
> > + v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
> > + g = gimple_build_assign (v1, v2);
> > + insert_before (g);
> > +}
> > +
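Both complex-part lowerings rely on the in-memory layout of _Complex large/huge _BitInt: the real part occupies the first nelts limbs and the imaginary part the following nelts limbs, so COMPLEX_EXPR becomes two limb-array copies and REALPART_EXPR/IMAGPART_EXPR one copy from the matching half. A minimal C model (NELTS and the two helpers are illustrative only):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { NELTS = 4 };	/* e.g. 256-bit parts with 64-bit limbs */

/* COMPLEX_EXPR: store RE at limb offset 0 and IM at limb offset
   NELTS, mirroring the two MEM_REF assignments above.  */
static void
build_complex (uint64_t cplx[2 * NELTS],
	       const uint64_t re[NELTS], const uint64_t im[NELTS])
{
  memcpy (cplx, re, NELTS * sizeof (uint64_t));
  memcpy (cplx + NELTS, im, NELTS * sizeof (uint64_t));
}

/* IMAGPART_EXPR: copy from byte offset nelts * limb_size, matching
   the OFF computation in lower_cplxpart_stmt.  */
static void
imagpart (uint64_t out[NELTS], const uint64_t cplx[2 * NELTS])
{
  memcpy (out, cplx + NELTS, NELTS * sizeof (uint64_t));
}
```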
> > +/* Lower a call statement with one or more large/huge _BitInt
> > + arguments or a large/huge _BitInt return value. */
> > +
> > +void
> > +bitint_large_huge::lower_call (tree obj, gimple *stmt)
> > +{
> > + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > + unsigned int nargs = gimple_call_num_args (stmt);
> > + if (gimple_call_internal_p (stmt))
> > + switch (gimple_call_internal_fn (stmt))
> > + {
> > + case IFN_ADD_OVERFLOW:
> > + case IFN_SUB_OVERFLOW:
> > + case IFN_UBSAN_CHECK_ADD:
> > + case IFN_UBSAN_CHECK_SUB:
> > + lower_addsub_overflow (obj, stmt);
> > + return;
> > + case IFN_MUL_OVERFLOW:
> > + case IFN_UBSAN_CHECK_MUL:
> > + lower_mul_overflow (obj, stmt);
> > + return;
> > + default:
> > + break;
> > + }
> > + for (unsigned int i = 0; i < nargs; ++i)
> > + {
> > + tree arg = gimple_call_arg (stmt, i);
> > + if (TREE_CODE (arg) != SSA_NAME
> > + || TREE_CODE (TREE_TYPE (arg)) != BITINT_TYPE
> > + || bitint_precision_kind (TREE_TYPE (arg)) <= bitint_prec_middle)
> > + continue;
> > + int p = var_to_partition (m_map, arg);
> > + tree v = m_vars[p];
> > + gcc_assert (v != NULL_TREE);
> > + if (!types_compatible_p (TREE_TYPE (arg), TREE_TYPE (v)))
> > + v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (arg), v);
> > + arg = make_ssa_name (TREE_TYPE (arg));
> > + gimple *g = gimple_build_assign (arg, v);
> > + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> > + gimple_call_set_arg (stmt, i, arg);
> > + if (m_preserved == NULL)
> > + m_preserved = BITMAP_ALLOC (NULL);
> > + bitmap_set_bit (m_preserved, SSA_NAME_VERSION (arg));
> > + }
> > + tree lhs = gimple_call_lhs (stmt);
> > + if (lhs
> > + && TREE_CODE (lhs) == SSA_NAME
> > + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> > + {
> > + int p = var_to_partition (m_map, lhs);
> > + tree v = m_vars[p];
> > + gcc_assert (v != NULL_TREE);
> > + if (!types_compatible_p (TREE_TYPE (lhs), TREE_TYPE (v)))
> > + v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (lhs), v);
> > + gimple_call_set_lhs (stmt, v);
> > + SSA_NAME_DEF_STMT (lhs) = gimple_build_nop ();
> > + }
> > + update_stmt (stmt);
> > +}
> > +
> > +/* Lower __asm STMT which involves large/huge _BitInt values. */
> > +
> > +void
> > +bitint_large_huge::lower_asm (gimple *stmt)
> > +{
> > + gasm *g = as_a <gasm *> (stmt);
> > + unsigned noutputs = gimple_asm_noutputs (g);
> > + unsigned ninputs = gimple_asm_ninputs (g);
> > +
> > + for (unsigned i = 0; i < noutputs; ++i)
> > + {
> > + tree t = gimple_asm_output_op (g, i);
> > + tree s = TREE_VALUE (t);
> > + if (TREE_CODE (s) == SSA_NAME
> > + && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> > + {
> > + int part = var_to_partition (m_map, s);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + TREE_VALUE (t) = m_vars[part];
> > + }
> > + }
> > + for (unsigned i = 0; i < ninputs; ++i)
> > + {
> > + tree t = gimple_asm_input_op (g, i);
> > + tree s = TREE_VALUE (t);
> > + if (TREE_CODE (s) == SSA_NAME
> > + && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> > + {
> > + int part = var_to_partition (m_map, s);
> > + gcc_assert (m_vars[part] != NULL_TREE);
> > + TREE_VALUE (t) = m_vars[part];
> > + }
> > + }
> > + update_stmt (stmt);
> > +}
> > +
> > +/* Lower statement STMT which involves large/huge _BitInt values
> > + into code accessing individual limbs. */
> > +
> > +void
> > +bitint_large_huge::lower_stmt (gimple *stmt)
> > +{
> > + m_first = true;
> > + m_lhs = NULL_TREE;
> > + m_data.truncate (0);
> > + m_data_cnt = 0;
> > + m_gsi = gsi_for_stmt (stmt);
> > + m_after_stmt = NULL;
> > + m_bb = NULL;
> > + m_init_gsi = m_gsi;
> > + gsi_prev (&m_init_gsi);
> > + m_preheader_bb = NULL;
> > + m_upwards_2limb = 0;
> > + m_var_msb = false;
> > + m_loc = gimple_location (stmt);
> > + if (is_gimple_call (stmt))
> > + {
> > + lower_call (NULL_TREE, stmt);
> > + return;
> > + }
> > + if (gimple_code (stmt) == GIMPLE_ASM)
> > + {
> > + lower_asm (stmt);
> > + return;
> > + }
> > + tree lhs = NULL_TREE, cmp_op1 = NULL_TREE, cmp_op2 = NULL_TREE;
> > + tree_code cmp_code = comparison_op (stmt, &cmp_op1, &cmp_op2);
> > + bool eq_p = (cmp_code == EQ_EXPR || cmp_code == NE_EXPR);
> > + bool mergeable_cast_p = false;
> > + bool final_cast_p = false;
> > + if (gimple_assign_cast_p (stmt))
> > + {
> > + lhs = gimple_assign_lhs (stmt);
> > + tree rhs1 = gimple_assign_rhs1 (stmt);
> > + if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
> > + && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
> > + mergeable_cast_p = true;
> > + else if (TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
> > + && INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
> > + {
> > + final_cast_p = true;
> > + if (TREE_CODE (rhs1) == SSA_NAME
> > + && (m_names == NULL
> > + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> > + {
> > + gimple *g = SSA_NAME_DEF_STMT (rhs1);
> > + if (is_gimple_assign (g)
> > + && gimple_assign_rhs_code (g) == IMAGPART_EXPR)
> > + {
> > + tree rhs2 = TREE_OPERAND (gimple_assign_rhs1 (g), 0);
> > + if (TREE_CODE (rhs2) == SSA_NAME
> > + && (m_names == NULL
> > + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs2))))
> > + {
> > + g = SSA_NAME_DEF_STMT (rhs2);
> > + int ovf = optimizable_arith_overflow (g);
> > + if (ovf == 2)
> > + /* If .{ADD,SUB,MUL}_OVERFLOW has both REALPART_EXPR
> > + and IMAGPART_EXPR uses, where the latter is cast to
> > + non-_BitInt, it will be optimized when handling
> > + the REALPART_EXPR. */
> > + return;
> > + if (ovf == 1)
> > + {
> > + lower_call (NULL_TREE, g);
> > + return;
> > + }
> > + }
> > + }
> > + }
> > + }
> > + }
> > + if (gimple_store_p (stmt))
> > + {
> > + tree rhs1 = gimple_assign_rhs1 (stmt);
> > + if (TREE_CODE (rhs1) == SSA_NAME
> > + && (m_names == NULL
> > + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> > + {
> > + gimple *g = SSA_NAME_DEF_STMT (rhs1);
> > + m_loc = gimple_location (g);
> > + lhs = gimple_assign_lhs (stmt);
> > + if (is_gimple_assign (g) && !mergeable_op (g))
> > + switch (gimple_assign_rhs_code (g))
> > + {
> > + case LSHIFT_EXPR:
> > + case RSHIFT_EXPR:
> > + lower_shift_stmt (lhs, g);
> > + handled:
> > + m_gsi = gsi_for_stmt (stmt);
> > + unlink_stmt_vdef (stmt);
> > + release_ssa_name (gimple_vdef (stmt));
> > + gsi_remove (&m_gsi, true);
> > + return;
> > + case MULT_EXPR:
> > + case TRUNC_DIV_EXPR:
> > + case TRUNC_MOD_EXPR:
> > + lower_muldiv_stmt (lhs, g);
> > + goto handled;
> > + case FIX_TRUNC_EXPR:
> > + lower_float_conv_stmt (lhs, g);
> > + goto handled;
> > + case REALPART_EXPR:
> > + case IMAGPART_EXPR:
> > + lower_cplxpart_stmt (lhs, g);
> > + goto handled;
> > + default:
> > + break;
> > + }
> > + else if (optimizable_arith_overflow (g) == 3)
> > + {
> > + lower_call (lhs, g);
> > + goto handled;
> > + }
> > + m_loc = gimple_location (stmt);
> > + }
> > + }
> > + if (mergeable_op (stmt)
> > + || gimple_store_p (stmt)
> > + || gimple_assign_load_p (stmt)
> > + || eq_p
> > + || mergeable_cast_p)
> > + {
> > + lhs = lower_mergeable_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
> > + if (!eq_p)
> > + return;
> > + }
> > + else if (cmp_code != ERROR_MARK)
> > + lhs = lower_comparison_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
> > + if (cmp_code != ERROR_MARK)
> > + {
> > + if (gimple_code (stmt) == GIMPLE_COND)
> > + {
> > + gcond *cstmt = as_a <gcond *> (stmt);
> > + gimple_cond_set_lhs (cstmt, lhs);
> > + gimple_cond_set_rhs (cstmt, boolean_false_node);
> > + gimple_cond_set_code (cstmt, cmp_code);
> > + update_stmt (stmt);
> > + return;
> > + }
> > + if (gimple_assign_rhs_code (stmt) == COND_EXPR)
> > + {
> > + tree cond = build2 (cmp_code, boolean_type_node, lhs,
> > + boolean_false_node);
> > + gimple_assign_set_rhs1 (stmt, cond);
> > + lhs = gimple_assign_lhs (stmt);
> > + gcc_assert (TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
> > + || (bitint_precision_kind (TREE_TYPE (lhs))
> > + <= bitint_prec_middle));
> > + update_stmt (stmt);
> > + return;
> > + }
> > + gimple_assign_set_rhs1 (stmt, lhs);
> > + gimple_assign_set_rhs2 (stmt, boolean_false_node);
> > + gimple_assign_set_rhs_code (stmt, cmp_code);
> > + update_stmt (stmt);
> > + return;
> > + }
> > + if (final_cast_p)
> > + {
> > + tree lhs_type = TREE_TYPE (lhs);
> > + /* Add support for 3 or more limbs filled in from normal integral
> > + type if this assert fails. If no target chooses limb mode smaller
> > + than half of largest supported normal integral type, this will not
> > + be needed. */
> > + gcc_assert (TYPE_PRECISION (lhs_type) <= 2 * limb_prec);
> > + gimple *g;
> > + if (TREE_CODE (lhs_type) == BITINT_TYPE
> > + && bitint_precision_kind (lhs_type) == bitint_prec_middle)
> > + lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (lhs_type),
> > + TYPE_UNSIGNED (lhs_type));
> > + m_data_cnt = 0;
> > + tree rhs1 = gimple_assign_rhs1 (stmt);
> > + tree r1 = handle_operand (rhs1, size_int (0));
> > + if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
> > + r1 = add_cast (lhs_type, r1);
> > + if (TYPE_PRECISION (lhs_type) > limb_prec)
> > + {
> > + m_data_cnt = 0;
> > + m_first = false;
> > + tree r2 = handle_operand (rhs1, size_int (1));
> > + r2 = add_cast (lhs_type, r2);
> > + g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
> > + build_int_cst (unsigned_type_node,
> > + limb_prec));
> > + insert_before (g);
> > + g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
> > + gimple_assign_lhs (g));
> > + insert_before (g);
> > + r1 = gimple_assign_lhs (g);
> > + }
> > + if (lhs_type != TREE_TYPE (lhs))
> > + g = gimple_build_assign (lhs, NOP_EXPR, r1);
> > + else
> > + g = gimple_build_assign (lhs, r1);
> > + gsi_replace (&m_gsi, g, true);
> > + return;
> > + }
> > + if (is_gimple_assign (stmt))
> > + switch (gimple_assign_rhs_code (stmt))
> > + {
> > + case LSHIFT_EXPR:
> > + case RSHIFT_EXPR:
> > + lower_shift_stmt (NULL_TREE, stmt);
> > + return;
> > + case MULT_EXPR:
> > + case TRUNC_DIV_EXPR:
> > + case TRUNC_MOD_EXPR:
> > + lower_muldiv_stmt (NULL_TREE, stmt);
> > + return;
> > + case FIX_TRUNC_EXPR:
> > + case FLOAT_EXPR:
> > + lower_float_conv_stmt (NULL_TREE, stmt);
> > + return;
> > + case REALPART_EXPR:
> > + case IMAGPART_EXPR:
> > + lower_cplxpart_stmt (NULL_TREE, stmt);
> > + return;
> > + case COMPLEX_EXPR:
> > + lower_complexexpr_stmt (stmt);
> > + return;
> > + default:
> > + break;
> > + }
> > + gcc_unreachable ();
> > +}
> > +
> > +/* Helper for walk_non_aliased_vuses. Determine if we arrived at
> > + the desired memory state. */
> > +
> > +void *
> > +vuse_eq (ao_ref *, tree vuse1, void *data)
> > +{
> > + tree vuse2 = (tree) data;
> > + if (vuse1 == vuse2)
> > + return data;
> > +
> > + return NULL;
> > +}
> > +
> > +/* Dominator walker used to discover which large/huge _BitInt
> > + loads could be sunk into all their uses. */
> > +
> > +class bitint_dom_walker : public dom_walker
> > +{
> > +public:
> > + bitint_dom_walker (bitmap names, bitmap loads)
> > + : dom_walker (CDI_DOMINATORS), m_names (names), m_loads (loads) {}
> > +
> > + edge before_dom_children (basic_block) final override;
> > +
> > +private:
> > + bitmap m_names, m_loads;
> > +};
> > +
> > +edge
> > +bitint_dom_walker::before_dom_children (basic_block bb)
> > +{
> > + gphi *phi = get_virtual_phi (bb);
> > + tree vop;
> > + if (phi)
> > + vop = gimple_phi_result (phi);
> > + else if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
> > + vop = NULL_TREE;
> > + else
> > + vop = (tree) get_immediate_dominator (CDI_DOMINATORS, bb)->aux;
> > +
> > + auto_vec<tree, 16> worklist;
> > + for (gimple_stmt_iterator gsi = gsi_start_bb (bb);
> > + !gsi_end_p (gsi); gsi_next (&gsi))
> > + {
> > + gimple *stmt = gsi_stmt (gsi);
> > + if (is_gimple_debug (stmt))
> > + continue;
> > +
> > + if (!vop && gimple_vuse (stmt))
> > + vop = gimple_vuse (stmt);
> > +
> > + tree cvop = vop;
> > + if (gimple_vdef (stmt))
> > + vop = gimple_vdef (stmt);
> > +
> > + tree lhs = gimple_get_lhs (stmt);
> > + if (lhs
> > + && TREE_CODE (lhs) == SSA_NAME
> > + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
> > + && !bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> > + /* If lhs of stmt is large/huge _BitInt SSA_NAME not in m_names,
> > + it means it will be handled in a loop or straight line code
> > + at the location of its (ultimate) immediate use, so for
> > + vop checking purposes check these only at the ultimate
> > + immediate use. */
> > + continue;
> > +
> > + ssa_op_iter oi;
> > + use_operand_p use_p;
> > + FOR_EACH_SSA_USE_OPERAND (use_p, stmt, oi, SSA_OP_USE)
> > + {
> > + tree s = USE_FROM_PTR (use_p);
> > + if (TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> > + worklist.safe_push (s);
> > + }
> > +
> > + while (worklist.length () > 0)
> > + {
> > + tree s = worklist.pop ();
> > +
> > + if (!bitmap_bit_p (m_names, SSA_NAME_VERSION (s)))
> > + {
> > + FOR_EACH_SSA_USE_OPERAND (use_p, SSA_NAME_DEF_STMT (s),
> > + oi, SSA_OP_USE)
> > + {
> > + tree s2 = USE_FROM_PTR (use_p);
> > + if (TREE_CODE (TREE_TYPE (s2)) == BITINT_TYPE
> > + && (bitint_precision_kind (TREE_TYPE (s2))
> > + >= bitint_prec_large))
> > + worklist.safe_push (s2);
> > + }
> > + continue;
> > + }
> > + if (!SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
> > + && gimple_assign_cast_p (SSA_NAME_DEF_STMT (s)))
> > + {
> > + tree rhs = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
> > + if (TREE_CODE (rhs) == SSA_NAME
> > + && bitmap_bit_p (m_loads, SSA_NAME_VERSION (rhs)))
> > + s = rhs;
> > + else
> > + continue;
> > + }
> > + else if (!bitmap_bit_p (m_loads, SSA_NAME_VERSION (s)))
> > + continue;
> > +
> > + ao_ref ref;
> > + ao_ref_init (&ref, gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)));
> > + tree lvop = gimple_vuse (SSA_NAME_DEF_STMT (s));
> > + unsigned limit = 64;
> > + tree vuse = cvop;
> > + if (vop != cvop
> > + && is_gimple_assign (stmt)
> > + && gimple_store_p (stmt)
> > + && !operand_equal_p (lhs,
> > + gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)),
> > + 0))
> > + vuse = vop;
> > + if (vuse != lvop
> > + && walk_non_aliased_vuses (&ref, vuse, false, vuse_eq,
> > + NULL, NULL, limit, lvop) == NULL)
> > + bitmap_clear_bit (m_loads, SSA_NAME_VERSION (s));
> > + }
> > + }
> > +
> > + bb->aux = (void *) vop;
> > + return NULL;
> > +}
> > +
> > +}
> > +
> > +/* Replacement for normal processing of STMT in tree-ssa-coalesce.cc
> > + build_ssa_conflict_graph.
> > + The differences are:
> > + 1) don't process assignments with large/huge _BitInt lhs not in NAMES
> > + 2) for large/huge _BitInt multiplication/division/modulo process def
> > + only after processing uses rather than before to make uses conflict
> > + with the definition
> > + 3) for large/huge _BitInt uses not in NAMES mark the uses of their
> > + SSA_NAME_DEF_STMT (recursively), because those uses will be sunk into
> > + the final statement. */
> > +
> > +void
> > +build_bitint_stmt_ssa_conflicts (gimple *stmt, live_track *live,
> > + ssa_conflicts *graph, bitmap names,
> > + void (*def) (live_track *, tree,
> > + ssa_conflicts *),
> > + void (*use) (live_track *, tree))
> > +{
> > + bool muldiv_p = false;
> > + tree lhs = NULL_TREE;
> > + if (is_gimple_assign (stmt))
> > + {
> > + lhs = gimple_assign_lhs (stmt);
> > + if (TREE_CODE (lhs) == SSA_NAME
> > + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> > + {
> > + if (!bitmap_bit_p (names, SSA_NAME_VERSION (lhs)))
> > + return;
> > + switch (gimple_assign_rhs_code (stmt))
> > + {
> > + case MULT_EXPR:
> > + case TRUNC_DIV_EXPR:
> > + case TRUNC_MOD_EXPR:
> > + muldiv_p = true;
> > + default:
> > + break;
> > + }
> > + }
> > + }
> > +
> > + ssa_op_iter iter;
> > + tree var;
> > + if (!muldiv_p)
> > + {
> > + /* For stmts with more than one SSA_NAME definition pretend all the
> > + SSA_NAME outputs but the first one are live at this point, so
> > + that conflicts are added in between all those even when they are
> > + actually not really live after the asm, because expansion might
> > + copy those into pseudos after the asm and if multiple outputs
> > + share the same partition, it might overwrite those that should
> > + be live. E.g.
> > + asm volatile (".." : "=r" (a) : "=r" (b) : "0" (a), "1" (a));
> > + return a;
> > + See PR70593. */
> > + bool first = true;
> > + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
> > + if (first)
> > + first = false;
> > + else
> > + use (live, var);
> > +
> > + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
> > + def (live, var, graph);
> > + }
> > +
> > + auto_vec<tree, 16> worklist;
> > + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_USE)
> > + if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
> > + {
> > + if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
> > + use (live, var);
> > + else
> > + worklist.safe_push (var);
> > + }
> > +
> > + while (worklist.length () > 0)
> > + {
> > + tree s = worklist.pop ();
> > + FOR_EACH_SSA_TREE_OPERAND (var, SSA_NAME_DEF_STMT (s), iter, SSA_OP_USE)
> > + if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
> > + && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
> > + {
> > + if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
> > + use (live, var);
> > + else
> > + worklist.safe_push (var);
> > + }
> > + }
> > +
> > + if (muldiv_p)
> > + def (live, lhs, graph);
> > +}
> > +
> > +/* Entry point for _BitInt(N) operation lowering during optimization. */
> > +
> > +static unsigned int
> > +gimple_lower_bitint (void)
> > +{
> > + small_max_prec = mid_min_prec = large_min_prec = huge_min_prec = 0;
> > + limb_prec = 0;
> > +
> > + unsigned int i;
> > + tree vop = gimple_vop (cfun);
> > + for (i = 0; i < num_ssa_names; ++i)
> > + {
> > + tree s = ssa_name (i);
> > + if (s == NULL)
> > + continue;
> > + tree type = TREE_TYPE (s);
> > + if (TREE_CODE (type) == COMPLEX_TYPE)
> > + type = TREE_TYPE (type);
> > + if (TREE_CODE (type) == BITINT_TYPE
> > + && bitint_precision_kind (type) != bitint_prec_small)
> > + break;
> > + /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
> > + into memory. Such functions could have no large/huge SSA_NAMEs. */
> > + if (vop && SSA_NAME_VAR (s) == vop)
>
> SSA_NAME_IS_VIRTUAL_OPERAND (s)
Ok.
>
> > + {
> > + gimple *g = SSA_NAME_DEF_STMT (s);
> > + if (is_gimple_assign (g) && gimple_store_p (g))
> > + {
>
> what about functions returning large _BitInt<N> where the ABI
> specifies it doesn't return by invisible reference?
When we have such a target with _BitInt support we'd see it in testsuite
coverage and I guess checking GIMPLE_RETURN stmts in a function shouldn't
be that hard (first check that the function returns large/huge _BitInt and
if it does, look for preds of EXIT block, or simply say all such functions
do have large/huge _BitInt if they return it).
> The other def not handled are ASMs ...
Indeed, ASMs are what I've realized I won't be able to find as cheaply as
the constant stores into memory. I think it is more important to have the
pass cheap for non-_BitInt sources and so for asm with large/huge _BitInt
INTEGER_CST inputs I've dealt with it in expansion (and intentionally not
in a very optimized way by forcing it into memory, because I don't think
doing anything smarter is worth it for inline asm).
> > + i = 0;
^^^^^^ here
> > + FOR_EACH_VEC_ELT (switch_statements, j, stmt)
> > + {
> > + gswitch *swtch = as_a<gswitch *> (stmt);
> > + tree_switch_conversion::switch_decision_tree dt (swtch);
> > + expanded |= dt.analyze_switch_statement ();
> > + }
> > +
> > + if (expanded)
> > + {
> > + free_dominance_info (CDI_DOMINATORS);
> > + free_dominance_info (CDI_POST_DOMINATORS);
> > + mark_virtual_operands_for_renaming (cfun);
> > + cleanup_tree_cfg (TODO_update_ssa);
> > + }
> > + }
> > +
> > + struct bitint_large_huge large_huge;
> > + bool has_large_huge_parm_result = false;
> > + bool has_large_huge = false;
> > + unsigned int ret = 0, first_large_huge = ~0U;
> > + bool edge_insertions = false;
> > + for (; i < num_ssa_names; ++i)
>
> the above SSA update could end up re-using a smaller SSA name number,
> so I wonder if you can really avoid starting at 1 again.
I do that above. And similarly if I try to "deoptimize" ABS/ABSU/MIN/MAX
or rotates etc., I reset first_large_huge to 0 so the loop after that starts
at 0.
> > + FOR_EACH_BB_REVERSE_FN (bb, cfun)
>
> is reverse in any way important? (not visiting newly created blocks?)
Yeah, that was so that I don't visit the newly created blocks.
The loop continues to iterate with prev which is computed before the
lowering, so if the lowering splits blocks etc. it will continue in the
original block before the code added during the lowering.
> > --- gcc/lto-streamer-in.cc.jj 2023-07-17 09:07:42.078283882 +0200
> > +++ gcc/lto-streamer-in.cc 2023-07-27 15:03:24.255234159 +0200
> > @@ -1888,7 +1888,7 @@ lto_input_tree_1 (class lto_input_block
> >
> > for (i = 0; i < len; i++)
> > a[i] = streamer_read_hwi (ib);
> > - gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
> > + gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION);
>
> OK to push separately.
Ok.
> > + else
> > + {
> > + SET_TYPE_MODE (type, BLKmode);
> > + cnt = CEIL (TYPE_PRECISION (type), GET_MODE_PRECISION (limb_mode));
> > + }
> > + TYPE_SIZE (type) = bitsize_int (cnt * GET_MODE_BITSIZE (limb_mode));
> > + TYPE_SIZE_UNIT (type) = size_int (cnt * GET_MODE_SIZE (limb_mode));
> > + SET_TYPE_ALIGN (type, GET_MODE_ALIGNMENT (limb_mode));
>
> so when a target allows say TImode we don't align to that larger mode?
> Might be worth documenting in the target hook that the alignment
> which I think is part of the ABI is specified by the limb mode.
Right now there is just x86-64 psABI finalized, which says roughly that
what fits into {,un}signed {char,short,int,long,long long} is passed/laid
out like that, everything else is handled like structure containing n
unsigned long long limbs, so indeed
alignof (__int128) > alignof (_BitInt(128)) there.
Now, e.g. the ARM people don't really like that and are contemplating
to say the limb_mode is TImode for 64-bit code, that would mean that
even _BitInt(128) would be a bitint_small_prec there, no bitint_middle_prec
and _BitInt(129) and above would have 128-bit alignment.
The problem with that is that the double-word support in GCC isn't very good
as you know, tons of operations need libgcc and the implementation using
128-bit limbs in libgcc would be terrible. So, maybe we'll want to split
info.limb_mode into info.abi_limb_mode and info.limb_mode, where the former
would be used just in a few spots for ABI purposes (e.g. the alignment and
sizing), while a smaller info.limb_mode could be what is used
internally for the loops and semi-internally (GCC ABI) in the libgcc APIs.
Of course precondition would be that the _BitInt endianity matches the
target endianity, otherwise there is no way to do that.
So, AArch64 could then say _BitInt(256) is 128-bit aligned and
_BitInt(257) has same size as _BitInt(384), but still handle it internally
using 64-bit limbs and expect the libgcc APIs to be passed arrays of 64-bit
limbs (with 64-bit alignment).
> Are arrays of _BitInt a thing? _BitInt<8>[10] would have quite some
> padding then which might be unexpected?
Sure, _BitInt(8)[10] is a thing, after all, the testsuite contains tons
of examples of that. In the x86-64 psABI, _BitInt(8) has the same
alignment/size as signed char, so there is no padding, but sure,
_BitInt(9)[10] does have a padding, it is like array of 10 unsigned shorts
with 7 bits of padding in each of them. Similarly,
_BitInt(575)[10] is an array with 72 bytes long elements with 1 padding bit
in each.
> > +/* Target properties of _BitInt(N) type. _BitInt(N) is to be represented
> > + as series of limb_mode CEIL (N, GET_MODE_PRECISION (limb_mode)) limbs,
> > + ordered from least significant to most significant if !big_endian,
> > + otherwise from most significant to least significant. If extended is
> > + false, the bits above or equal to N are undefined when stored in a register
> > + or memory, otherwise they are zero or sign extended depending on if
> > + it is unsigned _BitInt(N) or _BitInt(N) / signed _BitInt(N). */
> > +
>
> I think this belongs to tm.texi (or duplicated there)
Ok.
> > @@ -6969,8 +6970,14 @@ eliminate_dom_walker::eliminate_stmt (ba
> > || !DECL_BIT_FIELD_TYPE (TREE_OPERAND (lhs, 1)))
> > && !type_has_mode_precision_p (TREE_TYPE (lhs)))
> > {
> > - if (TREE_CODE (lhs) == COMPONENT_REF
> > - || TREE_CODE (lhs) == MEM_REF)
> > + if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > + && (TYPE_PRECISION (TREE_TYPE (lhs))
> > + > (targetm.scalar_mode_supported_p (TImode)
> > + ? GET_MODE_PRECISION (TImode)
> > + : GET_MODE_PRECISION (DImode))))
> > + lookup_lhs = NULL_TREE;
>
> What's the reason for this? You allow non-mode precision
> stores, if you wanted to disallow BLKmode I think the better
> way would be to add != BLKmode above or alternatively
> build a limb-size _BitInt type (instead of
> build_nonstandard_integer_type)?
This was just a quick hack to fix some ICEs. I'm afraid once some people
try csmith on _BitInt we'll get more such spots, and sure, it might be
possible to deal with them better; I'm just not familiar enough with this
code to know what that would be.
> > + this_low = const_unop (NEGATE_EXPR, TREE_TYPE (this_low), this_low);
> > + g = gimple_build_assign (make_ssa_name (TREE_TYPE (index_expr)),
> > + PLUS_EXPR, index_expr, this_low);
> > + gimple_set_location (g, loc);
> > + gsi_insert_after (&gsi, g, GSI_NEW_STMT);
> > + index_expr = gimple_assign_lhs (g);
>
> I suppose using gimple_convert/gimple_build with a sequence would be
> easier to follow.
Guess I could try to use them here, but as I said earlier, changing the
lowering pass to use those everywhere would mean rewriting half of those
6000 lines.
> > --- gcc/ubsan.cc.jj 2023-05-20 15:31:09.240660915 +0200
> > +++ gcc/ubsan.cc 2023-07-27 15:03:24.260234089 +0200
> > @@ -50,6 +50,8 @@ along with GCC; see the file COPYING3.
> > #include "gimple-fold.h"
> > #include "varasm.h"
> > #include "realmpfr.h"
> > +#include "target.h"
> > +#include "langhooks.h"
>
> Sanitizer support into a separate patch?
Ok.
> > @@ -1717,12 +1717,11 @@ simplify_using_ranges::simplify_internal
> > g = gimple_build_assign (gimple_call_lhs (stmt), subcode, op0, op1);
> > else
> > {
> > - int prec = TYPE_PRECISION (type);
> > tree utype = type;
> > if (ovf
> > || !useless_type_conversion_p (type, TREE_TYPE (op0))
> > || !useless_type_conversion_p (type, TREE_TYPE (op1)))
> > - utype = build_nonstandard_integer_type (prec, 1);
> > + utype = unsigned_type_for (type);
> > if (TREE_CODE (op0) == INTEGER_CST)
> > op0 = fold_convert (utype, op0);
> > else if (!useless_type_conversion_p (utype, TREE_TYPE (op0)))
>
> Phew. That was big.
Sorry, I hoped it wouldn't take me almost 3 months and would be much shorter
as well, but clearly I'm not good at estimating stuff...
> A lot of it looks OK (I guess nearly all of it). For the overall
> picture I'm unsure esp. how/if we need to keep the distinction for
> small _BitInt<>s and if we maybe want to lower them earlier even?
The reason for current location was to have a few cleanup passes after IPA,
so that e.g. value ranges can be propagated and computed (something that
helps a lot e.g. for multiplications/divisions and __builtin_*_overflow).
Once lowered, ranger is out of luck with these.
Jakub
> On 04.08.2023 at 18:16, Jakub Jelinek via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:
>
> On Fri, Aug 04, 2023 at 01:25:07PM +0000, Richard Biener wrote:
>>> @@ -144,6 +144,9 @@ DEFTREECODE (BOOLEAN_TYPE, "boolean_type
>>> and TYPE_PRECISION (number of bits used by this type). */
>>> DEFTREECODE (INTEGER_TYPE, "integer_type", tcc_type, 0)
>
> Thanks.
>
>>> +/* Bit-precise integer type. */
>>> +DEFTREECODE (BITINT_TYPE, "bitint_type", tcc_type, 0)
>>> +
>>
>> So what was the main reason to not make BITINT_TYPE equal to INTEGER_TYPE?
>
> The fact that they do or can have different calling conventions from normal
> integers; they e.g. don't promote to integers, so IFN_VA_ARG handling is
> affected (lowered only during stdarg pass after IPA), calling conventions
> depend (with a single finalized target it is premature to hardcode how it
> will behave for all the others, and while on x86_64 the up to 128-bit
> _BitInt pass/return mostly the same, e.g. _BitInt(128) has alignof
> like long long, while __int128 has twice as large alignment).
>
> So, the above was the main reason to make BITINT_TYPE <-> non-BITINT_TYPE
> conversions non-useless such that calls have the right type of arguments.
>
> I'll try to adjust the comments and mention it in generic.texi.
>
>> Maybe note that in the comment as
>>
>> "While bit-precise integer types share the same properties as
>> INTEGER_TYPE ..."
>>
>> ?
>>
>> Note INTEGER_TYPE is documeted in generic.texi but unless I missed
>> it the changelog above doesn't mention documentation for BITINT_TYPE
>> added there.
>
>>> + if (bitint_type_cache == NULL)
>>> + vec_safe_grow_cleared (bitint_type_cache, 2 * MAX_INT_CACHED_PREC + 2);
>>> +
>>> + if (precision <= MAX_INT_CACHED_PREC)
>>> + {
>>> + itype = (*bitint_type_cache)[precision + unsignedp];
>>> + if (itype)
>>> + return itype;
>>
>> I think we added this kind of cache for standard INTEGER_TYPE because
>> the middle-end builds those all over the place and going through
>> the type_hash is expensive. Is that true for _BitInt as well? If
>> not it doesn't seem worth the extra caching.
>
> As even the very large _BitInts are used in the pre-IPA passes, IPA passes
> and a few post-IPA passes similarly to other integral types, I think the
> caching is very useful. But if you want, I could gather some statistics
> on those. Most importantly, no price (almost) is paid if one doesn't use
> those types in the source.
>
>> In fact, I wonder whether the middle-end does/should treat
>> _BitInt<N> and an INTEGER_TYPE with precision N any different?
>
> See above.
>
>> Aka, should we build an INTEGER_TYPE whenever N is say less than
>> the number of bits in word_mode?
>>
>>> + if (TREE_CODE (pval) == INTEGER_CST
>>> + && TREE_CODE (TREE_TYPE (pval)) == BITINT_TYPE)
>>> + {
>>> + unsigned int prec = TYPE_PRECISION (TREE_TYPE (pval));
>>> + struct bitint_info info;
>>> + gcc_assert (targetm.c.bitint_type_info (prec, &info));
>>> + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
>>> + unsigned int limb_prec = GET_MODE_PRECISION (limb_mode);
>>> + if (prec > limb_prec)
>>> + {
>>> + scalar_int_mode arith_mode
>>> + = (targetm.scalar_mode_supported_p (TImode)
>>> + ? TImode : DImode);
>>> + if (prec > GET_MODE_PRECISION (arith_mode))
>>> + pval = tree_output_constant_def (pval);
>>> + }
>>
>> A comment would be helpful to understand what we are doing here.
>
> Ok, will add that. Note, this particular spot is an area for future
> improvement, I've spent half of day on it but then gave up for now.
> In the lowering pass I'm trying to optimize the common case where a lot
> of constants don't need all the limbs and can be represented as one limb
> or several limbs in memory with all the higher limbs then filled with 0s
> or -1s. For the argument passing, it would be even useful to have smaller
> _BitInt constants passed by not having them in memory at all and just
> pushing a couple of constants (i.e. store_by_pieces way). But trying to
> do that in emit_push_insn wasn't really easy...
>
>>> --- gcc/config/i386/i386.cc.jj 2023-07-19 10:01:17.380467993 +0200
>>> +++ gcc/config/i386/i386.cc 2023-07-27 15:03:24.230234508 +0200
>>> @@ -2121,7 +2121,8 @@ classify_argument (machine_mode mode, co
>>> return 0;
>>> }
>>
>> splitting out target support to a separate patch might be helpful
>
> Ok.
>
>>> --- gcc/doc/tm.texi.jj 2023-05-30 17:52:34.474857301 +0200
>>> +++ gcc/doc/tm.texi 2023-07-27 15:03:24.284233753 +0200
>>> @@ -1020,6 +1020,11 @@ Return a value, with the same meaning as
>>> @code{FLT_EVAL_METHOD} that describes which excess precision should be
>>> applied.
>>>
>>> +@deftypefn {Target Hook} bool TARGET_C_BITINT_TYPE_INFO (int @var{n}, struct bitint_info *@var{info})
>>> +This target hook returns true if _BitInt(N) is supported and provides some
>>> +details on it.
>>> +@end deftypefn
>>> +
>>
>> document the "details" here please?
>
> Will do.
>
>>> @@ -20523,6 +20546,22 @@ rtl_for_decl_init (tree init, tree type)
>>> return NULL;
>>> }
>>>
>>> + /* RTL can't deal with BLKmode INTEGER_CSTs. */
>>> + if (TREE_CODE (init) == INTEGER_CST
>>> + && TREE_CODE (TREE_TYPE (init)) == BITINT_TYPE
>>> + && TYPE_MODE (TREE_TYPE (init)) == BLKmode)
>>> + {
>>> + if (tree_fits_shwi_p (init))
>>> + {
>>> + bool uns = TYPE_UNSIGNED (TREE_TYPE (init));
>>> + tree type
>>> + = build_nonstandard_integer_type (HOST_BITS_PER_WIDE_INT, uns);
>>> + init = fold_convert (type, init);
>>> + }
>>> + else
>>> + return NULL;
>>> + }
>>> +
>>
>> it feels like we should avoid the above and fix expand_expr instead.
>> The assert immediately following seems to "support" a NULL_RTX return
>> value so the above trick should work there, too, and we can possibly
>> avoid creating a new INTEGER_TYPE and INTEGER_CST? Another option
>> would be to simply use immed_wide_int_const or simply
>> build a VOIDmode CONST_INT directly here?
>
> Not really sure in this case. I guess I could instead deal with BLKmode
> BITINT_TYPE INTEGER_CSTs in expand_expr* and emit those into memory, but
> I think dwarf2out would be upset that a constant was forced into memory,
> it really wants some DWARF constant.
> Sure, I could create a CONST_INT directly. What to do for larger ones
> is I'm afraid an area for future DWARF improvements.
>
>>> --- gcc/expr.cc.jj 2023-07-02 12:07:08.455164393 +0200
>>> +++ gcc/expr.cc 2023-07-27 15:03:24.253234187 +0200
>>> @@ -10828,6 +10828,8 @@ expand_expr_real_1 (tree exp, rtx target
>>> ssa_name = exp;
>>> decl_rtl = get_rtx_for_ssa_name (ssa_name);
>>> exp = SSA_NAME_VAR (ssa_name);
>>> + if (!exp || VAR_P (exp))
>>> + reduce_bit_field = false;
>>
>> That needs an explanation. Can we do this and related changes
>> as prerequesite instead?
>
> I can add a comment, but those 2 lines are an optimization for the other
> hunks in the same function. The intent is to do the zero/sign extensions
> of _BitInt < mode precision objects (note, this is about the small/middle
> ones which aren't or aren't much lowered in the lowering pass) when reading
> from memory, or function arguments (or RESULT_DECL?) because the ABI says
> those bits are undefined there, but not to do that for temporaries
> (SSA_NAMEs other than the parameters/RESULT_DECLs) because RTL expansion
> has done those extensions already when storing them into the pseudos.
>
>>> goto expand_decl_rtl;
>>>
>>> case VAR_DECL:
>>> @@ -10961,6 +10963,13 @@ expand_expr_real_1 (tree exp, rtx target
>>> temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
>>> MEM_ALIGN (temp), NULL_RTX, NULL);
>>>
>>> + if (TREE_CODE (type) == BITINT_TYPE
>>> + && reduce_bit_field
>>> + && mode != BLKmode
>>> + && modifier != EXPAND_MEMORY
>>> + && modifier != EXPAND_WRITE
>>> + && modifier != EXPAND_CONST_ADDRESS)
>>> + return reduce_to_bit_field_precision (temp, NULL_RTX, type);
>>
>> I wonder how much work it would be to "lower" 'reduce_bit_field' earlier
>> on GIMPLE...
>
> I know that the expr.cc hacks aren't nice, but I'm afraid it would be a lot
> of work and lot of code. And not really sure how to make sure further
> GIMPLE passes wouldn't optimize that away.
>>
>>> @@ -11192,6 +11215,13 @@ expand_expr_real_1 (tree exp, rtx target
>>> && align < GET_MODE_ALIGNMENT (mode))
>>> temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
>>> align, NULL_RTX, NULL);
>>> + if (TREE_CODE (type) == BITINT_TYPE
>>> + && reduce_bit_field
>>> + && mode != BLKmode
>>> + && modifier != EXPAND_WRITE
>>> + && modifier != EXPAND_MEMORY
>>> + && modifier != EXPAND_CONST_ADDRESS)
>>> + return reduce_to_bit_field_precision (temp, NULL_RTX, type);
>>
>> so this is quite repetitive, I suppose the checks ensure we apply
>> it to rvalues only, but I don't really get why we only reduce
>> BITINT_TYPE, esp. as we are not considering BLKmode here?
>
> There could be a macro for that or something to avoid the repetitions.
> The reason to do that for BITINT_TYPE only is that for everything else
> unfortunately RTL does it completely differently. There is separate
> code when reading from bit-fields (which does those extensions), but for
> anything else RTL assumes that sub-mode integers are always extended to the
> corresponding mode. Say for the case where the non-mode integers leak into
> code (C long long/__int128 bit-fields larger than 32 bits) and where say
> FRE/SRA optimizes into SSA_NAMEs, everything assumes that when it is spilled
> in memory, it is always extended and re-extends after every binary/unary
> operation.
> Unfortunately, the x86-64 psABI (and the plans in other psABIs) says the
> padding bits are undefined and so for ABI compatibility we can't rely
> on those bits. Now, for the large/huge ones where lowering occurs I believe
> this shouldn't be a problem, those are VCEd to full limbs and then
> explicitly extend from smaller number of bits on reads.
>
>>> @@ -11253,18 +11283,21 @@ expand_expr_real_1 (tree exp, rtx target
>>> set_mem_addr_space (temp, as);
>>> if (TREE_THIS_VOLATILE (exp))
>>> MEM_VOLATILE_P (temp) = 1;
>>> - if (modifier != EXPAND_WRITE
>>> - && modifier != EXPAND_MEMORY
>>> - && !inner_reference_p
>>> + if (modifier == EXPAND_WRITE || modifier == EXPAND_MEMORY)
>>> + return temp;
>>> + if (!inner_reference_p
>>> && mode != BLKmode
>>> && align < GET_MODE_ALIGNMENT (mode))
>>> temp = expand_misaligned_mem_ref (temp, mode, unsignedp, align,
>>> modifier == EXPAND_STACK_PARM
>>> ? NULL_RTX : target, alt_rtl);
>>> - if (reverse
>>> - && modifier != EXPAND_MEMORY
>>> - && modifier != EXPAND_WRITE)
>>> + if (reverse)
>>
>> the above two look like a useful prerequesite, OK to push separately.
>
> Ok, will do.
>
>>> +enum bitint_prec_kind {
>>> + bitint_prec_small,
>>> + bitint_prec_middle,
>>> + bitint_prec_large,
>>> + bitint_prec_huge
>>> +};
>>> +
>>> +/* Caches to speed up bitint_precision_kind. */
>>> +
>>> +static int small_max_prec, mid_min_prec, large_min_prec, huge_min_prec;
>>> +static int limb_prec;
>>
>> I would appreciate the lowering pass to be in a separate patch in
>> case we need to iterate on it.
>
> I guess that is possible, as long as the C + testcases patches go last,
> nothing will really create those types.
>>
>>> +/* Categorize _BitInt(PREC) as small, middle, large or huge. */
>>> +
>>> +static bitint_prec_kind
>>> +bitint_precision_kind (int prec)
>>> +{
>>> + if (prec <= small_max_prec)
>>> + return bitint_prec_small;
>>> + if (huge_min_prec && prec >= huge_min_prec)
>>> + return bitint_prec_huge;
>>> + if (large_min_prec && prec >= large_min_prec)
>>> + return bitint_prec_large;
>>> + if (mid_min_prec && prec >= mid_min_prec)
>>> + return bitint_prec_middle;
>>> +
>>> + struct bitint_info info;
>>> + gcc_assert (targetm.c.bitint_type_info (prec, &info));
>>> + scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
>>> + if (prec <= GET_MODE_PRECISION (limb_mode))
>>> + {
>>> + small_max_prec = prec;
>>> + return bitint_prec_small;
>>> + }
>>> + scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
>>> + ? TImode : DImode);
>>> + if (!large_min_prec
>>> + && GET_MODE_PRECISION (arith_mode) > GET_MODE_PRECISION (limb_mode))
>>> + large_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
>>> + if (!limb_prec)
>>> + limb_prec = GET_MODE_PRECISION (limb_mode);
>>> + if (!huge_min_prec)
>>> + {
>>> + if (4 * limb_prec >= GET_MODE_PRECISION (arith_mode))
>>> + huge_min_prec = 4 * limb_prec;
>>> + else
>>> + huge_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
>>> + }
>>> + if (prec <= GET_MODE_PRECISION (arith_mode))
>>> + {
>>> + if (!mid_min_prec || prec < mid_min_prec)
>>> + mid_min_prec = prec;
>>> + return bitint_prec_middle;
>>> + }
>>> + if (large_min_prec && prec <= large_min_prec)
>>> + return bitint_prec_large;
>>> + return bitint_prec_huge;
>>> +}
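Not part of the patch, but to make the four-way classification concrete, here is a small Python model of the intended thresholds, assuming a hypothetical target with a 64-bit limb mode and 128-bit widest arithmetic mode (TImode); the constants are assumptions standing in for the target hook queries:

```python
# Model of the bitint_precision_kind classification.  LIMB_PREC and
# ARITH_PREC are assumed target parameters, not queried from a backend.
LIMB_PREC = 64     # precision of info.limb_mode
ARITH_PREC = 128   # precision of the widest scalar arithmetic mode

def bitint_precision_kind(prec):
    """Classify _BitInt(prec) as small/middle/large/huge."""
    if prec <= LIMB_PREC:
        return "small"          # fits in a single limb
    if prec <= ARITH_PREC:
        return "middle"         # handled by casting to a wide INTEGER_TYPE
    # huge starts at 4 limbs, or just above the arithmetic mode if larger
    huge_min_prec = max(4 * LIMB_PREC, ARITH_PREC + 1)
    if prec < huge_min_prec:
        return "large"          # straight-line code on a few limbs
    return "huge"               # loop over limbs
```

With these assumed parameters, _BitInt(129) through _BitInt(255) get straight-line code and anything from 256 bits up gets the loop form.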
>>> +
>>> +/* Same for a TYPE. */
>>> +
>>> +static bitint_prec_kind
>>> +bitint_precision_kind (tree type)
>>> +{
>>> + return bitint_precision_kind (TYPE_PRECISION (type));
>>> +}
>>> +
>>> +/* Return minimum precision needed to describe INTEGER_CST
>>> + CST. All bits above that precision up to precision of
>>> + TREE_TYPE (CST) are cleared if EXT is set to 0, or set
>>> + if EXT is set to -1. */
>>> +
>>> +static unsigned
>>> +bitint_min_cst_precision (tree cst, int &ext)
>>> +{
>>> + ext = tree_int_cst_sgn (cst) < 0 ? -1 : 0;
>>> + wide_int w = wi::to_wide (cst);
>>> + unsigned min_prec = wi::min_precision (w, TYPE_SIGN (TREE_TYPE (cst)));
>>> + /* For signed values, we don't need to count the sign bit,
>>> + we'll use constant 0 or -1 for the upper bits. */
>>> + if (!TYPE_UNSIGNED (TREE_TYPE (cst)))
>>> + --min_prec;
>>> + else
>>> + {
>>> + /* For unsigned values, also try signed min_precision
>>> + in case the constant has lots of most significant bits set. */
>>> + unsigned min_prec2 = wi::min_precision (w, SIGNED) - 1;
>>> + if (min_prec2 < min_prec)
>>> + {
>>> + ext = -1;
>>> + return min_prec2;
>>> + }
>>> + }
>>> + return min_prec;
>>> +}
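As a sanity check of the "minimum precision + extension" idea above, here is a Python model, with plain ints standing in for wide_int and an explicit (prec, unsignedness) pair standing in for the constant's type; the function names mirror the patch but the code is only an illustration:

```python
def min_precision(v, signed):
    """Model of wi::min_precision: bits needed for V with given signedness."""
    if signed:
        return (v.bit_length() if v >= 0 else (~v).bit_length()) + 1
    assert v >= 0
    return v.bit_length()

def bitint_min_cst_precision(value, prec, unsigned):
    """Return (min_prec, ext) for the PREC-bit constant with bit pattern
    VALUE: all bits above min_prec up to prec equal ext (0 or -1)."""
    w = value & ((1 << prec) - 1)                          # raw bit pattern
    sw = w - (1 << prec) if (w >> (prec - 1)) & 1 else w   # signed reading
    if not unsigned:
        # the sign bit isn't counted; the upper bits come from ext
        return min_precision(sw, True) - 1, (-1 if sw < 0 else 0)
    min_prec = min_precision(w, False)
    # try the signed reading too, for constants with many high bits set
    min_prec2 = min_precision(sw, True) - 1
    if min_prec2 < min_prec:
        return min_prec2, -1
    return min_prec, 0
```

E.g. the unsigned 256-bit constant with bit pattern 0xff...ff8 is better described as 3 low bits plus all-ones extension than as a full 256-bit value.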
>>> +
>>> +namespace {
>>> +
>>> +/* If OP is middle _BitInt, cast it to corresponding INTEGER_TYPE
>>> + cached in TYPE and return it. */
>>> +
>>> +tree
>>> +maybe_cast_middle_bitint (gimple_stmt_iterator *gsi, tree op, tree &type)
>>> +{
>>> + if (op == NULL_TREE
>>> + || TREE_CODE (TREE_TYPE (op)) != BITINT_TYPE
>>> + || bitint_precision_kind (TREE_TYPE (op)) != bitint_prec_middle)
>>> + return op;
>>> +
>>> + int prec = TYPE_PRECISION (TREE_TYPE (op));
>>> + int uns = TYPE_UNSIGNED (TREE_TYPE (op));
>>> + if (type == NULL_TREE
>>> + || TYPE_PRECISION (type) != prec
>>> + || TYPE_UNSIGNED (type) != uns)
>>> + type = build_nonstandard_integer_type (prec, uns);
>>> +
>>> + if (TREE_CODE (op) != SSA_NAME)
>>> + {
>>> + tree nop = fold_convert (type, op);
>>> + if (is_gimple_val (nop))
>>> + return nop;
>>> + }
>>> +
>>> + tree nop = make_ssa_name (type);
>>> + gimple *g = gimple_build_assign (nop, NOP_EXPR, op);
>>> + gsi_insert_before (gsi, g, GSI_SAME_STMT);
>>> + return nop;
>>> +}
>>> +
>>> +/* Return true if STMT can be handled in a loop from least to most
>>> + significant limb together with its dependencies. */
>>> +
>>> +bool
>>> +mergeable_op (gimple *stmt)
>>> +{
>>> + if (!is_gimple_assign (stmt))
>>> + return false;
>>> + switch (gimple_assign_rhs_code (stmt))
>>> + {
>>> + case PLUS_EXPR:
>>> + case MINUS_EXPR:
>>> + case NEGATE_EXPR:
>>> + case BIT_AND_EXPR:
>>> + case BIT_IOR_EXPR:
>>> + case BIT_XOR_EXPR:
>>> + case BIT_NOT_EXPR:
>>> + case SSA_NAME:
>>> + case INTEGER_CST:
>>> + return true;
>>> + case LSHIFT_EXPR:
>>> + {
>>> + tree cnt = gimple_assign_rhs2 (stmt);
>>> + if (tree_fits_uhwi_p (cnt)
>>> + && tree_to_uhwi (cnt) < (unsigned HOST_WIDE_INT) limb_prec)
>>> + return true;
>>> + }
>>> + break;
>>> + CASE_CONVERT:
>>> + case VIEW_CONVERT_EXPR:
>>> + {
>>> + tree lhs_type = TREE_TYPE (gimple_assign_lhs (stmt));
>>> + tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
>>> + if (TREE_CODE (gimple_assign_rhs1 (stmt)) == SSA_NAME
>>> + && TREE_CODE (lhs_type) == BITINT_TYPE
>>> + && TREE_CODE (rhs_type) == BITINT_TYPE
>>> + && bitint_precision_kind (lhs_type) >= bitint_prec_large
>>> + && bitint_precision_kind (rhs_type) >= bitint_prec_large
>>> + && tree_int_cst_equal (TYPE_SIZE (lhs_type), TYPE_SIZE (rhs_type)))
>>> + {
>>> + if (TYPE_PRECISION (rhs_type) >= TYPE_PRECISION (lhs_type))
>>> + return true;
>>> + if ((unsigned) TYPE_PRECISION (lhs_type) % (2 * limb_prec) != 0)
>>> + return true;
>>> + if (bitint_precision_kind (lhs_type) == bitint_prec_large)
>>> + return true;
>>> + }
>>> + break;
>>> + }
>>> + default:
>>> + break;
>>> + }
>>> + return false;
>>> +}
>>> +
>>> +/* Return non-zero if STMT is a .{ADD,SUB,MUL}_OVERFLOW call with
>>> + _Complex large/huge _BitInt lhs which has at most two immediate uses,
>>> + at most one use in REALPART_EXPR stmt in the same bb and exactly one
>>> + IMAGPART_EXPR use in the same bb with a single use which casts it to
>>> + non-BITINT_TYPE integral type. If there is a REALPART_EXPR use,
>>> + return 2. Such cases (most common uses of those builtins) can be
>>> + optimized by marking their lhs and lhs of IMAGPART_EXPR and maybe lhs
>>> + of REALPART_EXPR as not needed to be backed up by a stack variable.
>>> + For .UBSAN_CHECK_{ADD,SUB,MUL} return 3. */
>>> +
>>> +int
>>> +optimizable_arith_overflow (gimple *stmt)
>>> +{
>>> + bool is_ubsan = false;
>>> + if (!is_gimple_call (stmt) || !gimple_call_internal_p (stmt))
>>> + return 0;
>>> + switch (gimple_call_internal_fn (stmt))
>>> + {
>>> + case IFN_ADD_OVERFLOW:
>>> + case IFN_SUB_OVERFLOW:
>>> + case IFN_MUL_OVERFLOW:
>>> + break;
>>> + case IFN_UBSAN_CHECK_ADD:
>>> + case IFN_UBSAN_CHECK_SUB:
>>> + case IFN_UBSAN_CHECK_MUL:
>>> + is_ubsan = true;
>>> + break;
>>> + default:
>>> + return 0;
>>> + }
>>> + tree lhs = gimple_call_lhs (stmt);
>>> + if (!lhs)
>>> + return 0;
>>> + if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs))
>>> + return 0;
>>> + tree type = is_ubsan ? TREE_TYPE (lhs) : TREE_TYPE (TREE_TYPE (lhs));
>>> + if (TREE_CODE (type) != BITINT_TYPE
>>> + || bitint_precision_kind (type) < bitint_prec_large)
>>> + return 0;
>>> +
>>> + if (is_ubsan)
>>> + {
>>> + use_operand_p use_p;
>>> + gimple *use_stmt;
>>> + if (!single_imm_use (lhs, &use_p, &use_stmt)
>>> + || gimple_bb (use_stmt) != gimple_bb (stmt)
>>> + || !gimple_store_p (use_stmt)
>>> + || !is_gimple_assign (use_stmt)
>>> + || gimple_has_volatile_ops (use_stmt)
>>> + || stmt_ends_bb_p (use_stmt))
>>> + return 0;
>>> + return 3;
>>> + }
>>> +
>>> + imm_use_iterator ui;
>>> + use_operand_p use_p;
>>> + int seen = 0;
>>> + FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
>>> + {
>>> + gimple *g = USE_STMT (use_p);
>>> + if (is_gimple_debug (g))
>>> + continue;
>>> + if (!is_gimple_assign (g) || gimple_bb (g) != gimple_bb (stmt))
>>> + return 0;
>>> + if (gimple_assign_rhs_code (g) == REALPART_EXPR)
>>> + {
>>> + if ((seen & 1) != 0)
>>> + return 0;
>>> + seen |= 1;
>>> + }
>>> + else if (gimple_assign_rhs_code (g) == IMAGPART_EXPR)
>>> + {
>>> + if ((seen & 2) != 0)
>>> + return 0;
>>> + seen |= 2;
>>> +
>>> + use_operand_p use2_p;
>>> + gimple *use_stmt;
>>> + tree lhs2 = gimple_assign_lhs (g);
>>> + if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs2))
>>> + return 0;
>>> + if (!single_imm_use (lhs2, &use2_p, &use_stmt)
>>> + || gimple_bb (use_stmt) != gimple_bb (stmt)
>>> + || !gimple_assign_cast_p (use_stmt))
>>> + return 0;
>>> +
>>> + lhs2 = gimple_assign_lhs (use_stmt);
>>> + if (!INTEGRAL_TYPE_P (TREE_TYPE (lhs2))
>>> + || TREE_CODE (TREE_TYPE (lhs2)) == BITINT_TYPE)
>>> + return 0;
>>> + }
>>> + else
>>> + return 0;
>>> + }
>>> + if ((seen & 2) == 0)
>>> + return 0;
>>> + return seen == 3 ? 2 : 1;
>>> +}
>>> +
>>> +/* If STMT is some kind of comparison (GIMPLE_COND, comparison
>>> + assignment or COND_EXPR) comparing large/huge _BitInt types,
>>> + return the comparison code and if non-NULL fill in the comparison
>>> + operands to *POP1 and *POP2. */
>>> +
>>> +tree_code
>>> +comparison_op (gimple *stmt, tree *pop1, tree *pop2)
>>> +{
>>> + tree op1 = NULL_TREE, op2 = NULL_TREE;
>>> + tree_code code = ERROR_MARK;
>>> + if (gimple_code (stmt) == GIMPLE_COND)
>>> + {
>>> + code = gimple_cond_code (stmt);
>>> + op1 = gimple_cond_lhs (stmt);
>>> + op2 = gimple_cond_rhs (stmt);
>>> + }
>>> + else if (is_gimple_assign (stmt))
>>> + {
>>> + code = gimple_assign_rhs_code (stmt);
>>> + op1 = gimple_assign_rhs1 (stmt);
>>> + if (TREE_CODE_CLASS (code) == tcc_comparison
>>> + || TREE_CODE_CLASS (code) == tcc_binary)
>>> + op2 = gimple_assign_rhs2 (stmt);
>>> + switch (code)
>>> + {
>>> + default:
>>> + break;
>>> + case COND_EXPR:
>>> + tree cond = gimple_assign_rhs1 (stmt);
>>> + code = TREE_CODE (cond);
>>> + op1 = TREE_OPERAND (cond, 0);
>>> + op2 = TREE_OPERAND (cond, 1);
>>
>> this should ICE, COND_EXPRs now have is_gimple_reg conditions.
>
> COND_EXPR was a case I haven't managed to reproduce (I think
> usually if it is created at all it is created later).
> I see tree-cfg.cc for this was changed in GCC 13, but I see tons
> of spots which still try to handle COMPARISON_CLASS_P rhs1 of COND_EXPR
> (e.g. in tree-ssa-math-opts.cc). Does the rhs1 have to be boolean,
> or could it be any integral type (so, would I need to e.g. be prepared
> for BITINT_TYPE rhs1 which would need to have lowered != 0 comparison for
> it)?
It should be Boolean, not an integer type.
Yes, there are probably leftovers, and at least the vectorizer still uses it 'wrong' intermediately.
>
>>> +/* Return a tree how to access limb IDX of VAR corresponding to BITINT_TYPE
>>> + TYPE. If WRITE_P is true, it will be a store, otherwise a read. */
>>> +
>>> +tree
>>> +bitint_large_huge::limb_access (tree type, tree var, tree idx, bool write_p)
>>> +{
>>> + tree atype = (tree_fits_uhwi_p (idx)
>>> + ? limb_access_type (type, idx) : m_limb_type);
>>> + tree ret;
>>> + if (DECL_P (var) && tree_fits_uhwi_p (idx))
>>> + {
>>> + tree ptype = build_pointer_type (strip_array_types (TREE_TYPE (var)));
>>> + unsigned HOST_WIDE_INT off = tree_to_uhwi (idx) * m_limb_size;
>>> + ret = build2 (MEM_REF, m_limb_type,
>>> + build_fold_addr_expr (var),
>>> + build_int_cst (ptype, off));
>>> + if (TREE_THIS_VOLATILE (var) || TREE_THIS_VOLATILE (TREE_TYPE (var)))
>>> + TREE_THIS_VOLATILE (ret) = 1;
>>
>> Note if we have
>>
>> volatile int i;
>> x = *(int *)&i;
>>
>> we get a non-volatile load from 'i', likewise in the reverse case
>> where we get a volatile load from a non-volatile decl. The above
>> gets this wrong - the volatileness should be derived from the
>> original reference with just TREE_THIS_VOLATILE checking
>> (and not on the type).
>>
>> You possibly also want to set TREE_SIDE_EFFECTS (not sure when
>> that was exactly set), forwprop for example makes sure to copy
>> that (and also TREE_THIS_NOTRAP in some cases).
>
> Ok.
>
>> How do "volatile" _BitInt(n) work? People expect 'volatile'
>> objects to be operated on in whole, thus a 'volatile int'
>> load not split into two, etc. I guess if we split a volatile
>> _BitInt access it's reasonable to remove the 'volatile'?
>
> They work like volatile bitfields or volatile __int128 or long long
> on 32-bit arches, we don't really guarantee a single load or store there
> (unless one uses __atomic* APIs which are lock-free).
> The intent for volatile, and what I've checked e.g. by eyeballing dumps,
> was that volatile _BitInt loads or stores aren't merged with other
> operations.  If they were merged and we e.g. had z = x + y where all 3
> vars were volatile, we'd first read the LSB limb of all of them, store
> the result etc.; when not merged, each "load" or "store" isn't
> interleaved with the others.  Also, e.g. _BitInt bit-field loads/stores
> don't read the same memory multiple times (which can happen e.g. for
> shifts or </<=/>/>= comparisons when they aren't iterating on limbs
> strictly upwards from least significant to most).
>
>>> + else
>>> + {
>>> + var = unshare_expr (var);
>>> + if (TREE_CODE (TREE_TYPE (var)) != ARRAY_TYPE
>>> + || !useless_type_conversion_p (m_limb_type,
>>> + TREE_TYPE (TREE_TYPE (var))))
>>> + {
>>> + unsigned HOST_WIDE_INT nelts
>>> + = tree_to_uhwi (TYPE_SIZE (type)) / limb_prec;
>>> + tree atype = build_array_type_nelts (m_limb_type, nelts);
>>> + var = build1 (VIEW_CONVERT_EXPR, atype, var);
>>> + }
>>> + ret = build4 (ARRAY_REF, m_limb_type, var, idx, NULL_TREE, NULL_TREE);
>>> + }
>>
>> maybe the volatile handling can be commonized here?
>
> From my experience with it, the volatile handling didn't have to be added
> in this case because it works from the VIEW_CONVERT_EXPRs.
> It was just the optimizations for decls and MEM_REFs with constant indexes
> where I had to do something about volatile.
>
>>> + case SSA_NAME:
>>> + if (m_names == NULL
>>> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (op)))
>>> + {
>>> + if (gimple_code (SSA_NAME_DEF_STMT (op)) == GIMPLE_NOP)
>>
>> SSA_NAME_IS_DEFAULT_DEF
>
> Ok.
>>
>>> + {
>>> + if (m_first)
>>> + {
>>> + tree v = create_tmp_var (m_limb_type);
>>
>> create_tmp_reg?
>
> I see create_tmp_reg just calls create_tmp_var, but if you prefer it,
> sure, it isn't an addressable var and so either is fine.
It’s the same now, yes. I guess I don’t mind; maybe we should remove one, or assert that we’re calling the _reg variant with a register type.
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + edge e2 = split_block (e1->dest, (gimple *) NULL);
>>> + edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
>>> + e3->probability = profile_probability::likely ();
>>> + if (min_prec >= (prec - rem) / 2)
>>> + e3->probability = e3->probability.invert ();
>>> + e1->flags = EDGE_FALSE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + if (min_prec > (unsigned) limb_prec)
>>> + {
>>> + c = limb_access (TREE_TYPE (op), c, idx, false);
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (c)), c);
>>> + insert_before (g);
>>> + c = gimple_assign_lhs (g);
>>> + }
>>> + tree c2 = build_int_cst (m_limb_type, ext);
>>> + m_gsi = gsi_after_labels (e2->dest);
>>> + t = make_ssa_name (m_limb_type);
>>> + gphi *phi = create_phi_node (t, e2->dest);
>>> + add_phi_arg (phi, c, e2, UNKNOWN_LOCATION);
>>> + add_phi_arg (phi, c2, e3, UNKNOWN_LOCATION);
>>
>> Not sure if I get to see more than the two cases above but maybe
>> a helper to emit a (half-)diamond for N values (PHI results) would be
>> helpful (possibly indicating the fallthru edge truth value if any)?
>
> I've added a helper to create a loop, but indeed doing this for the
> ifs might be a good idea too, just quite a lot of work to get it right
> because it is now used in many places.
> I think the code uses 3 cases, one is to create
> C1
> |\
> |B1
> |/
> +
> another
> C1
> / \
> B1 B2
> \ /
> +
> and another
> C1
> / \
> | C2
> | |\
> | | \
> |B1 B2
> \ | /
> \|/
> +
> and needs to remember for later the edges to create phis if needed.
> And, sometimes the B1 or B2 bbs are split to deal with EH edges. So will
> need to think about best interface for these. Could this be done
> incrementally when/if it is committed to trunk?
Yes.
>
>>> + tree in = add_cast (rhs1_type, data_in);
>>> + lhs = make_ssa_name (rhs1_type);
>>> + g = gimple_build_assign (lhs, code, rhs1, rhs2);
>>> + insert_before (g);
>>> + rhs1 = make_ssa_name (rhs1_type);
>>> + g = gimple_build_assign (rhs1, code, lhs, in);
>>> + insert_before (g);
>>
>> I'll just note there's now gimple_build overloads inserting at an
>> iterator:
>>
>> extern tree gimple_build (gimple_stmt_iterator *, bool,
>> enum gsi_iterator_update,
>> location_t, code_helper, tree, tree, tree);
>>
>> I guess there's not much folding possibilities during the building,
>> but it would allow to write
>
> Changing that would mean rewriting everything I'm afraid. Indeed as you
> wrote, it is very rare that something could be folded during the lowering.
>>
>> rhs1 = gimple_build (&gsi, true, GSI_SAME_STMT, m_loc, code, rhs1_type,
>> lhs, in);
>>
>> instead of
>>
>>> + rhs1 = make_ssa_name (rhs1_type);
>>> + g = gimple_build_assign (rhs1, code, lhs, in);
>>> + insert_before (g);
>>
>> just in case you forgot about those. I think we're missing some
>> gimple-build "state" class to keep track of common arguments, like
>>
>> gimple_build gb (&gsi, true, GSI_SAME_STMT, m_loc);
>> rhs1 = gb.build (code, rhs1_type, lhs, in);
>> ...
>>
>> anyway, just wanted to note this - no need to change the patch.
>
>>> + switch (gimple_code (stmt))
>>> + {
>>> + case GIMPLE_ASSIGN:
>>> + if (gimple_assign_load_p (stmt))
>>> + {
>>> + rhs1 = gimple_assign_rhs1 (stmt);
>>
>> so TREE_THIS_VOLATILE/TREE_SIDE_EFFECTS (rhs1) would be the thing
>> to eventually preserve
>
> limb_access should do that.
>
>>> +tree
>>> +bitint_large_huge::create_loop (tree init, tree *idx_next)
>>> +{
>>> + if (!gsi_end_p (m_gsi))
>>> + gsi_prev (&m_gsi);
>>> + else
>>> + m_gsi = gsi_last_bb (gsi_bb (m_gsi));
>>> + edge e1 = split_block (gsi_bb (m_gsi), gsi_stmt (m_gsi));
>>> + edge e2 = split_block (e1->dest, (gimple *) NULL);
>>> + edge e3 = make_edge (e1->dest, e1->dest, EDGE_TRUE_VALUE);
>>> + e3->probability = profile_probability::very_unlikely ();
>>> + e2->flags = EDGE_FALSE_VALUE;
>>> + e2->probability = e3->probability.invert ();
>>> + tree idx = make_ssa_name (sizetype);
>>
>> maybe you want integer_type_node instead?
>
> The indexes are certainly unsigned, and given that they are used
> as array indexes, I thought sizetype would avoid zero or sign extensions
> in lots of places.
Ah, yeah, that might be the case.
>>> + gphi *phi = create_phi_node (idx, e1->dest);
>>> + add_phi_arg (phi, init, e1, UNKNOWN_LOCATION);
>>> + *idx_next = make_ssa_name (sizetype);
>>> + add_phi_arg (phi, *idx_next, e3, UNKNOWN_LOCATION);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + m_bb = e1->dest;
>>> + m_preheader_bb = e1->src;
>>> + class loop *loop = alloc_loop ();
>>> + loop->header = e1->dest;
>>> + add_loop (loop, e1->src->loop_father);
>>
>> There is create_empty_loop_on_edge, it does a little bit more
>> than the above though.
>
> That looks much larger than what I need.
>>
>>> + return idx;
>>> +}
>>> +
>>> +/* Lower large/huge _BitInt statement mergeable or similar STMT which can be
>>> + lowered using iteration from the least significant limb up to the most
>>> + significant limb. For large _BitInt it is emitted as straight line code
>>> + before current location, for huge _BitInt as a loop handling two limbs
>>> + at once, followed by handling up to two limbs in straight line code (at most
>>> + one full and one partial limb). It can also handle EQ_EXPR/NE_EXPR
>>> + comparisons, in that case CMP_CODE should be the comparison code and
>>> + CMP_OP1/CMP_OP2 the comparison operands. */
>>> +
>>> +tree
>>> +bitint_large_huge::lower_mergeable_stmt (gimple *stmt, tree_code &cmp_code,
>>> + tree cmp_op1, tree cmp_op2)
>>> +{
>>> + bool eq_p = cmp_code != ERROR_MARK;
>>> + tree type;
>>> + if (eq_p)
>>> + type = TREE_TYPE (cmp_op1);
>>> + else
>>> + type = TREE_TYPE (gimple_assign_lhs (stmt));
>>> + gcc_assert (TREE_CODE (type) == BITINT_TYPE);
>>> + bitint_prec_kind kind = bitint_precision_kind (type);
>>> + gcc_assert (kind >= bitint_prec_large);
>>> + gimple *g;
>>> + tree lhs = gimple_get_lhs (stmt);
>>> + tree rhs1, lhs_type = lhs ? TREE_TYPE (lhs) : NULL_TREE;
>>> + if (lhs
>>> + && TREE_CODE (lhs) == SSA_NAME
>>> + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
>>> + {
>>> + int p = var_to_partition (m_map, lhs);
>>> + gcc_assert (m_vars[p] != NULL_TREE);
>>> + m_lhs = lhs = m_vars[p];
>>> + }
>>> + unsigned cnt, rem = 0, end = 0, prec = TYPE_PRECISION (type);
>>> + bool sext = false;
>>> + tree ext = NULL_TREE, store_operand = NULL_TREE;
>>> + bool eh = false;
>>> + basic_block eh_pad = NULL;
>>> + if (gimple_store_p (stmt))
>>> + {
>>> + store_operand = gimple_assign_rhs1 (stmt);
>>> + eh = stmt_ends_bb_p (stmt);
>>> + if (eh)
>>> + {
>>> + edge e;
>>> + edge_iterator ei;
>>> + basic_block bb = gimple_bb (stmt);
>>> +
>>> + FOR_EACH_EDGE (e, ei, bb->succs)
>>> + if (e->flags & EDGE_EH)
>>> + {
>>> + eh_pad = e->dest;
>>> + break;
>>> + }
>>> + }
>>> + }
>>> + if ((store_operand
>>> + && TREE_CODE (store_operand) == SSA_NAME
>>> + && (m_names == NULL
>>> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (store_operand)))
>>> + && gimple_assign_cast_p (SSA_NAME_DEF_STMT (store_operand)))
>>> + || gimple_assign_cast_p (stmt))
>>> + {
>>> + rhs1 = gimple_assign_rhs1 (store_operand
>>> + ? SSA_NAME_DEF_STMT (store_operand)
>>> + : stmt);
>>> + /* Optimize mergeable ops ending with widening cast to _BitInt
>>> + (or followed by store). We can lower just the limbs of the
>>> + cast operand and widen afterwards. */
>>> + if (TREE_CODE (rhs1) == SSA_NAME
>>> + && (m_names == NULL
>>> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1)))
>>> + && TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
>>> + && (CEIL ((unsigned) TYPE_PRECISION (TREE_TYPE (rhs1)),
>>> + limb_prec) < CEIL (prec, limb_prec)
>>> + || (kind == bitint_prec_huge
>>> + && TYPE_PRECISION (TREE_TYPE (rhs1)) < prec)))
>>> + {
>>> + store_operand = rhs1;
>>> + prec = TYPE_PRECISION (TREE_TYPE (rhs1));
>>> + kind = bitint_precision_kind (TREE_TYPE (rhs1));
>>> + if (!TYPE_UNSIGNED (TREE_TYPE (rhs1)))
>>> + sext = true;
>>> + }
>>> + }
>>> + tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
>>> + if (kind == bitint_prec_large)
>>> + cnt = CEIL (prec, limb_prec);
>>> + else
>>> + {
>>> + rem = (prec % (2 * limb_prec));
>>> + end = (prec - rem) / limb_prec;
>>> + cnt = 2 + CEIL (rem, limb_prec);
>>> + idx = idx_first = create_loop (size_zero_node, &idx_next);
>>> + }
>>> +
>>> + basic_block edge_bb = NULL;
>>> + if (eq_p)
>>> + {
>>> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
>>> + gsi_prev (&gsi);
>>> + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
>>> + edge_bb = e->src;
>>> + if (kind == bitint_prec_large)
>>> + {
>>> + m_gsi = gsi_last_bb (edge_bb);
>>> + if (!gsi_end_p (m_gsi))
>>> + gsi_next (&m_gsi);
>>> + }
>>> + }
>>> + else
>>> + m_after_stmt = stmt;
>>> + if (kind != bitint_prec_large)
>>> + m_upwards_2limb = end;
>>> +
>>> + for (unsigned i = 0; i < cnt; i++)
>>> + {
>>> + m_data_cnt = 0;
>>> + if (kind == bitint_prec_large)
>>> + idx = size_int (i);
>>> + else if (i >= 2)
>>> + idx = size_int (end + (i > 2));
>>> + if (eq_p)
>>> + {
>>> + rhs1 = handle_operand (cmp_op1, idx);
>>> + tree rhs2 = handle_operand (cmp_op2, idx);
>>> + g = gimple_build_cond (NE_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + e1->flags = EDGE_FALSE_VALUE;
>>> + edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
>>> + e1->probability = profile_probability::unlikely ();
>>> + e2->probability = e1->probability.invert ();
>>> + if (i == 0)
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + }
>>> + else
>>> + {
>>> + if (store_operand)
>>> + rhs1 = handle_operand (store_operand, idx);
>>> + else
>>> + rhs1 = handle_stmt (stmt, idx);
>>> + tree l = limb_access (lhs_type, lhs, idx, true);
>>> + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
>>> + rhs1 = add_cast (TREE_TYPE (l), rhs1);
>>> + if (sext && i == cnt - 1)
>>> + ext = rhs1;
>>> + g = gimple_build_assign (l, rhs1);
>>> + insert_before (g);
>>> + if (eh)
>>> + {
>>> + maybe_duplicate_eh_stmt (g, stmt);
>>> + if (eh_pad)
>>> + {
>>> + edge e = split_block (gsi_bb (m_gsi), g);
>>> + m_gsi = gsi_after_labels (e->dest);
>>> + make_edge (e->src, eh_pad, EDGE_EH)->probability
>>> + = profile_probability::very_unlikely ();
>>> + }
>>> + }
>>> + }
>>> + m_first = false;
>>> + if (kind == bitint_prec_huge && i <= 1)
>>> + {
>>> + if (i == 0)
>>> + {
>>> + idx = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
>>> + size_one_node);
>>> + insert_before (g);
>>> + }
>>> + else
>>> + {
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
>>> + size_int (2));
>>> + insert_before (g);
>>> + g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + if (eq_p)
>>> + m_gsi = gsi_after_labels (edge_bb);
>>> + else
>>> + m_gsi = gsi_for_stmt (stmt);
>>> + }
>>> + }
>>> + }
>>> +
>>> + if (prec != (unsigned) TYPE_PRECISION (type)
>>> + && (CEIL ((unsigned) TYPE_PRECISION (type), limb_prec)
>>> + > CEIL (prec, limb_prec)))
>>> + {
>>> + if (sext)
>>> + {
>>> + ext = add_cast (signed_type_for (m_limb_type), ext);
>>> + tree lpm1 = build_int_cst (unsigned_type_node,
>>> + limb_prec - 1);
>>> + tree n = make_ssa_name (TREE_TYPE (ext));
>>> + g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
>>> + insert_before (g);
>>> + ext = add_cast (m_limb_type, n);
>>> + }
>>> + else
>>> + ext = build_zero_cst (m_limb_type);
>>> + kind = bitint_precision_kind (type);
>>> + unsigned start = CEIL (prec, limb_prec);
>>> + prec = TYPE_PRECISION (type);
>>> + idx = idx_first = idx_next = NULL_TREE;
>>> + if (prec <= (start + 2) * limb_prec)
>>> + kind = bitint_prec_large;
>>> + if (kind == bitint_prec_large)
>>> + cnt = CEIL (prec, limb_prec) - start;
>>> + else
>>> + {
>>> + rem = prec % limb_prec;
>>> + end = (prec - rem) / limb_prec;
>>> + cnt = 1 + (rem != 0);
>>> + idx = create_loop (size_int (start), &idx_next);
>>> + }
>>> + for (unsigned i = 0; i < cnt; i++)
>>> + {
>>> + if (kind == bitint_prec_large)
>>> + idx = size_int (start + i);
>>> + else if (i == 1)
>>> + idx = size_int (end);
>>> + rhs1 = ext;
>>> + tree l = limb_access (lhs_type, lhs, idx, true);
>>> + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
>>> + rhs1 = add_cast (TREE_TYPE (l), rhs1);
>>> + g = gimple_build_assign (l, rhs1);
>>> + insert_before (g);
>>> + if (kind == bitint_prec_huge && i == 0)
>>> + {
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
>>> + size_one_node);
>>> + insert_before (g);
>>> + g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + m_gsi = gsi_for_stmt (stmt);
>>> + }
>>> + }
>>> + }
>>> +
>>> + if (gimple_store_p (stmt))
>>> + {
>>> + unlink_stmt_vdef (stmt);
>>> + release_ssa_name (gimple_vdef (stmt));
>>> + gsi_remove (&m_gsi, true);
>>> + }
>>> + if (eq_p)
>>> + {
>>> + lhs = make_ssa_name (boolean_type_node);
>>> + basic_block bb = gimple_bb (stmt);
>>> + gphi *phi = create_phi_node (lhs, bb);
>>> + edge e = find_edge (gsi_bb (m_gsi), bb);
>>> + unsigned int n = EDGE_COUNT (bb->preds);
>>> + for (unsigned int i = 0; i < n; i++)
>>> + {
>>> + edge e2 = EDGE_PRED (bb, i);
>>> + add_phi_arg (phi, e == e2 ? boolean_true_node : boolean_false_node,
>>> + e2, UNKNOWN_LOCATION);
>>> + }
>>> + cmp_code = cmp_code == EQ_EXPR ? NE_EXPR : EQ_EXPR;
>>> + return lhs;
>>> + }
>>> + else
>>> + return NULL_TREE;
>>> +}
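To illustrate the shape this emits for a huge _BitInt, here is a Python model of a limb-wise addition: a loop body handling two limbs per iteration (what the pass emits as a real GIMPLE loop) followed by a straight-line tail; 64-bit limbs are an assumption and the carry plays the role of the per-limb data passed between iterations:

```python
LIMB_PREC = 64                      # assumed limb precision
MASK = (1 << LIMB_PREC) - 1

def to_limbs(v, nlimbs):
    """Split V into little-endian limbs (index 0 = least significant)."""
    return [(v >> (LIMB_PREC * i)) & MASK for i in range(nlimbs)]

def lower_add(a, b):
    """Limb-wise a + b: loop over limb pairs, then straight-line remainder."""
    nlimbs = len(a)
    res = [0] * nlimbs
    carry = 0

    def one_limb(i):
        nonlocal carry
        s = a[i] + b[i] + carry
        res[i] = s & MASK
        carry = s >> LIMB_PREC

    end = nlimbs - nlimbs % 2
    for i in range(0, end, 2):      # two limbs per iteration
        one_limb(i)
        one_limb(i + 1)
    for i in range(end, nlimbs):    # straight-line tail
        one_limb(i)
    return res
```

The model only handles whole limbs; the real pass additionally deals with a final partial limb when the precision isn't a multiple of the limb precision.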
>>> +
>>> +/* Handle a large/huge _BitInt comparison statement STMT other than
>>> + EQ_EXPR/NE_EXPR. CMP_CODE, CMP_OP1 and CMP_OP2 meaning is like in
>>> + lower_mergeable_stmt. The {GT,GE,LT,LE}_EXPR comparisons are
>>> + lowered by iteration from the most significant limb downwards to
>>> + the least significant one, for large _BitInt in straight line code,
>>> + otherwise with most significant limb handled in
>>> + straight line code followed by a loop handling one limb at a time.
>>> + Comparisons with unsigned huge _BitInt with precisions which are
>>> + multiples of limb precision can use just the loop and don't need to
>>> + handle most significant limb before the loop. The loop or straight
>>> + line code jumps to final basic block if a particular pair of limbs
>>> + is not equal. */
>>> +
>>> +tree
>>> +bitint_large_huge::lower_comparison_stmt (gimple *stmt, tree_code &cmp_code,
>>> + tree cmp_op1, tree cmp_op2)
>>> +{
>>> + tree type = TREE_TYPE (cmp_op1);
>>> + gcc_assert (TREE_CODE (type) == BITINT_TYPE);
>>> + bitint_prec_kind kind = bitint_precision_kind (type);
>>> + gcc_assert (kind >= bitint_prec_large);
>>> + gimple *g;
>>> + if (!TYPE_UNSIGNED (type)
>>> + && integer_zerop (cmp_op2)
>>> + && (cmp_code == GE_EXPR || cmp_code == LT_EXPR))
>>> + {
>>> + unsigned end = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec) - 1;
>>> + tree idx = size_int (end);
>>> + m_data_cnt = 0;
>>> + tree rhs1 = handle_operand (cmp_op1, idx);
>>> + if (TYPE_UNSIGNED (TREE_TYPE (rhs1)))
>>> + {
>>> + tree stype = signed_type_for (TREE_TYPE (rhs1));
>>> + rhs1 = add_cast (stype, rhs1);
>>> + }
>>> + tree lhs = make_ssa_name (boolean_type_node);
>>> + g = gimple_build_assign (lhs, cmp_code, rhs1,
>>> + build_zero_cst (TREE_TYPE (rhs1)));
>>> + insert_before (g);
>>> + cmp_code = NE_EXPR;
>>> + return lhs;
>>> + }
>>> +
>>> + unsigned cnt, rem = 0, end = 0;
>>> + tree idx = NULL_TREE, idx_next = NULL_TREE;
>>> + if (kind == bitint_prec_large)
>>> + cnt = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec);
>>> + else
>>> + {
>>> + rem = ((unsigned) TYPE_PRECISION (type) % limb_prec);
>>> + if (rem == 0 && !TYPE_UNSIGNED (type))
>>> + rem = limb_prec;
>>> + end = ((unsigned) TYPE_PRECISION (type) - rem) / limb_prec;
>>> + cnt = 1 + (rem != 0);
>>> + }
>>> +
>>> + basic_block edge_bb = NULL;
>>> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
>>> + gsi_prev (&gsi);
>>> + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
>>> + edge_bb = e->src;
>>> + m_gsi = gsi_last_bb (edge_bb);
>>> + if (!gsi_end_p (m_gsi))
>>> + gsi_next (&m_gsi);
>>> +
>>> + edge *edges = XALLOCAVEC (edge, cnt * 2);
>>> + for (unsigned i = 0; i < cnt; i++)
>>> + {
>>> + m_data_cnt = 0;
>>> + if (kind == bitint_prec_large)
>>> + idx = size_int (cnt - i - 1);
>>> + else if (i == cnt - 1)
>>> + idx = create_loop (size_int (end - 1), &idx_next);
>>> + else
>>> + idx = size_int (end);
>>> + tree rhs1 = handle_operand (cmp_op1, idx);
>>> + tree rhs2 = handle_operand (cmp_op2, idx);
>>> + if (i == 0
>>> + && !TYPE_UNSIGNED (type)
>>> + && TYPE_UNSIGNED (TREE_TYPE (rhs1)))
>>> + {
>>> + tree stype = signed_type_for (TREE_TYPE (rhs1));
>>> + rhs1 = add_cast (stype, rhs1);
>>> + rhs2 = add_cast (stype, rhs2);
>>> + }
>>> + g = gimple_build_cond (GT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + e1->flags = EDGE_FALSE_VALUE;
>>> + edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
>>> + e1->probability = profile_probability::likely ();
>>> + e2->probability = e1->probability.invert ();
>>> + if (i == 0)
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + edges[2 * i] = e2;
>>> + g = gimple_build_cond (LT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + e1 = split_block (gsi_bb (m_gsi), g);
>>> + e1->flags = EDGE_FALSE_VALUE;
>>> + e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
>>> + e1->probability = profile_probability::unlikely ();
>>> + e2->probability = e1->probability.invert ();
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + edges[2 * i + 1] = e2;
>>> + m_first = false;
>>> + if (kind == bitint_prec_huge && i == cnt - 1)
>>> + {
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
>>> + insert_before (g);
>>> + g = gimple_build_cond (NE_EXPR, idx, size_zero_node,
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge true_edge, false_edge;
>>> + extract_true_false_edges_from_block (gsi_bb (m_gsi),
>>> + &true_edge, &false_edge);
>>> + m_gsi = gsi_after_labels (false_edge->dest);
>>> + }
>>> + }
>>> +
>>> + tree lhs = make_ssa_name (boolean_type_node);
>>> + basic_block bb = gimple_bb (stmt);
>>> + gphi *phi = create_phi_node (lhs, bb);
>>> + for (unsigned int i = 0; i < cnt * 2; i++)
>>> + {
>>> + tree val = ((cmp_code == GT_EXPR || cmp_code == GE_EXPR)
>>> + ^ (i & 1)) ? boolean_true_node : boolean_false_node;
>>> + add_phi_arg (phi, val, edges[i], UNKNOWN_LOCATION);
>>> + }
>>> + add_phi_arg (phi, (cmp_code == GE_EXPR || cmp_code == LE_EXPR)
>>> + ? boolean_true_node : boolean_false_node,
>>> + find_edge (gsi_bb (m_gsi), bb), UNKNOWN_LOCATION);
>>> + cmp_code = NE_EXPR;
>>> + return lhs;
>>> +}
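For readers following along: the CFG built above (a GT test and an LT test per limb, most significant limb first, with the final PHI collecting the per-edge answers) computes the same ordering as the loop below. This is only an illustrative sketch with a made-up helper name, limb_prec fixed at 64, and all limbs treated as unsigned (as they are after the add_cast calls for the lower limbs; the signed most-significant-limb handling is not modeled):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical model of the emitted control flow: compare limbs from
   the most significant one down; the first unequal pair decides the
   result, the fall-through edge means all limbs compared equal.  */
static int
bitint_cmp (const uint64_t *a, const uint64_t *b, size_t nlimbs)
{
  for (size_t i = nlimbs; i-- > 0; )
    {
      if (a[i] > b[i])	/* GT_EXPR true edge */
	return 1;
      if (a[i] < b[i])	/* LT_EXPR true edge */
	return -1;
    }
  return 0;		/* fall-through edge to the PHI */
}
```

The GT/GE vs. LT/LE distinction only affects which boolean each PHI argument carries, not the shape of this comparison chain.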
>>> +
>>> +/* Lower large/huge _BitInt left and right shift except for left
>>> + shift by < limb_prec constant. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_shift_stmt (tree obj, gimple *stmt)
>>> +{
>>> + tree rhs1 = gimple_assign_rhs1 (stmt);
>>> + tree lhs = gimple_assign_lhs (stmt);
>>> + tree_code rhs_code = gimple_assign_rhs_code (stmt);
>>> + tree type = TREE_TYPE (rhs1);
>>> + gimple *final_stmt = gsi_stmt (m_gsi);
>>> + gcc_assert (TREE_CODE (type) == BITINT_TYPE
>>> + && bitint_precision_kind (type) >= bitint_prec_large);
>>> + int prec = TYPE_PRECISION (type);
>>> + tree n = gimple_assign_rhs2 (stmt), n1, n2, n3, n4;
>>> + gimple *g;
>>> + if (obj == NULL_TREE)
>>> + {
>>> + int part = var_to_partition (m_map, lhs);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + obj = m_vars[part];
>>> + }
>>> + /* Preparation code common for both left and right shifts.
>>> + unsigned n1 = n % limb_prec;
>>> + size_t n2 = n / limb_prec;
>>> + size_t n3 = n1 != 0;
>>> + unsigned n4 = (limb_prec - n1) % limb_prec;
>>> + (for power of 2 limb_prec n4 can be -n1 & (limb_prec - 1)). */
>>> + if (TREE_CODE (n) == INTEGER_CST)
>>> + {
>>> + tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
>>> + n1 = int_const_binop (TRUNC_MOD_EXPR, n, lp);
>>> + n2 = fold_convert (sizetype, int_const_binop (TRUNC_DIV_EXPR, n, lp));
>>> + n3 = size_int (!integer_zerop (n1));
>>> + n4 = int_const_binop (TRUNC_MOD_EXPR,
>>> + int_const_binop (MINUS_EXPR, lp, n1), lp);
>>> + }
>>> + else
>>> + {
>>> + n1 = make_ssa_name (TREE_TYPE (n));
>>> + n2 = make_ssa_name (sizetype);
>>> + n3 = make_ssa_name (sizetype);
>>> + n4 = make_ssa_name (TREE_TYPE (n));
>>> + if (pow2p_hwi (limb_prec))
>>> + {
>>> + tree lpm1 = build_int_cst (TREE_TYPE (n), limb_prec - 1);
>>> + g = gimple_build_assign (n1, BIT_AND_EXPR, n, lpm1);
>>> + insert_before (g);
>>> + g = gimple_build_assign (useless_type_conversion_p (sizetype,
>>> + TREE_TYPE (n))
>>> + ? n2 : make_ssa_name (TREE_TYPE (n)),
>>> + RSHIFT_EXPR, n,
>>> + build_int_cst (TREE_TYPE (n),
>>> + exact_log2 (limb_prec)));
>>> + insert_before (g);
>>> + if (gimple_assign_lhs (g) != n2)
>>> + {
>>> + g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
>>> + insert_before (g);
>>> + }
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
>>> + NEGATE_EXPR, n1);
>>> + insert_before (g);
>>> + g = gimple_build_assign (n4, BIT_AND_EXPR, gimple_assign_lhs (g),
>>> + lpm1);
>>> + insert_before (g);
>>> + }
>>> + else
>>> + {
>>> + tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
>>> + g = gimple_build_assign (n1, TRUNC_MOD_EXPR, n, lp);
>>> + insert_before (g);
>>> + g = gimple_build_assign (useless_type_conversion_p (sizetype,
>>> + TREE_TYPE (n))
>>> + ? n2 : make_ssa_name (TREE_TYPE (n)),
>>> + TRUNC_DIV_EXPR, n, lp);
>>> + insert_before (g);
>>> + if (gimple_assign_lhs (g) != n2)
>>> + {
>>> + g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
>>> + insert_before (g);
>>> + }
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
>>> + MINUS_EXPR, lp, n1);
>>> + insert_before (g);
>>> + g = gimple_build_assign (n4, TRUNC_MOD_EXPR, gimple_assign_lhs (g),
>>> + lp);
>>> + insert_before (g);
>>> + }
>>> + g = gimple_build_assign (make_ssa_name (boolean_type_node), NE_EXPR, n1,
>>> + build_zero_cst (TREE_TYPE (n)));
>>> + insert_before (g);
>>> + g = gimple_build_assign (n3, NOP_EXPR, gimple_assign_lhs (g));
>>> + insert_before (g);
>>> + }
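As a quick sanity check on the comment above: for a power-of-two limb_prec, the mask/shift forms emitted in the pow2p_hwi branch agree with the generic TRUNC_MOD/TRUNC_DIV forms, including the n4 identity. A minimal check, assuming limb_prec == 64 (the helper name is made up):

```c
#include <assert.h>

/* Verify that the power-of-two mask/shift forms match the generic
   %/ / forms for limb_prec == 64 (exact_log2 (64) == 6).  */
static void
check_shift_prep (unsigned n)
{
  const unsigned limb_prec = 64;
  unsigned n1 = n % limb_prec;
  unsigned long n2 = n / limb_prec;
  unsigned n4 = (limb_prec - n1) % limb_prec;
  assert (n1 == (n & (limb_prec - 1)));		/* BIT_AND_EXPR form */
  assert (n2 == (unsigned long) (n >> 6));	/* RSHIFT_EXPR form */
  assert (n4 == ((-n1) & (limb_prec - 1)));	/* NEGATE + BIT_AND form */
}
```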
>>> + tree p = build_int_cst (sizetype,
>>> + prec / limb_prec - (prec % limb_prec == 0));
>>> + if (rhs_code == RSHIFT_EXPR)
>>> + {
>>> + /* Lower
>>> + dst = src >> n;
>>> + as
>>> + unsigned n1 = n % limb_prec;
>>> + size_t n2 = n / limb_prec;
>>> + size_t n3 = n1 != 0;
>>> + unsigned n4 = (limb_prec - n1) % limb_prec;
>>> + size_t idx;
>>> + size_t p = prec / limb_prec - (prec % limb_prec == 0);
>>> + int signed_p = (typeof (src) -1) < 0;
>>> + for (idx = n2; idx < ((!signed_p && (prec % limb_prec == 0))
>>> + ? p : p - n3); ++idx)
>>> + dst[idx - n2] = (src[idx] >> n1) | (src[idx + n3] << n4);
>>> + limb_type ext;
>>> + if (prec % limb_prec == 0)
>>> + ext = src[p];
>>> + else if (signed_p)
>>> + ext = ((signed limb_type) (src[p] << (limb_prec
>>> + - (prec % limb_prec))))
>>> + >> (limb_prec - (prec % limb_prec));
>>> + else
>>> + ext = src[p] & (((limb_type) 1 << (prec % limb_prec)) - 1);
>>> + if (!signed_p && (prec % limb_prec == 0))
>>> + ;
>>> + else if (idx < prec / limb_prec)
>>> + {
>>> + dst[idx - n2] = (src[idx] >> n1) | (ext << n4);
>>> + ++idx;
>>> + }
>>> + idx -= n2;
>>> + if (signed_p)
>>> + {
>>> + dst[idx] = ((signed limb_type) ext) >> n1;
>>> + ext = ((signed limb_type) ext) >> (limb_prec - 1);
>>> + }
>>> + else
>>> + {
>>> + dst[idx] = ext >> n1;
>>> + ext = 0;
>>> + }
>>> + for (++idx; idx <= p; ++idx)
>>> + dst[idx] = ext; */
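To make the quoted pseudocode concrete, here is a direct transcription for the simplest configuration only — an unsigned type whose precision is a multiple of limb_prec, so the partial-limb and sign-extension paths drop out — using two 64-bit limbs. The function name and fixed limb_prec = 64 are illustrative assumptions, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Transcription of the pseudocode above for an unsigned _BitInt(128)
   with limb_prec == 64, i.e. signed_p == 0 and prec % limb_prec == 0.  */
static void
rshift128 (uint64_t dst[2], const uint64_t src[2], unsigned n)
{
  unsigned n1 = n % 64;
  size_t n2 = n / 64;
  size_t n3 = n1 != 0;
  unsigned n4 = (64 - n1) % 64;
  size_t p = 2 - 1;		/* prec / limb_prec - 1 */
  size_t idx;
  for (idx = n2; idx < p; ++idx)
    dst[idx - n2] = (src[idx] >> n1) | (src[idx + n3] << n4);
  uint64_t ext = src[p];	/* prec % limb_prec == 0 case */
  idx -= n2;
  dst[idx] = ext >> n1;		/* ext is 0-extended for unsigned */
  ext = 0;
  for (++idx; idx <= p; ++idx)
    dst[idx] = ext;
}
```

Note that when n1 == 0 the `src[idx + n3] << n4` term degenerates to `src[idx] << 0`, i.e. an OR with itself, so no special case is needed.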
>>> + tree pmn3;
>>> + if (TYPE_UNSIGNED (type) && prec % limb_prec == 0)
>>> + pmn3 = p;
>>> + else if (TREE_CODE (n3) == INTEGER_CST)
>>> + pmn3 = int_const_binop (MINUS_EXPR, p, n3);
>>> + else
>>> + {
>>> + pmn3 = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (pmn3, MINUS_EXPR, p, n3);
>>> + insert_before (g);
>>> + }
>>> + g = gimple_build_cond (LT_EXPR, n2, pmn3, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + edge e2 = split_block (e1->dest, (gimple *) NULL);
>>> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + e3->probability = profile_probability::unlikely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + tree idx_next;
>>> + tree idx = create_loop (n2, &idx_next);
>>> + tree idxmn2 = make_ssa_name (sizetype);
>>> + tree idxpn3 = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
>>> + insert_before (g);
>>> + g = gimple_build_assign (idxpn3, PLUS_EXPR, idx, n3);
>>> + insert_before (g);
>>> + m_data_cnt = 0;
>>> + tree t1 = handle_operand (rhs1, idx);
>>> + m_first = false;
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + RSHIFT_EXPR, t1, n1);
>>> + insert_before (g);
>>> + t1 = gimple_assign_lhs (g);
>>> + if (!integer_zerop (n3))
>>> + {
>>> + m_data_cnt = 0;
>>> + tree t2 = handle_operand (rhs1, idxpn3);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + LSHIFT_EXPR, t2, n4);
>>> + insert_before (g);
>>> + t2 = gimple_assign_lhs (g);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + BIT_IOR_EXPR, t1, t2);
>>> + insert_before (g);
>>> + t1 = gimple_assign_lhs (g);
>>> + }
>>> + tree l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
>>> + g = gimple_build_assign (l, t1);
>>> + insert_before (g);
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
>>> + insert_before (g);
>>> + g = gimple_build_cond (LT_EXPR, idx_next, pmn3, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + idx = make_ssa_name (sizetype);
>>> + m_gsi = gsi_for_stmt (final_stmt);
>>> + gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
>>> + e1 = find_edge (e1->src, gsi_bb (m_gsi));
>>> + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
>>> + add_phi_arg (phi, n2, e1, UNKNOWN_LOCATION);
>>> + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
>>> + m_data_cnt = 0;
>>> + tree ms = handle_operand (rhs1, p);
>>> + tree ext = ms;
>>> + if (!types_compatible_p (TREE_TYPE (ms), m_limb_type))
>>> + ext = add_cast (m_limb_type, ms);
>>> + if (!(TYPE_UNSIGNED (type) && prec % limb_prec == 0)
>>> + && !integer_zerop (n3))
>>> + {
>>> + g = gimple_build_cond (LT_EXPR, idx, p, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + e1 = split_block (gsi_bb (m_gsi), g);
>>> + e2 = split_block (e1->dest, (gimple *) NULL);
>>> + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + e3->probability = profile_probability::unlikely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + m_data_cnt = 0;
>>> + t1 = handle_operand (rhs1, idx);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + RSHIFT_EXPR, t1, n1);
>>> + insert_before (g);
>>> + t1 = gimple_assign_lhs (g);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + LSHIFT_EXPR, ext, n4);
>>> + insert_before (g);
>>> + tree t2 = gimple_assign_lhs (g);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + BIT_IOR_EXPR, t1, t2);
>>> + insert_before (g);
>>> + t1 = gimple_assign_lhs (g);
>>> + idxmn2 = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
>>> + insert_before (g);
>>> + l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
>>> + g = gimple_build_assign (l, t1);
>>> + insert_before (g);
>>> + idx_next = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
>>> + insert_before (g);
>>> + m_gsi = gsi_for_stmt (final_stmt);
>>> + tree nidx = make_ssa_name (sizetype);
>>> + phi = create_phi_node (nidx, gsi_bb (m_gsi));
>>> + e1 = find_edge (e1->src, gsi_bb (m_gsi));
>>> + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
>>> + add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
>>> + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
>>> + idx = nidx;
>>> + }
>>> + g = gimple_build_assign (make_ssa_name (sizetype), MINUS_EXPR, idx, n2);
>>> + insert_before (g);
>>> + idx = gimple_assign_lhs (g);
>>> + tree sext = ext;
>>> + if (!TYPE_UNSIGNED (type))
>>> + sext = add_cast (signed_type_for (m_limb_type), ext);
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
>>> + RSHIFT_EXPR, sext, n1);
>>> + insert_before (g);
>>> + t1 = gimple_assign_lhs (g);
>>> + if (!TYPE_UNSIGNED (type))
>>> + {
>>> + t1 = add_cast (m_limb_type, t1);
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
>>> + RSHIFT_EXPR, sext,
>>> + build_int_cst (TREE_TYPE (n),
>>> + limb_prec - 1));
>>> + insert_before (g);
>>> + ext = add_cast (m_limb_type, gimple_assign_lhs (g));
>>> + }
>>> + else
>>> + ext = build_zero_cst (m_limb_type);
>>> + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
>>> + g = gimple_build_assign (l, t1);
>>> + insert_before (g);
>>> + g = gimple_build_assign (make_ssa_name (sizetype), PLUS_EXPR, idx,
>>> + size_one_node);
>>> + insert_before (g);
>>> + idx = gimple_assign_lhs (g);
>>> + g = gimple_build_cond (LE_EXPR, idx, p, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + e1 = split_block (gsi_bb (m_gsi), g);
>>> + e2 = split_block (e1->dest, (gimple *) NULL);
>>> + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + e3->probability = profile_probability::unlikely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + idx = create_loop (idx, &idx_next);
>>> + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
>>> + g = gimple_build_assign (l, ext);
>>> + insert_before (g);
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
>>> + insert_before (g);
>>> + g = gimple_build_cond (LE_EXPR, idx_next, p, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + }
>>> + else
>>> + {
>>> + /* Lower
>>> + dst = src << n;
>>> + as
>>> + unsigned n1 = n % limb_prec;
>>> + size_t n2 = n / limb_prec;
>>> + size_t n3 = n1 != 0;
>>> + unsigned n4 = (limb_prec - n1) % limb_prec;
>>> + size_t idx;
>>> + size_t p = prec / limb_prec - (prec % limb_prec == 0);
>>> + for (idx = p; (ssize_t) idx >= (ssize_t) (n2 + n3); --idx)
>>> + dst[idx] = (src[idx - n2] << n1) | (src[idx - n2 - n3] >> n4);
>>> + if (n1)
>>> + {
>>> + dst[idx] = src[idx - n2] << n1;
>>> + --idx;
>>> + }
>>> + for (; (ssize_t) idx >= 0; --idx)
>>> + dst[idx] = 0; */
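The left-shift pseudocode transcribes just as directly; again a two-limb, limb_prec = 64 sketch with a made-up name (`int64_t` stands in for the pseudocode's `ssize_t`):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Transcription of the pseudocode above for _BitInt(128) with
   limb_prec == 64; left shifts fill the low limbs with zeros.  */
static void
lshift128 (uint64_t dst[2], const uint64_t src[2], unsigned n)
{
  unsigned n1 = n % 64;
  size_t n2 = n / 64;
  size_t n3 = n1 != 0;
  unsigned n4 = (64 - n1) % 64;
  size_t p = 2 - 1;		/* prec / limb_prec - 1 */
  size_t idx;
  for (idx = p; (int64_t) idx >= (int64_t) (n2 + n3); --idx)
    dst[idx] = (src[idx - n2] << n1) | (src[idx - n2 - n3] >> n4);
  if (n1)
    {
      dst[idx] = src[idx - n2] << n1;
      --idx;
    }
  for (; (int64_t) idx >= 0; --idx)
    dst[idx] = 0;
}
```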
>>> + tree n2pn3;
>>> + if (TREE_CODE (n2) == INTEGER_CST && TREE_CODE (n3) == INTEGER_CST)
>>> + n2pn3 = int_const_binop (PLUS_EXPR, n2, n3);
>>> + else
>>> + {
>>> + n2pn3 = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (n2pn3, PLUS_EXPR, n2, n3);
>>> + insert_before (g);
>>> + }
>>> + /* For LSHIFT_EXPR, we can use handle_operand with non-INTEGER_CST
>>> + idx even to access the most significant partial limb. */
>>> + m_var_msb = true;
>>> + if (integer_zerop (n3))
>>> + /* For n3 == 0, p >= n2 + n3 is always true for all valid shift
>>> + counts. Emit an if (true) condition that can be optimized away
>>> + later. */
>>> + g = gimple_build_cond (NE_EXPR, boolean_true_node, boolean_false_node,
>>> + NULL_TREE, NULL_TREE);
>>> + else
>>> + g = gimple_build_cond (LE_EXPR, n2pn3, p, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + edge e2 = split_block (e1->dest, (gimple *) NULL);
>>> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + e3->probability = profile_probability::unlikely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + tree idx_next;
>>> + tree idx = create_loop (p, &idx_next);
>>> + tree idxmn2 = make_ssa_name (sizetype);
>>> + tree idxmn2mn3 = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
>>> + insert_before (g);
>>> + g = gimple_build_assign (idxmn2mn3, MINUS_EXPR, idxmn2, n3);
>>> + insert_before (g);
>>> + m_data_cnt = 0;
>>> + tree t1 = handle_operand (rhs1, idxmn2);
>>> + m_first = false;
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + LSHIFT_EXPR, t1, n1);
>>> + insert_before (g);
>>> + t1 = gimple_assign_lhs (g);
>>> + if (!integer_zerop (n3))
>>> + {
>>> + m_data_cnt = 0;
>>> + tree t2 = handle_operand (rhs1, idxmn2mn3);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + RSHIFT_EXPR, t2, n4);
>>> + insert_before (g);
>>> + t2 = gimple_assign_lhs (g);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + BIT_IOR_EXPR, t1, t2);
>>> + insert_before (g);
>>> + t1 = gimple_assign_lhs (g);
>>> + }
>>> + tree l = limb_access (TREE_TYPE (lhs), obj, idx, true);
>>> + g = gimple_build_assign (l, t1);
>>> + insert_before (g);
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
>>> + insert_before (g);
>>> + tree sn2pn3 = add_cast (ssizetype, n2pn3);
>>> + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next), sn2pn3,
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + idx = make_ssa_name (sizetype);
>>> + m_gsi = gsi_for_stmt (final_stmt);
>>> + gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
>>> + e1 = find_edge (e1->src, gsi_bb (m_gsi));
>>> + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
>>> + add_phi_arg (phi, p, e1, UNKNOWN_LOCATION);
>>> + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
>>> + m_data_cnt = 0;
>>> + if (!integer_zerop (n3))
>>> + {
>>> + g = gimple_build_cond (NE_EXPR, n3, size_zero_node,
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + e1 = split_block (gsi_bb (m_gsi), g);
>>> + e2 = split_block (e1->dest, (gimple *) NULL);
>>> + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + e3->probability = profile_probability::unlikely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + idxmn2 = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
>>> + insert_before (g);
>>> + m_data_cnt = 0;
>>> + t1 = handle_operand (rhs1, idxmn2);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + LSHIFT_EXPR, t1, n1);
>>> + insert_before (g);
>>> + t1 = gimple_assign_lhs (g);
>>> + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
>>> + g = gimple_build_assign (l, t1);
>>> + insert_before (g);
>>> + idx_next = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
>>> + insert_before (g);
>>> + m_gsi = gsi_for_stmt (final_stmt);
>>> + tree nidx = make_ssa_name (sizetype);
>>> + phi = create_phi_node (nidx, gsi_bb (m_gsi));
>>> + e1 = find_edge (e1->src, gsi_bb (m_gsi));
>>> + e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
>>> + add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
>>> + add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
>>> + idx = nidx;
>>> + }
>>> + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx),
>>> + ssize_int (0), NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + e1 = split_block (gsi_bb (m_gsi), g);
>>> + e2 = split_block (e1->dest, (gimple *) NULL);
>>> + e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + e3->probability = profile_probability::unlikely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + idx = create_loop (idx, &idx_next);
>>> + l = limb_access (TREE_TYPE (lhs), obj, idx, true);
>>> + g = gimple_build_assign (l, build_zero_cst (m_limb_type));
>>> + insert_before (g);
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
>>> + insert_before (g);
>>> + g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next),
>>> + ssize_int (0), NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + }
>>> +}
>>> +
>>> +/* Lower large/huge _BitInt multiplication or division. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_muldiv_stmt (tree obj, gimple *stmt)
>>> +{
>>> + tree rhs1 = gimple_assign_rhs1 (stmt);
>>> + tree rhs2 = gimple_assign_rhs2 (stmt);
>>> + tree lhs = gimple_assign_lhs (stmt);
>>> + tree_code rhs_code = gimple_assign_rhs_code (stmt);
>>> + tree type = TREE_TYPE (rhs1);
>>> + gcc_assert (TREE_CODE (type) == BITINT_TYPE
>>> + && bitint_precision_kind (type) >= bitint_prec_large);
>>> + int prec = TYPE_PRECISION (type), prec1, prec2;
>>> + rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec1);
>>> + rhs2 = handle_operand_addr (rhs2, stmt, NULL, &prec2);
>>> + if (obj == NULL_TREE)
>>> + {
>>> + int part = var_to_partition (m_map, lhs);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + obj = m_vars[part];
>>> + lhs = build_fold_addr_expr (obj);
>>> + }
>>> + else
>>> + {
>>> + lhs = build_fold_addr_expr (obj);
>>> + lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
>>> + NULL_TREE, true, GSI_SAME_STMT);
>>> + }
>>> + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
>>> + gimple *g;
>>> + switch (rhs_code)
>>> + {
>>> + case MULT_EXPR:
>>> + g = gimple_build_call_internal (IFN_MULBITINT, 6,
>>> + lhs, build_int_cst (sitype, prec),
>>> + rhs1, build_int_cst (sitype, prec1),
>>> + rhs2, build_int_cst (sitype, prec2));
>>> + insert_before (g);
>>> + break;
>>> + case TRUNC_DIV_EXPR:
>>> + g = gimple_build_call_internal (IFN_DIVMODBITINT, 8,
>>> + lhs, build_int_cst (sitype, prec),
>>> + null_pointer_node,
>>> + build_int_cst (sitype, 0),
>>> + rhs1, build_int_cst (sitype, prec1),
>>> + rhs2, build_int_cst (sitype, prec2));
>>> + if (!stmt_ends_bb_p (stmt))
>>> + gimple_call_set_nothrow (as_a <gcall *> (g), true);
>>> + insert_before (g);
>>> + break;
>>> + case TRUNC_MOD_EXPR:
>>> + g = gimple_build_call_internal (IFN_DIVMODBITINT, 8, null_pointer_node,
>>> + build_int_cst (sitype, 0),
>>> + lhs, build_int_cst (sitype, prec),
>>> + rhs1, build_int_cst (sitype, prec1),
>>> + rhs2, build_int_cst (sitype, prec2));
>>> + if (!stmt_ends_bb_p (stmt))
>>> + gimple_call_set_nothrow (as_a <gcall *> (g), true);
>>> + insert_before (g);
>>> + break;
>>> + default:
>>> + gcc_unreachable ();
>>> + }
>>> + if (stmt_ends_bb_p (stmt))
>>> + {
>>> + maybe_duplicate_eh_stmt (g, stmt);
>>> + edge e1;
>>> + edge_iterator ei;
>>> + basic_block bb = gimple_bb (stmt);
>>> +
>>> + FOR_EACH_EDGE (e1, ei, bb->succs)
>>> + if (e1->flags & EDGE_EH)
>>> + break;
>>> + if (e1)
>>> + {
>>> + edge e2 = split_block (gsi_bb (m_gsi), g);
>>> + m_gsi = gsi_after_labels (e2->dest);
>>> + make_edge (e2->src, e1->dest, EDGE_EH)->probability
>>> + = profile_probability::very_unlikely ();
>>> + }
>>> + }
>>> +}
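The .DIVMODBITINT call sites above encode TRUNC_DIV_EXPR vs. TRUNC_MOD_EXPR purely through which (pointer, precision) slot is live: division passes (NULL, 0) for the remainder, modulo passes (NULL, 0) for the quotient. A hypothetical C model of just that argument layout (the helper and the use of plain `int` operands are illustrative, not the libgcc routine itself):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the .DIVMODBITINT argument layout: either the
   quotient or the remainder slot may be (NULL, 0) when unused.  */
static void
divmod_model (int *quo, int quo_prec, int *rem, int rem_prec, int a, int b)
{
  (void) quo_prec;
  (void) rem_prec;
  if (quo)
    *quo = a / b;	/* TRUNC_DIV_EXPR result */
  if (rem)
    *rem = a % b;	/* TRUNC_MOD_EXPR result */
}
```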
>>> +
>>> +/* Lower large/huge _BitInt conversion to/from floating point. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_float_conv_stmt (tree obj, gimple *stmt)
>>> +{
>>> + tree rhs1 = gimple_assign_rhs1 (stmt);
>>> + tree lhs = gimple_assign_lhs (stmt);
>>> + tree_code rhs_code = gimple_assign_rhs_code (stmt);
>>> + if (DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (rhs1)))
>>> + || DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (lhs))))
>>> + {
>>> + sorry_at (gimple_location (stmt),
>>> + "unsupported conversion between %<_BitInt(%d)%> and %qT",
>>> + rhs_code == FIX_TRUNC_EXPR
>>> + ? TYPE_PRECISION (TREE_TYPE (lhs))
>>> + : TYPE_PRECISION (TREE_TYPE (rhs1)),
>>> + rhs_code == FIX_TRUNC_EXPR
>>> + ? TREE_TYPE (rhs1) : TREE_TYPE (lhs));
>>> + if (rhs_code == FLOAT_EXPR)
>>> + {
>>> + gimple *g
>>> + = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
>>> + gsi_replace (&m_gsi, g, true);
>>> + }
>>> + return;
>>> + }
>>> + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
>>> + gimple *g;
>>> + if (rhs_code == FIX_TRUNC_EXPR)
>>> + {
>>> + int prec = TYPE_PRECISION (TREE_TYPE (lhs));
>>> + if (!TYPE_UNSIGNED (TREE_TYPE (lhs)))
>>> + prec = -prec;
>>> + if (obj == NULL_TREE)
>>> + {
>>> + int part = var_to_partition (m_map, lhs);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + obj = m_vars[part];
>>> + lhs = build_fold_addr_expr (obj);
>>> + }
>>> + else
>>> + {
>>> + lhs = build_fold_addr_expr (obj);
>>> + lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
>>> + NULL_TREE, true, GSI_SAME_STMT);
>>> + }
>>> + scalar_mode from_mode
>>> + = as_a <scalar_mode> (TYPE_MODE (TREE_TYPE (rhs1)));
>>> +#ifdef HAVE_SFmode
>>> + /* IEEE single is a full superset of both IEEE half and
>>> + bfloat formats, so convert to float first and then to _BitInt
>>> + to avoid the need for another 2 library routines. */
>>> + if ((REAL_MODE_FORMAT (from_mode) == &arm_bfloat_half_format
>>> + || REAL_MODE_FORMAT (from_mode) == &ieee_half_format)
>>> + && REAL_MODE_FORMAT (SFmode) == &ieee_single_format)
>>> + {
>>> + tree type = lang_hooks.types.type_for_mode (SFmode, 0);
>>> + if (type)
>>> + rhs1 = add_cast (type, rhs1);
>>> + }
>>> +#endif
>>> + g = gimple_build_call_internal (IFN_FLOATTOBITINT, 3,
>>> + lhs, build_int_cst (sitype, prec),
>>> + rhs1);
>>> + insert_before (g);
>>> + }
>>> + else
>>> + {
>>> + int prec;
>>> + rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec);
>>> + g = gimple_build_call_internal (IFN_BITINTTOFLOAT, 2,
>>> + rhs1, build_int_cst (sitype, prec));
>>> + gimple_call_set_lhs (g, lhs);
>>> + if (!stmt_ends_bb_p (stmt))
>>> + gimple_call_set_nothrow (as_a <gcall *> (g), true);
>>> + gsi_replace (&m_gsi, g, true);
>>> + }
>>> +}
>>> +
>>> +/* Helper method for lower_addsub_overflow and lower_mul_overflow.
>>> + If check_zero is true, caller wants to check if all bits in [start, end)
>>> + are zero, otherwise if bits in [start, end) are either all zero or
>>> + all ones. L is the limb with index LIMB, START and END are measured
>>> + in bits. */
>>> +
>>> +tree
>>> +bitint_large_huge::arith_overflow_extract_bits (unsigned int start,
>>> + unsigned int end, tree l,
>>> + unsigned int limb,
>>> + bool check_zero)
>>> +{
>>> + unsigned startlimb = start / limb_prec;
>>> + unsigned endlimb = (end - 1) / limb_prec;
>>> + gimple *g;
>>> +
>>> + if ((start % limb_prec) == 0 && (end % limb_prec) == 0)
>>> + return l;
>>> + if (startlimb == endlimb && limb == startlimb)
>>> + {
>>> + if (check_zero)
>>> + {
>>> + wide_int w = wi::shifted_mask (start % limb_prec,
>>> + end - start, false, limb_prec);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + BIT_AND_EXPR, l,
>>> + wide_int_to_tree (m_limb_type, w));
>>> + insert_before (g);
>>> + return gimple_assign_lhs (g);
>>> + }
>>> + unsigned int shift = start % limb_prec;
>>> + if ((end % limb_prec) != 0)
>>> + {
>>> + unsigned int lshift = (-end) % limb_prec;
>>> + shift += lshift;
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + LSHIFT_EXPR, l,
>>> + build_int_cst (unsigned_type_node,
>>> + lshift));
>>> + insert_before (g);
>>> + l = gimple_assign_lhs (g);
>>> + }
>>> + l = add_cast (signed_type_for (m_limb_type), l);
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
>>> + RSHIFT_EXPR, l,
>>> + build_int_cst (unsigned_type_node, shift));
>>> + insert_before (g);
>>> + return add_cast (m_limb_type, gimple_assign_lhs (g));
>>> + }
>>> + else if (limb == startlimb)
>>> + {
>>> + if ((start % limb_prec) == 0)
>>> + return l;
>>> + if (!check_zero)
>>> + l = add_cast (signed_type_for (m_limb_type), l);
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
>>> + RSHIFT_EXPR, l,
>>> + build_int_cst (unsigned_type_node,
>>> + start % limb_prec));
>>> + insert_before (g);
>>> + l = gimple_assign_lhs (g);
>>> + if (!check_zero)
>>> + l = add_cast (m_limb_type, l);
>>> + return l;
>>> + }
>>> + else if (limb == endlimb)
>>> + {
>>> + if ((end % limb_prec) == 0)
>>> + return l;
>>> + if (check_zero)
>>> + {
>>> + wide_int w = wi::mask (end % limb_prec, false, limb_prec);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + BIT_AND_EXPR, l,
>>> + wide_int_to_tree (m_limb_type, w));
>>> + insert_before (g);
>>> + return gimple_assign_lhs (g);
>>> + }
>>> + unsigned int shift = (-end) % limb_prec;
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + LSHIFT_EXPR, l,
>>> + build_int_cst (unsigned_type_node, shift));
>>> + insert_before (g);
>>> + l = add_cast (signed_type_for (m_limb_type), gimple_assign_lhs (g));
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
>>> + RSHIFT_EXPR, l,
>>> + build_int_cst (unsigned_type_node, shift));
>>> + insert_before (g);
>>> + return add_cast (m_limb_type, gimple_assign_lhs (g));
>>> + }
>>> + return l;
>>> +}
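The two extraction strategies used above can be sketched for the single-limb (startlimb == endlimb) case with limb_prec = 64: for check_zero a plain mask suffices, while the all-zeros-or-all-ones case shifts the range to the top of the limb and arithmetic-shifts it back so that the range's top bit sign-extends over the whole result. Helper names are made up, and the sketch assumes GCC's arithmetic right shift of negative values:

```c
#include <assert.h>
#include <stdint.h>

/* check_zero case: bits outside [start, end) are masked away, so the
   result is zero iff all bits in the range are zero.  */
static uint64_t
extract_zero_check (uint64_t l, unsigned start, unsigned end)
{
  uint64_t mask = (end - start == 64 ? ~0ULL
		   : (1ULL << (end - start)) - 1) << start;
  return l & mask;
}

/* !check_zero case: LSHIFT the range to the top, then arithmetic
   RSHIFT it back down; the result is 0 or ~0 iff the bits in
   [start, end) were all zeros or all ones respectively.  */
static uint64_t
extract_sign_check (uint64_t l, unsigned start, unsigned end)
{
  unsigned lshift = (64 - end) % 64;		/* (-end) % limb_prec */
  return (uint64_t) (((int64_t) (l << lshift)) >> (lshift + start));
}
```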
>>> +
>>> +/* Helper method for lower_addsub_overflow and lower_mul_overflow. Store
>>> + result including overflow flag into the right locations. */
>>> +
>>> +void
>>> +bitint_large_huge::finish_arith_overflow (tree var, tree obj, tree type,
>>> + tree ovf, tree lhs, tree orig_obj,
>>> + gimple *stmt, tree_code code)
>>> +{
>>> + gimple *g;
>>> +
>>> + if (obj == NULL_TREE
>>> + && (TREE_CODE (type) != BITINT_TYPE
>>> + || bitint_precision_kind (type) < bitint_prec_large))
>>> + {
>>> + /* Add support for 3 or more limbs filled in from normal integral
>>> + type if this assert fails. If no target chooses limb mode smaller
>>> + than half of largest supported normal integral type, this will not
>>> + be needed. */
>>> + gcc_assert (TYPE_PRECISION (type) <= 2 * limb_prec);
>>> + tree lhs_type = type;
>>> + if (TREE_CODE (type) == BITINT_TYPE
>>> + && bitint_precision_kind (type) == bitint_prec_middle)
>>> + lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (type),
>>> + TYPE_UNSIGNED (type));
>>> + tree r1 = limb_access (NULL_TREE, var, size_int (0), true);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type), r1);
>>> + insert_before (g);
>>> + r1 = gimple_assign_lhs (g);
>>> + if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
>>> + r1 = add_cast (lhs_type, r1);
>>> + if (TYPE_PRECISION (lhs_type) > limb_prec)
>>> + {
>>> + tree r2 = limb_access (NULL_TREE, var, size_int (1), true);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type), r2);
>>> + insert_before (g);
>>> + r2 = gimple_assign_lhs (g);
>>> + r2 = add_cast (lhs_type, r2);
>>> + g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
>>> + build_int_cst (unsigned_type_node,
>>> + limb_prec));
>>> + insert_before (g);
>>> + g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
>>> + gimple_assign_lhs (g));
>>> + insert_before (g);
>>> + r1 = gimple_assign_lhs (g);
>>> + }
>>> + if (lhs_type != type)
>>> + r1 = add_cast (type, r1);
>>> + ovf = add_cast (lhs_type, ovf);
>>> + if (lhs_type != type)
>>> + ovf = add_cast (type, ovf);
>>> + g = gimple_build_assign (lhs, COMPLEX_EXPR, r1, ovf);
>>> + m_gsi = gsi_for_stmt (stmt);
>>> + gsi_replace (&m_gsi, g, true);
>>> + }
>>> + else
>>> + {
>>> + unsigned HOST_WIDE_INT nelts = 0;
>>> + tree atype = NULL_TREE;
>>> + if (obj)
>>> + {
>>> + nelts = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
>>> + if (orig_obj == NULL_TREE)
>>> + nelts >>= 1;
>>> + atype = build_array_type_nelts (m_limb_type, nelts);
>>> + }
>>> + if (var && obj)
>>> + {
>>> + tree v1, v2;
>>> + tree zero;
>>> + if (orig_obj == NULL_TREE)
>>> + {
>>> + zero = build_zero_cst (build_pointer_type (TREE_TYPE (obj)));
>>> + v1 = build2 (MEM_REF, atype,
>>> + build_fold_addr_expr (unshare_expr (obj)), zero);
>>> + }
>>> + else if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
>>> + v1 = build1 (VIEW_CONVERT_EXPR, atype, unshare_expr (obj));
>>> + else
>>> + v1 = unshare_expr (obj);
>>> + zero = build_zero_cst (build_pointer_type (TREE_TYPE (var)));
>>> + v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), zero);
>>> + g = gimple_build_assign (v1, v2);
>>> + insert_before (g);
>>> + }
>>> + if (orig_obj == NULL_TREE && obj)
>>> + {
>>> + ovf = add_cast (m_limb_type, ovf);
>>> + tree l = limb_access (NULL_TREE, obj, size_int (nelts), true);
>>> + g = gimple_build_assign (l, ovf);
>>> + insert_before (g);
>>> + if (nelts > 1)
>>> + {
>>> + atype = build_array_type_nelts (m_limb_type, nelts - 1);
>>> + tree off = build_int_cst (build_pointer_type (TREE_TYPE (obj)),
>>> + (nelts + 1) * m_limb_size);
>>> + tree v1 = build2 (MEM_REF, atype,
>>> + build_fold_addr_expr (unshare_expr (obj)),
>>> + off);
>>> + g = gimple_build_assign (v1, build_zero_cst (atype));
>>> + insert_before (g);
>>> + }
>>> + }
>>> + else if (TREE_CODE (TREE_TYPE (lhs)) == COMPLEX_TYPE)
>>> + {
>>> + imm_use_iterator ui;
>>> + use_operand_p use_p;
>>> + FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
>>> + {
>>> + g = USE_STMT (use_p);
>>> + if (!is_gimple_assign (g)
>>> + || gimple_assign_rhs_code (g) != IMAGPART_EXPR)
>>> + continue;
>>> + tree lhs2 = gimple_assign_lhs (g);
>>> + gimple *use_stmt;
>>> + single_imm_use (lhs2, &use_p, &use_stmt);
>>> + lhs2 = gimple_assign_lhs (use_stmt);
>>> + gimple_stmt_iterator gsi = gsi_for_stmt (use_stmt);
>>> + if (useless_type_conversion_p (TREE_TYPE (lhs2), TREE_TYPE (ovf)))
>>> + g = gimple_build_assign (lhs2, ovf);
>>> + else
>>> + g = gimple_build_assign (lhs2, NOP_EXPR, ovf);
>>> + gsi_replace (&gsi, g, true);
>>> + break;
>>> + }
>>> + }
>>> + else if (ovf != boolean_false_node)
>>> + {
>>> + g = gimple_build_cond (NE_EXPR, ovf, boolean_false_node,
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + edge e2 = split_block (e1->dest, (gimple *) NULL);
>>> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + e3->probability = profile_probability::very_likely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + tree zero = build_zero_cst (TREE_TYPE (lhs));
>>> + tree fn = ubsan_build_overflow_builtin (code, m_loc,
>>> + TREE_TYPE (lhs),
>>> + zero, zero, NULL);
>>> + force_gimple_operand_gsi (&m_gsi, fn, true, NULL_TREE,
>>> + true, GSI_SAME_STMT);
>>> + m_gsi = gsi_after_labels (e2->dest);
>>> + }
>>> + }
>>> + if (var)
>>> + {
>>> + tree clobber = build_clobber (TREE_TYPE (var), CLOBBER_EOL);
>>> + g = gimple_build_assign (var, clobber);
>>> + gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
>>> + }
>>> +}
>>> +
>>> +/* Helper function for lower_addsub_overflow and lower_mul_overflow.
>>> + Given precisions of result TYPE (PREC), argument 0 precision PREC0,
>>> + argument 1 precision PREC1 and minimum precision for the result
>>> + PREC2, compute *START, *END, *CHECK_ZERO and return OVF. */
>>> +
>>> +static tree
>>> +arith_overflow (tree_code code, tree type, int prec, int prec0, int prec1,
>>> + int prec2, unsigned *start, unsigned *end, bool *check_zero)
>>> +{
>>> + *start = 0;
>>> + *end = 0;
>>> + *check_zero = true;
>>> + /* Ignore this special rule for subtraction: even if both
>>> + prec0 >= 0 and prec1 >= 0, their difference can be negative
>>> + in infinite precision. */
>>> + if (code != MINUS_EXPR && prec0 >= 0 && prec1 >= 0)
>>> + {
>>> + /* Result in [0, prec2) is unsigned; if prec > prec2,
>>> + all bits above it will be zero. */
>>> + if ((prec - !TYPE_UNSIGNED (type)) >= prec2)
>>> + return boolean_false_node;
>>> + else
>>> + {
>>> + /* ovf if any of the bits in [start, end) is non-zero. */
>>> + *start = prec - !TYPE_UNSIGNED (type);
>>> + *end = prec2;
>>> + }
>>> + }
>>> + else if (TYPE_UNSIGNED (type))
>>> + {
>>> + /* If the result in [0, prec2) is signed and prec > prec2,
>>> + all bits above it will be sign bit copies. */
>>> + if (prec >= prec2)
>>> + {
>>> + /* ovf if bit prec - 1 is non-zero. */
>>> + *start = prec - 1;
>>> + *end = prec;
>>> + }
>>> + else
>>> + {
>>> + /* ovf if any of the bits in [start, end) is non-zero. */
>>> + *start = prec;
>>> + *end = prec2;
>>> + }
>>> + }
>>> + else if (prec >= prec2)
>>> + return boolean_false_node;
>>> + else
>>> + {
>>> + /* ovf if [start, end) bits aren't all zeros or all ones. */
>>> + *start = prec - 1;
>>> + *end = prec2;
>>> + *check_zero = false;
>>> + }
>>> + return NULL_TREE;
>>> +}
>>> +
>>> +/* Lower a .{ADD,SUB}_OVERFLOW call with at least one large/huge _BitInt
>>> + argument or return type _Complex large/huge _BitInt. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_addsub_overflow (tree obj, gimple *stmt)
>>> +{
>>> + tree arg0 = gimple_call_arg (stmt, 0);
>>> + tree arg1 = gimple_call_arg (stmt, 1);
>>> + tree lhs = gimple_call_lhs (stmt);
>>> + gimple *g;
>>> +
>>> + if (!lhs)
>>> + {
>>> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
>>> + gsi_remove (&gsi, true);
>>> + return;
>>> + }
>>> + gimple *final_stmt = gsi_stmt (m_gsi);
>>> + tree type = TREE_TYPE (lhs);
>>> + if (TREE_CODE (type) == COMPLEX_TYPE)
>>> + type = TREE_TYPE (type);
>>> + int prec = TYPE_PRECISION (type);
>>> + int prec0 = range_to_prec (arg0, stmt);
>>> + int prec1 = range_to_prec (arg1, stmt);
>>> + int prec2 = ((prec0 < 0) == (prec1 < 0)
>>> + ? MAX (prec0 < 0 ? -prec0 : prec0,
>>> + prec1 < 0 ? -prec1 : prec1) + 1
>>> + : MAX (prec0 < 0 ? -prec0 : prec0 + 1,
>>> + prec1 < 0 ? -prec1 : prec1 + 1) + 1);
>>> + int prec3 = MAX (prec0 < 0 ? -prec0 : prec0,
>>> + prec1 < 0 ? -prec1 : prec1);
>>> + prec3 = MAX (prec3, prec);
>>> + tree var = NULL_TREE;
>>> + tree orig_obj = obj;
>>> + if (obj == NULL_TREE
>>> + && TREE_CODE (type) == BITINT_TYPE
>>> + && bitint_precision_kind (type) >= bitint_prec_large
>>> + && m_names
>>> + && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
>>> + {
>>> + int part = var_to_partition (m_map, lhs);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + obj = m_vars[part];
>>> + if (TREE_TYPE (lhs) == type)
>>> + orig_obj = obj;
>>> + }
>>> + if (TREE_CODE (type) != BITINT_TYPE
>>> + || bitint_precision_kind (type) < bitint_prec_large)
>>> + {
>>> + unsigned HOST_WIDE_INT nelts = CEIL (prec, limb_prec);
>>> + tree atype = build_array_type_nelts (m_limb_type, nelts);
>>> + var = create_tmp_var (atype);
>>> + }
>>> +
>>> + enum tree_code code;
>>> + switch (gimple_call_internal_fn (stmt))
>>> + {
>>> + case IFN_ADD_OVERFLOW:
>>> + case IFN_UBSAN_CHECK_ADD:
>>> + code = PLUS_EXPR;
>>> + break;
>>> + case IFN_SUB_OVERFLOW:
>>> + case IFN_UBSAN_CHECK_SUB:
>>> + code = MINUS_EXPR;
>>> + break;
>>> + default:
>>> + gcc_unreachable ();
>>> + }
>>> + unsigned start, end;
>>> + bool check_zero;
>>> + tree ovf = arith_overflow (code, type, prec, prec0, prec1, prec2,
>>> + &start, &end, &check_zero);
>>> +
>>> + unsigned startlimb, endlimb;
>>> + if (ovf)
>>> + {
>>> + startlimb = ~0U;
>>> + endlimb = ~0U;
>>> + }
>>> + else
>>> + {
>>> + startlimb = start / limb_prec;
>>> + endlimb = (end - 1) / limb_prec;
>>> + }
>>> +
>>> + int prec4 = ovf != NULL_TREE ? prec : prec3;
>>> + bitint_prec_kind kind = bitint_precision_kind (prec4);
>>> + unsigned cnt, rem = 0, fin = 0;
>>> + tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
>>> + bool last_ovf = (ovf == NULL_TREE
>>> + && CEIL (prec2, limb_prec) > CEIL (prec3, limb_prec));
>>> + if (kind != bitint_prec_huge)
>>> + cnt = CEIL (prec4, limb_prec) + last_ovf;
>>> + else
>>> + {
>>> + rem = (prec4 % (2 * limb_prec));
>>> + fin = (prec4 - rem) / limb_prec;
>>> + cnt = 2 + CEIL (rem, limb_prec) + last_ovf;
>>> + idx = idx_first = create_loop (size_zero_node, &idx_next);
>>> + }
>>> +
>>> + if (kind == bitint_prec_huge)
>>> + m_upwards_2limb = fin;
>>> +
>>> + tree type0 = TREE_TYPE (arg0);
>>> + tree type1 = TREE_TYPE (arg1);
>>> + if (TYPE_PRECISION (type0) < prec3)
>>> + {
>>> + type0 = build_bitint_type (prec3, TYPE_UNSIGNED (type0));
>>> + if (TREE_CODE (arg0) == INTEGER_CST)
>>> + arg0 = fold_convert (type0, arg0);
>>> + }
>>> + if (TYPE_PRECISION (type1) < prec3)
>>> + {
>>> + type1 = build_bitint_type (prec3, TYPE_UNSIGNED (type1));
>>> + if (TREE_CODE (arg1) == INTEGER_CST)
>>> + arg1 = fold_convert (type1, arg1);
>>> + }
>>> + unsigned int data_cnt = 0;
>>> + tree last_rhs1 = NULL_TREE, last_rhs2 = NULL_TREE;
>>> + tree cmp = build_zero_cst (m_limb_type);
>>> + unsigned prec_limbs = CEIL ((unsigned) prec, limb_prec);
>>> + tree ovf_out = NULL_TREE, cmp_out = NULL_TREE;
>>> + for (unsigned i = 0; i < cnt; i++)
>>> + {
>>> + m_data_cnt = 0;
>>> + tree rhs1, rhs2;
>>> + if (kind != bitint_prec_huge)
>>> + idx = size_int (i);
>>> + else if (i >= 2)
>>> + idx = size_int (fin + (i > 2));
>>> + if (!last_ovf || i < cnt - 1)
>>> + {
>>> + if (type0 != TREE_TYPE (arg0))
>>> + rhs1 = handle_cast (type0, arg0, idx);
>>> + else
>>> + rhs1 = handle_operand (arg0, idx);
>>> + if (type1 != TREE_TYPE (arg1))
>>> + rhs2 = handle_cast (type1, arg1, idx);
>>> + else
>>> + rhs2 = handle_operand (arg1, idx);
>>> + if (i == 0)
>>> + data_cnt = m_data_cnt;
>>> + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs1)))
>>> + rhs1 = add_cast (m_limb_type, rhs1);
>>> + if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs2)))
>>> + rhs2 = add_cast (m_limb_type, rhs2);
>>> + last_rhs1 = rhs1;
>>> + last_rhs2 = rhs2;
>>> + }
>>> + else
>>> + {
>>> + m_data_cnt = data_cnt;
>>> + if (TYPE_UNSIGNED (type0))
>>> + rhs1 = build_zero_cst (m_limb_type);
>>> + else
>>> + {
>>> + rhs1 = add_cast (signed_type_for (m_limb_type), last_rhs1);
>>> + if (TREE_CODE (rhs1) == INTEGER_CST)
>>> + rhs1 = build_int_cst (m_limb_type,
>>> + tree_int_cst_sgn (rhs1) < 0 ? -1 : 0);
>>> + else
>>> + {
>>> + tree lpm1 = build_int_cst (unsigned_type_node,
>>> + limb_prec - 1);
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
>>> + RSHIFT_EXPR, rhs1, lpm1);
>>> + insert_before (g);
>>> + rhs1 = add_cast (m_limb_type, gimple_assign_lhs (g));
>>> + }
>>> + }
>>> + if (TYPE_UNSIGNED (type1))
>>> + rhs2 = build_zero_cst (m_limb_type);
>>> + else
>>> + {
>>> + rhs2 = add_cast (signed_type_for (m_limb_type), last_rhs2);
>>> + if (TREE_CODE (rhs2) == INTEGER_CST)
>>> + rhs2 = build_int_cst (m_limb_type,
>>> + tree_int_cst_sgn (rhs2) < 0 ? -1 : 0);
>>> + else
>>> + {
>>> + tree lpm1 = build_int_cst (unsigned_type_node,
>>> + limb_prec - 1);
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs2)),
>>> + RSHIFT_EXPR, rhs2, lpm1);
>>> + insert_before (g);
>>> + rhs2 = add_cast (m_limb_type, gimple_assign_lhs (g));
>>> + }
>>> + }
>>> + }
>>> + tree rhs = handle_plus_minus (code, rhs1, rhs2, idx);
>>> + if (ovf != boolean_false_node)
>>> + {
>>> + if (tree_fits_uhwi_p (idx))
>>> + {
>>> + unsigned limb = tree_to_uhwi (idx);
>>> + if (limb >= startlimb && limb <= endlimb)
>>> + {
>>> + tree l = arith_overflow_extract_bits (start, end, rhs,
>>> + limb, check_zero);
>>> + tree this_ovf = make_ssa_name (boolean_type_node);
>>> + if (ovf == NULL_TREE && !check_zero)
>>> + {
>>> + cmp = l;
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + PLUS_EXPR, l,
>>> + build_int_cst (m_limb_type, 1));
>>> + insert_before (g);
>>> + g = gimple_build_assign (this_ovf, GT_EXPR,
>>> + gimple_assign_lhs (g),
>>> + build_int_cst (m_limb_type, 1));
>>> + }
>>> + else
>>> + g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
>>> + insert_before (g);
>>> + if (ovf == NULL_TREE)
>>> + ovf = this_ovf;
>>> + else
>>> + {
>>> + tree b = make_ssa_name (boolean_type_node);
>>> + g = gimple_build_assign (b, BIT_IOR_EXPR, ovf, this_ovf);
>>> + insert_before (g);
>>> + ovf = b;
>>> + }
>>> + }
>>> + }
>>> + else if (startlimb < fin)
>>> + {
>>> + if (m_first && startlimb + 2 < fin)
>>> + {
>>> + tree data_out;
>>> + ovf = prepare_data_in_out (boolean_false_node, idx, &data_out);
>>> + ovf_out = m_data.pop ();
>>> + m_data.pop ();
>>> + if (!check_zero)
>>> + {
>>> + cmp = prepare_data_in_out (cmp, idx, &data_out);
>>> + cmp_out = m_data.pop ();
>>> + m_data.pop ();
>>> + }
>>> + }
>>> + if (i != 0 || startlimb != fin - 1)
>>> + {
>>> + tree_code cmp_code;
>>> + bool single_comparison
>>> + = (startlimb + 2 >= fin || (startlimb & 1) != (i & 1));
>>> + if (!single_comparison)
>>> + {
>>> + cmp_code = GE_EXPR;
>>> + if (!check_zero && (start % limb_prec) == 0)
>>> + single_comparison = true;
>>> + }
>>> + else if ((startlimb & 1) == (i & 1))
>>> + cmp_code = EQ_EXPR;
>>> + else
>>> + cmp_code = GT_EXPR;
>>> + g = gimple_build_cond (cmp_code, idx, size_int (startlimb),
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + edge e2 = split_block (e1->dest, (gimple *) NULL);
>>> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + edge e4 = NULL;
>>> + e3->probability = profile_probability::unlikely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + if (!single_comparison)
>>> + {
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + g = gimple_build_cond (EQ_EXPR, idx,
>>> + size_int (startlimb), NULL_TREE,
>>> + NULL_TREE);
>>> + insert_before (g);
>>> + e2 = split_block (gsi_bb (m_gsi), g);
>>> + basic_block bb = create_empty_bb (e2->dest);
>>> + add_bb_to_loop (bb, e2->dest->loop_father);
>>> + e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
>>> + set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
>>> + e4->probability = profile_probability::unlikely ();
>>> + e2->flags = EDGE_FALSE_VALUE;
>>> + e2->probability = e4->probability.invert ();
>>> + e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
>>> + e2 = find_edge (e2->dest, e3->dest);
>>> + }
>>> + m_gsi = gsi_after_labels (e2->src);
>>> + unsigned tidx = startlimb + (cmp_code == GT_EXPR);
>>> + tree l = arith_overflow_extract_bits (start, end, rhs, tidx,
>>> + check_zero);
>>> + tree this_ovf = make_ssa_name (boolean_type_node);
>>> + if (cmp_code != GT_EXPR && !check_zero)
>>> + {
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + PLUS_EXPR, l,
>>> + build_int_cst (m_limb_type, 1));
>>> + insert_before (g);
>>> + g = gimple_build_assign (this_ovf, GT_EXPR,
>>> + gimple_assign_lhs (g),
>>> + build_int_cst (m_limb_type, 1));
>>> + }
>>> + else
>>> + g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
>>> + insert_before (g);
>>> + if (cmp_code == GT_EXPR)
>>> + {
>>> + tree t = make_ssa_name (boolean_type_node);
>>> + g = gimple_build_assign (t, BIT_IOR_EXPR, ovf, this_ovf);
>>> + insert_before (g);
>>> + this_ovf = t;
>>> + }
>>> + tree this_ovf2 = NULL_TREE;
>>> + if (!single_comparison)
>>> + {
>>> + m_gsi = gsi_after_labels (e4->src);
>>> + tree t = make_ssa_name (boolean_type_node);
>>> + g = gimple_build_assign (t, NE_EXPR, rhs, cmp);
>>> + insert_before (g);
>>> + this_ovf2 = make_ssa_name (boolean_type_node);
>>> + g = gimple_build_assign (this_ovf2, BIT_IOR_EXPR,
>>> + ovf, t);
>>> + insert_before (g);
>>> + }
>>> + m_gsi = gsi_after_labels (e2->dest);
>>> + tree t;
>>> + if (i == 1 && ovf_out)
>>> + t = ovf_out;
>>> + else
>>> + t = make_ssa_name (boolean_type_node);
>>> + gphi *phi = create_phi_node (t, e2->dest);
>>> + add_phi_arg (phi, this_ovf, e2, UNKNOWN_LOCATION);
>>> + add_phi_arg (phi, ovf ? ovf
>>> + : boolean_false_node, e3,
>>> + UNKNOWN_LOCATION);
>>> + if (e4)
>>> + add_phi_arg (phi, this_ovf2, e4, UNKNOWN_LOCATION);
>>> + ovf = t;
>>> + if (!check_zero && cmp_code != GT_EXPR)
>>> + {
>>> + t = cmp_out ? cmp_out : make_ssa_name (m_limb_type);
>>> + phi = create_phi_node (t, e2->dest);
>>> + add_phi_arg (phi, l, e2, UNKNOWN_LOCATION);
>>> + add_phi_arg (phi, cmp, e3, UNKNOWN_LOCATION);
>>> + if (e4)
>>> + add_phi_arg (phi, cmp, e4, UNKNOWN_LOCATION);
>>> + cmp = t;
>>> + }
>>> + }
>>> + }
>>> + }
>>> +
>>> + if (var || obj)
>>> + {
>>> + if (tree_fits_uhwi_p (idx) && tree_to_uhwi (idx) >= prec_limbs)
>>> + ;
>>> + else if (!tree_fits_uhwi_p (idx)
>>> + && (unsigned) prec < (fin - (i == 0)) * limb_prec)
>>> + {
>>> + bool single_comparison
>>> + = (((unsigned) prec % limb_prec) == 0
>>> + || prec_limbs + 1 >= fin
>>> + || (prec_limbs & 1) == (i & 1));
>>> + g = gimple_build_cond (LE_EXPR, idx, size_int (prec_limbs - 1),
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + edge e2 = split_block (e1->dest, (gimple *) NULL);
>>> + edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
>>> + edge e4 = NULL;
>>> + e3->probability = profile_probability::unlikely ();
>>> + e1->flags = EDGE_TRUE_VALUE;
>>> + e1->probability = e3->probability.invert ();
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
>>> + if (!single_comparison)
>>> + {
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + g = gimple_build_cond (LT_EXPR, idx,
>>> + size_int (prec_limbs - 1),
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + e2 = split_block (gsi_bb (m_gsi), g);
>>> + basic_block bb = create_empty_bb (e2->dest);
>>> + add_bb_to_loop (bb, e2->dest->loop_father);
>>> + e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
>>> + set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
>>> + e4->probability = profile_probability::unlikely ();
>>> + e2->flags = EDGE_FALSE_VALUE;
>>> + e2->probability = e4->probability.invert ();
>>> + e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
>>> + e2 = find_edge (e2->dest, e3->dest);
>>> + }
>>> + m_gsi = gsi_after_labels (e2->src);
>>> + tree l = limb_access (type, var ? var : obj, idx, true);
>>> + g = gimple_build_assign (l, rhs);
>>> + insert_before (g);
>>> + if (!single_comparison)
>>> + {
>>> + m_gsi = gsi_after_labels (e4->src);
>>> + l = limb_access (type, var ? var : obj,
>>> + size_int (prec_limbs - 1), true);
>>> + if (!useless_type_conversion_p (TREE_TYPE (l),
>>> + TREE_TYPE (rhs)))
>>> + rhs = add_cast (TREE_TYPE (l), rhs);
>>> + g = gimple_build_assign (l, rhs);
>>> + insert_before (g);
>>> + }
>>> + m_gsi = gsi_after_labels (e2->dest);
>>> + }
>>> + else
>>> + {
>>> + tree l = limb_access (type, var ? var : obj, idx, true);
>>> + if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs)))
>>> + rhs = add_cast (TREE_TYPE (l), rhs);
>>> + g = gimple_build_assign (l, rhs);
>>> + insert_before (g);
>>> + }
>>> + }
>>> + m_first = false;
>>> + if (kind == bitint_prec_huge && i <= 1)
>>> + {
>>> + if (i == 0)
>>> + {
>>> + idx = make_ssa_name (sizetype);
>>> + g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
>>> + size_one_node);
>>> + insert_before (g);
>>> + }
>>> + else
>>> + {
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
>>> + size_int (2));
>>> + insert_before (g);
>>> + g = gimple_build_cond (NE_EXPR, idx_next, size_int (fin),
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + m_gsi = gsi_for_stmt (final_stmt);
>>> + }
>>> + }
>>> + }
>>> +
>>> + finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, code);
>>> +}
>>> +
>>> +/* Lower a .MUL_OVERFLOW call with at least one large/huge _BitInt
>>> + argument or return type _Complex large/huge _BitInt. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_mul_overflow (tree obj, gimple *stmt)
>>> +{
>>> + tree arg0 = gimple_call_arg (stmt, 0);
>>> + tree arg1 = gimple_call_arg (stmt, 1);
>>> + tree lhs = gimple_call_lhs (stmt);
>>> + if (!lhs)
>>> + {
>>> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
>>> + gsi_remove (&gsi, true);
>>> + return;
>>> + }
>>> + gimple *final_stmt = gsi_stmt (m_gsi);
>>> + tree type = TREE_TYPE (lhs);
>>> + if (TREE_CODE (type) == COMPLEX_TYPE)
>>> + type = TREE_TYPE (type);
>>> + int prec = TYPE_PRECISION (type), prec0, prec1;
>>> + arg0 = handle_operand_addr (arg0, stmt, NULL, &prec0);
>>> + arg1 = handle_operand_addr (arg1, stmt, NULL, &prec1);
>>> + int prec2 = ((prec0 < 0 ? -prec0 : prec0)
>>> + + (prec1 < 0 ? -prec1 : prec1)
>>> + + ((prec0 < 0) != (prec1 < 0)));
>>> + tree var = NULL_TREE;
>>> + tree orig_obj = obj;
>>> + bool force_var = false;
>>> + if (obj == NULL_TREE
>>> + && TREE_CODE (type) == BITINT_TYPE
>>> + && bitint_precision_kind (type) >= bitint_prec_large
>>> + && m_names
>>> + && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
>>> + {
>>> + int part = var_to_partition (m_map, lhs);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + obj = m_vars[part];
>>> + if (TREE_TYPE (lhs) == type)
>>> + orig_obj = obj;
>>> + }
>>> + else if (obj != NULL_TREE && DECL_P (obj))
>>> + {
>>> + for (int i = 0; i < 2; ++i)
>>> + {
>>> + tree arg = i ? arg1 : arg0;
>>> + if (TREE_CODE (arg) == ADDR_EXPR)
>>> + arg = TREE_OPERAND (arg, 0);
>>> + if (get_base_address (arg) == obj)
>>> + {
>>> + force_var = true;
>>> + break;
>>> + }
>>> + }
>>> + }
>>> + if (obj == NULL_TREE
>>> + || force_var
>>> + || TREE_CODE (type) != BITINT_TYPE
>>> + || bitint_precision_kind (type) < bitint_prec_large
>>> + || prec2 > (CEIL (prec, limb_prec) * limb_prec * (orig_obj ? 1 : 2)))
>>> + {
>>> + unsigned HOST_WIDE_INT nelts = CEIL (MAX (prec, prec2), limb_prec);
>>> + tree atype = build_array_type_nelts (m_limb_type, nelts);
>>> + var = create_tmp_var (atype);
>>> + }
>>> + tree addr = build_fold_addr_expr (var ? var : obj);
>>> + addr = force_gimple_operand_gsi (&m_gsi, addr, true,
>>> + NULL_TREE, true, GSI_SAME_STMT);
>>> + tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
>>> + gimple *g
>>> + = gimple_build_call_internal (IFN_MULBITINT, 6,
>>> + addr, build_int_cst (sitype,
>>> + MAX (prec2, prec)),
>>> + arg0, build_int_cst (sitype, prec0),
>>> + arg1, build_int_cst (sitype, prec1));
>>> + insert_before (g);
>>> +
>>> + unsigned start, end;
>>> + bool check_zero;
>>> + tree ovf = arith_overflow (MULT_EXPR, type, prec, prec0, prec1, prec2,
>>> + &start, &end, &check_zero);
>>> + if (ovf == NULL_TREE)
>>> + {
>>> + unsigned startlimb = start / limb_prec;
>>> + unsigned endlimb = (end - 1) / limb_prec;
>>> + unsigned cnt;
>>> + bool use_loop = false;
>>> + if (startlimb == endlimb)
>>> + cnt = 1;
>>> + else if (startlimb + 1 == endlimb)
>>> + cnt = 2;
>>> + else if ((end % limb_prec) == 0)
>>> + {
>>> + cnt = 2;
>>> + use_loop = true;
>>> + }
>>> + else
>>> + {
>>> + cnt = 3;
>>> + use_loop = startlimb + 2 < endlimb;
>>> + }
>>> + if (cnt == 1)
>>> + {
>>> + tree l = limb_access (NULL_TREE, var ? var : obj,
>>> + size_int (startlimb), true);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type), l);
>>> + insert_before (g);
>>> + l = arith_overflow_extract_bits (start, end, gimple_assign_lhs (g),
>>> + startlimb, check_zero);
>>> + ovf = make_ssa_name (boolean_type_node);
>>> + if (check_zero)
>>> + g = gimple_build_assign (ovf, NE_EXPR, l,
>>> + build_zero_cst (m_limb_type));
>>> + else
>>> + {
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + PLUS_EXPR, l,
>>> + build_int_cst (m_limb_type, 1));
>>> + insert_before (g);
>>> + g = gimple_build_assign (ovf, GT_EXPR, gimple_assign_lhs (g),
>>> + build_int_cst (m_limb_type, 1));
>>> + }
>>> + insert_before (g);
>>> + }
>>> + else
>>> + {
>>> + basic_block edge_bb = NULL;
>>> + gimple_stmt_iterator gsi = m_gsi;
>>> + gsi_prev (&gsi);
>>> + edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
>>> + edge_bb = e->src;
>>> + m_gsi = gsi_last_bb (edge_bb);
>>> + if (!gsi_end_p (m_gsi))
>>> + gsi_next (&m_gsi);
>>> +
>>> + tree cmp = build_zero_cst (m_limb_type);
>>> + for (unsigned i = 0; i < cnt; i++)
>>> + {
>>> + tree idx, idx_next = NULL_TREE;
>>> + if (i == 0)
>>> + idx = size_int (startlimb);
>>> + else if (i == 2)
>>> + idx = size_int (endlimb);
>>> + else if (use_loop)
>>> + idx = create_loop (size_int (startlimb + 1), &idx_next);
>>> + else
>>> + idx = size_int (startlimb + 1);
>>> + tree l = limb_access (NULL_TREE, var ? var : obj, idx, true);
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type), l);
>>> + insert_before (g);
>>> + l = gimple_assign_lhs (g);
>>> + if (i == 0 || i == 2)
>>> + l = arith_overflow_extract_bits (start, end, l,
>>> + tree_to_uhwi (idx),
>>> + check_zero);
>>> + if (i == 0 && !check_zero)
>>> + {
>>> + cmp = l;
>>> + g = gimple_build_assign (make_ssa_name (m_limb_type),
>>> + PLUS_EXPR, l,
>>> + build_int_cst (m_limb_type, 1));
>>> + insert_before (g);
>>> + g = gimple_build_cond (GT_EXPR, gimple_assign_lhs (g),
>>> + build_int_cst (m_limb_type, 1),
>>> + NULL_TREE, NULL_TREE);
>>> + }
>>> + else
>>> + g = gimple_build_cond (NE_EXPR, l, cmp, NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge e1 = split_block (gsi_bb (m_gsi), g);
>>> + e1->flags = EDGE_FALSE_VALUE;
>>> + edge e2 = make_edge (e1->src, gimple_bb (final_stmt),
>>> + EDGE_TRUE_VALUE);
>>> + e1->probability = profile_probability::likely ();
>>> + e2->probability = e1->probability.invert ();
>>> + if (i == 0)
>>> + set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
>>> + m_gsi = gsi_after_labels (e1->dest);
>>> + if (i == 1 && use_loop)
>>> + {
>>> + g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
>>> + size_one_node);
>>> + insert_before (g);
>>> + g = gimple_build_cond (NE_EXPR, idx_next,
>>> + size_int (endlimb + (cnt == 1)),
>>> + NULL_TREE, NULL_TREE);
>>> + insert_before (g);
>>> + edge true_edge, false_edge;
>>> + extract_true_false_edges_from_block (gsi_bb (m_gsi),
>>> + &true_edge,
>>> + &false_edge);
>>> + m_gsi = gsi_after_labels (false_edge->dest);
>>> + }
>>> + }
>>> +
>>> + ovf = make_ssa_name (boolean_type_node);
>>> + basic_block bb = gimple_bb (final_stmt);
>>> + gphi *phi = create_phi_node (ovf, bb);
>>> + edge e1 = find_edge (gsi_bb (m_gsi), bb);
>>> + edge_iterator ei;
>>> + FOR_EACH_EDGE (e, ei, bb->preds)
>>> + {
>>> + tree val = e == e1 ? boolean_false_node : boolean_true_node;
>>> + add_phi_arg (phi, val, e, UNKNOWN_LOCATION);
>>> + }
>>> + m_gsi = gsi_for_stmt (final_stmt);
>>> + }
>>> + }
>>> +
>>> + finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, MULT_EXPR);
>>> +}
>>> +
>>> +/* Lower REALPART_EXPR or IMAGPART_EXPR stmt extracting part of result from
>>> + .{ADD,SUB,MUL}_OVERFLOW call. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_cplxpart_stmt (tree obj, gimple *stmt)
>>> +{
>>> + tree rhs1 = gimple_assign_rhs1 (stmt);
>>> + rhs1 = TREE_OPERAND (rhs1, 0);
>>> + if (obj == NULL_TREE)
>>> + {
>>> + int part = var_to_partition (m_map, gimple_assign_lhs (stmt));
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + obj = m_vars[part];
>>> + }
>>> + if (TREE_CODE (rhs1) == SSA_NAME
>>> + && (m_names == NULL
>>> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
>>> + {
>>> + lower_call (obj, SSA_NAME_DEF_STMT (rhs1));
>>> + return;
>>> + }
>>> + int part = var_to_partition (m_map, rhs1);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + tree var = m_vars[part];
>>> + unsigned HOST_WIDE_INT nelts
>>> + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
>>> + tree atype = build_array_type_nelts (m_limb_type, nelts);
>>> + if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
>>> + obj = build1 (VIEW_CONVERT_EXPR, atype, obj);
>>> + tree off = build_int_cst (build_pointer_type (TREE_TYPE (var)),
>>> + gimple_assign_rhs_code (stmt) == REALPART_EXPR
>>> + ? 0 : nelts * m_limb_size);
>>> + tree v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), off);
>>> + gimple *g = gimple_build_assign (obj, v2);
>>> + insert_before (g);
>>> +}
>>> +
>>> +/* Lower COMPLEX_EXPR stmt. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_complexexpr_stmt (gimple *stmt)
>>> +{
>>> + tree lhs = gimple_assign_lhs (stmt);
>>> + tree rhs1 = gimple_assign_rhs1 (stmt);
>>> + tree rhs2 = gimple_assign_rhs2 (stmt);
>>> + int part = var_to_partition (m_map, lhs);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + lhs = m_vars[part];
>>> + unsigned HOST_WIDE_INT nelts
>>> + = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (rhs1))) / limb_prec;
>>> + tree atype = build_array_type_nelts (m_limb_type, nelts);
>>> + tree zero = build_zero_cst (build_pointer_type (TREE_TYPE (lhs)));
>>> + tree v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), zero);
>>> + tree v2;
>>> + if (TREE_CODE (rhs1) == SSA_NAME)
>>> + {
>>> + part = var_to_partition (m_map, rhs1);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + v2 = m_vars[part];
>>> + }
>>> + else if (integer_zerop (rhs1))
>>> + v2 = build_zero_cst (atype);
>>> + else
>>> + v2 = tree_output_constant_def (rhs1);
>>> + if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
>>> + v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
>>> + gimple *g = gimple_build_assign (v1, v2);
>>> + insert_before (g);
>>> + tree off = fold_convert (build_pointer_type (TREE_TYPE (lhs)),
>>> + TYPE_SIZE_UNIT (atype));
>>> + v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), off);
>>> + if (TREE_CODE (rhs2) == SSA_NAME)
>>> + {
>>> + part = var_to_partition (m_map, rhs2);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + v2 = m_vars[part];
>>> + }
>>> + else if (integer_zerop (rhs2))
>>> + v2 = build_zero_cst (atype);
>>> + else
>>> + v2 = tree_output_constant_def (rhs2);
>>> + if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
>>> + v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
>>> + g = gimple_build_assign (v1, v2);
>>> + insert_before (g);
>>> +}
>>> +
>>> +/* Lower a call statement with one or more large/huge _BitInt
>>> + arguments or large/huge _BitInt return value. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_call (tree obj, gimple *stmt)
>>> +{
>>> + gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
>>> + unsigned int nargs = gimple_call_num_args (stmt);
>>> + if (gimple_call_internal_p (stmt))
>>> + switch (gimple_call_internal_fn (stmt))
>>> + {
>>> + case IFN_ADD_OVERFLOW:
>>> + case IFN_SUB_OVERFLOW:
>>> + case IFN_UBSAN_CHECK_ADD:
>>> + case IFN_UBSAN_CHECK_SUB:
>>> + lower_addsub_overflow (obj, stmt);
>>> + return;
>>> + case IFN_MUL_OVERFLOW:
>>> + case IFN_UBSAN_CHECK_MUL:
>>> + lower_mul_overflow (obj, stmt);
>>> + return;
>>> + default:
>>> + break;
>>> + }
>>> + for (unsigned int i = 0; i < nargs; ++i)
>>> + {
>>> + tree arg = gimple_call_arg (stmt, i);
>>> + if (TREE_CODE (arg) != SSA_NAME
>>> + || TREE_CODE (TREE_TYPE (arg)) != BITINT_TYPE
>>> + || bitint_precision_kind (TREE_TYPE (arg)) <= bitint_prec_middle)
>>> + continue;
>>> + int p = var_to_partition (m_map, arg);
>>> + tree v = m_vars[p];
>>> + gcc_assert (v != NULL_TREE);
>>> + if (!types_compatible_p (TREE_TYPE (arg), TREE_TYPE (v)))
>>> + v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (arg), v);
>>> + arg = make_ssa_name (TREE_TYPE (arg));
>>> + gimple *g = gimple_build_assign (arg, v);
>>> + gsi_insert_before (&gsi, g, GSI_SAME_STMT);
>>> + gimple_call_set_arg (stmt, i, arg);
>>> + if (m_preserved == NULL)
>>> + m_preserved = BITMAP_ALLOC (NULL);
>>> + bitmap_set_bit (m_preserved, SSA_NAME_VERSION (arg));
>>> + }
>>> + tree lhs = gimple_call_lhs (stmt);
>>> + if (lhs
>>> + && TREE_CODE (lhs) == SSA_NAME
>>> + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
>>> + {
>>> + int p = var_to_partition (m_map, lhs);
>>> + tree v = m_vars[p];
>>> + gcc_assert (v != NULL_TREE);
>>> + if (!types_compatible_p (TREE_TYPE (lhs), TREE_TYPE (v)))
>>> + v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (lhs), v);
>>> + gimple_call_set_lhs (stmt, v);
>>> + SSA_NAME_DEF_STMT (lhs) = gimple_build_nop ();
>>> + }
>>> + update_stmt (stmt);
>>> +}
>>> +
>>> +/* Lower __asm STMT which involves large/huge _BitInt values. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_asm (gimple *stmt)
>>> +{
>>> + gasm *g = as_a <gasm *> (stmt);
>>> + unsigned noutputs = gimple_asm_noutputs (g);
>>> + unsigned ninputs = gimple_asm_ninputs (g);
>>> +
>>> + for (unsigned i = 0; i < noutputs; ++i)
>>> + {
>>> + tree t = gimple_asm_output_op (g, i);
>>> + tree s = TREE_VALUE (t);
>>> + if (TREE_CODE (s) == SSA_NAME
>>> + && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
>>> + {
>>> + int part = var_to_partition (m_map, s);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + TREE_VALUE (t) = m_vars[part];
>>> + }
>>> + }
>>> + for (unsigned i = 0; i < ninputs; ++i)
>>> + {
>>> + tree t = gimple_asm_input_op (g, i);
>>> + tree s = TREE_VALUE (t);
>>> + if (TREE_CODE (s) == SSA_NAME
>>> + && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
>>> + {
>>> + int part = var_to_partition (m_map, s);
>>> + gcc_assert (m_vars[part] != NULL_TREE);
>>> + TREE_VALUE (t) = m_vars[part];
>>> + }
>>> + }
>>> + update_stmt (stmt);
>>> +}
>>> +
>>> +/* Lower statement STMT which involves large/huge _BitInt values
>>> + into code accessing individual limbs. */
>>> +
>>> +void
>>> +bitint_large_huge::lower_stmt (gimple *stmt)
>>> +{
>>> + m_first = true;
>>> + m_lhs = NULL_TREE;
>>> + m_data.truncate (0);
>>> + m_data_cnt = 0;
>>> + m_gsi = gsi_for_stmt (stmt);
>>> + m_after_stmt = NULL;
>>> + m_bb = NULL;
>>> + m_init_gsi = m_gsi;
>>> + gsi_prev (&m_init_gsi);
>>> + m_preheader_bb = NULL;
>>> + m_upwards_2limb = 0;
>>> + m_var_msb = false;
>>> + m_loc = gimple_location (stmt);
>>> + if (is_gimple_call (stmt))
>>> + {
>>> + lower_call (NULL_TREE, stmt);
>>> + return;
>>> + }
>>> + if (gimple_code (stmt) == GIMPLE_ASM)
>>> + {
>>> + lower_asm (stmt);
>>> + return;
>>> + }
>>> + tree lhs = NULL_TREE, cmp_op1 = NULL_TREE, cmp_op2 = NULL_TREE;
>>> + tree_code cmp_code = comparison_op (stmt, &cmp_op1, &cmp_op2);
>>> + bool eq_p = (cmp_code == EQ_EXPR || cmp_code == NE_EXPR);
>>> + bool mergeable_cast_p = false;
>>> + bool final_cast_p = false;
>>> + if (gimple_assign_cast_p (stmt))
>>> + {
>>> + lhs = gimple_assign_lhs (stmt);
>>> + tree rhs1 = gimple_assign_rhs1 (stmt);
>>> + if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
>>> + && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
>>> + mergeable_cast_p = true;
>>> + else if (TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
>>> + && INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
>>> + {
>>> + final_cast_p = true;
>>> + if (TREE_CODE (rhs1) == SSA_NAME
>>> + && (m_names == NULL
>>> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
>>> + {
>>> + gimple *g = SSA_NAME_DEF_STMT (rhs1);
>>> + if (is_gimple_assign (g)
>>> + && gimple_assign_rhs_code (g) == IMAGPART_EXPR)
>>> + {
>>> + tree rhs2 = TREE_OPERAND (gimple_assign_rhs1 (g), 0);
>>> + if (TREE_CODE (rhs2) == SSA_NAME
>>> + && (m_names == NULL
>>> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs2))))
>>> + {
>>> + g = SSA_NAME_DEF_STMT (rhs2);
>>> + int ovf = optimizable_arith_overflow (g);
>>> + if (ovf == 2)
>>> + /* If .{ADD,SUB,MUL}_OVERFLOW has both REALPART_EXPR
>>> + and IMAGPART_EXPR uses, where the latter is cast to
>>> + non-_BitInt, it will be optimized when handling
>>> + the REALPART_EXPR. */
>>> + return;
>>> + if (ovf == 1)
>>> + {
>>> + lower_call (NULL_TREE, g);
>>> + return;
>>> + }
>>> + }
>>> + }
>>> + }
>>> + }
>>> + }
>>> + if (gimple_store_p (stmt))
>>> + {
>>> + tree rhs1 = gimple_assign_rhs1 (stmt);
>>> + if (TREE_CODE (rhs1) == SSA_NAME
>>> + && (m_names == NULL
>>> + || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
>>> + {
>>> + gimple *g = SSA_NAME_DEF_STMT (rhs1);
>>> + m_loc = gimple_location (g);
>>> + lhs = gimple_assign_lhs (stmt);
>>> + if (is_gimple_assign (g) && !mergeable_op (g))
>>> + switch (gimple_assign_rhs_code (g))
>>> + {
>>> + case LSHIFT_EXPR:
>>> + case RSHIFT_EXPR:
>>> + lower_shift_stmt (lhs, g);
>>> + handled:
>>> + m_gsi = gsi_for_stmt (stmt);
>>> + unlink_stmt_vdef (stmt);
>>> + release_ssa_name (gimple_vdef (stmt));
>>> + gsi_remove (&m_gsi, true);
>>> + return;
>>> + case MULT_EXPR:
>>> + case TRUNC_DIV_EXPR:
>>> + case TRUNC_MOD_EXPR:
>>> + lower_muldiv_stmt (lhs, g);
>>> + goto handled;
>>> + case FIX_TRUNC_EXPR:
>>> + lower_float_conv_stmt (lhs, g);
>>> + goto handled;
>>> + case REALPART_EXPR:
>>> + case IMAGPART_EXPR:
>>> + lower_cplxpart_stmt (lhs, g);
>>> + goto handled;
>>> + default:
>>> + break;
>>> + }
>>> + else if (optimizable_arith_overflow (g) == 3)
>>> + {
>>> + lower_call (lhs, g);
>>> + goto handled;
>>> + }
>>> + m_loc = gimple_location (stmt);
>>> + }
>>> + }
>>> + if (mergeable_op (stmt)
>>> + || gimple_store_p (stmt)
>>> + || gimple_assign_load_p (stmt)
>>> + || eq_p
>>> + || mergeable_cast_p)
>>> + {
>>> + lhs = lower_mergeable_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
>>> + if (!eq_p)
>>> + return;
>>> + }
>>> + else if (cmp_code != ERROR_MARK)
>>> + lhs = lower_comparison_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
>>> + if (cmp_code != ERROR_MARK)
>>> + {
>>> + if (gimple_code (stmt) == GIMPLE_COND)
>>> + {
>>> + gcond *cstmt = as_a <gcond *> (stmt);
>>> + gimple_cond_set_lhs (cstmt, lhs);
>>> + gimple_cond_set_rhs (cstmt, boolean_false_node);
>>> + gimple_cond_set_code (cstmt, cmp_code);
>>> + update_stmt (stmt);
>>> + return;
>>> + }
>>> + if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>>> + {
>>> + tree cond = build2 (cmp_code, boolean_type_node, lhs,
>>> + boolean_false_node);
>>> + gimple_assign_set_rhs1 (stmt, cond);
>>> + lhs = gimple_assign_lhs (stmt);
>>> + gcc_assert (TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
>>> + || (bitint_precision_kind (TREE_TYPE (lhs))
>>> + <= bitint_prec_middle));
>>> + update_stmt (stmt);
>>> + return;
>>> + }
>>> + gimple_assign_set_rhs1 (stmt, lhs);
>>> + gimple_assign_set_rhs2 (stmt, boolean_false_node);
>>> + gimple_assign_set_rhs_code (stmt, cmp_code);
>>> + update_stmt (stmt);
>>> + return;
>>> + }
>>> + if (final_cast_p)
>>> + {
>>> + tree lhs_type = TREE_TYPE (lhs);
>>> + /* Add support for 3 or more limbs filled in from normal integral
>>> + type if this assert fails. If no target chooses limb mode smaller
>>> + than half of largest supported normal integral type, this will not
>>> + be needed. */
>>> + gcc_assert (TYPE_PRECISION (lhs_type) <= 2 * limb_prec);
>>> + gimple *g;
>>> + if (TREE_CODE (lhs_type) == BITINT_TYPE
>>> + && bitint_precision_kind (lhs_type) == bitint_prec_middle)
>>> + lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (lhs_type),
>>> + TYPE_UNSIGNED (lhs_type));
>>> + m_data_cnt = 0;
>>> + tree rhs1 = gimple_assign_rhs1 (stmt);
>>> + tree r1 = handle_operand (rhs1, size_int (0));
>>> + if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
>>> + r1 = add_cast (lhs_type, r1);
>>> + if (TYPE_PRECISION (lhs_type) > limb_prec)
>>> + {
>>> + m_data_cnt = 0;
>>> + m_first = false;
>>> + tree r2 = handle_operand (rhs1, size_int (1));
>>> + r2 = add_cast (lhs_type, r2);
>>> + g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
>>> + build_int_cst (unsigned_type_node,
>>> + limb_prec));
>>> + insert_before (g);
>>> + g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
>>> + gimple_assign_lhs (g));
>>> + insert_before (g);
>>> + r1 = gimple_assign_lhs (g);
>>> + }
>>> + if (lhs_type != TREE_TYPE (lhs))
>>> + g = gimple_build_assign (lhs, NOP_EXPR, r1);
>>> + else
>>> + g = gimple_build_assign (lhs, r1);
>>> + gsi_replace (&m_gsi, g, true);
>>> + return;
>>> + }
>>> + if (is_gimple_assign (stmt))
>>> + switch (gimple_assign_rhs_code (stmt))
>>> + {
>>> + case LSHIFT_EXPR:
>>> + case RSHIFT_EXPR:
>>> + lower_shift_stmt (NULL_TREE, stmt);
>>> + return;
>>> + case MULT_EXPR:
>>> + case TRUNC_DIV_EXPR:
>>> + case TRUNC_MOD_EXPR:
>>> + lower_muldiv_stmt (NULL_TREE, stmt);
>>> + return;
>>> + case FIX_TRUNC_EXPR:
>>> + case FLOAT_EXPR:
>>> + lower_float_conv_stmt (NULL_TREE, stmt);
>>> + return;
>>> + case REALPART_EXPR:
>>> + case IMAGPART_EXPR:
>>> + lower_cplxpart_stmt (NULL_TREE, stmt);
>>> + return;
>>> + case COMPLEX_EXPR:
>>> + lower_complexexpr_stmt (stmt);
>>> + return;
>>> + default:
>>> + break;
>>> + }
>>> + gcc_unreachable ();
>>> +}
>>> +
>>> +/* Helper for walk_non_aliased_vuses. Determine if we arrived at
>>> + the desired memory state. */
>>> +
>>> +void *
>>> +vuse_eq (ao_ref *, tree vuse1, void *data)
>>> +{
>>> + tree vuse2 = (tree) data;
>>> + if (vuse1 == vuse2)
>>> + return data;
>>> +
>>> + return NULL;
>>> +}
>>> +
>>> +/* Dominator walker used to discover which large/huge _BitInt
>>> + loads could be sunk into all their uses. */
>>> +
>>> +class bitint_dom_walker : public dom_walker
>>> +{
>>> +public:
>>> + bitint_dom_walker (bitmap names, bitmap loads)
>>> + : dom_walker (CDI_DOMINATORS), m_names (names), m_loads (loads) {}
>>> +
>>> + edge before_dom_children (basic_block) final override;
>>> +
>>> +private:
>>> + bitmap m_names, m_loads;
>>> +};
>>> +
>>> +edge
>>> +bitint_dom_walker::before_dom_children (basic_block bb)
>>> +{
>>> + gphi *phi = get_virtual_phi (bb);
>>> + tree vop;
>>> + if (phi)
>>> + vop = gimple_phi_result (phi);
>>> + else if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
>>> + vop = NULL_TREE;
>>> + else
>>> + vop = (tree) get_immediate_dominator (CDI_DOMINATORS, bb)->aux;
>>> +
>>> + auto_vec<tree, 16> worklist;
>>> + for (gimple_stmt_iterator gsi = gsi_start_bb (bb);
>>> + !gsi_end_p (gsi); gsi_next (&gsi))
>>> + {
>>> + gimple *stmt = gsi_stmt (gsi);
>>> + if (is_gimple_debug (stmt))
>>> + continue;
>>> +
>>> + if (!vop && gimple_vuse (stmt))
>>> + vop = gimple_vuse (stmt);
>>> +
>>> + tree cvop = vop;
>>> + if (gimple_vdef (stmt))
>>> + vop = gimple_vdef (stmt);
>>> +
>>> + tree lhs = gimple_get_lhs (stmt);
>>> + if (lhs
>>> + && TREE_CODE (lhs) == SSA_NAME
>>> + && TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
>>> + && !bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
>>> + /* If lhs of stmt is large/huge _BitInt SSA_NAME not in m_names,
>>> + it means it will be handled in a loop or straight line code
>>> + at the location of its (ultimate) immediate use, so for
>>> + vop checking purposes check these only at the ultimate
>>> + immediate use. */
>>> + continue;
>>> +
>>> + ssa_op_iter oi;
>>> + use_operand_p use_p;
>>> + FOR_EACH_SSA_USE_OPERAND (use_p, stmt, oi, SSA_OP_USE)
>>> + {
>>> + tree s = USE_FROM_PTR (use_p);
>>> + if (TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
>>> + worklist.safe_push (s);
>>> + }
>>> +
>>> + while (worklist.length () > 0)
>>> + {
>>> + tree s = worklist.pop ();
>>> +
>>> + if (!bitmap_bit_p (m_names, SSA_NAME_VERSION (s)))
>>> + {
>>> + FOR_EACH_SSA_USE_OPERAND (use_p, SSA_NAME_DEF_STMT (s),
>>> + oi, SSA_OP_USE)
>>> + {
>>> + tree s2 = USE_FROM_PTR (use_p);
>>> + if (TREE_CODE (TREE_TYPE (s2)) == BITINT_TYPE
>>> + && (bitint_precision_kind (TREE_TYPE (s2))
>>> + >= bitint_prec_large))
>>> + worklist.safe_push (s2);
>>> + }
>>> + continue;
>>> + }
>>> + if (!SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
>>> + && gimple_assign_cast_p (SSA_NAME_DEF_STMT (s)))
>>> + {
>>> + tree rhs = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
>>> + if (TREE_CODE (rhs) == SSA_NAME
>>> + && bitmap_bit_p (m_loads, SSA_NAME_VERSION (rhs)))
>>> + s = rhs;
>>> + else
>>> + continue;
>>> + }
>>> + else if (!bitmap_bit_p (m_loads, SSA_NAME_VERSION (s)))
>>> + continue;
>>> +
>>> + ao_ref ref;
>>> + ao_ref_init (&ref, gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)));
>>> + tree lvop = gimple_vuse (SSA_NAME_DEF_STMT (s));
>>> + unsigned limit = 64;
>>> + tree vuse = cvop;
>>> + if (vop != cvop
>>> + && is_gimple_assign (stmt)
>>> + && gimple_store_p (stmt)
>>> + && !operand_equal_p (lhs,
>>> + gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)),
>>> + 0))
>>> + vuse = vop;
>>> + if (vuse != lvop
>>> + && walk_non_aliased_vuses (&ref, vuse, false, vuse_eq,
>>> + NULL, NULL, limit, lvop) == NULL)
>>> + bitmap_clear_bit (m_loads, SSA_NAME_VERSION (s));
>>> + }
>>> + }
>>> +
>>> + bb->aux = (void *) vop;
>>> + return NULL;
>>> +}
>>> +
>>> +}
>>> +
>>> +/* Replacement for normal processing of STMT in tree-ssa-coalesce.cc
>>> + build_ssa_conflict_graph.
>>> + The differences are:
>>> + 1) don't process assignments with large/huge _BitInt lhs not in NAMES
>>> + 2) for large/huge _BitInt multiplication/division/modulo process def
>>> + only after processing uses rather than before to make uses conflict
>>> + with the definition
>>> + 3) for large/huge _BitInt uses not in NAMES mark the uses of their
>>> + SSA_NAME_DEF_STMT (recursively), because those uses will be sunk into
>>> + the final statement. */
>>> +
>>> +void
>>> +build_bitint_stmt_ssa_conflicts (gimple *stmt, live_track *live,
>>> + ssa_conflicts *graph, bitmap names,
>>> + void (*def) (live_track *, tree,
>>> + ssa_conflicts *),
>>> + void (*use) (live_track *, tree))
>>> +{
>>> + bool muldiv_p = false;
>>> + tree lhs = NULL_TREE;
>>> + if (is_gimple_assign (stmt))
>>> + {
>>> + lhs = gimple_assign_lhs (stmt);
>>> + if (TREE_CODE (lhs) == SSA_NAME
>>> + && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
>>> + {
>>> + if (!bitmap_bit_p (names, SSA_NAME_VERSION (lhs)))
>>> + return;
>>> + switch (gimple_assign_rhs_code (stmt))
>>> + {
>>> + case MULT_EXPR:
>>> + case TRUNC_DIV_EXPR:
>>> + case TRUNC_MOD_EXPR:
>>> + muldiv_p = true;
>>> + default:
>>> + break;
>>> + }
>>> + }
>>> + }
>>> +
>>> + ssa_op_iter iter;
>>> + tree var;
>>> + if (!muldiv_p)
>>> + {
>>> + /* For stmts with more than one SSA_NAME definition pretend all the
>>> + SSA_NAME outputs but the first one are live at this point, so
>>> + that conflicts are added in between all those even when they are
>>> + actually not really live after the asm, because expansion might
>>> + copy those into pseudos after the asm and if multiple outputs
>>> + share the same partition, it might overwrite those that should
>>> + be live. E.g.
>>> + asm volatile (".." : "=r" (a) : "=r" (b) : "0" (a), "1" (a));
>>> + return a;
>>> + See PR70593. */
>>> + bool first = true;
>>> + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
>>> + if (first)
>>> + first = false;
>>> + else
>>> + use (live, var);
>>> +
>>> + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
>>> + def (live, var, graph);
>>> + }
>>> +
>>> + auto_vec<tree, 16> worklist;
>>> + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_USE)
>>> + if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
>>> + {
>>> + if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
>>> + use (live, var);
>>> + else
>>> + worklist.safe_push (var);
>>> + }
>>> +
>>> + while (worklist.length () > 0)
>>> + {
>>> + tree s = worklist.pop ();
>>> + FOR_EACH_SSA_TREE_OPERAND (var, SSA_NAME_DEF_STMT (s), iter, SSA_OP_USE)
>>> + if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
>>> + && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
>>> + {
>>> + if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
>>> + use (live, var);
>>> + else
>>> + worklist.safe_push (var);
>>> + }
>>> + }
>>> +
>>> + if (muldiv_p)
>>> + def (live, lhs, graph);
>>> +}
>>> +
>>> +/* Entry point for _BitInt(N) operation lowering during optimization. */
>>> +
>>> +static unsigned int
>>> +gimple_lower_bitint (void)
>>> +{
>>> + small_max_prec = mid_min_prec = large_min_prec = huge_min_prec = 0;
>>> + limb_prec = 0;
>>> +
>>> + unsigned int i;
>>> + tree vop = gimple_vop (cfun);
>>> + for (i = 0; i < num_ssa_names; ++i)
>>> + {
>>> + tree s = ssa_name (i);
>>> + if (s == NULL)
>>> + continue;
>>> + tree type = TREE_TYPE (s);
>>> + if (TREE_CODE (type) == COMPLEX_TYPE)
>>> + type = TREE_TYPE (type);
>>> + if (TREE_CODE (type) == BITINT_TYPE
>>> + && bitint_precision_kind (type) != bitint_prec_small)
>>> + break;
>>> + /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
>>> + into memory. Such functions could have no large/huge SSA_NAMEs. */
>>> + if (vop && SSA_NAME_VAR (s) == vop)
>>
>> SSA_NAME_IS_VIRTUAL_OPERAND (s)
>
> Ok.
>>
>>> + {
>>> + gimple *g = SSA_NAME_DEF_STMT (s);
>>> + if (is_gimple_assign (g) && gimple_store_p (g))
>>> + {
>>
>> what about functions returning large _BitInt<N> where the ABI
>> specifies it doesn't return by invisible reference?
>
> When we have such a target with _BitInt support, we'd see it in testsuite
> coverage, and I guess checking GIMPLE_RETURN stmts in a function shouldn't
> be that hard (first check that the function returns large/huge _BitInt and,
> if it does, look for the preds of the EXIT block, or simply say all such
> functions do have large/huge _BitInt if they return it).
>
>> The other def not handled are ASMs ...
>
> Indeed, ASMs are what I've realized I won't be able to find as cheaply as
> the constant stores into memory.  I think it is more important to keep the
> pass cheap for non-_BitInt sources, and so for asms with large/huge _BitInt
> INTEGER_CST inputs I've dealt with it in expansion (and intentionally not
> in a very optimized way, by forcing them into memory, because I don't think
> doing anything smarter is worth it for inline asm).
>
>>> + i = 0;
> ^^^^^^ here
>
>>> + FOR_EACH_VEC_ELT (switch_statements, j, stmt)
>>> + {
>>> + gswitch *swtch = as_a<gswitch *> (stmt);
>>> + tree_switch_conversion::switch_decision_tree dt (swtch);
>>> + expanded |= dt.analyze_switch_statement ();
>>> + }
>>> +
>>> + if (expanded)
>>> + {
>>> + free_dominance_info (CDI_DOMINATORS);
>>> + free_dominance_info (CDI_POST_DOMINATORS);
>>> + mark_virtual_operands_for_renaming (cfun);
>>> + cleanup_tree_cfg (TODO_update_ssa);
>>> + }
>>> + }
>>> +
>>> + struct bitint_large_huge large_huge;
>>> + bool has_large_huge_parm_result = false;
>>> + bool has_large_huge = false;
>>> + unsigned int ret = 0, first_large_huge = ~0U;
>>> + bool edge_insertions = false;
>>> + for (; i < num_ssa_names; ++i)
>>
>> the above SSA update could end up re-using a smaller SSA name number,
>> so I wonder if you can really avoid starting at 1 again.
>
> I do that above. And similarly if I try to "deoptimize" ABS/ABSU/MIN/MAX
> or rotates etc., I reset first_large_huge to 0 so the loop after that starts
> at 0.
Ah, missed that.
>>> + FOR_EACH_BB_REVERSE_FN (bb, cfun)
>>
>> is reverse in any way important? (not visiting newly created blocks?)
>
> Yeah, that was so that I don't visit the newly created blocks.
> The loop continues to iterate with prev which is computed before the
> lowering, so if the lowering splits blocks etc. it will continue in the
> original block before the code added during the lowering.
>
>>> --- gcc/lto-streamer-in.cc.jj 2023-07-17 09:07:42.078283882 +0200
>>> +++ gcc/lto-streamer-in.cc 2023-07-27 15:03:24.255234159 +0200
>>> @@ -1888,7 +1888,7 @@ lto_input_tree_1 (class lto_input_block
>>>
>>> for (i = 0; i < len; i++)
>>> a[i] = streamer_read_hwi (ib);
>>> - gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
>>> + gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION);
>>
>> OK to push separately.
>
> Ok.
>
>>> + else
>>> + {
>>> + SET_TYPE_MODE (type, BLKmode);
>>> + cnt = CEIL (TYPE_PRECISION (type), GET_MODE_PRECISION (limb_mode));
>>> + }
>>> + TYPE_SIZE (type) = bitsize_int (cnt * GET_MODE_BITSIZE (limb_mode));
>>> + TYPE_SIZE_UNIT (type) = size_int (cnt * GET_MODE_SIZE (limb_mode));
>>> + SET_TYPE_ALIGN (type, GET_MODE_ALIGNMENT (limb_mode));
>>
>> so when a target allows say TImode we don't align to that larger mode?
>> Might be worth documenting in the target hook that the alignment
>> which I think is part of the ABI is specified by the limb mode.
>
> Right now only the x86-64 psABI is finalized; it says roughly that
> whatever fits into {,un}signed {char,short,int,long,long long} is
> passed/laid out like that and everything else is handled like a structure
> containing n unsigned long long limbs, so indeed
> alignof (__int128) > alignof (_BitInt(128)) there.
> Now, e.g. the ARM people don't really like that and are contemplating
> saying that limb_mode is TImode for 64-bit code; that would mean that
> even _BitInt(128) would be a bitint_small_prec there, no bitint_middle_prec
> would exist, and _BitInt(129) and above would have 128-bit alignment.
> The problem with that is that the double-word support in GCC isn't very
> good, as you know: tons of operations need libgcc, and an implementation
> using 128-bit limbs in libgcc would be terrible.  So, maybe we'll want to
> split info.limb_mode into info.abi_limb_mode and info.limb_mode, where the
> former would be used just in a few spots for ABI purposes (e.g. the
> alignment and sizing), while a smaller info.limb_mode would be what is
> used internally for the loops and semi-internally (as GCC ABI) in the
> libgcc APIs.
> Of course, a precondition would be that the _BitInt endianity matches the
> target endianity; otherwise there is no way to do that.
> So, AArch64 could then say _BitInt(256) is 128-bit aligned and
> _BitInt(257) has same size as _BitInt(384), but still handle it internally
> using 64-bit limbs and expect the libgcc APIs to be passed arrays of 64-bit
> limbs (with 64-bit alignment).
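The sizing consequence of such an abi_limb_mode/limb_mode split can be sketched with a few lines of arithmetic; the parameters (128-bit ABI limbs, 64-bit internal limbs) match the AArch64 idea above, but the names and interface here are purely illustrative, not an actual GCC API:

```cpp
#include <cstdint>

// Sketch: sizing is rounded up to whole ABI limbs, while the lowering
// loops still iterate over the smaller internal limbs (assumption:
// hypothetical abi_limb_mode/limb_mode split, illustrative names).
struct bitint_sizes
{
  uint64_t size_bits;	    // object size, rounded to ABI limbs
  uint64_t internal_limbs;  // how many limbs the lowering loops see
};

inline bitint_sizes
bitint_abi_size (uint64_t n, uint64_t abi_limb_bits = 128,
		 uint64_t limb_bits = 64)
{
  uint64_t abi_limbs = (n + abi_limb_bits - 1) / abi_limb_bits;
  return { abi_limbs * abi_limb_bits,
	   abi_limbs * abi_limb_bits / limb_bits };
}
```

This reproduces the example in the paragraph above: _BitInt(257) would have the same size as _BitInt(384), yet be processed as six 64-bit limbs.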
>
>> Are arrays of _BitInt a thing? _BitInt<8>[10] would have quite some
>> padding then which might be unexpected?
>
> Sure, _BitInt(8)[10] is a thing; after all, the testsuite contains tons
> of examples of that.  In the x86-64 psABI, _BitInt(8) has the same
> alignment/size as signed char, so there is no padding, but sure,
> _BitInt(9)[10] does have padding; it is like an array of 10 unsigned
> shorts with 7 bits of padding in each of them.  Similarly,
> _BitInt(575)[10] is an array with 72-byte elements with 1 padding bit
> in each.
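The padding arithmetic for the array case can be checked with a small sketch. This models only the large/huge case under 64-bit limbs as in the x86-64 psABI discussed above; small precisions such as _BitInt(9), which follow the matching standard integer type instead, are deliberately not covered:

```cpp
#include <cstdint>

// Limb layout arithmetic for large/huge _BitInt(N) with 64-bit limbs
// (assumption: x86-64 psABI style layout; small precisions excluded).
struct bitint_layout
{
  uint64_t limbs;	  // CEIL (n, 64)
  uint64_t size_bytes;	  // element size in an array of _BitInt(N)
  uint64_t padding_bits;  // unused bits in the last limb
};

inline bitint_layout
large_bitint_layout (uint64_t n, uint64_t limb_bits = 64)
{
  uint64_t limbs = (n + limb_bits - 1) / limb_bits;
  return { limbs, limbs * limb_bits / 8, limbs * limb_bits - n };
}
```

For _BitInt(575) this gives 9 limbs, 72-byte array elements, and 1 padding bit per element, matching the numbers above.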
>
>>> +/* Target properties of _BitInt(N) type. _BitInt(N) is to be represented
>>> + as series of limb_mode CEIL (N, GET_MODE_PRECISION (limb_mode)) limbs,
>>> + ordered from least significant to most significant if !big_endian,
>>> + otherwise from most significant to least significant. If extended is
>>> + false, the bits above or equal to N are undefined when stored in a register
>>> + or memory, otherwise they are zero or sign extended depending on if
>>> + it is unsigned _BitInt(N) or _BitInt(N) / signed _BitInt(N). */
>>> +
>>
>> I think this belongs to tm.texi (or duplicated there)
>
> Ok.
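The representation the hook comment above describes (CEIL (N, limb precision) limbs, index 0 least significant for !big_endian, and the bits above N either undefined or sign/zero extended depending on "extended") can be sketched with deliberately small, illustrative parameters; 32-bit limbs and values that fit in 64 bits are an assumption for demonstration only, not the actual hook interface:

```cpp
#include <cstdint>
#include <vector>

// Sketch: split a signed value into little-endian 32-bit limbs for a
// signed _BitInt(N), optionally extending the bits above N (assumption:
// toy parameters, N < 64; the real limb mode is target-defined).
inline std::vector<uint32_t>
to_limbs (int64_t value, unsigned n, bool extended)
{
  const unsigned limb_bits = 32;
  unsigned limbs = (n + limb_bits - 1) / limb_bits;
  uint64_t mask = (uint64_t{1} << n) - 1;
  uint64_t v = (uint64_t) value & mask;	     // truncate to N bits
  if (extended && ((v >> (n - 1)) & 1))	     // sign-extend above bit N-1
    v |= ~mask;
  std::vector<uint32_t> out;
  for (unsigned i = 0; i < limbs; i++)	     // index 0 = least significant
    out.push_back ((uint32_t) (v >> (i * limb_bits)));
  return out;
}
```

E.g. for a signed _BitInt(33), -1 occupies two 32-bit limbs; with extended semantics the upper limb is all-ones, without them only bit 0 of the upper limb is meaningful.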
>
>>> @@ -6969,8 +6970,14 @@ eliminate_dom_walker::eliminate_stmt (ba
>>> || !DECL_BIT_FIELD_TYPE (TREE_OPERAND (lhs, 1)))
>>> && !type_has_mode_precision_p (TREE_TYPE (lhs)))
>>> {
>>> - if (TREE_CODE (lhs) == COMPONENT_REF
>>> - || TREE_CODE (lhs) == MEM_REF)
>>> + if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
>>> + && (TYPE_PRECISION (TREE_TYPE (lhs))
>>> + > (targetm.scalar_mode_supported_p (TImode)
>>> + ? GET_MODE_PRECISION (TImode)
>>> + : GET_MODE_PRECISION (DImode))))
>>> + lookup_lhs = NULL_TREE;
>>
>> What's the reason for this? You allow non-mode precision
>> stores, if you wanted to disallow BLKmode I think the better
>> way would be to add != BLKmode above or alternatively
>> build a limb-size _BitInt type (instead of
>> build_nonstandard_integer_type)?
>
> This was just a quick hack to fix some ICEs. I'm afraid once some people
> try csmith on _BitInt we'll get more such spots, and sure, it might be able
> to deal with it better, just not too familiar with this to know what that
> would be.
Ok, if you remember what ICEd I'd appreciate a pointer once _BitInt is on trunk; I'll see if I can make more sense of it then.
>>> + this_low = const_unop (NEGATE_EXPR, TREE_TYPE (this_low), this_low);
>>> + g = gimple_build_assign (make_ssa_name (TREE_TYPE (index_expr)),
>>> + PLUS_EXPR, index_expr, this_low);
>>> + gimple_set_location (g, loc);
>>> + gsi_insert_after (&gsi, g, GSI_NEW_STMT);
>>> + index_expr = gimple_assign_lhs (g);
>>
>> I suppose using gimple_convert/gimple_build with a sequence would be
>> easier to follow.
>
> Guess I could try to use them here, but as I said earlier, changing the
> lowering pass to use those everywhere would mean rewriting half of those
> 6000 lines.
>>> --- gcc/ubsan.cc.jj 2023-05-20 15:31:09.240660915 +0200
>>> +++ gcc/ubsan.cc 2023-07-27 15:03:24.260234089 +0200
>>> @@ -50,6 +50,8 @@ along with GCC; see the file COPYING3.
>>> #include "gimple-fold.h"
>>> #include "varasm.h"
>>> #include "realmpfr.h"
>>> +#include "target.h"
>>> +#include "langhooks.h"
>>
>> Sanitizer support into a separate patch?
>
> Ok.
>
>>> @@ -1717,12 +1717,11 @@ simplify_using_ranges::simplify_internal
>>> g = gimple_build_assign (gimple_call_lhs (stmt), subcode, op0, op1);
>>> else
>>> {
>>> - int prec = TYPE_PRECISION (type);
>>> tree utype = type;
>>> if (ovf
>>> || !useless_type_conversion_p (type, TREE_TYPE (op0))
>>> || !useless_type_conversion_p (type, TREE_TYPE (op1)))
>>> - utype = build_nonstandard_integer_type (prec, 1);
>>> + utype = unsigned_type_for (type);
>>> if (TREE_CODE (op0) == INTEGER_CST)
>>> op0 = fold_convert (utype, op0);
>>> else if (!useless_type_conversion_p (utype, TREE_TYPE (op0)))
>>
>> Phew. That was big.
>
> Sorry, I hoped it wouldn't take me almost 3 months and would be much shorter
> as well, but clearly I'm not good at estimating stuff...
Well, it’s definitely feature creep with now the _Decimal and bitfield stuff …
>> A lot of it looks OK (I guess nearly all of it). For the overall
>> picture I'm unsure esp. how/if we need to keep the distinction for
>> small _BitInt<>s and if we maybe want to lower them earlier even?
>
> The reason for current location was to have a few cleanup passes after IPA,
> so that e.g. value ranges can be propagated and computed (something that
> helps a lot e.g. for multiplications/divisions and __builtin_*_overflow).
> Once lowered, ranger is out of luck with these.
For the small and very small _BitInts ranger should still work though?  I guess it depends on how people will use _BitInt in the end.
Richard
> Jakub
>
On Fri, 4 Aug 2023, Richard Biener via Gcc-patches wrote:
> > Sorry, I hoped it wouldn't take me almost 3 months and would be much shorter
> > as well, but clearly I'm not good at estimating stuff...
>
> Well, it’s definitely feature creep with now the _Decimal and bitfield stuff …
I think feature creep would more be adding new features *outside the scope
of the standard* (_BitInt bit-fields and conversions to/from DFP are
within the standard, as are _BitInt atomic operations). For example,
features to help support type-generic operations on _BitInt, or
type-generic versions of existing built-in functions (e.g. popcount)
suitable for use on _BitInt - it's likely such features will be of use
eventually, but they aren't needed for C23 (where the corresponding
type-generic operations only support _BitInt types when they have the same
width as some other type), so we can certainly get the standard features
in first and think about additional features beyond that later (just as
support for wider _BitInt can come later, not being required by the
standard).
@@ -113,7 +113,7 @@ DEFTREECODE (BLOCK, "block", tcc_excepti
/* The ordering of the following codes is optimized for the checking
macros in tree.h. Changing the order will degrade the speed of the
compiler. OFFSET_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, INTEGER_TYPE,
- REAL_TYPE, POINTER_TYPE. */
+ BITINT_TYPE, REAL_TYPE, POINTER_TYPE. */
/* An offset is a pointer relative to an object.
The TREE_TYPE field is the type of the object at the offset.
@@ -144,6 +144,9 @@ DEFTREECODE (BOOLEAN_TYPE, "boolean_type
and TYPE_PRECISION (number of bits used by this type). */
DEFTREECODE (INTEGER_TYPE, "integer_type", tcc_type, 0)
+/* Bit-precise integer type. */
+DEFTREECODE (BITINT_TYPE, "bitint_type", tcc_type, 0)
+
/* C's float and double. Different floating types are distinguished
by machine mode and by the TYPE_SIZE and the TYPE_PRECISION. */
DEFTREECODE (REAL_TYPE, "real_type", tcc_type, 0)
@@ -363,6 +363,14 @@ code_helper::is_builtin_fn () const
(tree_not_check5 ((T), __FILE__, __LINE__, __FUNCTION__, \
(CODE1), (CODE2), (CODE3), (CODE4), (CODE5)))
+#define TREE_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) \
+(tree_check6 ((T), __FILE__, __LINE__, __FUNCTION__, \
+ (CODE1), (CODE2), (CODE3), (CODE4), (CODE5), (CODE6)))
+
+#define TREE_NOT_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) \
+(tree_not_check6 ((T), __FILE__, __LINE__, __FUNCTION__, \
+ (CODE1), (CODE2), (CODE3), (CODE4), (CODE5), (CODE6)))
+
#define CONTAINS_STRUCT_CHECK(T, STRUCT) \
(contains_struct_check ((T), (STRUCT), __FILE__, __LINE__, __FUNCTION__))
@@ -485,6 +493,8 @@ extern void omp_clause_range_check_faile
#define TREE_NOT_CHECK4(T, CODE1, CODE2, CODE3, CODE4) (T)
#define TREE_CHECK5(T, CODE1, CODE2, CODE3, CODE4, CODE5) (T)
#define TREE_NOT_CHECK5(T, CODE1, CODE2, CODE3, CODE4, CODE5) (T)
+#define TREE_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) (T)
+#define TREE_NOT_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) (T)
#define TREE_CLASS_CHECK(T, CODE) (T)
#define TREE_RANGE_CHECK(T, CODE1, CODE2) (T)
#define EXPR_CHECK(T) (T)
@@ -528,8 +538,8 @@ extern void omp_clause_range_check_faile
TREE_CHECK2 (T, ARRAY_TYPE, INTEGER_TYPE)
#define NUMERICAL_TYPE_CHECK(T) \
- TREE_CHECK5 (T, INTEGER_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, REAL_TYPE, \
- FIXED_POINT_TYPE)
+ TREE_CHECK6 (T, INTEGER_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, REAL_TYPE, \
+ FIXED_POINT_TYPE, BITINT_TYPE)
/* Here is how primitive or already-canonicalized types' hash codes
are made. */
@@ -603,7 +613,8 @@ extern void omp_clause_range_check_faile
#define INTEGRAL_TYPE_P(TYPE) \
(TREE_CODE (TYPE) == ENUMERAL_TYPE \
|| TREE_CODE (TYPE) == BOOLEAN_TYPE \
- || TREE_CODE (TYPE) == INTEGER_TYPE)
+ || TREE_CODE (TYPE) == INTEGER_TYPE \
+ || TREE_CODE (TYPE) == BITINT_TYPE)
/* Nonzero if TYPE represents an integral type, including complex
and vector integer types. */
@@ -614,6 +625,10 @@ extern void omp_clause_range_check_faile
|| VECTOR_TYPE_P (TYPE)) \
&& INTEGRAL_TYPE_P (TREE_TYPE (TYPE))))
+/* Nonzero if TYPE is a bit-precise integer type. */
+
+#define BITINT_TYPE_P(TYPE) (TREE_CODE (TYPE) == BITINT_TYPE)
+
/* Nonzero if TYPE represents a non-saturating fixed-point type. */
#define NON_SAT_FIXED_POINT_TYPE_P(TYPE) \
@@ -3684,6 +3699,38 @@ tree_not_check5 (tree __t, const char *_
}
inline tree
+tree_check6 (tree __t, const char *__f, int __l, const char *__g,
+ enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
+ enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
+{
+ if (TREE_CODE (__t) != __c1
+ && TREE_CODE (__t) != __c2
+ && TREE_CODE (__t) != __c3
+ && TREE_CODE (__t) != __c4
+ && TREE_CODE (__t) != __c5
+ && TREE_CODE (__t) != __c6)
+ tree_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5, __c6,
+ 0);
+ return __t;
+}
+
+inline tree
+tree_not_check6 (tree __t, const char *__f, int __l, const char *__g,
+ enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
+ enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
+{
+ if (TREE_CODE (__t) == __c1
+ || TREE_CODE (__t) == __c2
+ || TREE_CODE (__t) == __c3
+ || TREE_CODE (__t) == __c4
+ || TREE_CODE (__t) == __c5
+ || TREE_CODE (__t) == __c6)
+ tree_not_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5,
+ __c6, 0);
+ return __t;
+}
+
+inline tree
contains_struct_check (tree __t, const enum tree_node_structure_enum __s,
const char *__f, int __l, const char *__g)
{
@@ -3821,7 +3868,7 @@ any_integral_type_check (tree __t, const
{
if (!ANY_INTEGRAL_TYPE_P (__t))
tree_check_failed (__t, __f, __l, __g, BOOLEAN_TYPE, ENUMERAL_TYPE,
- INTEGER_TYPE, 0);
+ INTEGER_TYPE, BITINT_TYPE, 0);
return __t;
}
@@ -3940,6 +3987,38 @@ tree_not_check5 (const_tree __t, const c
}
inline const_tree
+tree_check6 (const_tree __t, const char *__f, int __l, const char *__g,
+ enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
+ enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
+{
+ if (TREE_CODE (__t) != __c1
+ && TREE_CODE (__t) != __c2
+ && TREE_CODE (__t) != __c3
+ && TREE_CODE (__t) != __c4
+ && TREE_CODE (__t) != __c5
+ && TREE_CODE (__t) != __c6)
+ tree_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5, __c6,
+ 0);
+ return __t;
+}
+
+inline const_tree
+tree_not_check6 (const_tree __t, const char *__f, int __l, const char *__g,
+ enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
+ enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
+{
+ if (TREE_CODE (__t) == __c1
+ || TREE_CODE (__t) == __c2
+ || TREE_CODE (__t) == __c3
+ || TREE_CODE (__t) == __c4
+ || TREE_CODE (__t) == __c5
+ || TREE_CODE (__t) == __c6)
+ tree_not_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5,
+ __c6, 0);
+ return __t;
+}
+
+inline const_tree
contains_struct_check (const_tree __t, const enum tree_node_structure_enum __s,
const char *__f, int __l, const char *__g)
{
@@ -4047,7 +4126,7 @@ any_integral_type_check (const_tree __t,
{
if (!ANY_INTEGRAL_TYPE_P (__t))
tree_check_failed (__t, __f, __l, __g, BOOLEAN_TYPE, ENUMERAL_TYPE,
- INTEGER_TYPE, 0);
+ INTEGER_TYPE, BITINT_TYPE, 0);
return __t;
}
@@ -5579,6 +5658,7 @@ extern void build_common_builtin_nodes (
extern void tree_cc_finalize (void);
extern tree build_nonstandard_integer_type (unsigned HOST_WIDE_INT, int);
extern tree build_nonstandard_boolean_type (unsigned HOST_WIDE_INT);
+extern tree build_bitint_type (unsigned HOST_WIDE_INT, int);
extern tree build_range_type (tree, tree, tree);
extern tree build_nonshared_range_type (tree, tree, tree);
extern bool subrange_type_for_debug_p (const_tree, tree *, tree *);
@@ -991,6 +991,7 @@ tree_code_size (enum tree_code code)
case VOID_TYPE:
case FUNCTION_TYPE:
case METHOD_TYPE:
+ case BITINT_TYPE:
case LANG_TYPE: return sizeof (tree_type_non_common);
default:
gcc_checking_assert (code >= NUM_TREE_CODES);
@@ -1732,6 +1733,7 @@ wide_int_to_tree_1 (tree type, const wid
case INTEGER_TYPE:
case OFFSET_TYPE:
+ case BITINT_TYPE:
if (TYPE_SIGN (type) == UNSIGNED)
{
/* Cache [0, N). */
@@ -1915,6 +1917,7 @@ cache_integer_cst (tree t, bool might_du
case INTEGER_TYPE:
case OFFSET_TYPE:
+ case BITINT_TYPE:
if (TYPE_UNSIGNED (type))
{
/* Cache 0..N */
@@ -2637,7 +2640,7 @@ build_zero_cst (tree type)
{
case INTEGER_TYPE: case ENUMERAL_TYPE: case BOOLEAN_TYPE:
case POINTER_TYPE: case REFERENCE_TYPE:
- case OFFSET_TYPE: case NULLPTR_TYPE:
+ case OFFSET_TYPE: case NULLPTR_TYPE: case BITINT_TYPE:
return build_int_cst (type, 0);
case REAL_TYPE:
@@ -6053,7 +6056,16 @@ type_hash_canon_hash (tree type)
hstate.add_object (TREE_INT_CST_ELT (t, i));
break;
}
-
+
+ case BITINT_TYPE:
+ {
+ unsigned prec = TYPE_PRECISION (type);
+ unsigned uns = TYPE_UNSIGNED (type);
+ hstate.add_object (prec);
+ hstate.add_int (uns);
+ break;
+ }
+
case REAL_TYPE:
case FIXED_POINT_TYPE:
{
@@ -6136,6 +6148,11 @@ type_cache_hasher::equal (type_hash *a,
|| tree_int_cst_equal (TYPE_MIN_VALUE (a->type),
TYPE_MIN_VALUE (b->type))));
+ case BITINT_TYPE:
+ if (TYPE_PRECISION (a->type) != TYPE_PRECISION (b->type))
+ return false;
+ return TYPE_UNSIGNED (a->type) == TYPE_UNSIGNED (b->type);
+
case FIXED_POINT_TYPE:
return TYPE_SATURATING (a->type) == TYPE_SATURATING (b->type);
@@ -6236,7 +6253,7 @@ type_hash_canon (unsigned int hashcode,
/* Free also min/max values and the cache for integer
types. This can't be done in free_node, as LTO frees
those on its own. */
- if (TREE_CODE (type) == INTEGER_TYPE)
+ if (TREE_CODE (type) == INTEGER_TYPE || TREE_CODE (type) == BITINT_TYPE)
{
if (TYPE_MIN_VALUE (type)
&& TREE_TYPE (TYPE_MIN_VALUE (type)) == type)
@@ -7154,6 +7171,44 @@ build_nonstandard_boolean_type (unsigned
return type;
}
+static GTY(()) vec<tree, va_gc> *bitint_type_cache;
+
+/* Build a signed or unsigned _BitInt(PRECISION) type. */
+tree
+build_bitint_type (unsigned HOST_WIDE_INT precision, int unsignedp)
+{
+ tree itype, ret;
+
+ if (unsignedp)
+ unsignedp = MAX_INT_CACHED_PREC + 1;
+
+ if (bitint_type_cache == NULL)
+ vec_safe_grow_cleared (bitint_type_cache, 2 * MAX_INT_CACHED_PREC + 2);
+
+ if (precision <= MAX_INT_CACHED_PREC)
+ {
+ itype = (*bitint_type_cache)[precision + unsignedp];
+ if (itype)
+ return itype;
+ }
+
+ itype = make_node (BITINT_TYPE);
+ TYPE_PRECISION (itype) = precision;
+
+ if (unsignedp)
+ fixup_unsigned_type (itype);
+ else
+ fixup_signed_type (itype);
+
+ inchash::hash hstate;
+ inchash::add_expr (TYPE_MAX_VALUE (itype), hstate);
+ ret = type_hash_canon (hstate.end (), itype);
+ if (precision <= MAX_INT_CACHED_PREC)
+ (*bitint_type_cache)[precision + unsignedp] = ret;
+
+ return ret;
+}
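The caching scheme in build_bitint_type above keeps signed and unsigned variants in one vector by biasing unsigned entries past the signed half. A minimal standalone sketch of that indexing (not GCC code; the MAX_INT_CACHED_PREC value here is a stand-in, not GCC's actual constant):

```c
#include <assert.h>

/* Stand-in for GCC's MAX_INT_CACHED_PREC; the real value differs.  */
#define MAX_INT_CACHED_PREC 512

/* Mirror the indexing trick in build_bitint_type: signed _BitInt(P)
   occupies slot P, unsigned _BitInt(P) occupies slot
   P + MAX_INT_CACHED_PREC + 1, so a single vector of
   2 * MAX_INT_CACHED_PREC + 2 entries covers both signednesses.  */
static unsigned
bitint_cache_slot (unsigned precision, int unsignedp)
{
  unsigned bias = unsignedp ? MAX_INT_CACHED_PREC + 1 : 0;
  return precision + bias;
}
```

This matches the patch's trick of setting `unsignedp = MAX_INT_CACHED_PREC + 1` up front so `precision + unsignedp` is usable directly as the vector index.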
+
/* Create a range of some discrete type TYPE (an INTEGER_TYPE, ENUMERAL_TYPE
or BOOLEAN_TYPE) with low bound LOWVAL and high bound HIGHVAL. If SHARED
is true, reuse such a type that has already been constructed. */
@@ -11041,6 +11096,8 @@ signed_or_unsigned_type_for (int unsigne
else
return NULL_TREE;
+ if (TREE_CODE (type) == BITINT_TYPE)
+ return build_bitint_type (bits, unsignedp);
return build_nonstandard_integer_type (bits, unsignedp);
}
@@ -13462,6 +13519,7 @@ verify_type_variant (const_tree t, tree
if ((TREE_CODE (t) == ENUMERAL_TYPE && COMPLETE_TYPE_P (t))
|| TREE_CODE (t) == INTEGER_TYPE
|| TREE_CODE (t) == BOOLEAN_TYPE
+ || TREE_CODE (t) == BITINT_TYPE
|| SCALAR_FLOAT_TYPE_P (t)
|| FIXED_POINT_TYPE_P (t))
{
@@ -14201,6 +14259,7 @@ verify_type (const_tree t)
}
else if (TREE_CODE (t) == INTEGER_TYPE
|| TREE_CODE (t) == BOOLEAN_TYPE
+ || TREE_CODE (t) == BITINT_TYPE
|| TREE_CODE (t) == OFFSET_TYPE
|| TREE_CODE (t) == REFERENCE_TYPE
|| TREE_CODE (t) == NULLPTR_TYPE
@@ -14260,6 +14319,7 @@ verify_type (const_tree t)
}
if (TREE_CODE (t) != INTEGER_TYPE
&& TREE_CODE (t) != BOOLEAN_TYPE
+ && TREE_CODE (t) != BITINT_TYPE
&& TREE_CODE (t) != OFFSET_TYPE
&& TREE_CODE (t) != REFERENCE_TYPE
&& TREE_CODE (t) != NULLPTR_TYPE
@@ -15035,6 +15095,7 @@ void
tree_cc_finalize (void)
{
clear_nonstandard_integer_type_cache ();
+ vec_free (bitint_type_cache);
}
#if CHECKING_P
@@ -1876,6 +1876,7 @@ type_to_class (tree type)
? string_type_class : array_type_class);
case LANG_TYPE: return lang_type_class;
case OPAQUE_TYPE: return opaque_type_class;
+ case BITINT_TYPE: return bitint_type_class;
default: return no_type_class;
}
}
@@ -9423,9 +9424,11 @@ fold_builtin_unordered_cmp (location_t l
/* Choose the wider of two real types. */
cmp_type = TYPE_PRECISION (type0) >= TYPE_PRECISION (type1)
? type0 : type1;
- else if (code0 == REAL_TYPE && code1 == INTEGER_TYPE)
+ else if (code0 == REAL_TYPE
+ && (code1 == INTEGER_TYPE || code1 == BITINT_TYPE))
cmp_type = type0;
- else if (code0 == INTEGER_TYPE && code1 == REAL_TYPE)
+ else if ((code0 == INTEGER_TYPE || code0 == BITINT_TYPE)
+ && code1 == REAL_TYPE)
cmp_type = type1;
arg0 = fold_convert_loc (loc, cmp_type, arg0);
@@ -5016,6 +5016,24 @@ store_one_arg (struct arg_data *arg, rtx
if (arg->pass_on_stack)
stack_arg_under_construction++;
+ if (TREE_CODE (pval) == INTEGER_CST
+ && TREE_CODE (TREE_TYPE (pval)) == BITINT_TYPE)
+ {
+ unsigned int prec = TYPE_PRECISION (TREE_TYPE (pval));
+ struct bitint_info info;
+ gcc_assert (targetm.c.bitint_type_info (prec, &info));
+ scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
+ unsigned int limb_prec = GET_MODE_PRECISION (limb_mode);
+ if (prec > limb_prec)
+ {
+ scalar_int_mode arith_mode
+ = (targetm.scalar_mode_supported_p (TImode)
+ ? TImode : DImode);
+ if (prec > GET_MODE_PRECISION (arith_mode))
+ pval = tree_output_constant_def (pval);
+ }
+ }
+
arg->value = expand_expr (pval,
(partial
|| TYPE_MODE (TREE_TYPE (pval)) != arg->mode)
@@ -3096,6 +3096,15 @@ expand_asm_stmt (gasm *stmt)
{
tree t = gimple_asm_input_op (stmt, i);
input_tvec[i] = TREE_VALUE (t);
+ if (TREE_CODE (input_tvec[i]) == INTEGER_CST
+ && TREE_CODE (TREE_TYPE (input_tvec[i])) == BITINT_TYPE)
+ {
+ scalar_int_mode arith_mode
+ = (targetm.scalar_mode_supported_p (TImode) ? TImode : DImode);
+ if (TYPE_PRECISION (TREE_TYPE (input_tvec[i]))
+ > GET_MODE_PRECISION (arith_mode))
+ input_tvec[i] = tree_output_constant_def (input_tvec[i]);
+ }
constraints[i + noutputs]
= TREE_STRING_POINTER (TREE_VALUE (TREE_PURPOSE (t)));
}
@@ -4524,6 +4533,10 @@ expand_debug_expr (tree exp)
/* Fall through. */
case INTEGER_CST:
+ if (TREE_CODE (TREE_TYPE (exp)) == BITINT_TYPE
+ && TYPE_MODE (TREE_TYPE (exp)) == BLKmode)
+ return NULL;
+ /* FALLTHRU */
case REAL_CST:
case FIXED_CST:
op0 = expand_expr (exp, NULL_RTX, mode, EXPAND_INITIALIZER);
@@ -2121,7 +2121,8 @@ classify_argument (machine_mode mode, co
return 0;
}
- if (type && AGGREGATE_TYPE_P (type))
+ if (type && (AGGREGATE_TYPE_P (type)
+ || (TREE_CODE (type) == BITINT_TYPE && words > 1)))
{
int i;
tree field;
@@ -2270,6 +2271,14 @@ classify_argument (machine_mode mode, co
}
break;
+ case BITINT_TYPE:
+ /* _BitInt(N) for N > 64 is passed as a structure containing
+ (N + 63) / 64 64-bit elements. */
+ if (words > 2)
+ return 0;
+ classes[0] = classes[1] = X86_64_INTEGER_CLASS;
+ return 2;
+
default:
gcc_unreachable ();
}
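The BITINT_TYPE case in classify_argument above reduces to simple arithmetic on eightbytes. A hedged sketch of that decision (an illustration of the rule, not GCC code; the helper name is hypothetical):

```c
#include <assert.h>

/* Sketch of the x86-64 rule above: _BitInt(N) with N > 64 acts like a
   struct of (N + 63) / 64 64-bit words; if it needs more than two such
   eightbytes, it is passed in memory (classify_argument returns 0),
   otherwise it occupies two integer-class eightbytes.  */
static int
bitint_num_reg_eightbytes (int n)
{
  int words = (n + 63) / 64;
  if (words > 2)
    return 0;  /* passed in memory */
  return 2;    /* both classified X86_64_INTEGER_CLASS */
}
```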
@@ -24799,6 +24808,26 @@ ix86_get_excess_precision (enum excess_p
return FLT_EVAL_METHOD_UNPREDICTABLE;
}
+/* Return true if _BitInt(N) is supported and fill details about it into
+ *INFO. */
+bool
+ix86_bitint_type_info (int n, struct bitint_info *info)
+{
+ if (!TARGET_64BIT)
+ return false;
+ if (n <= 8)
+ info->limb_mode = QImode;
+ else if (n <= 16)
+ info->limb_mode = HImode;
+ else if (n <= 32)
+ info->limb_mode = SImode;
+ else
+ info->limb_mode = DImode;
+ info->big_endian = false;
+ info->extended = false;
+ return true;
+}
+
/* Implement PUSH_ROUNDING. On 386, we have pushw instruction that
decrements by exactly 2 no matter what the position was, there is no pushb.
@@ -25403,6 +25432,8 @@ ix86_run_selftests (void)
#undef TARGET_C_EXCESS_PRECISION
#define TARGET_C_EXCESS_PRECISION ix86_get_excess_precision
+#undef TARGET_C_BITINT_TYPE_INFO
+#define TARGET_C_BITINT_TYPE_INFO ix86_bitint_type_info
#undef TARGET_PROMOTE_PROTOTYPES
#define TARGET_PROMOTE_PROTOTYPES hook_bool_const_tree_true
#undef TARGET_PUSH_ARGUMENT
@@ -77,6 +77,7 @@ convert_to_pointer_1 (tree type, tree ex
case INTEGER_TYPE:
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
+ case BITINT_TYPE:
{
/* If the input precision differs from the target pointer type
precision, first convert the input expression to an integer type of
@@ -316,6 +317,7 @@ convert_to_real_1 (tree type, tree expr,
case INTEGER_TYPE:
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
+ case BITINT_TYPE:
return build1 (FLOAT_EXPR, type, expr);
case FIXED_POINT_TYPE:
@@ -660,6 +662,7 @@ convert_to_integer_1 (tree type, tree ex
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
case OFFSET_TYPE:
+ case BITINT_TYPE:
/* If this is a logical operation, which just returns 0 or 1, we can
change the type of the expression. */
@@ -701,7 +704,9 @@ convert_to_integer_1 (tree type, tree ex
type corresponding to its mode, then do a nop conversion
to TYPE. */
else if (TREE_CODE (type) == ENUMERAL_TYPE
- || maybe_ne (outprec, GET_MODE_PRECISION (TYPE_MODE (type))))
+ || (TREE_CODE (type) != BITINT_TYPE
+ && maybe_ne (outprec,
+ GET_MODE_PRECISION (TYPE_MODE (type)))))
{
expr
= convert_to_integer_1 (lang_hooks.types.type_for_mode
@@ -1000,6 +1005,7 @@ convert_to_complex_1 (tree type, tree ex
case INTEGER_TYPE:
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
+ case BITINT_TYPE:
return build2 (COMPLEX_EXPR, type, convert (subtype, expr),
convert (subtype, integer_zero_node));
@@ -936,6 +936,8 @@ Return a value, with the same meaning as
@code{FLT_EVAL_METHOD} that describes which excess precision should be
applied.
+@hook TARGET_C_BITINT_TYPE_INFO
+
@hook TARGET_PROMOTE_FUNCTION_MODE
@defmac PARM_BOUNDARY
@@ -1020,6 +1020,11 @@ Return a value, with the same meaning as
@code{FLT_EVAL_METHOD} that describes which excess precision should be
applied.
+@deftypefn {Target Hook} bool TARGET_C_BITINT_TYPE_INFO (int @var{n}, struct bitint_info *@var{info})
+This target hook returns true if _BitInt(N) is supported and fills in details
+about it into *INFO.
+@end deftypefn
+
@deftypefn {Target Hook} machine_mode TARGET_PROMOTE_FUNCTION_MODE (const_tree @var{type}, machine_mode @var{mode}, int *@var{punsignedp}, const_tree @var{funtype}, int @var{for_return})
Like @code{PROMOTE_MODE}, but it is applied to outgoing function arguments or
function return values. The target hook should return the new mode
@@ -13298,6 +13298,14 @@ base_type_die (tree type, bool reverse)
encoding = DW_ATE_boolean;
break;
+ case BITINT_TYPE:
+ /* C23 _BitInt(N). */
+ if (TYPE_UNSIGNED (type))
+ encoding = DW_ATE_unsigned;
+ else
+ encoding = DW_ATE_signed;
+ break;
+
default:
/* No other TREE_CODEs are Dwarf fundamental types. */
gcc_unreachable ();
@@ -13308,6 +13316,8 @@ base_type_die (tree type, bool reverse)
add_AT_unsigned (base_type_result, DW_AT_byte_size,
int_size_in_bytes (type));
add_AT_unsigned (base_type_result, DW_AT_encoding, encoding);
+ if (TREE_CODE (type) == BITINT_TYPE)
+ add_AT_unsigned (base_type_result, DW_AT_bit_size, TYPE_PRECISION (type));
if (need_endianity_attribute_p (reverse))
add_AT_unsigned (base_type_result, DW_AT_endianity,
@@ -13392,6 +13402,7 @@ is_base_type (tree type)
case FIXED_POINT_TYPE:
case COMPLEX_TYPE:
case BOOLEAN_TYPE:
+ case BITINT_TYPE:
return true;
case VOID_TYPE:
@@ -13990,12 +14001,24 @@ modified_type_die (tree type, int cv_qua
name = DECL_NAME (name);
add_name_attribute (mod_type_die, IDENTIFIER_POINTER (name));
}
- /* This probably indicates a bug. */
else if (mod_type_die && mod_type_die->die_tag == DW_TAG_base_type)
{
- name = TYPE_IDENTIFIER (type);
- add_name_attribute (mod_type_die,
- name ? IDENTIFIER_POINTER (name) : "__unknown__");
+ if (TREE_CODE (type) == BITINT_TYPE)
+ {
+ char name_buf[sizeof ("unsigned _BitInt(2147483647)")];
+ snprintf (name_buf, sizeof (name_buf),
+ "%s_BitInt(%d)", TYPE_UNSIGNED (type) ? "unsigned " : "",
+ TYPE_PRECISION (type));
+ add_name_attribute (mod_type_die, name_buf);
+ }
+ else
+ {
+ /* This probably indicates a bug. */
+ name = TYPE_IDENTIFIER (type);
+ add_name_attribute (mod_type_die,
+ name
+ ? IDENTIFIER_POINTER (name) : "__unknown__");
+ }
}
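The _BitInt DIE name construction above is plain snprintf formatting into a buffer sized for the worst case. A self-contained sketch (bitint_die_name is a hypothetical helper for illustration, not a GCC function):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the DW_TAG_base_type naming above: the buffer is sized for
   the longest possible name, "unsigned _BitInt(2147483647)", i.e. an
   INT_MAX precision with the "unsigned " prefix.  */
static const char *
bitint_die_name (int precision, int is_unsigned)
{
  static char buf[sizeof ("unsigned _BitInt(2147483647)")];
  snprintf (buf, sizeof buf, "%s_BitInt(%d)",
            is_unsigned ? "unsigned " : "", precision);
  return buf;
}
```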
if (qualified_type && !reverse_base_type)
@@ -20523,6 +20546,22 @@ rtl_for_decl_init (tree init, tree type)
return NULL;
}
+ /* RTL can't deal with BLKmode INTEGER_CSTs. */
+ if (TREE_CODE (init) == INTEGER_CST
+ && TREE_CODE (TREE_TYPE (init)) == BITINT_TYPE
+ && TYPE_MODE (TREE_TYPE (init)) == BLKmode)
+ {
+ if (tree_fits_shwi_p (init))
+ {
+ bool uns = TYPE_UNSIGNED (TREE_TYPE (init));
+ tree type
+ = build_nonstandard_integer_type (HOST_BITS_PER_WIDE_INT, uns);
+ init = fold_convert (type, init);
+ }
+ else
+ return NULL;
+ }
+
rtl = expand_expr (init, NULL_RTX, VOIDmode, EXPAND_INITIALIZER);
/* If expand_expr returns a MEM, it wasn't immediate. */
@@ -26361,6 +26400,7 @@ gen_type_die_with_usage (tree type, dw_d
case FIXED_POINT_TYPE:
case COMPLEX_TYPE:
case BOOLEAN_TYPE:
+ case BITINT_TYPE:
/* No DIEs needed for fundamental types. */
break;
@@ -10828,6 +10828,8 @@ expand_expr_real_1 (tree exp, rtx target
ssa_name = exp;
decl_rtl = get_rtx_for_ssa_name (ssa_name);
exp = SSA_NAME_VAR (ssa_name);
+ if (!exp || VAR_P (exp))
+ reduce_bit_field = false;
goto expand_decl_rtl;
case VAR_DECL:
@@ -10961,6 +10963,13 @@ expand_expr_real_1 (tree exp, rtx target
temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
MEM_ALIGN (temp), NULL_RTX, NULL);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && reduce_bit_field
+ && mode != BLKmode
+ && modifier != EXPAND_MEMORY
+ && modifier != EXPAND_WRITE
+ && modifier != EXPAND_CONST_ADDRESS)
+ return reduce_to_bit_field_precision (temp, NULL_RTX, type);
return temp;
}
@@ -11007,9 +11016,23 @@ expand_expr_real_1 (tree exp, rtx target
temp = gen_lowpart_SUBREG (mode, decl_rtl);
SUBREG_PROMOTED_VAR_P (temp) = 1;
SUBREG_PROMOTED_SET (temp, unsignedp);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && reduce_bit_field
+ && mode != BLKmode
+ && modifier != EXPAND_MEMORY
+ && modifier != EXPAND_WRITE
+ && modifier != EXPAND_CONST_ADDRESS)
+ return reduce_to_bit_field_precision (temp, NULL_RTX, type);
return temp;
}
+ if (TREE_CODE (type) == BITINT_TYPE
+ && reduce_bit_field
+ && mode != BLKmode
+ && modifier != EXPAND_MEMORY
+ && modifier != EXPAND_WRITE
+ && modifier != EXPAND_CONST_ADDRESS)
+ return reduce_to_bit_field_precision (decl_rtl, NULL_RTX, type);
return decl_rtl;
case INTEGER_CST:
@@ -11192,6 +11215,13 @@ expand_expr_real_1 (tree exp, rtx target
&& align < GET_MODE_ALIGNMENT (mode))
temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
align, NULL_RTX, NULL);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && reduce_bit_field
+ && mode != BLKmode
+ && modifier != EXPAND_WRITE
+ && modifier != EXPAND_MEMORY
+ && modifier != EXPAND_CONST_ADDRESS)
+ return reduce_to_bit_field_precision (temp, NULL_RTX, type);
return temp;
}
@@ -11253,18 +11283,21 @@ expand_expr_real_1 (tree exp, rtx target
set_mem_addr_space (temp, as);
if (TREE_THIS_VOLATILE (exp))
MEM_VOLATILE_P (temp) = 1;
- if (modifier != EXPAND_WRITE
- && modifier != EXPAND_MEMORY
- && !inner_reference_p
+ if (modifier == EXPAND_WRITE || modifier == EXPAND_MEMORY)
+ return temp;
+ if (!inner_reference_p
&& mode != BLKmode
&& align < GET_MODE_ALIGNMENT (mode))
temp = expand_misaligned_mem_ref (temp, mode, unsignedp, align,
modifier == EXPAND_STACK_PARM
? NULL_RTX : target, alt_rtl);
- if (reverse
- && modifier != EXPAND_MEMORY
- && modifier != EXPAND_WRITE)
+ if (reverse)
temp = flip_storage_order (mode, temp);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && reduce_bit_field
+ && mode != BLKmode
+ && modifier != EXPAND_CONST_ADDRESS)
+ return reduce_to_bit_field_precision (temp, NULL_RTX, type);
return temp;
}
@@ -11817,6 +11850,14 @@ expand_expr_real_1 (tree exp, rtx target
&& modifier != EXPAND_WRITE)
op0 = flip_storage_order (mode1, op0);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && reduce_bit_field
+ && mode != BLKmode
+ && modifier != EXPAND_MEMORY
+ && modifier != EXPAND_WRITE
+ && modifier != EXPAND_CONST_ADDRESS)
+ op0 = reduce_to_bit_field_precision (op0, NULL_RTX, type);
+
if (mode == mode1 || mode1 == BLKmode || mode1 == tmode
|| modifier == EXPAND_CONST_ADDRESS
|| modifier == EXPAND_INITIALIZER)
@@ -2557,7 +2557,7 @@ fold_convert_loc (location_t loc, tree t
/* fall through */
case INTEGER_TYPE: case ENUMERAL_TYPE: case BOOLEAN_TYPE:
- case OFFSET_TYPE:
+ case OFFSET_TYPE: case BITINT_TYPE:
if (TREE_CODE (arg) == INTEGER_CST)
{
tem = fold_convert_const (NOP_EXPR, type, arg);
@@ -2597,7 +2597,7 @@ fold_convert_loc (location_t loc, tree t
switch (TREE_CODE (orig))
{
- case INTEGER_TYPE:
+ case INTEGER_TYPE: case BITINT_TYPE:
case BOOLEAN_TYPE: case ENUMERAL_TYPE:
case POINTER_TYPE: case REFERENCE_TYPE:
return fold_build1_loc (loc, FLOAT_EXPR, type, arg);
@@ -2632,6 +2632,7 @@ fold_convert_loc (location_t loc, tree t
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
case REAL_TYPE:
+ case BITINT_TYPE:
return fold_build1_loc (loc, FIXED_CONVERT_EXPR, type, arg);
case COMPLEX_TYPE:
@@ -2645,7 +2646,7 @@ fold_convert_loc (location_t loc, tree t
case COMPLEX_TYPE:
switch (TREE_CODE (orig))
{
- case INTEGER_TYPE:
+ case INTEGER_TYPE: case BITINT_TYPE:
case BOOLEAN_TYPE: case ENUMERAL_TYPE:
case POINTER_TYPE: case REFERENCE_TYPE:
case REAL_TYPE:
@@ -5324,6 +5325,8 @@ make_range_step (location_t loc, enum tr
equiv_type
= lang_hooks.types.type_for_mode (TYPE_MODE (arg0_type),
TYPE_SATURATING (arg0_type));
+ else if (TREE_CODE (arg0_type) == BITINT_TYPE)
+ equiv_type = arg0_type;
else
equiv_type
= lang_hooks.types.type_for_mode (TYPE_MODE (arg0_type), 1);
@@ -6850,10 +6853,19 @@ extract_muldiv_1 (tree t, tree c, enum t
{
tree type = TREE_TYPE (t);
enum tree_code tcode = TREE_CODE (t);
- tree ctype = (wide_type != 0
- && (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (wide_type))
- > GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type)))
- ? wide_type : type);
+ tree ctype = type;
+ if (wide_type)
+ {
+ if (TREE_CODE (type) == BITINT_TYPE
+ || TREE_CODE (wide_type) == BITINT_TYPE)
+ {
+ if (TYPE_PRECISION (wide_type) > TYPE_PRECISION (type))
+ ctype = wide_type;
+ }
+ else if (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (wide_type))
+ > GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type)))
+ ctype = wide_type;
+ }
tree t1, t2;
bool same_p = tcode == code;
tree op0 = NULL_TREE, op1 = NULL_TREE;
@@ -7714,7 +7726,29 @@ static int
native_encode_int (const_tree expr, unsigned char *ptr, int len, int off)
{
tree type = TREE_TYPE (expr);
- int total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
+ int total_bytes;
+ if (TREE_CODE (type) == BITINT_TYPE)
+ {
+ struct bitint_info info;
+ gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
+ &info));
+ scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
+ if (TYPE_PRECISION (type) > GET_MODE_PRECISION (limb_mode))
+ {
+ total_bytes = tree_to_uhwi (TYPE_SIZE_UNIT (type));
+ /* More work is needed when adding _BitInt support to PDP endian
+ if limb is smaller than word, or if _BitInt limb ordering doesn't
+ match target endianity here. */
+ gcc_checking_assert (info.big_endian == WORDS_BIG_ENDIAN
+ && (BYTES_BIG_ENDIAN == WORDS_BIG_ENDIAN
+ || (GET_MODE_SIZE (limb_mode)
+ >= UNITS_PER_WORD)));
+ }
+ else
+ total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
+ }
+ else
+ total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
int byte, offset, word, words;
unsigned char value;
@@ -8622,7 +8656,29 @@ native_encode_initializer (tree init, un
static tree
native_interpret_int (tree type, const unsigned char *ptr, int len)
{
- int total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
+ int total_bytes;
+ if (TREE_CODE (type) == BITINT_TYPE)
+ {
+ struct bitint_info info;
+ gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
+ &info));
+ scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
+ if (TYPE_PRECISION (type) > GET_MODE_PRECISION (limb_mode))
+ {
+ total_bytes = tree_to_uhwi (TYPE_SIZE_UNIT (type));
+ /* More work is needed when adding _BitInt support to PDP endian
+ if limb is smaller than word, or if _BitInt limb ordering doesn't
+ match target endianity here. */
+ gcc_checking_assert (info.big_endian == WORDS_BIG_ENDIAN
+ && (BYTES_BIG_ENDIAN == WORDS_BIG_ENDIAN
+ || (GET_MODE_SIZE (limb_mode)
+ >= UNITS_PER_WORD)));
+ }
+ else
+ total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
+ }
+ else
+ total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
if (total_bytes > len
|| total_bytes * BITS_PER_UNIT > HOST_BITS_PER_DOUBLE_INT)
@@ -8824,6 +8880,7 @@ native_interpret_expr (tree type, const
case POINTER_TYPE:
case REFERENCE_TYPE:
case OFFSET_TYPE:
+ case BITINT_TYPE:
return native_interpret_int (type, ptr, len);
case REAL_TYPE:
@@ -111,6 +111,15 @@ useless_type_conversion_p (tree outer_ty
&& TYPE_PRECISION (outer_type) != 1)
return false;
+ /* Preserve conversions to/from BITINT_TYPE. While we don't
+ need to care that much about such conversions within a function's
+ body, we need to prevent changing BITINT_TYPE to INTEGER_TYPE
+ of the same precision or vice versa when passed to functions,
+ especially for varargs. */
+ if ((TREE_CODE (inner_type) == BITINT_TYPE)
+ != (TREE_CODE (outer_type) == BITINT_TYPE))
+ return false;
+
/* We don't need to preserve changes in the types minimum or
maximum value in general as these do not generate code
unless the types precisions are different. */
@@ -1475,8 +1475,9 @@ gimple_fold_builtin_memset (gimple_stmt_
if (TREE_CODE (etype) == ARRAY_TYPE)
etype = TREE_TYPE (etype);
- if (!INTEGRAL_TYPE_P (etype)
- && !POINTER_TYPE_P (etype))
+ if ((!INTEGRAL_TYPE_P (etype)
+ && !POINTER_TYPE_P (etype))
+ || TREE_CODE (etype) == BITINT_TYPE)
return NULL_TREE;
if (! var_decl_component_p (var))
@@ -0,0 +1,5495 @@
+/* Lower _BitInt(N) operations to scalar operations.
+ Copyright (C) 2023 Free Software Foundation, Inc.
+ Contributed by Jakub Jelinek <jakub@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 3, or (at your option) any
+later version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "rtl.h"
+#include "tree.h"
+#include "gimple.h"
+#include "cfghooks.h"
+#include "tree-pass.h"
+#include "ssa.h"
+#include "fold-const.h"
+#include "gimplify.h"
+#include "gimple-iterator.h"
+#include "tree-cfg.h"
+#include "tree-dfa.h"
+#include "cfgloop.h"
+#include "cfganal.h"
+#include "target.h"
+#include "tree-ssa-live.h"
+#include "tree-ssa-coalesce.h"
+#include "domwalk.h"
+#include "memmodel.h"
+#include "optabs.h"
+#include "varasm.h"
+#include "gimple-range.h"
+#include "value-range.h"
+#include "langhooks.h"
+#include "gimplify-me.h"
+#include "diagnostic-core.h"
+#include "tree-eh.h"
+#include "tree-pretty-print.h"
+#include "alloc-pool.h"
+#include "tree-into-ssa.h"
+#include "tree-cfgcleanup.h"
+#include "tree-switch-conversion.h"
+#include "ubsan.h"
+#include "gimple-lower-bitint.h"
+
+/* Split BITINT_TYPE precisions into 4 categories. Small _BitInt, where
+ the target hook says it is a single limb, middle _BitInt which per ABI
+ is not, but there is some INTEGER_TYPE in which arithmetic can be
+ performed (operations on such _BitInt are lowered to casts to that
+ arithmetic type and cast back; e.g. on x86_64 the limb is DImode, but
+ the target supports TImode, so _BitInt(65) to _BitInt(128) are middle
+ ones), large _BitInt which should be handled by straight line code and
+ finally huge _BitInt which should be handled by loops over the limbs. */
+
+enum bitint_prec_kind {
+ bitint_prec_small,
+ bitint_prec_middle,
+ bitint_prec_large,
+ bitint_prec_huge
+};
+
+/* Caches to speed up bitint_precision_kind. */
+
+static int small_max_prec, mid_min_prec, large_min_prec, huge_min_prec;
+static int limb_prec;
+
+/* Categorize _BitInt(PREC) as small, middle, large or huge. */
+
+static bitint_prec_kind
+bitint_precision_kind (int prec)
+{
+ if (prec <= small_max_prec)
+ return bitint_prec_small;
+ if (huge_min_prec && prec >= huge_min_prec)
+ return bitint_prec_huge;
+ if (large_min_prec && prec >= large_min_prec)
+ return bitint_prec_large;
+ if (mid_min_prec && prec >= mid_min_prec)
+ return bitint_prec_middle;
+
+ struct bitint_info info;
+ gcc_assert (targetm.c.bitint_type_info (prec, &info));
+ scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
+ if (prec <= GET_MODE_PRECISION (limb_mode))
+ {
+ small_max_prec = prec;
+ return bitint_prec_small;
+ }
+ scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
+ ? TImode : DImode);
+ if (!large_min_prec
+ && GET_MODE_PRECISION (arith_mode) > GET_MODE_PRECISION (limb_mode))
+ large_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
+ if (!limb_prec)
+ limb_prec = GET_MODE_PRECISION (limb_mode);
+ if (!huge_min_prec)
+ {
+ if (4 * limb_prec >= GET_MODE_PRECISION (arith_mode))
+ huge_min_prec = 4 * limb_prec;
+ else
+ huge_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
+ }
+ if (prec <= GET_MODE_PRECISION (arith_mode))
+ {
+ if (!mid_min_prec || prec < mid_min_prec)
+ mid_min_prec = prec;
+ return bitint_prec_middle;
+ }
+ if (large_min_prec && prec <= large_min_prec)
+ return bitint_prec_large;
+ return bitint_prec_huge;
+}
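Under x86-64-like assumptions (64-bit limb, 128-bit widest arithmetic mode) the steady-state categorization computed above reduces to fixed bounds. The following sketch illustrates those bounds only; it is not the target-independent algorithm, which queries the target hook and caches the thresholds:

```c
#include <assert.h>

enum kind { SMALL, MIDDLE, LARGE, HUGE_K };

/* Illustrative bounds assuming a 64-bit limb and a 128-bit arithmetic
   mode: small fits a single limb, middle fits the arithmetic mode, huge
   starts at 4 limbs (since 4 * 64 >= 128), large is in between.  */
static enum kind
classify_prec (int prec)
{
  const int limb_prec = 64, arith_prec = 128;
  if (prec <= limb_prec)
    return SMALL;    /* single limb, handled directly */
  if (prec <= arith_prec)
    return MIDDLE;   /* lowered via casts to a TImode-like type */
  if (prec < 4 * limb_prec)
    return LARGE;    /* straight line code over the limbs */
  return HUGE_K;     /* loop over the limbs */
}
```

So on such a target, _BitInt(65) through _BitInt(128) are middle, _BitInt(129) through _BitInt(255) are large, and _BitInt(256) and wider are huge.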
+
+/* Same for a TYPE. */
+
+static bitint_prec_kind
+bitint_precision_kind (tree type)
+{
+ return bitint_precision_kind (TYPE_PRECISION (type));
+}
+
+/* Return minimum precision needed to describe INTEGER_CST
+ CST. All bits above that precision up to precision of
+ TREE_TYPE (CST) are cleared if EXT is set to 0, or set
+ if EXT is set to -1. */
+
+static unsigned
+bitint_min_cst_precision (tree cst, int &ext)
+{
+ ext = tree_int_cst_sgn (cst) < 0 ? -1 : 0;
+ wide_int w = wi::to_wide (cst);
+ unsigned min_prec = wi::min_precision (w, TYPE_SIGN (TREE_TYPE (cst)));
+ /* For signed values, we don't need to count the sign bit,
+ we'll use constant 0 or -1 for the upper bits. */
+ if (!TYPE_UNSIGNED (TREE_TYPE (cst)))
+ --min_prec;
+ else
+ {
+ /* For unsigned values, also try signed min_precision
+ in case the constant has lots of most significant bits set. */
+ unsigned min_prec2 = wi::min_precision (w, SIGNED) - 1;
+ if (min_prec2 < min_prec)
+ {
+ ext = -1;
+ return min_prec2;
+ }
+ }
+ return min_prec;
+}
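For constants that fit in a host word, the signed-case computation in bitint_min_cst_precision above reduces to counting bits after folding the sign away. A sketch under that assumption (the real code operates on wide_int and additionally tries the unsigned min_prec2 fallback for unsigned types):

```c
#include <assert.h>

/* EXT value the upper limbs get: 0 for non-negative, -1 for negative.  */
static int
cst_ext (long long v)
{
  return v < 0 ? -1 : 0;
}

/* Minimum number of low bits to materialize when all higher bits equal
   the sign extension; equivalent to wi::min_precision (w, SIGNED) - 1
   for values fitting in a host long long.  */
static unsigned
min_cst_precision (long long v)
{
  unsigned long long u = v < 0 ? ~(unsigned long long) v
                               : (unsigned long long) v;
  unsigned prec = 0;
  while (u)
    {
      u >>= 1;
      prec++;
    }
  return prec;
}
```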
+
+namespace {
+
+/* If OP is a middle _BitInt, cast it to the corresponding INTEGER_TYPE
+ (cached in TYPE) and return the result; otherwise return OP unchanged. */
+
+tree
+maybe_cast_middle_bitint (gimple_stmt_iterator *gsi, tree op, tree &type)
+{
+ if (op == NULL_TREE
+ || TREE_CODE (TREE_TYPE (op)) != BITINT_TYPE
+ || bitint_precision_kind (TREE_TYPE (op)) != bitint_prec_middle)
+ return op;
+
+ int prec = TYPE_PRECISION (TREE_TYPE (op));
+ int uns = TYPE_UNSIGNED (TREE_TYPE (op));
+ if (type == NULL_TREE
+ || TYPE_PRECISION (type) != prec
+ || TYPE_UNSIGNED (type) != uns)
+ type = build_nonstandard_integer_type (prec, uns);
+
+ if (TREE_CODE (op) != SSA_NAME)
+ {
+ tree nop = fold_convert (type, op);
+ if (is_gimple_val (nop))
+ return nop;
+ }
+
+ tree nop = make_ssa_name (type);
+ gimple *g = gimple_build_assign (nop, NOP_EXPR, op);
+ gsi_insert_before (gsi, g, GSI_SAME_STMT);
+ return nop;
+}
+
+/* Return true if STMT can be handled in a loop from least to most
+ significant limb together with its dependencies. */
+
+bool
+mergeable_op (gimple *stmt)
+{
+ if (!is_gimple_assign (stmt))
+ return false;
+ switch (gimple_assign_rhs_code (stmt))
+ {
+ case PLUS_EXPR:
+ case MINUS_EXPR:
+ case NEGATE_EXPR:
+ case BIT_AND_EXPR:
+ case BIT_IOR_EXPR:
+ case BIT_XOR_EXPR:
+ case BIT_NOT_EXPR:
+ case SSA_NAME:
+ case INTEGER_CST:
+ return true;
+ case LSHIFT_EXPR:
+ {
+ tree cnt = gimple_assign_rhs2 (stmt);
+ if (tree_fits_uhwi_p (cnt)
+ && tree_to_uhwi (cnt) < (unsigned HOST_WIDE_INT) limb_prec)
+ return true;
+ }
+ break;
+ CASE_CONVERT:
+ case VIEW_CONVERT_EXPR:
+ {
+ tree lhs_type = TREE_TYPE (gimple_assign_lhs (stmt));
+ tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
+ if (TREE_CODE (gimple_assign_rhs1 (stmt)) == SSA_NAME
+ && TREE_CODE (lhs_type) == BITINT_TYPE
+ && TREE_CODE (rhs_type) == BITINT_TYPE
+ && bitint_precision_kind (lhs_type) >= bitint_prec_large
+ && bitint_precision_kind (rhs_type) >= bitint_prec_large
+ && tree_int_cst_equal (TYPE_SIZE (lhs_type), TYPE_SIZE (rhs_type)))
+ {
+ if (TYPE_PRECISION (rhs_type) >= TYPE_PRECISION (lhs_type))
+ return true;
+ if ((unsigned) TYPE_PRECISION (lhs_type) % (2 * limb_prec) != 0)
+ return true;
+ if (bitint_precision_kind (lhs_type) == bitint_prec_large)
+ return true;
+ }
+ break;
+ }
+ default:
+ break;
+ }
+ return false;
+}
+
+/* Return non-zero if STMT is a .{ADD,SUB,MUL}_OVERFLOW call with a
+ _Complex large/huge _BitInt lhs which has at most two immediate uses,
+ at most one use in a REALPART_EXPR stmt in the same bb and exactly one
+ IMAGPART_EXPR use in the same bb with a single use which casts it to a
+ non-BITINT_TYPE integral type. If there is a REALPART_EXPR use,
+ return 2. Such cases (the most common uses of those builtins) can be
+ optimized by marking their lhs and the lhs of the IMAGPART_EXPR (and
+ maybe the lhs of the REALPART_EXPR) as not needing to be backed up by
+ a stack variable. For .UBSAN_CHECK_{ADD,SUB,MUL} return 3. */
+
+int
+optimizable_arith_overflow (gimple *stmt)
+{
+ bool is_ubsan = false;
+ if (!is_gimple_call (stmt) || !gimple_call_internal_p (stmt))
+ return 0;
+ switch (gimple_call_internal_fn (stmt))
+ {
+ case IFN_ADD_OVERFLOW:
+ case IFN_SUB_OVERFLOW:
+ case IFN_MUL_OVERFLOW:
+ break;
+ case IFN_UBSAN_CHECK_ADD:
+ case IFN_UBSAN_CHECK_SUB:
+ case IFN_UBSAN_CHECK_MUL:
+ is_ubsan = true;
+ break;
+ default:
+ return 0;
+ }
+ tree lhs = gimple_call_lhs (stmt);
+ if (!lhs)
+ return 0;
+ if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs))
+ return 0;
+ tree type = is_ubsan ? TREE_TYPE (lhs) : TREE_TYPE (TREE_TYPE (lhs));
+ if (TREE_CODE (type) != BITINT_TYPE
+ || bitint_precision_kind (type) < bitint_prec_large)
+ return 0;
+
+ if (is_ubsan)
+ {
+ use_operand_p use_p;
+ gimple *use_stmt;
+ if (!single_imm_use (lhs, &use_p, &use_stmt)
+ || gimple_bb (use_stmt) != gimple_bb (stmt)
+ || !gimple_store_p (use_stmt)
+ || !is_gimple_assign (use_stmt)
+ || gimple_has_volatile_ops (use_stmt)
+ || stmt_ends_bb_p (use_stmt))
+ return 0;
+ return 3;
+ }
+
+ imm_use_iterator ui;
+ use_operand_p use_p;
+ int seen = 0;
+ FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
+ {
+ gimple *g = USE_STMT (use_p);
+ if (is_gimple_debug (g))
+ continue;
+ if (!is_gimple_assign (g) || gimple_bb (g) != gimple_bb (stmt))
+ return 0;
+ if (gimple_assign_rhs_code (g) == REALPART_EXPR)
+ {
+ if ((seen & 1) != 0)
+ return 0;
+ seen |= 1;
+ }
+ else if (gimple_assign_rhs_code (g) == IMAGPART_EXPR)
+ {
+ if ((seen & 2) != 0)
+ return 0;
+ seen |= 2;
+
+ use_operand_p use2_p;
+ gimple *use_stmt;
+ tree lhs2 = gimple_assign_lhs (g);
+ if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs2))
+ return 0;
+ if (!single_imm_use (lhs2, &use2_p, &use_stmt)
+ || gimple_bb (use_stmt) != gimple_bb (stmt)
+ || !gimple_assign_cast_p (use_stmt))
+ return 0;
+
+ lhs2 = gimple_assign_lhs (use_stmt);
+ if (!INTEGRAL_TYPE_P (TREE_TYPE (lhs2))
+ || TREE_CODE (TREE_TYPE (lhs2)) == BITINT_TYPE)
+ return 0;
+ }
+ else
+ return 0;
+ }
+ if ((seen & 2) == 0)
+ return 0;
+ return seen == 3 ? 2 : 1;
+}
+
+/* If STMT is some kind of comparison (GIMPLE_COND, comparison
+ assignment or COND_EXPR) comparing large/huge _BitInt types,
+ return the comparison code and if non-NULL fill in the comparison
+ operands to *POP1 and *POP2. */
+
+tree_code
+comparison_op (gimple *stmt, tree *pop1, tree *pop2)
+{
+ tree op1 = NULL_TREE, op2 = NULL_TREE;
+ tree_code code = ERROR_MARK;
+ if (gimple_code (stmt) == GIMPLE_COND)
+ {
+ code = gimple_cond_code (stmt);
+ op1 = gimple_cond_lhs (stmt);
+ op2 = gimple_cond_rhs (stmt);
+ }
+ else if (is_gimple_assign (stmt))
+ {
+ code = gimple_assign_rhs_code (stmt);
+ op1 = gimple_assign_rhs1 (stmt);
+ if (TREE_CODE_CLASS (code) == tcc_comparison
+ || TREE_CODE_CLASS (code) == tcc_binary)
+ op2 = gimple_assign_rhs2 (stmt);
+ switch (code)
+ {
+ default:
+ break;
+ case COND_EXPR:
+ tree cond = gimple_assign_rhs1 (stmt);
+ code = TREE_CODE (cond);
+ op1 = TREE_OPERAND (cond, 0);
+ op2 = TREE_OPERAND (cond, 1);
+ break;
+ }
+ }
+ if (TREE_CODE_CLASS (code) != tcc_comparison)
+ return ERROR_MARK;
+ tree type = TREE_TYPE (op1);
+ if (TREE_CODE (type) != BITINT_TYPE
+ || bitint_precision_kind (type) < bitint_prec_large)
+ return ERROR_MARK;
+ if (pop1)
+ {
+ *pop1 = op1;
+ *pop2 = op2;
+ }
+ return code;
+}
+
+/* Class used during large/huge _BitInt lowering containing all the
+ state for the methods. */
+
+struct bitint_large_huge
+{
+ bitint_large_huge ()
+ : m_names (NULL), m_loads (NULL), m_preserved (NULL),
+ m_single_use_names (NULL), m_map (NULL), m_vars (NULL),
+ m_limb_type (NULL_TREE), m_data (vNULL) {}
+
+ ~bitint_large_huge ();
+
+ void insert_before (gimple *);
+ tree limb_access_type (tree, tree);
+ tree limb_access (tree, tree, tree, bool);
+ tree handle_operand (tree, tree);
+ tree prepare_data_in_out (tree, tree, tree *);
+ tree add_cast (tree, tree);
+ tree handle_plus_minus (tree_code, tree, tree, tree);
+ tree handle_lshift (tree, tree, tree);
+ tree handle_cast (tree, tree, tree);
+ tree handle_stmt (gimple *, tree);
+ tree handle_operand_addr (tree, gimple *, int *, int *);
+ tree create_loop (tree, tree *);
+ tree lower_mergeable_stmt (gimple *, tree_code &, tree, tree);
+ tree lower_comparison_stmt (gimple *, tree_code &, tree, tree);
+ void lower_shift_stmt (tree, gimple *);
+ void lower_muldiv_stmt (tree, gimple *);
+ void lower_float_conv_stmt (tree, gimple *);
+ tree arith_overflow_extract_bits (unsigned int, unsigned int, tree,
+ unsigned int, bool);
+ void finish_arith_overflow (tree, tree, tree, tree, tree, tree, gimple *,
+ tree_code);
+ void lower_addsub_overflow (tree, gimple *);
+ void lower_mul_overflow (tree, gimple *);
+ void lower_cplxpart_stmt (tree, gimple *);
+ void lower_complexexpr_stmt (gimple *);
+ void lower_call (tree, gimple *);
+ void lower_asm (gimple *);
+ void lower_stmt (gimple *);
+
+ /* Bitmap of large/huge _BitInt SSA_NAMEs except those that can be
+ merged with their uses. */
+ bitmap m_names;
+ /* Subset of those for lhs of load statements. These will be
+ cleared in m_names if the loads will be mergeable with all
+ their uses. */
+ bitmap m_loads;
+ /* Bitmap of large/huge _BitInt SSA_NAMEs that should survive
+ to later passes (arguments or return values of calls). */
+ bitmap m_preserved;
+ /* Subset of m_names which have a single use. As the lowering
+ can replace various original statements with their lowered
+ form even before it is done iterating over all basic blocks,
+ testing has_single_use for the purpose of emitting clobbers
+ doesn't work properly. */
+ bitmap m_single_use_names;
+ /* Used for coalescing/partitioning of large/huge _BitInt SSA_NAMEs
+ set in m_names. */
+ var_map m_map;
+ /* Mapping of the partitions to corresponding decls. */
+ tree *m_vars;
+ /* Unsigned integer type with limb precision. */
+ tree m_limb_type;
+ /* Its TYPE_SIZE_UNIT. */
+ unsigned HOST_WIDE_INT m_limb_size;
+ /* Location of a gimple stmt which is being currently lowered. */
+ location_t m_loc;
+ /* Current stmt iterator where code is being lowered currently. */
+ gimple_stmt_iterator m_gsi;
+ /* Statement after which any clobbers should be added if non-NULL. */
+ gimple *m_after_stmt;
+ /* Set when creating loops to the loop header bb and its preheader. */
+ basic_block m_bb, m_preheader_bb;
+ /* Stmt iterator after which initialization statements should be emitted. */
+ gimple_stmt_iterator m_init_gsi;
+ /* Decl into which a mergeable statement stores result. */
+ tree m_lhs;
+ /* handle_operand/handle_stmt can be invoked in various ways.
+
+ lower_mergeable_stmt for large _BitInt calls those with constant
+ idx only, expanding to straight line code; for huge _BitInt it
+ emits a loop from the least significant limb upwards, where each
+ loop iteration handles 2 limbs, plus there can be up to one full
+ limb and one partial limb processed after the loop, where
+ handle_operand and/or handle_stmt are called with constant idx.
+ m_upwards_2limb is set to a non-zero value for this case, zero
+ otherwise.
+
+ Another way is used by lower_comparison_stmt, which walks limbs
+ from most significant to least significant, partial limb if any
+ processed first with constant idx and then loop processing a single
+ limb per iteration with non-constant idx.
+
+ Another way is used in lower_shift_stmt, where for LSHIFT_EXPR
+ destination limbs are processed from most significant to least
+ significant or for RSHIFT_EXPR the other way around, in loops or
+ straight line code, but idx usually is non-constant (so from
+ handle_operand/handle_stmt POV random access). The LSHIFT_EXPR
+ handling there can access even partial limbs using non-constant
+ idx (then m_var_msb should be true); for all the other cases,
+ including lower_mergeable_stmt/lower_comparison_stmt, that is
+ not the case and so m_var_msb should be false.
+
+ m_first should be set the first time handle_operand/handle_stmt
+ is called and clear when it is called for some other limb with
+ the same argument. If the lowering of an operand (e.g. INTEGER_CST)
+ or statement (e.g. +/-/<< with < limb_prec constant) needs some
+ state between the different calls, when m_first is true it should
+ push some trees to m_data vector and also make sure m_data_cnt is
+ incremented by how many trees were pushed, and when m_first is
+ false, it can use the m_data[m_data_cnt] etc. data or update them,
+ just needs to bump m_data_cnt by the same amount as when it was
+ called with m_first set. The toplevel calls to
+ handle_operand/handle_stmt should set m_data_cnt to 0 and truncate
+ m_data vector when setting m_first to true. */
+ bool m_first;
+ bool m_var_msb;
+ unsigned m_upwards_2limb;
+ vec<tree> m_data;
+ unsigned int m_data_cnt;
+};
+
+bitint_large_huge::~bitint_large_huge ()
+{
+ BITMAP_FREE (m_names);
+ BITMAP_FREE (m_loads);
+ BITMAP_FREE (m_preserved);
+ BITMAP_FREE (m_single_use_names);
+ if (m_map)
+ delete_var_map (m_map);
+ XDELETEVEC (m_vars);
+ m_data.release ();
+}
+
+/* Insert gimple statement G before current location
+ and set its gimple_location. */
+
+void
+bitint_large_huge::insert_before (gimple *g)
+{
+ gimple_set_location (g, m_loc);
+ gsi_insert_before (&m_gsi, g, GSI_SAME_STMT);
+}
+
+/* Return type for accessing limb IDX of BITINT_TYPE TYPE.
+ This is normally m_limb_type, except for a partial most
+ significant limb if any. */
+
+tree
+bitint_large_huge::limb_access_type (tree type, tree idx)
+{
+ if (type == NULL_TREE)
+ return m_limb_type;
+ unsigned HOST_WIDE_INT i = tree_to_uhwi (idx);
+ unsigned int prec = TYPE_PRECISION (type);
+ gcc_assert (i * limb_prec < prec);
+ if ((i + 1) * limb_prec <= prec)
+ return m_limb_type;
+ else
+ return build_nonstandard_integer_type (prec % limb_prec,
+ TYPE_UNSIGNED (type));
+}
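For illustration (not part of the patch): with 64-bit limbs, the topmost limb of e.g. _BitInt(135) is partial and its access type has only 135 % 64 = 7 bits. A minimal sketch of the width computation, with hypothetical names:

```c
#include <assert.h>

/* Width in bits of limb IDX of a _BitInt with PREC bits made of
   LIMB_PREC-bit limbs; the most significant limb may be partial.
   Hypothetical helper mirroring limb_access_type above.  */
static unsigned
limb_width (unsigned prec, unsigned limb_prec, unsigned idx)
{
  assert (idx * limb_prec < prec);
  if ((idx + 1) * limb_prec <= prec)
    return limb_prec;		/* full limb */
  return prec % limb_prec;	/* partial most significant limb */
}
```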
+
+/* Return a tree expressing how to access limb IDX of VAR corresponding to
+ TYPE. If WRITE_P is true, it will be a store, otherwise a read. */
+
+tree
+bitint_large_huge::limb_access (tree type, tree var, tree idx, bool write_p)
+{
+ tree atype = (tree_fits_uhwi_p (idx)
+ ? limb_access_type (type, idx) : m_limb_type);
+ tree ret;
+ if (DECL_P (var) && tree_fits_uhwi_p (idx))
+ {
+ tree ptype = build_pointer_type (strip_array_types (TREE_TYPE (var)));
+ unsigned HOST_WIDE_INT off = tree_to_uhwi (idx) * m_limb_size;
+ ret = build2 (MEM_REF, m_limb_type,
+ build_fold_addr_expr (var),
+ build_int_cst (ptype, off));
+ if (TREE_THIS_VOLATILE (var) || TREE_THIS_VOLATILE (TREE_TYPE (var)))
+ TREE_THIS_VOLATILE (ret) = 1;
+ }
+ else if (TREE_CODE (var) == MEM_REF && tree_fits_uhwi_p (idx))
+ {
+ ret
+ = build2 (MEM_REF, m_limb_type, TREE_OPERAND (var, 0),
+ size_binop (PLUS_EXPR, TREE_OPERAND (var, 1),
+ build_int_cst (TREE_TYPE (TREE_OPERAND (var, 1)),
+ tree_to_uhwi (idx)
+ * m_limb_size)));
+ if (TREE_THIS_VOLATILE (var))
+ TREE_THIS_VOLATILE (ret) = 1;
+ }
+ else
+ {
+ var = unshare_expr (var);
+ if (TREE_CODE (TREE_TYPE (var)) != ARRAY_TYPE
+ || !useless_type_conversion_p (m_limb_type,
+ TREE_TYPE (TREE_TYPE (var))))
+ {
+ unsigned HOST_WIDE_INT nelts
+ = tree_to_uhwi (TYPE_SIZE (type)) / limb_prec;
+ tree atype = build_array_type_nelts (m_limb_type, nelts);
+ var = build1 (VIEW_CONVERT_EXPR, atype, var);
+ }
+ ret = build4 (ARRAY_REF, m_limb_type, var, idx, NULL_TREE, NULL_TREE);
+ }
+ if (!write_p && !useless_type_conversion_p (atype, m_limb_type))
+ {
+ gimple *g = gimple_build_assign (make_ssa_name (m_limb_type), ret);
+ insert_before (g);
+ ret = gimple_assign_lhs (g);
+ ret = build1 (NOP_EXPR, atype, ret);
+ }
+ return ret;
+}
+
+/* Emit code to access limb IDX from OP. */
+
+tree
+bitint_large_huge::handle_operand (tree op, tree idx)
+{
+ switch (TREE_CODE (op))
+ {
+ case SSA_NAME:
+ if (m_names == NULL
+ || !bitmap_bit_p (m_names, SSA_NAME_VERSION (op)))
+ {
+ if (gimple_code (SSA_NAME_DEF_STMT (op)) == GIMPLE_NOP)
+ {
+ if (m_first)
+ {
+ tree v = create_tmp_var (m_limb_type);
+ if (SSA_NAME_VAR (op) && VAR_P (SSA_NAME_VAR (op)))
+ {
+ DECL_NAME (v) = DECL_NAME (SSA_NAME_VAR (op));
+ DECL_SOURCE_LOCATION (v)
+ = DECL_SOURCE_LOCATION (SSA_NAME_VAR (op));
+ }
+ v = get_or_create_ssa_default_def (cfun, v);
+ m_data.safe_push (v);
+ }
+ tree ret = m_data[m_data_cnt];
+ m_data_cnt++;
+ if (tree_fits_uhwi_p (idx))
+ {
+ tree type = limb_access_type (TREE_TYPE (op), idx);
+ ret = add_cast (type, ret);
+ }
+ return ret;
+ }
+ location_t loc_save = m_loc;
+ m_loc = gimple_location (SSA_NAME_DEF_STMT (op));
+ tree ret = handle_stmt (SSA_NAME_DEF_STMT (op), idx);
+ m_loc = loc_save;
+ return ret;
+ }
+ int p;
+ gimple *g;
+ tree t;
+ p = var_to_partition (m_map, op);
+ gcc_assert (m_vars[p] != NULL_TREE);
+ t = limb_access (TREE_TYPE (op), m_vars[p], idx, false);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (t)), t);
+ insert_before (g);
+ t = gimple_assign_lhs (g);
+ if (m_first
+ && m_single_use_names
+ && m_vars[p] != m_lhs
+ && m_after_stmt
+ && bitmap_bit_p (m_single_use_names, SSA_NAME_VERSION (op)))
+ {
+ tree clobber = build_clobber (TREE_TYPE (m_vars[p]), CLOBBER_EOL);
+ g = gimple_build_assign (m_vars[p], clobber);
+ gimple_stmt_iterator gsi = gsi_for_stmt (m_after_stmt);
+ gsi_insert_after (&gsi, g, GSI_SAME_STMT);
+ }
+ return t;
+ case INTEGER_CST:
+ if (tree_fits_uhwi_p (idx))
+ {
+ tree c, type = limb_access_type (TREE_TYPE (op), idx);
+ unsigned HOST_WIDE_INT i = tree_to_uhwi (idx);
+ if (m_first)
+ {
+ m_data.safe_push (NULL_TREE);
+ m_data.safe_push (NULL_TREE);
+ }
+ if (limb_prec != HOST_BITS_PER_WIDE_INT)
+ {
+ wide_int w = wi::rshift (wi::to_wide (op), i * limb_prec,
+ TYPE_SIGN (TREE_TYPE (op)));
+ c = wide_int_to_tree (type,
+ wide_int::from (w, TYPE_PRECISION (type),
+ UNSIGNED));
+ }
+ else if (i >= TREE_INT_CST_EXT_NUNITS (op))
+ c = build_int_cst (type,
+ tree_int_cst_sgn (op) < 0 ? -1 : 0);
+ else
+ c = build_int_cst (type, TREE_INT_CST_ELT (op, i));
+ m_data_cnt += 2;
+ return c;
+ }
+ if (m_first
+ || (m_data[m_data_cnt] == NULL_TREE
+ && m_data[m_data_cnt + 1] == NULL_TREE))
+ {
+ unsigned int prec = TYPE_PRECISION (TREE_TYPE (op));
+ unsigned int rem = prec % (2 * limb_prec);
+ int ext;
+ unsigned min_prec = bitint_min_cst_precision (op, ext);
+ if (m_first)
+ {
+ m_data.safe_push (NULL_TREE);
+ m_data.safe_push (NULL_TREE);
+ }
+ if (integer_zerop (op))
+ {
+ tree c = build_zero_cst (m_limb_type);
+ m_data[m_data_cnt] = c;
+ m_data[m_data_cnt + 1] = c;
+ }
+ else if (integer_all_onesp (op))
+ {
+ tree c = build_all_ones_cst (m_limb_type);
+ m_data[m_data_cnt] = c;
+ m_data[m_data_cnt + 1] = c;
+ }
+ else if (m_upwards_2limb && min_prec <= (unsigned) limb_prec)
+ {
+ /* Single limb constant. Use a PHI with that limb from
+ the preheader edge and a 0 or -1 constant from the other
+ edge; the same constant is also used for the second limb
+ in the loop. */
+ tree out;
+ gcc_assert (m_first);
+ m_data.pop ();
+ m_data.pop ();
+ prepare_data_in_out (fold_convert (m_limb_type, op), idx, &out);
+ g = gimple_build_assign (m_data[m_data_cnt + 1],
+ build_int_cst (m_limb_type, ext));
+ insert_before (g);
+ m_data[m_data_cnt + 1] = gimple_assign_rhs1 (g);
+ }
+ else if (min_prec > prec - rem - 2 * limb_prec)
+ {
+ /* Constant which has enough significant bits that it isn't
+ worth trying to save .rodata space by extending from a
+ smaller number. */
+ tree type;
+ if (m_var_msb)
+ type = TREE_TYPE (op);
+ else
+ /* If we have a guarantee the most significant partial limb
+ (if any) will be only accessed through handle_operand
+ with INTEGER_CST idx, we don't need to include the partial
+ limb in .rodata. */
+ type = build_bitint_type (prec - rem, 1);
+ tree c = tree_output_constant_def (fold_convert (type, op));
+ m_data[m_data_cnt] = c;
+ m_data[m_data_cnt + 1] = NULL_TREE;
+ }
+ else if (m_upwards_2limb)
+ {
+ /* Constant with a smaller number of bits. Trade conditional
+ code for .rodata space by extending from a smaller number. */
+ min_prec = CEIL (min_prec, 2 * limb_prec) * (2 * limb_prec);
+ tree type = build_bitint_type (min_prec, 1);
+ tree c = tree_output_constant_def (fold_convert (type, op));
+ tree idx2 = make_ssa_name (sizetype);
+ g = gimple_build_assign (idx2, PLUS_EXPR, idx, size_one_node);
+ insert_before (g);
+ g = gimple_build_cond (GE_EXPR, idx,
+ size_int (min_prec / limb_prec),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
+ e3->probability = profile_probability::likely ();
+ if (min_prec >= (prec - rem) / 2)
+ e3->probability = e3->probability.invert ();
+ e1->flags = EDGE_FALSE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ tree c1 = limb_access (TREE_TYPE (op), c, idx, false);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (c1)), c1);
+ insert_before (g);
+ c1 = gimple_assign_lhs (g);
+ tree c2 = limb_access (TREE_TYPE (op), c, idx2, false);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (c2)), c2);
+ insert_before (g);
+ c2 = gimple_assign_lhs (g);
+ tree c3 = build_int_cst (m_limb_type, ext);
+ m_gsi = gsi_after_labels (e2->dest);
+ m_data[m_data_cnt] = make_ssa_name (m_limb_type);
+ m_data[m_data_cnt + 1] = make_ssa_name (m_limb_type);
+ gphi *phi = create_phi_node (m_data[m_data_cnt], e2->dest);
+ add_phi_arg (phi, c1, e2, UNKNOWN_LOCATION);
+ add_phi_arg (phi, c3, e3, UNKNOWN_LOCATION);
+ phi = create_phi_node (m_data[m_data_cnt + 1], e2->dest);
+ add_phi_arg (phi, c2, e2, UNKNOWN_LOCATION);
+ add_phi_arg (phi, c3, e3, UNKNOWN_LOCATION);
+ }
+ else
+ {
+ /* Constant with a smaller number of bits. Trade conditional
+ code for .rodata space by extending from a smaller number.
+ Version for loops with random access to the limbs or
+ downwards loops. */
+ min_prec = CEIL (min_prec, limb_prec) * limb_prec;
+ tree c;
+ if (min_prec <= (unsigned) limb_prec)
+ c = fold_convert (m_limb_type, op);
+ else
+ {
+ tree type = build_bitint_type (min_prec, 1);
+ c = tree_output_constant_def (fold_convert (type, op));
+ }
+ m_data[m_data_cnt] = c;
+ m_data[m_data_cnt + 1] = integer_type_node;
+ }
+ t = m_data[m_data_cnt];
+ if (m_data[m_data_cnt + 1] == NULL_TREE)
+ {
+ t = limb_access (TREE_TYPE (op), t, idx, false);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (t)), t);
+ insert_before (g);
+ t = gimple_assign_lhs (g);
+ }
+ }
+ else if (m_data[m_data_cnt + 1] == NULL_TREE)
+ {
+ t = limb_access (TREE_TYPE (op), m_data[m_data_cnt], idx, false);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (t)), t);
+ insert_before (g);
+ t = gimple_assign_lhs (g);
+ }
+ else
+ t = m_data[m_data_cnt + 1];
+ if (m_data[m_data_cnt + 1] == integer_type_node)
+ {
+ unsigned int prec = TYPE_PRECISION (TREE_TYPE (op));
+ unsigned rem = prec % (2 * limb_prec);
+ int ext = tree_int_cst_sgn (op) < 0 ? -1 : 0;
+ tree c = m_data[m_data_cnt];
+ unsigned min_prec = TYPE_PRECISION (TREE_TYPE (c));
+ g = gimple_build_cond (GE_EXPR, idx,
+ size_int (min_prec / limb_prec),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
+ e3->probability = profile_probability::likely ();
+ if (min_prec >= (prec - rem) / 2)
+ e3->probability = e3->probability.invert ();
+ e1->flags = EDGE_FALSE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ if (min_prec > (unsigned) limb_prec)
+ {
+ c = limb_access (TREE_TYPE (op), c, idx, false);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (c)), c);
+ insert_before (g);
+ c = gimple_assign_lhs (g);
+ }
+ tree c2 = build_int_cst (m_limb_type, ext);
+ m_gsi = gsi_after_labels (e2->dest);
+ t = make_ssa_name (m_limb_type);
+ gphi *phi = create_phi_node (t, e2->dest);
+ add_phi_arg (phi, c, e2, UNKNOWN_LOCATION);
+ add_phi_arg (phi, c2, e3, UNKNOWN_LOCATION);
+ }
+ m_data_cnt += 2;
+ return t;
+ default:
+ gcc_unreachable ();
+ }
+}
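The INTEGER_CST case above extracts limb IDX by shifting the constant right by idx * limb_prec with the sign of its type, so limbs beyond the significant ones become 0 or all-ones. A hedged C sketch of the same computation for a signed 128-bit constant and 64-bit limbs (hypothetical helper, not part of the patch):

```c
#include <stdint.h>

/* Limb IDX (0 or 1) of a signed 128-bit constant, as computed by the
   wi::rshift path above: the arithmetic right shift sign-extends, so
   limbs past the significant ones are 0 for non-negative constants
   and all-ones for negative ones.  */
static uint64_t
const_limb (__int128 cst, unsigned idx)
{
  return (uint64_t) (cst >> (idx * 64));
}
```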
+
+/* Helper method; if inside a loop and m_first is set, add a PHI node
+ with VAL from the preheader edge. Keep state in a pair of m_data
+ elements. */
+
+tree
+bitint_large_huge::prepare_data_in_out (tree val, tree idx, tree *data_out)
+{
+ if (!m_first)
+ {
+ *data_out = tree_fits_uhwi_p (idx) ? NULL_TREE : m_data[m_data_cnt + 1];
+ return m_data[m_data_cnt];
+ }
+
+ *data_out = NULL_TREE;
+ if (tree_fits_uhwi_p (idx))
+ {
+ m_data.safe_push (val);
+ m_data.safe_push (NULL_TREE);
+ return val;
+ }
+
+ tree in = make_ssa_name (TREE_TYPE (val));
+ gphi *phi = create_phi_node (in, m_bb);
+ edge e1 = find_edge (m_preheader_bb, m_bb);
+ edge e2 = EDGE_PRED (m_bb, 0);
+ if (e1 == e2)
+ e2 = EDGE_PRED (m_bb, 1);
+ add_phi_arg (phi, val, e1, UNKNOWN_LOCATION);
+ tree out = make_ssa_name (TREE_TYPE (val));
+ add_phi_arg (phi, out, e2, UNKNOWN_LOCATION);
+ m_data.safe_push (in);
+ m_data.safe_push (out);
+ return in;
+}
+
+/* Return VAL cast to TYPE. If VAL is INTEGER_CST, just
+ convert it without emitting any code, otherwise emit
+ the conversion statement before the current location. */
+
+tree
+bitint_large_huge::add_cast (tree type, tree val)
+{
+ if (TREE_CODE (val) == INTEGER_CST)
+ return fold_convert (type, val);
+
+ tree lhs = make_ssa_name (type);
+ gimple *g = gimple_build_assign (lhs, NOP_EXPR, val);
+ insert_before (g);
+ return lhs;
+}
+
+/* Helper of handle_stmt method, handle PLUS_EXPR or MINUS_EXPR. */
+
+tree
+bitint_large_huge::handle_plus_minus (tree_code code, tree rhs1, tree rhs2,
+ tree idx)
+{
+ tree lhs, data_out, ctype;
+ tree rhs1_type = TREE_TYPE (rhs1);
+ gimple *g;
+ tree data_in = prepare_data_in_out (build_zero_cst (m_limb_type), idx,
+ &data_out);
+
+ if (optab_handler (code == PLUS_EXPR ? uaddc5_optab : usubc5_optab,
+ TYPE_MODE (m_limb_type)) != CODE_FOR_nothing)
+ {
+ ctype = build_complex_type (m_limb_type);
+ if (!types_compatible_p (rhs1_type, m_limb_type))
+ {
+ if (!TYPE_UNSIGNED (rhs1_type))
+ {
+ tree type = unsigned_type_for (rhs1_type);
+ rhs1 = add_cast (type, rhs1);
+ rhs2 = add_cast (type, rhs2);
+ }
+ rhs1 = add_cast (m_limb_type, rhs1);
+ rhs2 = add_cast (m_limb_type, rhs2);
+ }
+ lhs = make_ssa_name (ctype);
+ g = gimple_build_call_internal (code == PLUS_EXPR
+ ? IFN_UADDC : IFN_USUBC,
+ 3, rhs1, rhs2, data_in);
+ gimple_call_set_lhs (g, lhs);
+ insert_before (g);
+ if (data_out == NULL_TREE)
+ data_out = make_ssa_name (m_limb_type);
+ g = gimple_build_assign (data_out, IMAGPART_EXPR,
+ build1 (IMAGPART_EXPR, m_limb_type, lhs));
+ insert_before (g);
+ }
+ else if (types_compatible_p (rhs1_type, m_limb_type))
+ {
+ ctype = build_complex_type (m_limb_type);
+ lhs = make_ssa_name (ctype);
+ g = gimple_build_call_internal (code == PLUS_EXPR
+ ? IFN_ADD_OVERFLOW : IFN_SUB_OVERFLOW,
+ 2, rhs1, rhs2);
+ gimple_call_set_lhs (g, lhs);
+ insert_before (g);
+ if (data_out == NULL_TREE)
+ data_out = make_ssa_name (m_limb_type);
+ if (!integer_zerop (data_in))
+ {
+ rhs1 = make_ssa_name (m_limb_type);
+ g = gimple_build_assign (rhs1, REALPART_EXPR,
+ build1 (REALPART_EXPR, m_limb_type, lhs));
+ insert_before (g);
+ rhs2 = make_ssa_name (m_limb_type);
+ g = gimple_build_assign (rhs2, IMAGPART_EXPR,
+ build1 (IMAGPART_EXPR, m_limb_type, lhs));
+ insert_before (g);
+ lhs = make_ssa_name (ctype);
+ g = gimple_build_call_internal (code == PLUS_EXPR
+ ? IFN_ADD_OVERFLOW
+ : IFN_SUB_OVERFLOW,
+ 2, rhs1, data_in);
+ gimple_call_set_lhs (g, lhs);
+ insert_before (g);
+ data_in = make_ssa_name (m_limb_type);
+ g = gimple_build_assign (data_in, IMAGPART_EXPR,
+ build1 (IMAGPART_EXPR, m_limb_type, lhs));
+ insert_before (g);
+ g = gimple_build_assign (data_out, PLUS_EXPR, rhs2, data_in);
+ insert_before (g);
+ }
+ else
+ {
+ g = gimple_build_assign (data_out, IMAGPART_EXPR,
+ build1 (IMAGPART_EXPR, m_limb_type, lhs));
+ insert_before (g);
+ }
+ }
+ else
+ {
+ tree in = add_cast (rhs1_type, data_in);
+ lhs = make_ssa_name (rhs1_type);
+ g = gimple_build_assign (lhs, code, rhs1, rhs2);
+ insert_before (g);
+ rhs1 = make_ssa_name (rhs1_type);
+ g = gimple_build_assign (rhs1, code, lhs, in);
+ insert_before (g);
+ m_data[m_data_cnt] = NULL_TREE;
+ m_data_cnt += 2;
+ return rhs1;
+ }
+ rhs1 = make_ssa_name (m_limb_type);
+ g = gimple_build_assign (rhs1, REALPART_EXPR,
+ build1 (REALPART_EXPR, m_limb_type, lhs));
+ insert_before (g);
+ if (!types_compatible_p (rhs1_type, m_limb_type))
+ rhs1 = add_cast (rhs1_type, rhs1);
+ m_data[m_data_cnt] = data_out;
+ m_data_cnt += 2;
+ return rhs1;
+}
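The IFN_ADD_OVERFLOW path above chains two overflow additions per limb to fold in the incoming carry. A hedged C sketch of the per-limb computation the lowering produces for addition with 64-bit limbs (hypothetical helper, not part of the patch):

```c
#include <stdint.h>

/* Add two 4-limb little-endian numbers.  Each iteration performs the
   equivalent of the two IFN_ADD_OVERFLOW calls emitted above: one for
   the operand limbs and one to fold in the incoming carry; at most one
   of the two can overflow, so their sum is the outgoing carry.  */
static uint64_t
add_limbs (uint64_t r[4], const uint64_t a[4], const uint64_t b[4])
{
  uint64_t carry = 0;
  for (int i = 0; i < 4; i++)
    {
      uint64_t s;
      uint64_t c1 = __builtin_add_overflow (a[i], b[i], &s);
      uint64_t c2 = __builtin_add_overflow (s, carry, &r[i]);
      carry = c1 + c2;
    }
  return carry;
}
```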
+
+/* Helper function for handle_stmt method, handle LSHIFT_EXPR by
+ count in [0, limb_prec - 1] range. */
+
+tree
+bitint_large_huge::handle_lshift (tree rhs1, tree rhs2, tree idx)
+{
+ unsigned HOST_WIDE_INT cnt = tree_to_uhwi (rhs2);
+ gcc_checking_assert (cnt < (unsigned) limb_prec);
+ if (cnt == 0)
+ return rhs1;
+
+ tree lhs, data_out, rhs1_type = TREE_TYPE (rhs1);
+ gimple *g;
+ tree data_in = prepare_data_in_out (build_zero_cst (m_limb_type), idx,
+ &data_out);
+
+ if (!integer_zerop (data_in))
+ {
+ lhs = make_ssa_name (m_limb_type);
+ g = gimple_build_assign (lhs, RSHIFT_EXPR, data_in,
+ build_int_cst (unsigned_type_node,
+ limb_prec - cnt));
+ insert_before (g);
+ if (!types_compatible_p (rhs1_type, m_limb_type))
+ lhs = add_cast (rhs1_type, lhs);
+ data_in = lhs;
+ }
+ if (types_compatible_p (rhs1_type, m_limb_type))
+ {
+ if (data_out == NULL_TREE)
+ data_out = make_ssa_name (m_limb_type);
+ g = gimple_build_assign (data_out, rhs1);
+ insert_before (g);
+ }
+ if (cnt < (unsigned) TYPE_PRECISION (rhs1_type))
+ {
+ lhs = make_ssa_name (rhs1_type);
+ g = gimple_build_assign (lhs, LSHIFT_EXPR, rhs1, rhs2);
+ insert_before (g);
+ if (!integer_zerop (data_in))
+ {
+ rhs1 = lhs;
+ lhs = make_ssa_name (rhs1_type);
+ g = gimple_build_assign (lhs, BIT_IOR_EXPR, rhs1, data_in);
+ insert_before (g);
+ }
+ }
+ else
+ lhs = data_in;
+ m_data[m_data_cnt] = data_out;
+ m_data_cnt += 2;
+ return lhs;
+}
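The data_in/data_out pair above carries the bits shifted out of the previous (less significant) limb into the next one. A hedged C sketch of the lowered per-limb operation for a shift count strictly between 0 and limb_prec (hypothetical helper, not part of the patch):

```c
#include <stdint.h>

/* Shift a 4-limb little-endian number left by CNT bits, 0 < CNT < 64.
   Each destination limb ORs the limb shifted left with the high bits
   of the previous limb, matching the BIT_IOR_EXPR emitted above.  */
static void
lshift_limbs (uint64_t r[4], const uint64_t a[4], unsigned cnt)
{
  uint64_t data_in = 0;
  for (int i = 0; i < 4; i++)
    {
      r[i] = (a[i] << cnt) | data_in;
      data_in = a[i] >> (64 - cnt);
    }
}
```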
+
+/* Helper function for handle_stmt method, handle an integral
+ to integral conversion. */
+
+tree
+bitint_large_huge::handle_cast (tree lhs_type, tree rhs1, tree idx)
+{
+ tree rhs_type = TREE_TYPE (rhs1);
+ gimple *g;
+ if (TREE_CODE (rhs1) == SSA_NAME
+ && TREE_CODE (lhs_type) == BITINT_TYPE
+ && TREE_CODE (rhs_type) == BITINT_TYPE
+ && bitint_precision_kind (lhs_type) >= bitint_prec_large
+ && bitint_precision_kind (rhs_type) >= bitint_prec_large)
+ {
+ if (TYPE_PRECISION (rhs_type) >= TYPE_PRECISION (lhs_type)
+ /* If lhs has bigger precision than rhs, we can use
+ the simple case only if there is a guarantee that
+ the most significant limb is handled in straight
+ line code. If m_var_msb (on left shifts) or
+ if m_upwards_2limb * limb_prec is equal to
+ lhs precision, that is not the case. */
+ || (!m_var_msb
+ && tree_int_cst_equal (TYPE_SIZE (rhs_type),
+ TYPE_SIZE (lhs_type))
+ && (!m_upwards_2limb
+ || (m_upwards_2limb * limb_prec
+ < TYPE_PRECISION (lhs_type)))))
+ {
+ rhs1 = handle_operand (rhs1, idx);
+ if (tree_fits_uhwi_p (idx))
+ {
+ tree type = limb_access_type (lhs_type, idx);
+ if (!types_compatible_p (type, TREE_TYPE (rhs1)))
+ rhs1 = add_cast (type, rhs1);
+ }
+ return rhs1;
+ }
+ tree t;
+ /* Indexes lower than this don't need any special processing. */
+ unsigned low = ((unsigned) TYPE_PRECISION (rhs_type)
+ - !TYPE_UNSIGNED (rhs_type)) / limb_prec;
+ /* Indexes >= this always contain an extension. */
+ unsigned high = CEIL ((unsigned) TYPE_PRECISION (rhs_type), limb_prec);
+ bool save_first = m_first;
+ if (m_first)
+ {
+ m_data.safe_push (NULL_TREE);
+ m_data.safe_push (NULL_TREE);
+ m_data.safe_push (NULL_TREE);
+ if (TYPE_UNSIGNED (rhs_type))
+ /* No need to keep state between iterations. */
+ ;
+ else if (!m_upwards_2limb)
+ {
+ unsigned save_data_cnt = m_data_cnt;
+ gimple_stmt_iterator save_gsi = m_gsi;
+ m_gsi = m_init_gsi;
+ if (gsi_end_p (m_gsi))
+ m_gsi = gsi_after_labels (gsi_bb (m_gsi));
+ else
+ gsi_next (&m_gsi);
+ m_data_cnt = save_data_cnt + 3;
+ t = handle_operand (rhs1, size_int (low));
+ m_first = false;
+ m_data[save_data_cnt + 2]
+ = build_int_cst (NULL_TREE, m_data_cnt);
+ m_data_cnt = save_data_cnt;
+ t = add_cast (signed_type_for (m_limb_type), t);
+ tree lpm1 = build_int_cst (unsigned_type_node, limb_prec - 1);
+ tree n = make_ssa_name (TREE_TYPE (t));
+ g = gimple_build_assign (n, RSHIFT_EXPR, t, lpm1);
+ insert_before (g);
+ m_data[save_data_cnt + 1] = add_cast (m_limb_type, n);
+ m_gsi = save_gsi;
+ }
+ else if (m_upwards_2limb * limb_prec < TYPE_PRECISION (rhs_type))
+ /* We need to keep state between iterations, but
+ fortunately not within the loop, only afterwards. */
+ ;
+ else
+ {
+ tree out;
+ m_data.truncate (m_data_cnt);
+ prepare_data_in_out (build_zero_cst (m_limb_type), idx, &out);
+ m_data.safe_push (NULL_TREE);
+ }
+ }
+
+ unsigned save_data_cnt = m_data_cnt;
+ m_data_cnt += 3;
+ if (!tree_fits_uhwi_p (idx))
+ {
+ if (m_upwards_2limb
+ && (m_upwards_2limb * limb_prec
+ <= ((unsigned) TYPE_PRECISION (rhs_type)
+ - !TYPE_UNSIGNED (rhs_type))))
+ {
+ rhs1 = handle_operand (rhs1, idx);
+ if (m_first)
+ m_data[save_data_cnt + 2]
+ = build_int_cst (NULL_TREE, m_data_cnt);
+ m_first = save_first;
+ return rhs1;
+ }
+ bool single_comparison
+ = low == high || (m_upwards_2limb && (low & 1) == m_first);
+ g = gimple_build_cond (single_comparison ? LT_EXPR : LE_EXPR,
+ idx, size_int (low), NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ edge e4 = NULL;
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ if (!single_comparison)
+ {
+ m_gsi = gsi_after_labels (e1->dest);
+ g = gimple_build_cond (EQ_EXPR, idx, size_int (low),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ e2 = split_block (gsi_bb (m_gsi), g);
+ basic_block bb = create_empty_bb (e2->dest);
+ add_bb_to_loop (bb, e2->dest->loop_father);
+ e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
+ set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
+ e4->probability = profile_probability::unlikely ();
+ e2->flags = EDGE_FALSE_VALUE;
+ e2->probability = e4->probability.invert ();
+ e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
+ e2 = find_edge (e2->dest, e3->dest);
+ }
+ m_gsi = gsi_after_labels (e2->src);
+ tree t1 = handle_operand (rhs1, idx), t2 = NULL_TREE;
+ if (m_first)
+ m_data[save_data_cnt + 2]
+ = build_int_cst (NULL_TREE, m_data_cnt);
+ tree ext = NULL_TREE;
+ if (!single_comparison)
+ {
+ m_gsi = gsi_after_labels (e4->src);
+ m_first = false;
+ m_data_cnt = save_data_cnt + 3;
+ t2 = handle_operand (rhs1, size_int (low));
+ if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (t2)))
+ t2 = add_cast (m_limb_type, t2);
+ if (!TYPE_UNSIGNED (rhs_type) && m_upwards_2limb)
+ {
+ ext = add_cast (signed_type_for (m_limb_type), t2);
+ tree lpm1 = build_int_cst (unsigned_type_node,
+ limb_prec - 1);
+ tree n = make_ssa_name (TREE_TYPE (ext));
+ g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
+ insert_before (g);
+ ext = add_cast (m_limb_type, n);
+ }
+ }
+ tree t3;
+ if (TYPE_UNSIGNED (rhs_type))
+ t3 = build_zero_cst (m_limb_type);
+ else if (m_upwards_2limb && (save_first || ext != NULL_TREE))
+ t3 = m_data[save_data_cnt];
+ else
+ t3 = m_data[save_data_cnt + 1];
+ m_gsi = gsi_after_labels (e2->dest);
+ t = make_ssa_name (m_limb_type);
+ gphi *phi = create_phi_node (t, e2->dest);
+ add_phi_arg (phi, t1, e2, UNKNOWN_LOCATION);
+ add_phi_arg (phi, t3, e3, UNKNOWN_LOCATION);
+ if (e4)
+ add_phi_arg (phi, t2, e4, UNKNOWN_LOCATION);
+ if (ext)
+ {
+ tree t4 = make_ssa_name (m_limb_type);
+ phi = create_phi_node (t4, e2->dest);
+ add_phi_arg (phi, build_zero_cst (m_limb_type), e2,
+ UNKNOWN_LOCATION);
+ add_phi_arg (phi, m_data[save_data_cnt], e3, UNKNOWN_LOCATION);
+ add_phi_arg (phi, ext, e4, UNKNOWN_LOCATION);
+ g = gimple_build_assign (m_data[save_data_cnt + 1], t4);
+ insert_before (g);
+ }
+ m_first = save_first;
+ return t;
+ }
+ else
+ {
+ if (tree_to_uhwi (idx) < low)
+ {
+ t = handle_operand (rhs1, idx);
+ if (m_first)
+ m_data[save_data_cnt + 2]
+ = build_int_cst (NULL_TREE, m_data_cnt);
+ }
+ else if (tree_to_uhwi (idx) < high)
+ {
+ t = handle_operand (rhs1, size_int (low));
+ if (m_first)
+ m_data[save_data_cnt + 2]
+ = build_int_cst (NULL_TREE, m_data_cnt);
+ if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (t)))
+ t = add_cast (m_limb_type, t);
+ tree ext = NULL_TREE;
+ if (!TYPE_UNSIGNED (rhs_type) && m_upwards_2limb)
+ {
+ ext = add_cast (signed_type_for (m_limb_type), t);
+ tree lpm1 = build_int_cst (unsigned_type_node,
+ limb_prec - 1);
+ tree n = make_ssa_name (TREE_TYPE (ext));
+ g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
+ insert_before (g);
+ ext = add_cast (m_limb_type, n);
+ m_data[save_data_cnt + 1] = ext;
+ }
+ }
+ else
+ {
+ if (TYPE_UNSIGNED (rhs_type) && m_first)
+ {
+ handle_operand (rhs1, size_zero_node);
+ m_data[save_data_cnt + 2]
+ = build_int_cst (NULL_TREE, m_data_cnt);
+ }
+ else
+ m_data_cnt = tree_to_uhwi (m_data[save_data_cnt + 2]);
+ if (TYPE_UNSIGNED (rhs_type))
+ t = build_zero_cst (m_limb_type);
+ else
+ t = m_data[save_data_cnt + 1];
+ }
+ tree type = limb_access_type (lhs_type, idx);
+ if (!useless_type_conversion_p (type, m_limb_type))
+ t = add_cast (type, t);
+ m_first = save_first;
+ return t;
+ }
+ }
+ else if (TREE_CODE (lhs_type) == BITINT_TYPE
+ && bitint_precision_kind (lhs_type) >= bitint_prec_large
+ && INTEGRAL_TYPE_P (rhs_type))
+ {
+ /* Add support for 3 or more limbs filled in from normal integral
+ type if this assert fails. If no target chooses limb mode smaller
+ than half of largest supported normal integral type, this will not
+ be needed. */
+ gcc_assert (TYPE_PRECISION (rhs_type) <= 2 * limb_prec);
+ tree r1 = NULL_TREE, r2 = NULL_TREE, rext = NULL_TREE;
+ if (m_first)
+ {
+ gimple_stmt_iterator save_gsi = m_gsi;
+ m_gsi = m_init_gsi;
+ if (gsi_end_p (m_gsi))
+ m_gsi = gsi_after_labels (gsi_bb (m_gsi));
+ else
+ gsi_next (&m_gsi);
+ if (TREE_CODE (rhs_type) == BITINT_TYPE
+ && bitint_precision_kind (rhs_type) == bitint_prec_middle)
+ {
+ tree type = NULL_TREE;
+ rhs1 = maybe_cast_middle_bitint (&m_gsi, rhs1, type);
+ rhs_type = TREE_TYPE (rhs1);
+ }
+ r1 = rhs1;
+ if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs1)))
+ r1 = add_cast (m_limb_type, rhs1);
+ if (TYPE_PRECISION (rhs_type) > limb_prec)
+ {
+ g = gimple_build_assign (make_ssa_name (rhs_type),
+ RSHIFT_EXPR, rhs1,
+ build_int_cst (unsigned_type_node,
+ limb_prec));
+ insert_before (g);
+ r2 = add_cast (m_limb_type, gimple_assign_lhs (g));
+ }
+ if (TYPE_UNSIGNED (rhs_type))
+ rext = build_zero_cst (m_limb_type);
+ else
+ {
+ rext = add_cast (signed_type_for (m_limb_type), r2 ? r2 : r1);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (rext)),
+ RSHIFT_EXPR, rext,
+ build_int_cst (unsigned_type_node,
+ limb_prec - 1));
+ insert_before (g);
+ rext = add_cast (m_limb_type, gimple_assign_lhs (g));
+ }
+ m_gsi = save_gsi;
+ }
+ tree t;
+ if (m_upwards_2limb)
+ {
+ if (m_first)
+ {
+ tree out1, out2;
+ prepare_data_in_out (r1, idx, &out1);
+ g = gimple_build_assign (m_data[m_data_cnt + 1], rext);
+ insert_before (g);
+ if (TYPE_PRECISION (rhs_type) > limb_prec)
+ {
+ prepare_data_in_out (r2, idx, &out2);
+ g = gimple_build_assign (m_data[m_data_cnt + 3], rext);
+ insert_before (g);
+ m_data.pop ();
+ t = m_data.pop ();
+ m_data[m_data_cnt + 1] = t;
+ }
+ else
+ m_data[m_data_cnt + 1] = rext;
+ m_data.safe_push (rext);
+ t = m_data[m_data_cnt];
+ }
+ else if (!tree_fits_uhwi_p (idx))
+ t = m_data[m_data_cnt + 1];
+ else
+ {
+ tree type = limb_access_type (lhs_type, idx);
+ t = m_data[m_data_cnt + 2];
+ if (!useless_type_conversion_p (type, m_limb_type))
+ t = add_cast (type, t);
+ }
+ m_data_cnt += 3;
+ return t;
+ }
+ else if (m_first)
+ {
+ m_data.safe_push (r1);
+ m_data.safe_push (r2);
+ m_data.safe_push (rext);
+ }
+ if (tree_fits_uhwi_p (idx))
+ {
+ tree type = limb_access_type (lhs_type, idx);
+ if (integer_zerop (idx))
+ t = m_data[m_data_cnt];
+ else if (TYPE_PRECISION (rhs_type) > limb_prec
+ && integer_onep (idx))
+ t = m_data[m_data_cnt + 1];
+ else
+ t = m_data[m_data_cnt + 2];
+ if (!useless_type_conversion_p (type, m_limb_type))
+ t = add_cast (type, t);
+ m_data_cnt += 3;
+ return t;
+ }
+ g = gimple_build_cond (EQ_EXPR, idx, size_zero_node,
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
+ edge e4 = NULL;
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_FALSE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ if (m_data[m_data_cnt + 1])
+ {
+ m_gsi = gsi_after_labels (e1->dest);
+ g = gimple_build_cond (EQ_EXPR, idx, size_one_node,
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e5 = split_block (gsi_bb (m_gsi), g);
+ e4 = make_edge (e5->src, e2->dest, EDGE_TRUE_VALUE);
+ e2 = find_edge (e5->dest, e2->dest);
+ e4->probability = profile_probability::unlikely ();
+ e5->flags = EDGE_FALSE_VALUE;
+ e5->probability = e4->probability.invert ();
+ }
+ m_gsi = gsi_after_labels (e2->dest);
+ t = make_ssa_name (m_limb_type);
+ gphi *phi = create_phi_node (t, e2->dest);
+ add_phi_arg (phi, m_data[m_data_cnt + 2], e2, UNKNOWN_LOCATION);
+ add_phi_arg (phi, m_data[m_data_cnt], e3, UNKNOWN_LOCATION);
+ if (e4)
+ add_phi_arg (phi, m_data[m_data_cnt + 1], e4, UNKNOWN_LOCATION);
+ m_data_cnt += 3;
+ return t;
+ }
+ return NULL_TREE;
+}
+
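For reference, an editor's sketch (not part of the patch; 64-bit limbs and the function name are assumptions) of what the cast handling above computes when widening a two-limb source: limbs 0 and 1 are copied, and every higher limb of the destination is filled with an extension limb `rext`, which is zero for unsigned sources and the broadcast sign bit (the `RSHIFT_EXPR` by `limb_prec - 1`) for signed ones.

```c
/* Editor's sketch, hypothetical name; assumes 64-bit limbs.  */
void
widen_2limb (unsigned long long *dst, unsigned dst_limbs,
	     const unsigned long long *src, int src_signed)
{
  unsigned long long rext = 0;
  if (src_signed)
    /* Broadcast the sign bit of the most significant source limb,
       mirroring the RSHIFT_EXPR by limb_prec - 1 above.  */
    rext = (unsigned long long) ((long long) src[1] >> 63);
  dst[0] = src[0];
  dst[1] = src[1];
  for (unsigned i = 2; i < dst_limbs; i++)
    dst[i] = rext;
}
```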
+/* Return the limb with index IDX of the result of a mergeable
+ statement STMT. */
+
+tree
+bitint_large_huge::handle_stmt (gimple *stmt, tree idx)
+{
+ tree lhs, rhs1, rhs2 = NULL_TREE;
+ gimple *g;
+ switch (gimple_code (stmt))
+ {
+ case GIMPLE_ASSIGN:
+ if (gimple_assign_load_p (stmt))
+ {
+ rhs1 = gimple_assign_rhs1 (stmt);
+ tree rhs_type = TREE_TYPE (rhs1);
+ bool eh = stmt_ends_bb_p (stmt);
+ /* Use write_p = true for loads with EH edges to make
+ sure limb_access doesn't add a cast as a separate
+ statement after it. */
+ rhs1 = limb_access (rhs_type, rhs1, idx, eh);
+ lhs = make_ssa_name (TREE_TYPE (rhs1));
+ g = gimple_build_assign (lhs, rhs1);
+ insert_before (g);
+ if (eh)
+ {
+ maybe_duplicate_eh_stmt (g, stmt);
+ edge e1;
+ edge_iterator ei;
+ basic_block bb = gimple_bb (stmt);
+
+ FOR_EACH_EDGE (e1, ei, bb->succs)
+ if (e1->flags & EDGE_EH)
+ break;
+ if (e1)
+ {
+ edge e2 = split_block (gsi_bb (m_gsi), g);
+ m_gsi = gsi_after_labels (e2->dest);
+ make_edge (e2->src, e1->dest, EDGE_EH)->probability
+ = profile_probability::very_unlikely ();
+ }
+ if (tree_fits_uhwi_p (idx))
+ {
+ tree atype = limb_access_type (rhs_type, idx);
+ if (!useless_type_conversion_p (atype, TREE_TYPE (rhs1)))
+ lhs = add_cast (atype, lhs);
+ }
+ }
+ return lhs;
+ }
+ switch (gimple_assign_rhs_code (stmt))
+ {
+ case BIT_AND_EXPR:
+ case BIT_IOR_EXPR:
+ case BIT_XOR_EXPR:
+ rhs2 = handle_operand (gimple_assign_rhs2 (stmt), idx);
+ /* FALLTHRU */
+ case BIT_NOT_EXPR:
+ rhs1 = handle_operand (gimple_assign_rhs1 (stmt), idx);
+ lhs = make_ssa_name (TREE_TYPE (rhs1));
+ g = gimple_build_assign (lhs, gimple_assign_rhs_code (stmt),
+ rhs1, rhs2);
+ insert_before (g);
+ return lhs;
+ case PLUS_EXPR:
+ case MINUS_EXPR:
+ rhs1 = handle_operand (gimple_assign_rhs1 (stmt), idx);
+ rhs2 = handle_operand (gimple_assign_rhs2 (stmt), idx);
+ return handle_plus_minus (gimple_assign_rhs_code (stmt),
+ rhs1, rhs2, idx);
+ case NEGATE_EXPR:
+ rhs2 = handle_operand (gimple_assign_rhs1 (stmt), idx);
+ rhs1 = build_zero_cst (TREE_TYPE (rhs2));
+ return handle_plus_minus (MINUS_EXPR, rhs1, rhs2, idx);
+ case LSHIFT_EXPR:
+ return handle_lshift (handle_operand (gimple_assign_rhs1 (stmt),
+ idx),
+ gimple_assign_rhs2 (stmt), idx);
+ case SSA_NAME:
+ case INTEGER_CST:
+ return handle_operand (gimple_assign_rhs1 (stmt), idx);
+ CASE_CONVERT:
+ case VIEW_CONVERT_EXPR:
+ return handle_cast (TREE_TYPE (gimple_assign_lhs (stmt)),
+ gimple_assign_rhs1 (stmt), idx);
+ default:
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+ gcc_unreachable ();
+}
+
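The NEGATE_EXPR case above is lowered by reusing handle_plus_minus with a zero first operand. As an editor's sketch of the per-limb effect (hypothetical name, 64-bit limbs assumed; the real pass emits GIMPLE, not C), negation is a borrow-propagating subtraction from zero:

```c
/* Editor's sketch: -src lowered as 0 - src limb by limb,
   with the borrow carried to the next limb.  */
void
neg_limbs (unsigned long long *dst, const unsigned long long *src,
	   unsigned nlimbs)
{
  unsigned borrow = 0;
  for (unsigned i = 0; i < nlimbs; i++)
    {
      unsigned long long d = 0ULL - src[i] - borrow;
      /* A borrow propagates whenever this limb or a previous
	 borrow was nonzero.  */
      borrow = (src[i] != 0 || borrow) ? 1 : 0;
      dst[i] = d;
    }
}
```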
+/* Return the minimum precision of OP at STMT.
+ A positive value is the minimum precision above which all bits
+ are zero; a negative value means all bits above the negation of
+ the value are copies of the sign bit. */
+
+static int
+range_to_prec (tree op, gimple *stmt)
+{
+ int_range_max r;
+ wide_int w;
+ tree type = TREE_TYPE (op);
+ unsigned int prec = TYPE_PRECISION (type);
+
+ if (!optimize
+ || !get_range_query (cfun)->range_of_expr (r, op, stmt))
+ {
+ if (TYPE_UNSIGNED (type))
+ return prec;
+ else
+ return -prec;
+ }
+
+ if (!TYPE_UNSIGNED (TREE_TYPE (op)))
+ {
+ w = r.lower_bound ();
+ if (wi::neg_p (w))
+ {
+ int min_prec1 = wi::min_precision (w, SIGNED);
+ w = r.upper_bound ();
+ int min_prec2 = wi::min_precision (w, SIGNED);
+ int min_prec = MAX (min_prec1, min_prec2);
+ return MIN (-min_prec, -2);
+ }
+ }
+
+ w = r.upper_bound ();
+ int min_prec = wi::min_precision (w, UNSIGNED);
+ return MAX (min_prec, 1);
+}
+
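The sign convention above can be illustrated with an editor's sketch (hypothetical helper names, not code from the patch) that mimics range_to_prec for a known value range [LO, HI]:

```c
/* Number of bits needed to represent W, at least 0.  */
static int
min_prec_unsigned (unsigned long long w)
{
  int p = 0;
  while (w)
    {
      p++;
      w >>= 1;
    }
  return p;
}

/* Bits needed to represent V in two's complement, incl. sign bit.  */
static int
min_prec_signed (long long v)
{
  if (v >= 0)
    return min_prec_unsigned ((unsigned long long) v) + 1;
  return min_prec_unsigned ((unsigned long long) ~v) + 1;
}

/* Sketch of range_to_prec's result for the range [LO, HI]:
   positive means zero-extended from that precision, negative
   means sign-extended from its negation.  */
int
range_to_prec_demo (long long lo, long long hi, int is_unsigned)
{
  if (!is_unsigned && lo < 0)
    {
      int p1 = min_prec_signed (lo), p2 = min_prec_signed (hi);
      int p = p1 > p2 ? p1 : p2;
      return -p < -2 ? -p : -2;
    }
  int p = min_prec_unsigned ((unsigned long long) hi);
  return p > 1 ? p : 1;
}
```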
+/* Return the address of the first limb of OP and write its precision
+ into *PREC. If positive, the operand is zero-extended
+ from that precision; if negative, the operand is sign-extended
+ from -*PREC. If PREC_STORED is NULL, this is the toplevel call,
+ otherwise *PREC_STORED is the precision from the innermost call
+ without range optimizations. */
+
+tree
+bitint_large_huge::handle_operand_addr (tree op, gimple *stmt,
+ int *prec_stored, int *prec)
+{
+ wide_int w;
+ location_t loc_save = m_loc;
+ if ((TREE_CODE (TREE_TYPE (op)) != BITINT_TYPE
+ || bitint_precision_kind (TREE_TYPE (op)) < bitint_prec_large)
+ && TREE_CODE (op) != INTEGER_CST)
+ {
+ do_int:
+ *prec = range_to_prec (op, stmt);
+ bitint_prec_kind kind = bitint_prec_small;
+ gcc_assert (INTEGRAL_TYPE_P (TREE_TYPE (op)));
+ if (TREE_CODE (TREE_TYPE (op)) == BITINT_TYPE)
+ kind = bitint_precision_kind (TREE_TYPE (op));
+ if (kind == bitint_prec_middle)
+ {
+ tree type = NULL_TREE;
+ op = maybe_cast_middle_bitint (&m_gsi, op, type);
+ }
+ tree op_type = TREE_TYPE (op);
+ unsigned HOST_WIDE_INT nelts
+ = CEIL (TYPE_PRECISION (op_type), limb_prec);
+ /* Add support for 3 or more limbs filled in from a normal
+ integral type if this assert fails. If no target chooses a
+ limb mode smaller than half of the largest supported normal
+ integral type, this will not be needed. */
+ gcc_assert (nelts <= 2);
+ if (prec_stored)
+ *prec_stored = (TYPE_UNSIGNED (op_type)
+ ? TYPE_PRECISION (op_type)
+ : -TYPE_PRECISION (op_type));
+ if (*prec <= limb_prec && *prec >= -limb_prec)
+ {
+ nelts = 1;
+ if (prec_stored)
+ {
+ if (TYPE_UNSIGNED (op_type))
+ {
+ if (*prec_stored > limb_prec)
+ *prec_stored = limb_prec;
+ }
+ else if (*prec_stored < -limb_prec)
+ *prec_stored = -limb_prec;
+ }
+ }
+ tree atype = build_array_type_nelts (m_limb_type, nelts);
+ tree var = create_tmp_var (atype);
+ tree t1 = op;
+ if (!useless_type_conversion_p (m_limb_type, op_type))
+ t1 = add_cast (m_limb_type, t1);
+ tree v = build4 (ARRAY_REF, m_limb_type, var, size_zero_node,
+ NULL_TREE, NULL_TREE);
+ gimple *g = gimple_build_assign (v, t1);
+ insert_before (g);
+ if (nelts > 1)
+ {
+ tree lp = build_int_cst (unsigned_type_node, limb_prec);
+ g = gimple_build_assign (make_ssa_name (op_type),
+ RSHIFT_EXPR, op, lp);
+ insert_before (g);
+ tree t2 = gimple_assign_lhs (g);
+ t2 = add_cast (m_limb_type, t2);
+ v = build4 (ARRAY_REF, m_limb_type, var, size_one_node,
+ NULL_TREE, NULL_TREE);
+ g = gimple_build_assign (v, t2);
+ insert_before (g);
+ }
+ tree ret = build_fold_addr_expr (var);
+ if (!stmt_ends_bb_p (gsi_stmt (m_gsi)))
+ {
+ tree clobber = build_clobber (atype, CLOBBER_EOL);
+ g = gimple_build_assign (var, clobber);
+ gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
+ }
+ m_loc = loc_save;
+ return ret;
+ }
+ switch (TREE_CODE (op))
+ {
+ case SSA_NAME:
+ if (m_names == NULL
+ || !bitmap_bit_p (m_names, SSA_NAME_VERSION (op)))
+ {
+ gimple *g = SSA_NAME_DEF_STMT (op);
+ tree ret;
+ m_loc = gimple_location (g);
+ if (gimple_assign_load_p (g))
+ {
+ *prec = range_to_prec (op, NULL);
+ if (prec_stored)
+ *prec_stored = (TYPE_UNSIGNED (TREE_TYPE (op))
+ ? TYPE_PRECISION (TREE_TYPE (op))
+ : -TYPE_PRECISION (TREE_TYPE (op)));
+ ret = build_fold_addr_expr (gimple_assign_rhs1 (g));
+ ret = force_gimple_operand_gsi (&m_gsi, ret, true,
+ NULL_TREE, true, GSI_SAME_STMT);
+ }
+ else if (gimple_code (g) == GIMPLE_NOP)
+ {
+ tree var = create_tmp_var (m_limb_type);
+ TREE_ADDRESSABLE (var) = 1;
+ ret = build_fold_addr_expr (var);
+ if (!stmt_ends_bb_p (gsi_stmt (m_gsi)))
+ {
+ tree clobber = build_clobber (m_limb_type, CLOBBER_EOL);
+ g = gimple_build_assign (var, clobber);
+ gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
+ }
+ }
+ else
+ {
+ gcc_assert (gimple_assign_cast_p (g));
+ tree rhs1 = gimple_assign_rhs1 (g);
+ bitint_prec_kind kind = bitint_prec_small;
+ gcc_assert (INTEGRAL_TYPE_P (TREE_TYPE (rhs1)));
+ if (TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE)
+ kind = bitint_precision_kind (TREE_TYPE (rhs1));
+ if (kind >= bitint_prec_large)
+ {
+ tree lhs_type = TREE_TYPE (op);
+ tree rhs_type = TREE_TYPE (rhs1);
+ int prec_stored_val = 0;
+ ret = handle_operand_addr (rhs1, g, &prec_stored_val, prec);
+ if (TYPE_PRECISION (lhs_type) > TYPE_PRECISION (rhs_type))
+ {
+ if (TYPE_UNSIGNED (lhs_type)
+ && !TYPE_UNSIGNED (rhs_type))
+ gcc_assert (*prec >= 0 || prec_stored == NULL);
+ }
+ else
+ {
+ if (*prec > 0 && *prec < TYPE_PRECISION (lhs_type))
+ ;
+ else if (TYPE_UNSIGNED (lhs_type))
+ {
+ gcc_assert (*prec > 0
+ || prec_stored_val > 0
+ || (-prec_stored_val
+ >= TYPE_PRECISION (lhs_type)));
+ *prec = TYPE_PRECISION (lhs_type);
+ }
+ else if (*prec < 0 && -*prec < TYPE_PRECISION (lhs_type))
+ ;
+ else
+ *prec = -TYPE_PRECISION (lhs_type);
+ }
+ }
+ else
+ {
+ op = rhs1;
+ stmt = g;
+ goto do_int;
+ }
+ }
+ m_loc = loc_save;
+ return ret;
+ }
+ else
+ {
+ int p = var_to_partition (m_map, op);
+ gcc_assert (m_vars[p] != NULL_TREE);
+ *prec = range_to_prec (op, stmt);
+ if (prec_stored)
+ *prec_stored = (TYPE_UNSIGNED (TREE_TYPE (op))
+ ? TYPE_PRECISION (TREE_TYPE (op))
+ : -TYPE_PRECISION (TREE_TYPE (op)));
+ return build_fold_addr_expr (m_vars[p]);
+ }
+ case INTEGER_CST:
+ unsigned int min_prec, mp;
+ tree type;
+ w = wi::to_wide (op);
+ if (tree_int_cst_sgn (op) >= 0)
+ {
+ min_prec = wi::min_precision (w, UNSIGNED);
+ *prec = MAX (min_prec, 1);
+ }
+ else
+ {
+ min_prec = wi::min_precision (w, SIGNED);
+ *prec = MIN ((int) -min_prec, -2);
+ }
+ mp = CEIL (min_prec, limb_prec) * limb_prec;
+ if (mp >= (unsigned) TYPE_PRECISION (TREE_TYPE (op)))
+ type = TREE_TYPE (op);
+ else
+ type = build_bitint_type (mp, 1);
+ if (TREE_CODE (type) != BITINT_TYPE
+ || bitint_precision_kind (type) == bitint_prec_small)
+ {
+ if (TYPE_PRECISION (type) <= limb_prec)
+ type = m_limb_type;
+ else
+ /* This case is for targets which e.g. have a 64-bit
+ limb but categorize up to 128-bit _BitInts as
+ small. We could use the type of m_limb_type[2] and
+ similar instead to save space. */
+ type = build_bitint_type (mid_min_prec, 1);
+ }
+ if (prec_stored)
+ {
+ if (tree_int_cst_sgn (op) >= 0)
+ *prec_stored = MAX (TYPE_PRECISION (type), 1);
+ else
+ *prec_stored = MIN ((int) -TYPE_PRECISION (type), -2);
+ }
+ op = tree_output_constant_def (fold_convert (type, op));
+ return build_fold_addr_expr (op);
+ default:
+ gcc_unreachable ();
+ }
+}
+
+/* Helper function: create a loop before the current location,
+ starting with sizetype INIT value from the preheader edge. Return
+ the PHI result and set *IDX_NEXT to the SSA_NAME the loop creates
+ and uses from the latch edge. */
+
+tree
+bitint_large_huge::create_loop (tree init, tree *idx_next)
+{
+ if (!gsi_end_p (m_gsi))
+ gsi_prev (&m_gsi);
+ else
+ m_gsi = gsi_last_bb (gsi_bb (m_gsi));
+ edge e1 = split_block (gsi_bb (m_gsi), gsi_stmt (m_gsi));
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->dest, e1->dest, EDGE_TRUE_VALUE);
+ e3->probability = profile_probability::very_unlikely ();
+ e2->flags = EDGE_FALSE_VALUE;
+ e2->probability = e3->probability.invert ();
+ tree idx = make_ssa_name (sizetype);
+ gphi *phi = create_phi_node (idx, e1->dest);
+ add_phi_arg (phi, init, e1, UNKNOWN_LOCATION);
+ *idx_next = make_ssa_name (sizetype);
+ add_phi_arg (phi, *idx_next, e3, UNKNOWN_LOCATION);
+ m_gsi = gsi_after_labels (e1->dest);
+ m_bb = e1->dest;
+ m_preheader_bb = e1->src;
+ class loop *loop = alloc_loop ();
+ loop->header = e1->dest;
+ add_loop (loop, e1->src->loop_father);
+ return idx;
+}
+
+/* Lower a mergeable or similar large/huge _BitInt statement STMT which
+ can be lowered by iterating from the least significant limb up to the
+ most significant limb. For large _BitInt it is emitted as straight
+ line code before the current location, for huge _BitInt as a loop
+ handling two limbs at once, followed by straight line code for the
+ remaining limbs (at most one full and one partial limb). It can also
+ handle EQ_EXPR/NE_EXPR comparisons, in which case CMP_CODE should be
+ the comparison code and CMP_OP1/CMP_OP2 the comparison operands. */
+
+tree
+bitint_large_huge::lower_mergeable_stmt (gimple *stmt, tree_code &cmp_code,
+ tree cmp_op1, tree cmp_op2)
+{
+ bool eq_p = cmp_code != ERROR_MARK;
+ tree type;
+ if (eq_p)
+ type = TREE_TYPE (cmp_op1);
+ else
+ type = TREE_TYPE (gimple_assign_lhs (stmt));
+ gcc_assert (TREE_CODE (type) == BITINT_TYPE);
+ bitint_prec_kind kind = bitint_precision_kind (type);
+ gcc_assert (kind >= bitint_prec_large);
+ gimple *g;
+ tree lhs = gimple_get_lhs (stmt);
+ tree rhs1, lhs_type = lhs ? TREE_TYPE (lhs) : NULL_TREE;
+ if (lhs
+ && TREE_CODE (lhs) == SSA_NAME
+ && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
+ {
+ int p = var_to_partition (m_map, lhs);
+ gcc_assert (m_vars[p] != NULL_TREE);
+ m_lhs = lhs = m_vars[p];
+ }
+ unsigned cnt, rem = 0, end = 0, prec = TYPE_PRECISION (type);
+ bool sext = false;
+ tree ext = NULL_TREE, store_operand = NULL_TREE;
+ bool eh = false;
+ basic_block eh_pad = NULL;
+ if (gimple_store_p (stmt))
+ {
+ store_operand = gimple_assign_rhs1 (stmt);
+ eh = stmt_ends_bb_p (stmt);
+ if (eh)
+ {
+ edge e;
+ edge_iterator ei;
+ basic_block bb = gimple_bb (stmt);
+
+ FOR_EACH_EDGE (e, ei, bb->succs)
+ if (e->flags & EDGE_EH)
+ {
+ eh_pad = e->dest;
+ break;
+ }
+ }
+ }
+ if ((store_operand
+ && TREE_CODE (store_operand) == SSA_NAME
+ && (m_names == NULL
+ || !bitmap_bit_p (m_names, SSA_NAME_VERSION (store_operand)))
+ && gimple_assign_cast_p (SSA_NAME_DEF_STMT (store_operand)))
+ || gimple_assign_cast_p (stmt))
+ {
+ rhs1 = gimple_assign_rhs1 (store_operand
+ ? SSA_NAME_DEF_STMT (store_operand)
+ : stmt);
+ /* Optimize mergeable ops ending with widening cast to _BitInt
+ (or followed by store). We can lower just the limbs of the
+ cast operand and widen afterwards. */
+ if (TREE_CODE (rhs1) == SSA_NAME
+ && (m_names == NULL
+ || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1)))
+ && TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
+ && (CEIL ((unsigned) TYPE_PRECISION (TREE_TYPE (rhs1)),
+ limb_prec) < CEIL (prec, limb_prec)
+ || (kind == bitint_prec_huge
+ && TYPE_PRECISION (TREE_TYPE (rhs1)) < prec)))
+ {
+ store_operand = rhs1;
+ prec = TYPE_PRECISION (TREE_TYPE (rhs1));
+ kind = bitint_precision_kind (TREE_TYPE (rhs1));
+ if (!TYPE_UNSIGNED (TREE_TYPE (rhs1)))
+ sext = true;
+ }
+ }
+ tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
+ if (kind == bitint_prec_large)
+ cnt = CEIL (prec, limb_prec);
+ else
+ {
+ rem = (prec % (2 * limb_prec));
+ end = (prec - rem) / limb_prec;
+ cnt = 2 + CEIL (rem, limb_prec);
+ idx = idx_first = create_loop (size_zero_node, &idx_next);
+ }
+
+ basic_block edge_bb = NULL;
+ if (eq_p)
+ {
+ gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+ gsi_prev (&gsi);
+ edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
+ edge_bb = e->src;
+ if (kind == bitint_prec_large)
+ {
+ m_gsi = gsi_last_bb (edge_bb);
+ if (!gsi_end_p (m_gsi))
+ gsi_next (&m_gsi);
+ }
+ }
+ else
+ m_after_stmt = stmt;
+ if (kind != bitint_prec_large)
+ m_upwards_2limb = end;
+
+ for (unsigned i = 0; i < cnt; i++)
+ {
+ m_data_cnt = 0;
+ if (kind == bitint_prec_large)
+ idx = size_int (i);
+ else if (i >= 2)
+ idx = size_int (end + (i > 2));
+ if (eq_p)
+ {
+ rhs1 = handle_operand (cmp_op1, idx);
+ tree rhs2 = handle_operand (cmp_op2, idx);
+ g = gimple_build_cond (NE_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ e1->flags = EDGE_FALSE_VALUE;
+ edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
+ e1->probability = profile_probability::unlikely ();
+ e2->probability = e1->probability.invert ();
+ if (i == 0)
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ }
+ else
+ {
+ if (store_operand)
+ rhs1 = handle_operand (store_operand, idx);
+ else
+ rhs1 = handle_stmt (stmt, idx);
+ tree l = limb_access (lhs_type, lhs, idx, true);
+ if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
+ rhs1 = add_cast (TREE_TYPE (l), rhs1);
+ if (sext && i == cnt - 1)
+ ext = rhs1;
+ g = gimple_build_assign (l, rhs1);
+ insert_before (g);
+ if (eh)
+ {
+ maybe_duplicate_eh_stmt (g, stmt);
+ if (eh_pad)
+ {
+ edge e = split_block (gsi_bb (m_gsi), g);
+ m_gsi = gsi_after_labels (e->dest);
+ make_edge (e->src, eh_pad, EDGE_EH)->probability
+ = profile_probability::very_unlikely ();
+ }
+ }
+ }
+ m_first = false;
+ if (kind == bitint_prec_huge && i <= 1)
+ {
+ if (i == 0)
+ {
+ idx = make_ssa_name (sizetype);
+ g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
+ size_one_node);
+ insert_before (g);
+ }
+ else
+ {
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
+ size_int (2));
+ insert_before (g);
+ g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ if (eq_p)
+ m_gsi = gsi_after_labels (edge_bb);
+ else
+ m_gsi = gsi_for_stmt (stmt);
+ }
+ }
+ }
+
+ if (prec != (unsigned) TYPE_PRECISION (type)
+ && (CEIL ((unsigned) TYPE_PRECISION (type), limb_prec)
+ > CEIL (prec, limb_prec)))
+ {
+ if (sext)
+ {
+ ext = add_cast (signed_type_for (m_limb_type), ext);
+ tree lpm1 = build_int_cst (unsigned_type_node,
+ limb_prec - 1);
+ tree n = make_ssa_name (TREE_TYPE (ext));
+ g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
+ insert_before (g);
+ ext = add_cast (m_limb_type, n);
+ }
+ else
+ ext = build_zero_cst (m_limb_type);
+ kind = bitint_precision_kind (type);
+ unsigned start = CEIL (prec, limb_prec);
+ prec = TYPE_PRECISION (type);
+ idx = idx_first = idx_next = NULL_TREE;
+ if (prec <= (start + 2) * limb_prec)
+ kind = bitint_prec_large;
+ if (kind == bitint_prec_large)
+ cnt = CEIL (prec, limb_prec) - start;
+ else
+ {
+ rem = prec % limb_prec;
+ end = (prec - rem) / limb_prec;
+ cnt = 1 + (rem != 0);
+ idx = create_loop (size_int (start), &idx_next);
+ }
+ for (unsigned i = 0; i < cnt; i++)
+ {
+ if (kind == bitint_prec_large)
+ idx = size_int (start + i);
+ else if (i == 1)
+ idx = size_int (end);
+ rhs1 = ext;
+ tree l = limb_access (lhs_type, lhs, idx, true);
+ if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
+ rhs1 = add_cast (TREE_TYPE (l), rhs1);
+ g = gimple_build_assign (l, rhs1);
+ insert_before (g);
+ if (kind == bitint_prec_huge && i == 0)
+ {
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
+ size_one_node);
+ insert_before (g);
+ g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ m_gsi = gsi_for_stmt (stmt);
+ }
+ }
+ }
+
+ if (gimple_store_p (stmt))
+ {
+ unlink_stmt_vdef (stmt);
+ release_ssa_name (gimple_vdef (stmt));
+ gsi_remove (&m_gsi, true);
+ }
+ if (eq_p)
+ {
+ lhs = make_ssa_name (boolean_type_node);
+ basic_block bb = gimple_bb (stmt);
+ gphi *phi = create_phi_node (lhs, bb);
+ edge e = find_edge (gsi_bb (m_gsi), bb);
+ unsigned int n = EDGE_COUNT (bb->preds);
+ for (unsigned int i = 0; i < n; i++)
+ {
+ edge e2 = EDGE_PRED (bb, i);
+ add_phi_arg (phi, e == e2 ? boolean_true_node : boolean_false_node,
+ e2, UNKNOWN_LOCATION);
+ }
+ cmp_code = cmp_code == EQ_EXPR ? NE_EXPR : EQ_EXPR;
+ return lhs;
+ }
+ else
+ return NULL_TREE;
+}
+
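As an editor's sketch of the shape lower_mergeable_stmt gives to a huge _BitInt bitwise operation (hypothetical name, 64-bit limbs assumed; `rem`/`end` mirror the variables in the function above): a loop handling two limbs per iteration, followed by straight line code for the remaining limbs.

```c
#define LIMB_PREC 64

/* Editor's sketch of 'r = a & b' on a huge _BitInt of PREC bits.  */
void
bitand_huge (unsigned long long *r, const unsigned long long *a,
	     const unsigned long long *b, unsigned prec)
{
  unsigned rem = prec % (2 * LIMB_PREC);
  unsigned end = (prec - rem) / LIMB_PREC;	/* limbs covered by the loop */
  unsigned nlimbs = (prec + LIMB_PREC - 1) / LIMB_PREC;
  for (unsigned idx = 0; idx < end; idx += 2)	/* two limbs at once */
    {
      r[idx] = a[idx] & b[idx];
      r[idx + 1] = a[idx + 1] & b[idx + 1];
    }
  for (unsigned idx = end; idx < nlimbs; idx++)	/* straight line tail */
    r[idx] = a[idx] & b[idx];
}
```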
+/* Handle a large/huge _BitInt comparison statement STMT other than
+ EQ_EXPR/NE_EXPR. The meaning of CMP_CODE, CMP_OP1 and CMP_OP2 is as
+ in lower_mergeable_stmt. The {GT,GE,LT,LE}_EXPR comparisons are
+ lowered by iterating from the most significant limb downwards to
+ the least significant one, for large _BitInt in straight line code,
+ otherwise with the most significant limb handled in
+ straight line code followed by a loop handling one limb at a time.
+ Comparisons of unsigned huge _BitInt with precisions which are
+ multiples of limb precision can use just the loop and don't need to
+ handle the most significant limb before the loop. The loop or
+ straight line code jumps to the final basic block if a particular
+ pair of limbs is not equal. */
+
+tree
+bitint_large_huge::lower_comparison_stmt (gimple *stmt, tree_code &cmp_code,
+ tree cmp_op1, tree cmp_op2)
+{
+ tree type = TREE_TYPE (cmp_op1);
+ gcc_assert (TREE_CODE (type) == BITINT_TYPE);
+ bitint_prec_kind kind = bitint_precision_kind (type);
+ gcc_assert (kind >= bitint_prec_large);
+ gimple *g;
+ if (!TYPE_UNSIGNED (type)
+ && integer_zerop (cmp_op2)
+ && (cmp_code == GE_EXPR || cmp_code == LT_EXPR))
+ {
+ unsigned end = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec) - 1;
+ tree idx = size_int (end);
+ m_data_cnt = 0;
+ tree rhs1 = handle_operand (cmp_op1, idx);
+ if (TYPE_UNSIGNED (TREE_TYPE (rhs1)))
+ {
+ tree stype = signed_type_for (TREE_TYPE (rhs1));
+ rhs1 = add_cast (stype, rhs1);
+ }
+ tree lhs = make_ssa_name (boolean_type_node);
+ g = gimple_build_assign (lhs, cmp_code, rhs1,
+ build_zero_cst (TREE_TYPE (rhs1)));
+ insert_before (g);
+ cmp_code = NE_EXPR;
+ return lhs;
+ }
+
+ unsigned cnt, rem = 0, end = 0;
+ tree idx = NULL_TREE, idx_next = NULL_TREE;
+ if (kind == bitint_prec_large)
+ cnt = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec);
+ else
+ {
+ rem = ((unsigned) TYPE_PRECISION (type) % limb_prec);
+ if (rem == 0 && !TYPE_UNSIGNED (type))
+ rem = limb_prec;
+ end = ((unsigned) TYPE_PRECISION (type) - rem) / limb_prec;
+ cnt = 1 + (rem != 0);
+ }
+
+ basic_block edge_bb = NULL;
+ gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+ gsi_prev (&gsi);
+ edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
+ edge_bb = e->src;
+ m_gsi = gsi_last_bb (edge_bb);
+ if (!gsi_end_p (m_gsi))
+ gsi_next (&m_gsi);
+
+ edge *edges = XALLOCAVEC (edge, cnt * 2);
+ for (unsigned i = 0; i < cnt; i++)
+ {
+ m_data_cnt = 0;
+ if (kind == bitint_prec_large)
+ idx = size_int (cnt - i - 1);
+ else if (i == cnt - 1)
+ idx = create_loop (size_int (end - 1), &idx_next);
+ else
+ idx = size_int (end);
+ tree rhs1 = handle_operand (cmp_op1, idx);
+ tree rhs2 = handle_operand (cmp_op2, idx);
+ if (i == 0
+ && !TYPE_UNSIGNED (type)
+ && TYPE_UNSIGNED (TREE_TYPE (rhs1)))
+ {
+ tree stype = signed_type_for (TREE_TYPE (rhs1));
+ rhs1 = add_cast (stype, rhs1);
+ rhs2 = add_cast (stype, rhs2);
+ }
+ g = gimple_build_cond (GT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ e1->flags = EDGE_FALSE_VALUE;
+ edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
+ e1->probability = profile_probability::likely ();
+ e2->probability = e1->probability.invert ();
+ if (i == 0)
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ edges[2 * i] = e2;
+ g = gimple_build_cond (LT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ e1 = split_block (gsi_bb (m_gsi), g);
+ e1->flags = EDGE_FALSE_VALUE;
+ e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
+ e1->probability = profile_probability::unlikely ();
+ e2->probability = e1->probability.invert ();
+ m_gsi = gsi_after_labels (e1->dest);
+ edges[2 * i + 1] = e2;
+ m_first = false;
+ if (kind == bitint_prec_huge && i == cnt - 1)
+ {
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
+ insert_before (g);
+ g = gimple_build_cond (NE_EXPR, idx, size_zero_node,
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge true_edge, false_edge;
+ extract_true_false_edges_from_block (gsi_bb (m_gsi),
+ &true_edge, &false_edge);
+ m_gsi = gsi_after_labels (false_edge->dest);
+ }
+ }
+
+ tree lhs = make_ssa_name (boolean_type_node);
+ basic_block bb = gimple_bb (stmt);
+ gphi *phi = create_phi_node (lhs, bb);
+ for (unsigned int i = 0; i < cnt * 2; i++)
+ {
+ tree val = ((cmp_code == GT_EXPR || cmp_code == GE_EXPR)
+ ^ (i & 1)) ? boolean_true_node : boolean_false_node;
+ add_phi_arg (phi, val, edges[i], UNKNOWN_LOCATION);
+ }
+ add_phi_arg (phi, (cmp_code == GE_EXPR || cmp_code == LE_EXPR)
+ ? boolean_true_node : boolean_false_node,
+ find_edge (gsi_bb (m_gsi), bb), UNKNOWN_LOCATION);
+ cmp_code = NE_EXPR;
+ return lhs;
+}
+
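The limb-wise scheme above can be sketched in C (editor's illustration only, hypothetical name, 64-bit limbs assumed; conversion of the top limb to a signed type relies on two's complement wraparound, as on the targets the pass supports): limbs are compared from the most significant down, with the top limb pair compared signed for signed types, and the first unequal pair decides.

```c
/* Editor's sketch of a < comparison on NLIMBS-limb _BitInts whose
   precision is a multiple of the limb precision.  */
int
bitint_lt (const unsigned long long *a, const unsigned long long *b,
	   unsigned nlimbs, int signed_p)
{
  unsigned i = nlimbs - 1;
  if (signed_p)
    {
      /* The most significant limb pair is compared signed.  */
      long long x = (long long) a[i], y = (long long) b[i];
      if (x != y)
	return x < y;
    }
  else if (a[i] != b[i])
    return a[i] < b[i];
  /* Remaining limbs are compared unsigned, most significant first;
     the first unequal pair decides.  */
  while (i-- > 0)
    if (a[i] != b[i])
      return a[i] < b[i];
  return 0;	/* All limbs equal, so LT is false.  */
}
```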
+/* Lower large/huge _BitInt left and right shifts, except for left
+ shifts by a constant smaller than limb_prec. */
+
+void
+bitint_large_huge::lower_shift_stmt (tree obj, gimple *stmt)
+{
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ tree lhs = gimple_assign_lhs (stmt);
+ tree_code rhs_code = gimple_assign_rhs_code (stmt);
+ tree type = TREE_TYPE (rhs1);
+ gimple *final_stmt = gsi_stmt (m_gsi);
+ gcc_assert (TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) >= bitint_prec_large);
+ int prec = TYPE_PRECISION (type);
+ tree n = gimple_assign_rhs2 (stmt), n1, n2, n3, n4;
+ gimple *g;
+ if (obj == NULL_TREE)
+ {
+ int part = var_to_partition (m_map, lhs);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ obj = m_vars[part];
+ }
+ /* Preparation code common for both left and right shifts.
+ unsigned n1 = n % limb_prec;
+ size_t n2 = n / limb_prec;
+ size_t n3 = n1 != 0;
+ unsigned n4 = (limb_prec - n1) % limb_prec;
+ (for power of 2 limb_prec n4 can be -n1 & (limb_prec - 1)). */
+ if (TREE_CODE (n) == INTEGER_CST)
+ {
+ tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
+ n1 = int_const_binop (TRUNC_MOD_EXPR, n, lp);
+ n2 = fold_convert (sizetype, int_const_binop (TRUNC_DIV_EXPR, n, lp));
+ n3 = size_int (!integer_zerop (n1));
+ n4 = int_const_binop (TRUNC_MOD_EXPR,
+ int_const_binop (MINUS_EXPR, lp, n1), lp);
+ }
+ else
+ {
+ n1 = make_ssa_name (TREE_TYPE (n));
+ n2 = make_ssa_name (sizetype);
+ n3 = make_ssa_name (sizetype);
+ n4 = make_ssa_name (TREE_TYPE (n));
+ if (pow2p_hwi (limb_prec))
+ {
+ tree lpm1 = build_int_cst (TREE_TYPE (n), limb_prec - 1);
+ g = gimple_build_assign (n1, BIT_AND_EXPR, n, lpm1);
+ insert_before (g);
+ g = gimple_build_assign (useless_type_conversion_p (sizetype,
+ TREE_TYPE (n))
+ ? n2 : make_ssa_name (TREE_TYPE (n)),
+ RSHIFT_EXPR, n,
+ build_int_cst (TREE_TYPE (n),
+ exact_log2 (limb_prec)));
+ insert_before (g);
+ if (gimple_assign_lhs (g) != n2)
+ {
+ g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
+ insert_before (g);
+ }
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
+ NEGATE_EXPR, n1);
+ insert_before (g);
+ g = gimple_build_assign (n4, BIT_AND_EXPR, gimple_assign_lhs (g),
+ lpm1);
+ insert_before (g);
+ }
+ else
+ {
+ tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
+ g = gimple_build_assign (n1, TRUNC_MOD_EXPR, n, lp);
+ insert_before (g);
+ g = gimple_build_assign (useless_type_conversion_p (sizetype,
+ TREE_TYPE (n))
+ ? n2 : make_ssa_name (TREE_TYPE (n)),
+ TRUNC_DIV_EXPR, n, lp);
+ insert_before (g);
+ if (gimple_assign_lhs (g) != n2)
+ {
+ g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
+ insert_before (g);
+ }
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
+ MINUS_EXPR, lp, n1);
+ insert_before (g);
+ g = gimple_build_assign (n4, TRUNC_MOD_EXPR, gimple_assign_lhs (g),
+ lp);
+ insert_before (g);
+ }
+ g = gimple_build_assign (make_ssa_name (boolean_type_node), NE_EXPR, n1,
+ build_zero_cst (TREE_TYPE (n)));
+ insert_before (g);
+ g = gimple_build_assign (n3, NOP_EXPR, gimple_assign_lhs (g));
+ insert_before (g);
+ }
+ tree p = build_int_cst (sizetype,
+ prec / limb_prec - (prec % limb_prec == 0));
+ if (rhs_code == RSHIFT_EXPR)
+ {
+ /* Lower
+ dst = src >> n;
+ as
+ unsigned n1 = n % limb_prec;
+ size_t n2 = n / limb_prec;
+ size_t n3 = n1 != 0;
+ unsigned n4 = (limb_prec - n1) % limb_prec;
+ size_t idx;
+ size_t p = prec / limb_prec - (prec % limb_prec == 0);
+ int signed_p = (typeof (src) -1) < 0;
+ for (idx = n2; idx < ((!signed_p && (prec % limb_prec == 0))
+ ? p : p - n3); ++idx)
+ dst[idx - n2] = (src[idx] >> n1) | (src[idx + n3] << n4);
+ limb_type ext;
+ if (prec % limb_prec == 0)
+ ext = src[p];
+ else if (signed_p)
+ ext = ((signed limb_type) (src[p] << (limb_prec
+ - (prec % limb_prec))))
+ >> (limb_prec - (prec % limb_prec));
+ else
+ ext = src[p] & (((limb_type) 1 << (prec % limb_prec)) - 1);
+ if (!signed_p && (prec % limb_prec == 0))
+ ;
+ else if (idx < p)
+ {
+ dst[idx - n2] = (src[idx] >> n1) | (ext << n4);
+ ++idx;
+ }
+ idx -= n2;
+ if (signed_p)
+ {
+ dst[idx] = ((signed limb_type) ext) >> n1;
+ ext = ((signed limb_type) ext) >> (limb_prec - 1);
+ }
+ else
+ {
+ dst[idx] = ext >> n1;
+ ext = 0;
+ }
+ for (++idx; idx <= p; ++idx)
+ dst[idx] = ext; */
+ tree pmn3;
+ if (TYPE_UNSIGNED (type) && prec % limb_prec == 0)
+ pmn3 = p;
+ else if (TREE_CODE (n3) == INTEGER_CST)
+ pmn3 = int_const_binop (MINUS_EXPR, p, n3);
+ else
+ {
+ pmn3 = make_ssa_name (sizetype);
+ g = gimple_build_assign (pmn3, MINUS_EXPR, p, n3);
+ insert_before (g);
+ }
+ g = gimple_build_cond (LT_EXPR, n2, pmn3, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ tree idx_next;
+ tree idx = create_loop (n2, &idx_next);
+ tree idxmn2 = make_ssa_name (sizetype);
+ tree idxpn3 = make_ssa_name (sizetype);
+ g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
+ insert_before (g);
+ g = gimple_build_assign (idxpn3, PLUS_EXPR, idx, n3);
+ insert_before (g);
+ m_data_cnt = 0;
+ tree t1 = handle_operand (rhs1, idx);
+ m_first = false;
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ RSHIFT_EXPR, t1, n1);
+ insert_before (g);
+ t1 = gimple_assign_lhs (g);
+ if (!integer_zerop (n3))
+ {
+ m_data_cnt = 0;
+ tree t2 = handle_operand (rhs1, idxpn3);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ LSHIFT_EXPR, t2, n4);
+ insert_before (g);
+ t2 = gimple_assign_lhs (g);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ BIT_IOR_EXPR, t1, t2);
+ insert_before (g);
+ t1 = gimple_assign_lhs (g);
+ }
+ tree l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
+ g = gimple_build_assign (l, t1);
+ insert_before (g);
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
+ insert_before (g);
+ g = gimple_build_cond (LT_EXPR, idx_next, pmn3, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ idx = make_ssa_name (sizetype);
+ m_gsi = gsi_for_stmt (final_stmt);
+ gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
+ e1 = find_edge (e1->src, gsi_bb (m_gsi));
+ e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
+ add_phi_arg (phi, n2, e1, UNKNOWN_LOCATION);
+ add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
+ m_data_cnt = 0;
+ tree ms = handle_operand (rhs1, p);
+ tree ext = ms;
+ if (!types_compatible_p (TREE_TYPE (ms), m_limb_type))
+ ext = add_cast (m_limb_type, ms);
+ if (!(TYPE_UNSIGNED (type) && prec % limb_prec == 0)
+ && !integer_zerop (n3))
+ {
+ g = gimple_build_cond (LT_EXPR, idx, p, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ e1 = split_block (gsi_bb (m_gsi), g);
+ e2 = split_block (e1->dest, (gimple *) NULL);
+ e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ m_data_cnt = 0;
+ t1 = handle_operand (rhs1, idx);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ RSHIFT_EXPR, t1, n1);
+ insert_before (g);
+ t1 = gimple_assign_lhs (g);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ LSHIFT_EXPR, ext, n4);
+ insert_before (g);
+ tree t2 = gimple_assign_lhs (g);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ BIT_IOR_EXPR, t1, t2);
+ insert_before (g);
+ t1 = gimple_assign_lhs (g);
+ idxmn2 = make_ssa_name (sizetype);
+ g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
+ insert_before (g);
+ l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
+ g = gimple_build_assign (l, t1);
+ insert_before (g);
+ idx_next = make_ssa_name (sizetype);
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
+ insert_before (g);
+ m_gsi = gsi_for_stmt (final_stmt);
+ tree nidx = make_ssa_name (sizetype);
+ phi = create_phi_node (nidx, gsi_bb (m_gsi));
+ e1 = find_edge (e1->src, gsi_bb (m_gsi));
+ e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
+ add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
+ add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
+ idx = nidx;
+ }
+ g = gimple_build_assign (make_ssa_name (sizetype), MINUS_EXPR, idx, n2);
+ insert_before (g);
+ idx = gimple_assign_lhs (g);
+ tree sext = ext;
+ if (!TYPE_UNSIGNED (type))
+ sext = add_cast (signed_type_for (m_limb_type), ext);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
+ RSHIFT_EXPR, sext, n1);
+ insert_before (g);
+ t1 = gimple_assign_lhs (g);
+ if (!TYPE_UNSIGNED (type))
+ {
+ t1 = add_cast (m_limb_type, t1);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
+ RSHIFT_EXPR, sext,
+ build_int_cst (TREE_TYPE (n),
+ limb_prec - 1));
+ insert_before (g);
+ ext = add_cast (m_limb_type, gimple_assign_lhs (g));
+ }
+ else
+ ext = build_zero_cst (m_limb_type);
+ l = limb_access (TREE_TYPE (lhs), obj, idx, true);
+ g = gimple_build_assign (l, t1);
+ insert_before (g);
+ g = gimple_build_assign (make_ssa_name (sizetype), PLUS_EXPR, idx,
+ size_one_node);
+ insert_before (g);
+ idx = gimple_assign_lhs (g);
+ g = gimple_build_cond (LE_EXPR, idx, p, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ e1 = split_block (gsi_bb (m_gsi), g);
+ e2 = split_block (e1->dest, (gimple *) NULL);
+ e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ idx = create_loop (idx, &idx_next);
+ l = limb_access (TREE_TYPE (lhs), obj, idx, true);
+ g = gimple_build_assign (l, ext);
+ insert_before (g);
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
+ insert_before (g);
+ g = gimple_build_cond (LE_EXPR, idx_next, p, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ }
+ else
+ {
+ /* Lower
+ dst = src << n;
+ as
+ unsigned n1 = n % limb_prec;
+ size_t n2 = n / limb_prec;
+ size_t n3 = n1 != 0;
+ unsigned n4 = (limb_prec - n1) % limb_prec;
+ size_t idx;
+ size_t p = prec / limb_prec - (prec % limb_prec == 0);
+ for (idx = p; (ssize_t) idx >= (ssize_t) (n2 + n3); --idx)
+ dst[idx] = (src[idx - n2] << n1) | (src[idx - n2 - n3] >> n4);
+ if (n1)
+ {
+ dst[idx] = src[idx - n2] << n1;
+ --idx;
+ }
+ for (; (ssize_t) idx >= 0; --idx)
+ dst[idx] = 0; */
+ tree n2pn3;
+ if (TREE_CODE (n2) == INTEGER_CST && TREE_CODE (n3) == INTEGER_CST)
+ n2pn3 = int_const_binop (PLUS_EXPR, n2, n3);
+ else
+ {
+ n2pn3 = make_ssa_name (sizetype);
+ g = gimple_build_assign (n2pn3, PLUS_EXPR, n2, n3);
+ insert_before (g);
+ }
+ /* For LSHIFT_EXPR, we can use handle_operand with non-INTEGER_CST
+ idx even to access the most significant partial limb. */
+ m_var_msb = true;
+ if (integer_zerop (n3))
+ /* For n3 == 0, p >= n2 + n3 is always true for all valid
+ shift counts. Emit an if (true) condition that can be
+ optimized away later. */
+ g = gimple_build_cond (NE_EXPR, boolean_true_node, boolean_false_node,
+ NULL_TREE, NULL_TREE);
+ else
+ g = gimple_build_cond (LE_EXPR, n2pn3, p, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ tree idx_next;
+ tree idx = create_loop (p, &idx_next);
+ tree idxmn2 = make_ssa_name (sizetype);
+ tree idxmn2mn3 = make_ssa_name (sizetype);
+ g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
+ insert_before (g);
+ g = gimple_build_assign (idxmn2mn3, MINUS_EXPR, idxmn2, n3);
+ insert_before (g);
+ m_data_cnt = 0;
+ tree t1 = handle_operand (rhs1, idxmn2);
+ m_first = false;
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ LSHIFT_EXPR, t1, n1);
+ insert_before (g);
+ t1 = gimple_assign_lhs (g);
+ if (!integer_zerop (n3))
+ {
+ m_data_cnt = 0;
+ tree t2 = handle_operand (rhs1, idxmn2mn3);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ RSHIFT_EXPR, t2, n4);
+ insert_before (g);
+ t2 = gimple_assign_lhs (g);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ BIT_IOR_EXPR, t1, t2);
+ insert_before (g);
+ t1 = gimple_assign_lhs (g);
+ }
+ tree l = limb_access (TREE_TYPE (lhs), obj, idx, true);
+ g = gimple_build_assign (l, t1);
+ insert_before (g);
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
+ insert_before (g);
+ tree sn2pn3 = add_cast (ssizetype, n2pn3);
+ g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next), sn2pn3,
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ idx = make_ssa_name (sizetype);
+ m_gsi = gsi_for_stmt (final_stmt);
+ gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
+ e1 = find_edge (e1->src, gsi_bb (m_gsi));
+ e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
+ add_phi_arg (phi, p, e1, UNKNOWN_LOCATION);
+ add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
+ m_data_cnt = 0;
+ if (!integer_zerop (n3))
+ {
+ g = gimple_build_cond (NE_EXPR, n3, size_zero_node,
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ e1 = split_block (gsi_bb (m_gsi), g);
+ e2 = split_block (e1->dest, (gimple *) NULL);
+ e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ idxmn2 = make_ssa_name (sizetype);
+ g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
+ insert_before (g);
+ m_data_cnt = 0;
+ t1 = handle_operand (rhs1, idxmn2);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ LSHIFT_EXPR, t1, n1);
+ insert_before (g);
+ t1 = gimple_assign_lhs (g);
+ l = limb_access (TREE_TYPE (lhs), obj, idx, true);
+ g = gimple_build_assign (l, t1);
+ insert_before (g);
+ idx_next = make_ssa_name (sizetype);
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
+ insert_before (g);
+ m_gsi = gsi_for_stmt (final_stmt);
+ tree nidx = make_ssa_name (sizetype);
+ phi = create_phi_node (nidx, gsi_bb (m_gsi));
+ e1 = find_edge (e1->src, gsi_bb (m_gsi));
+ e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
+ add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
+ add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
+ idx = nidx;
+ }
+ g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx),
+ ssize_int (0), NULL_TREE, NULL_TREE);
+ insert_before (g);
+ e1 = split_block (gsi_bb (m_gsi), g);
+ e2 = split_block (e1->dest, (gimple *) NULL);
+ e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ idx = create_loop (idx, &idx_next);
+ l = limb_access (TREE_TYPE (lhs), obj, idx, true);
+ g = gimple_build_assign (l, build_zero_cst (m_limb_type));
+ insert_before (g);
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
+ insert_before (g);
+ g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next),
+ ssize_int (0), NULL_TREE, NULL_TREE);
+ insert_before (g);
+ }
+}
+
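[Editorial illustration, not part of the patch: the limb-wise left-shift schedule that the lowering comment above sketches as pseudocode (`n1` intra-limb shift, `n2` whole-limb shift, `n3`/`n4` picking up bits spilling from the next lower source limb) can be checked against a plain C model. The names `bitint_lshift`, `limb_t` and `LIMB_PREC` are hypothetical; limb order is little-endian as the patch assumes.]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical limb type: 64-bit limbs, limb 0 least significant.  */
typedef uint64_t limb_t;
#define LIMB_PREC 64

/* dst = src << n over LIMBS limbs, following the pseudocode in the
   lowering comment.  */
static void
bitint_lshift (limb_t *dst, const limb_t *src, unsigned n, size_t limbs)
{
  unsigned n1 = n % LIMB_PREC;          /* shift within a limb */
  size_t n2 = n / LIMB_PREC;            /* whole-limb shift */
  size_t n3 = n1 != 0;                  /* extra source limb read?  */
  unsigned n4 = (LIMB_PREC - n1) % LIMB_PREC;
  ptrdiff_t idx;
  for (idx = (ptrdiff_t) limbs - 1; idx >= (ptrdiff_t) (n2 + n3); --idx)
    /* When n1 == 0, n3 == n4 == 0, so both reads hit the same limb and
       the IOR is a no-op, just as in the emitted GIMPLE.  */
    dst[idx] = (src[idx - n2] << n1) | (src[idx - n2 - n3] >> n4);
  if (n1)
    {
      /* Most significant partial step: no lower limb to merge in.  */
      dst[idx] = src[idx - n2] << n1;
      --idx;
    }
  for (; idx >= 0; --idx)
    dst[idx] = 0;
}
```

[The three phases above correspond to the three code regions the pass builds for huge `_BitInt`: the main loop, the optional `n1 != 0` straight-line limb, and the zero-fill loop.]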
+/* Lower large/huge _BitInt multiplication or division. */
+
+void
+bitint_large_huge::lower_muldiv_stmt (tree obj, gimple *stmt)
+{
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ tree rhs2 = gimple_assign_rhs2 (stmt);
+ tree lhs = gimple_assign_lhs (stmt);
+ tree_code rhs_code = gimple_assign_rhs_code (stmt);
+ tree type = TREE_TYPE (rhs1);
+ gcc_assert (TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) >= bitint_prec_large);
+ int prec = TYPE_PRECISION (type), prec1, prec2;
+ rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec1);
+ rhs2 = handle_operand_addr (rhs2, stmt, NULL, &prec2);
+ if (obj == NULL_TREE)
+ {
+ int part = var_to_partition (m_map, lhs);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ obj = m_vars[part];
+ lhs = build_fold_addr_expr (obj);
+ }
+ else
+ {
+ lhs = build_fold_addr_expr (obj);
+ lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
+ NULL_TREE, true, GSI_SAME_STMT);
+ }
+ tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
+ gimple *g;
+ switch (rhs_code)
+ {
+ case MULT_EXPR:
+ g = gimple_build_call_internal (IFN_MULBITINT, 6,
+ lhs, build_int_cst (sitype, prec),
+ rhs1, build_int_cst (sitype, prec1),
+ rhs2, build_int_cst (sitype, prec2));
+ insert_before (g);
+ break;
+ case TRUNC_DIV_EXPR:
+ g = gimple_build_call_internal (IFN_DIVMODBITINT, 8,
+ lhs, build_int_cst (sitype, prec),
+ null_pointer_node,
+ build_int_cst (sitype, 0),
+ rhs1, build_int_cst (sitype, prec1),
+ rhs2, build_int_cst (sitype, prec2));
+ if (!stmt_ends_bb_p (stmt))
+ gimple_call_set_nothrow (as_a <gcall *> (g), true);
+ insert_before (g);
+ break;
+ case TRUNC_MOD_EXPR:
+ g = gimple_build_call_internal (IFN_DIVMODBITINT, 8, null_pointer_node,
+ build_int_cst (sitype, 0),
+ lhs, build_int_cst (sitype, prec),
+ rhs1, build_int_cst (sitype, prec1),
+ rhs2, build_int_cst (sitype, prec2));
+ if (!stmt_ends_bb_p (stmt))
+ gimple_call_set_nothrow (as_a <gcall *> (g), true);
+ insert_before (g);
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ if (stmt_ends_bb_p (stmt))
+ {
+ maybe_duplicate_eh_stmt (g, stmt);
+ edge e1;
+ edge_iterator ei;
+ basic_block bb = gimple_bb (stmt);
+
+ FOR_EACH_EDGE (e1, ei, bb->succs)
+ if (e1->flags & EDGE_EH)
+ break;
+ if (e1)
+ {
+ edge e2 = split_block (gsi_bb (m_gsi), g);
+ m_gsi = gsi_after_labels (e2->dest);
+ make_edge (e2->src, e1->dest, EDGE_EH)->probability
+ = profile_probability::very_unlikely ();
+ }
+ }
+}
+
+/* Lower large/huge _BitInt conversion to/from floating point. */
+
+void
+bitint_large_huge::lower_float_conv_stmt (tree obj, gimple *stmt)
+{
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ tree lhs = gimple_assign_lhs (stmt);
+ tree_code rhs_code = gimple_assign_rhs_code (stmt);
+ if (DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (rhs1)))
+ || DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (lhs))))
+ {
+ sorry_at (gimple_location (stmt),
+ "unsupported conversion between %<_BitInt(%d)%> and %qT",
+ rhs_code == FIX_TRUNC_EXPR
+ ? TYPE_PRECISION (TREE_TYPE (lhs))
+ : TYPE_PRECISION (TREE_TYPE (rhs1)),
+ rhs_code == FIX_TRUNC_EXPR
+ ? TREE_TYPE (rhs1) : TREE_TYPE (lhs));
+ if (rhs_code == FLOAT_EXPR)
+ {
+ gimple *g
+ = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
+ gsi_replace (&m_gsi, g, true);
+ }
+ return;
+ }
+ tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
+ gimple *g;
+ if (rhs_code == FIX_TRUNC_EXPR)
+ {
+ int prec = TYPE_PRECISION (TREE_TYPE (lhs));
+ if (!TYPE_UNSIGNED (TREE_TYPE (lhs)))
+ prec = -prec;
+ if (obj == NULL_TREE)
+ {
+ int part = var_to_partition (m_map, lhs);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ obj = m_vars[part];
+ lhs = build_fold_addr_expr (obj);
+ }
+ else
+ {
+ lhs = build_fold_addr_expr (obj);
+ lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
+ NULL_TREE, true, GSI_SAME_STMT);
+ }
+ scalar_mode from_mode
+ = as_a <scalar_mode> (TYPE_MODE (TREE_TYPE (rhs1)));
+#ifdef HAVE_SFmode
+ /* IEEE single is a full superset of both the IEEE half and
+ bfloat formats; convert to float first and then to _BitInt
+ to avoid the need for another 2 library routines. */
+ if ((REAL_MODE_FORMAT (from_mode) == &arm_bfloat_half_format
+ || REAL_MODE_FORMAT (from_mode) == &ieee_half_format)
+ && REAL_MODE_FORMAT (SFmode) == &ieee_single_format)
+ {
+ tree type = lang_hooks.types.type_for_mode (SFmode, 0);
+ if (type)
+ rhs1 = add_cast (type, rhs1);
+ }
+#endif
+ g = gimple_build_call_internal (IFN_FLOATTOBITINT, 3,
+ lhs, build_int_cst (sitype, prec),
+ rhs1);
+ insert_before (g);
+ }
+ else
+ {
+ int prec;
+ rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec);
+ g = gimple_build_call_internal (IFN_BITINTTOFLOAT, 2,
+ rhs1, build_int_cst (sitype, prec));
+ gimple_call_set_lhs (g, lhs);
+ if (!stmt_ends_bb_p (stmt))
+ gimple_call_set_nothrow (as_a <gcall *> (g), true);
+ gsi_replace (&m_gsi, g, true);
+ }
+}
+
+/* Helper method for lower_addsub_overflow and lower_mul_overflow.
+ If CHECK_ZERO is true, the caller wants to check whether all bits in
+ [START, END) are zero; otherwise, whether those bits are either all
+ zero or all ones. L is the limb with index LIMB; START and END are
+ measured in bits. */
+
+tree
+bitint_large_huge::arith_overflow_extract_bits (unsigned int start,
+ unsigned int end, tree l,
+ unsigned int limb,
+ bool check_zero)
+{
+ unsigned startlimb = start / limb_prec;
+ unsigned endlimb = (end - 1) / limb_prec;
+ gimple *g;
+
+ if ((start % limb_prec) == 0 && (end % limb_prec) == 0)
+ return l;
+ if (startlimb == endlimb && limb == startlimb)
+ {
+ if (check_zero)
+ {
+ wide_int w = wi::shifted_mask (start % limb_prec,
+ end - start, false, limb_prec);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ BIT_AND_EXPR, l,
+ wide_int_to_tree (m_limb_type, w));
+ insert_before (g);
+ return gimple_assign_lhs (g);
+ }
+ unsigned int shift = start % limb_prec;
+ if ((end % limb_prec) != 0)
+ {
+ unsigned int lshift = (-end) % limb_prec;
+ shift += lshift;
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ LSHIFT_EXPR, l,
+ build_int_cst (unsigned_type_node,
+ lshift));
+ insert_before (g);
+ l = gimple_assign_lhs (g);
+ }
+ l = add_cast (signed_type_for (m_limb_type), l);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
+ RSHIFT_EXPR, l,
+ build_int_cst (unsigned_type_node, shift));
+ insert_before (g);
+ return add_cast (m_limb_type, gimple_assign_lhs (g));
+ }
+ else if (limb == startlimb)
+ {
+ if ((start % limb_prec) == 0)
+ return l;
+ if (!check_zero)
+ l = add_cast (signed_type_for (m_limb_type), l);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
+ RSHIFT_EXPR, l,
+ build_int_cst (unsigned_type_node,
+ start % limb_prec));
+ insert_before (g);
+ l = gimple_assign_lhs (g);
+ if (!check_zero)
+ l = add_cast (m_limb_type, l);
+ return l;
+ }
+ else if (limb == endlimb)
+ {
+ if ((end % limb_prec) == 0)
+ return l;
+ if (check_zero)
+ {
+ wide_int w = wi::mask (end % limb_prec, false, limb_prec);
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ BIT_AND_EXPR, l,
+ wide_int_to_tree (m_limb_type, w));
+ insert_before (g);
+ return gimple_assign_lhs (g);
+ }
+ unsigned int shift = (-end) % limb_prec;
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ LSHIFT_EXPR, l,
+ build_int_cst (unsigned_type_node, shift));
+ insert_before (g);
+ l = add_cast (signed_type_for (m_limb_type), gimple_assign_lhs (g));
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
+ RSHIFT_EXPR, l,
+ build_int_cst (unsigned_type_node, shift));
+ insert_before (g);
+ return add_cast (m_limb_type, gimple_assign_lhs (g));
+ }
+ return l;
+}
+
+/* Helper method for lower_addsub_overflow and lower_mul_overflow. Store
+ the result, including the overflow flag, into the right locations. */
+
+void
+bitint_large_huge::finish_arith_overflow (tree var, tree obj, tree type,
+ tree ovf, tree lhs, tree orig_obj,
+ gimple *stmt, tree_code code)
+{
+ gimple *g;
+
+ if (obj == NULL_TREE
+ && (TREE_CODE (type) != BITINT_TYPE
+ || bitint_precision_kind (type) < bitint_prec_large))
+ {
+ /* Add support for 3 or more limbs filled in from a normal
+ integral type if this assert fails. If no target chooses a limb
+ mode smaller than half of the largest supported normal integral
+ type, this will not be needed. */
+ gcc_assert (TYPE_PRECISION (type) <= 2 * limb_prec);
+ tree lhs_type = type;
+ if (TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) == bitint_prec_middle)
+ lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (type),
+ TYPE_UNSIGNED (type));
+ tree r1 = limb_access (NULL_TREE, var, size_int (0), true);
+ g = gimple_build_assign (make_ssa_name (m_limb_type), r1);
+ insert_before (g);
+ r1 = gimple_assign_lhs (g);
+ if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
+ r1 = add_cast (lhs_type, r1);
+ if (TYPE_PRECISION (lhs_type) > limb_prec)
+ {
+ tree r2 = limb_access (NULL_TREE, var, size_int (1), true);
+ g = gimple_build_assign (make_ssa_name (m_limb_type), r2);
+ insert_before (g);
+ r2 = gimple_assign_lhs (g);
+ r2 = add_cast (lhs_type, r2);
+ g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
+ build_int_cst (unsigned_type_node,
+ limb_prec));
+ insert_before (g);
+ g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
+ gimple_assign_lhs (g));
+ insert_before (g);
+ r1 = gimple_assign_lhs (g);
+ }
+ if (lhs_type != type)
+ r1 = add_cast (type, r1);
+ ovf = add_cast (lhs_type, ovf);
+ if (lhs_type != type)
+ ovf = add_cast (type, ovf);
+ g = gimple_build_assign (lhs, COMPLEX_EXPR, r1, ovf);
+ m_gsi = gsi_for_stmt (stmt);
+ gsi_replace (&m_gsi, g, true);
+ }
+ else
+ {
+ unsigned HOST_WIDE_INT nelts = 0;
+ tree atype = NULL_TREE;
+ if (obj)
+ {
+ nelts = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
+ if (orig_obj == NULL_TREE)
+ nelts >>= 1;
+ atype = build_array_type_nelts (m_limb_type, nelts);
+ }
+ if (var && obj)
+ {
+ tree v1, v2;
+ tree zero;
+ if (orig_obj == NULL_TREE)
+ {
+ zero = build_zero_cst (build_pointer_type (TREE_TYPE (obj)));
+ v1 = build2 (MEM_REF, atype,
+ build_fold_addr_expr (unshare_expr (obj)), zero);
+ }
+ else if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
+ v1 = build1 (VIEW_CONVERT_EXPR, atype, unshare_expr (obj));
+ else
+ v1 = unshare_expr (obj);
+ zero = build_zero_cst (build_pointer_type (TREE_TYPE (var)));
+ v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), zero);
+ g = gimple_build_assign (v1, v2);
+ insert_before (g);
+ }
+ if (orig_obj == NULL_TREE && obj)
+ {
+ ovf = add_cast (m_limb_type, ovf);
+ tree l = limb_access (NULL_TREE, obj, size_int (nelts), true);
+ g = gimple_build_assign (l, ovf);
+ insert_before (g);
+ if (nelts > 1)
+ {
+ atype = build_array_type_nelts (m_limb_type, nelts - 1);
+ tree off = build_int_cst (build_pointer_type (TREE_TYPE (obj)),
+ (nelts + 1) * m_limb_size);
+ tree v1 = build2 (MEM_REF, atype,
+ build_fold_addr_expr (unshare_expr (obj)),
+ off);
+ g = gimple_build_assign (v1, build_zero_cst (atype));
+ insert_before (g);
+ }
+ }
+ else if (TREE_CODE (TREE_TYPE (lhs)) == COMPLEX_TYPE)
+ {
+ imm_use_iterator ui;
+ use_operand_p use_p;
+ FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
+ {
+ g = USE_STMT (use_p);
+ if (!is_gimple_assign (g)
+ || gimple_assign_rhs_code (g) != IMAGPART_EXPR)
+ continue;
+ tree lhs2 = gimple_assign_lhs (g);
+ gimple *use_stmt;
+ single_imm_use (lhs2, &use_p, &use_stmt);
+ lhs2 = gimple_assign_lhs (use_stmt);
+ gimple_stmt_iterator gsi = gsi_for_stmt (use_stmt);
+ if (useless_type_conversion_p (TREE_TYPE (lhs2), TREE_TYPE (ovf)))
+ g = gimple_build_assign (lhs2, ovf);
+ else
+ g = gimple_build_assign (lhs2, NOP_EXPR, ovf);
+ gsi_replace (&gsi, g, true);
+ break;
+ }
+ }
+ else if (ovf != boolean_false_node)
+ {
+ g = gimple_build_cond (NE_EXPR, ovf, boolean_false_node,
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ e3->probability = profile_probability::very_likely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ tree zero = build_zero_cst (TREE_TYPE (lhs));
+ tree fn = ubsan_build_overflow_builtin (code, m_loc,
+ TREE_TYPE (lhs),
+ zero, zero, NULL);
+ force_gimple_operand_gsi (&m_gsi, fn, true, NULL_TREE,
+ true, GSI_SAME_STMT);
+ m_gsi = gsi_after_labels (e2->dest);
+ }
+ }
+ if (var)
+ {
+ tree clobber = build_clobber (TREE_TYPE (var), CLOBBER_EOL);
+ g = gimple_build_assign (var, clobber);
+ gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
+ }
+}
+
+/* Helper function for lower_addsub_overflow and lower_mul_overflow.
+ Given the precision of the result TYPE (PREC), the precision PREC0 of
+ argument 0, the precision PREC1 of argument 1 and the minimum
+ precision PREC2 for the result, compute *START, *END, *CHECK_ZERO and
+ return OVF. */
+
+static tree
+arith_overflow (tree_code code, tree type, int prec, int prec0, int prec1,
+ int prec2, unsigned *start, unsigned *end, bool *check_zero)
+{
+ *start = 0;
+ *end = 0;
+ *check_zero = true;
+ /* Ignore this special rule for subtraction: even if both
+ prec0 >= 0 and prec1 >= 0, their difference can be negative
+ in infinite precision. */
+ if (code != MINUS_EXPR && prec0 >= 0 && prec1 >= 0)
+ {
+ /* The result in [0, prec2) is unsigned; if prec > prec2,
+ all bits above it will be zero. */
+ if ((prec - !TYPE_UNSIGNED (type)) >= prec2)
+ return boolean_false_node;
+ else
+ {
+ /* ovf if any of bits in [start, end) is non-zero. */
+ *start = prec - !TYPE_UNSIGNED (type);
+ *end = prec2;
+ }
+ }
+ else if (TYPE_UNSIGNED (type))
+ {
+ /* If the result in [0, prec2) is signed and prec > prec2,
+ all bits above it will be sign bit copies. */
+ if (prec >= prec2)
+ {
+ /* ovf if bit prec - 1 is non-zero. */
+ *start = prec - 1;
+ *end = prec;
+ }
+ else
+ {
+ /* ovf if any of bits in [start, end) is non-zero. */
+ *start = prec;
+ *end = prec2;
+ }
+ }
+ else if (prec >= prec2)
+ return boolean_false_node;
+ else
+ {
+ /* ovf if [start, end) bits aren't all zeros or all ones. */
+ *start = prec - 1;
+ *end = prec2;
+ *check_zero = false;
+ }
+ return NULL_TREE;
+}
+
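[Editorial illustration, not part of the patch: the START/END/CHECK_ZERO classification performed by `arith_overflow` above can be mirrored in a standalone model. Here a plain `int` return stands in for the tree return value: 0 for `boolean_false_node` (overflow provably impossible) and 1 for `NULL_TREE` (bits in `[*start, *end)` must be tested at runtime). Names `arith_overflow_model` and `enum op` are hypothetical.]

```c
#include <stdbool.h>

enum op { OP_PLUS, OP_MINUS };

/* PREC is the result precision, PREC0/PREC1 the argument precisions
   (negative meaning a signed range, as produced by range_to_prec),
   PREC2 the minimum precision holding the infinite-precision result.
   Returns 0 when overflow is provably impossible, otherwise 1 and
   sets *START, *END and *CHECK_ZERO.  */
static int
arith_overflow_model (enum op code, bool uns, int prec, int prec0,
                      int prec1, int prec2, unsigned *start,
                      unsigned *end, bool *check_zero)
{
  *start = 0;
  *end = 0;
  *check_zero = true;
  if (code != OP_MINUS && prec0 >= 0 && prec1 >= 0)
    {
      /* Both operands non-negative: the result is unsigned in
         [0, prec2); overflow iff any bit in [prec - !uns, prec2)
         is non-zero.  */
      if (prec - !uns >= prec2)
        return 0;
      *start = prec - !uns;
      *end = prec2;
    }
  else if (uns)
    {
      /* The result may be negative but the type is unsigned: test the
         sign bit, or all bits above prec when prec < prec2.  */
      *start = prec >= prec2 ? prec - 1 : prec;
      *end = prec >= prec2 ? prec : prec2;
    }
  else if (prec >= prec2)
    return 0;
  else
    {
      /* Signed result: overflow iff bits in [prec - 1, prec2) are
         neither all zeros nor all ones.  */
      *start = prec - 1;
      *end = prec2;
      *check_zero = false;
    }
  return 1;
}
```

[For instance, an unsigned 256-bit add of two 128-bit unsigned operands needs no runtime check at all, while a signed 256-bit subtraction of full-width operands must test that bits [255, 257) are all equal.]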
+/* Lower a .{ADD,SUB}_OVERFLOW call with at least one large/huge _BitInt
+ argument or return type _Complex large/huge _BitInt. */
+
+void
+bitint_large_huge::lower_addsub_overflow (tree obj, gimple *stmt)
+{
+ tree arg0 = gimple_call_arg (stmt, 0);
+ tree arg1 = gimple_call_arg (stmt, 1);
+ tree lhs = gimple_call_lhs (stmt);
+ gimple *g;
+
+ if (!lhs)
+ {
+ gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+ gsi_remove (&gsi, true);
+ return;
+ }
+ gimple *final_stmt = gsi_stmt (m_gsi);
+ tree type = TREE_TYPE (lhs);
+ if (TREE_CODE (type) == COMPLEX_TYPE)
+ type = TREE_TYPE (type);
+ int prec = TYPE_PRECISION (type);
+ int prec0 = range_to_prec (arg0, stmt);
+ int prec1 = range_to_prec (arg1, stmt);
+ int prec2 = ((prec0 < 0) == (prec1 < 0)
+ ? MAX (prec0 < 0 ? -prec0 : prec0,
+ prec1 < 0 ? -prec1 : prec1) + 1
+ : MAX (prec0 < 0 ? -prec0 : prec0 + 1,
+ prec1 < 0 ? -prec1 : prec1 + 1) + 1);
+ int prec3 = MAX (prec0 < 0 ? -prec0 : prec0,
+ prec1 < 0 ? -prec1 : prec1);
+ prec3 = MAX (prec3, prec);
+ tree var = NULL_TREE;
+ tree orig_obj = obj;
+ if (obj == NULL_TREE
+ && TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) >= bitint_prec_large
+ && m_names
+ && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
+ {
+ int part = var_to_partition (m_map, lhs);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ obj = m_vars[part];
+ if (TREE_TYPE (lhs) == type)
+ orig_obj = obj;
+ }
+ if (TREE_CODE (type) != BITINT_TYPE
+ || bitint_precision_kind (type) < bitint_prec_large)
+ {
+ unsigned HOST_WIDE_INT nelts = CEIL (prec, limb_prec);
+ tree atype = build_array_type_nelts (m_limb_type, nelts);
+ var = create_tmp_var (atype);
+ }
+
+ enum tree_code code;
+ switch (gimple_call_internal_fn (stmt))
+ {
+ case IFN_ADD_OVERFLOW:
+ case IFN_UBSAN_CHECK_ADD:
+ code = PLUS_EXPR;
+ break;
+ case IFN_SUB_OVERFLOW:
+ case IFN_UBSAN_CHECK_SUB:
+ code = MINUS_EXPR;
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ unsigned start, end;
+ bool check_zero;
+ tree ovf = arith_overflow (code, type, prec, prec0, prec1, prec2,
+ &start, &end, &check_zero);
+
+ unsigned startlimb, endlimb;
+ if (ovf)
+ {
+ startlimb = ~0U;
+ endlimb = ~0U;
+ }
+ else
+ {
+ startlimb = start / limb_prec;
+ endlimb = (end - 1) / limb_prec;
+ }
+
+ int prec4 = ovf != NULL_TREE ? prec : prec3;
+ bitint_prec_kind kind = bitint_precision_kind (prec4);
+ unsigned cnt, rem = 0, fin = 0;
+ tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
+ bool last_ovf = (ovf == NULL_TREE
+ && CEIL (prec2, limb_prec) > CEIL (prec3, limb_prec));
+ if (kind != bitint_prec_huge)
+ cnt = CEIL (prec4, limb_prec) + last_ovf;
+ else
+ {
+ rem = (prec4 % (2 * limb_prec));
+ fin = (prec4 - rem) / limb_prec;
+ cnt = 2 + CEIL (rem, limb_prec) + last_ovf;
+ idx = idx_first = create_loop (size_zero_node, &idx_next);
+ }
+
+ if (kind == bitint_prec_huge)
+ m_upwards_2limb = fin;
+
+ tree type0 = TREE_TYPE (arg0);
+ tree type1 = TREE_TYPE (arg1);
+ if (TYPE_PRECISION (type0) < prec3)
+ {
+ type0 = build_bitint_type (prec3, TYPE_UNSIGNED (type0));
+ if (TREE_CODE (arg0) == INTEGER_CST)
+ arg0 = fold_convert (type0, arg0);
+ }
+ if (TYPE_PRECISION (type1) < prec3)
+ {
+ type1 = build_bitint_type (prec3, TYPE_UNSIGNED (type1));
+ if (TREE_CODE (arg1) == INTEGER_CST)
+ arg1 = fold_convert (type1, arg1);
+ }
+ unsigned int data_cnt = 0;
+ tree last_rhs1 = NULL_TREE, last_rhs2 = NULL_TREE;
+ tree cmp = build_zero_cst (m_limb_type);
+ unsigned prec_limbs = CEIL ((unsigned) prec, limb_prec);
+ tree ovf_out = NULL_TREE, cmp_out = NULL_TREE;
+ for (unsigned i = 0; i < cnt; i++)
+ {
+ m_data_cnt = 0;
+ tree rhs1, rhs2;
+ if (kind != bitint_prec_huge)
+ idx = size_int (i);
+ else if (i >= 2)
+ idx = size_int (fin + (i > 2));
+ if (!last_ovf || i < cnt - 1)
+ {
+ if (type0 != TREE_TYPE (arg0))
+ rhs1 = handle_cast (type0, arg0, idx);
+ else
+ rhs1 = handle_operand (arg0, idx);
+ if (type1 != TREE_TYPE (arg1))
+ rhs2 = handle_cast (type1, arg1, idx);
+ else
+ rhs2 = handle_operand (arg1, idx);
+ if (i == 0)
+ data_cnt = m_data_cnt;
+ if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs1)))
+ rhs1 = add_cast (m_limb_type, rhs1);
+ if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs2)))
+ rhs2 = add_cast (m_limb_type, rhs2);
+ last_rhs1 = rhs1;
+ last_rhs2 = rhs2;
+ }
+ else
+ {
+ m_data_cnt = data_cnt;
+ if (TYPE_UNSIGNED (type0))
+ rhs1 = build_zero_cst (m_limb_type);
+ else
+ {
+ rhs1 = add_cast (signed_type_for (m_limb_type), last_rhs1);
+ if (TREE_CODE (rhs1) == INTEGER_CST)
+ rhs1 = build_int_cst (m_limb_type,
+ tree_int_cst_sgn (rhs1) < 0 ? -1 : 0);
+ else
+ {
+ tree lpm1 = build_int_cst (unsigned_type_node,
+ limb_prec - 1);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
+ RSHIFT_EXPR, rhs1, lpm1);
+ insert_before (g);
+ rhs1 = add_cast (m_limb_type, gimple_assign_lhs (g));
+ }
+ }
+ if (TYPE_UNSIGNED (type1))
+ rhs2 = build_zero_cst (m_limb_type);
+ else
+ {
+ rhs2 = add_cast (signed_type_for (m_limb_type), last_rhs2);
+ if (TREE_CODE (rhs2) == INTEGER_CST)
+ rhs2 = build_int_cst (m_limb_type,
+ tree_int_cst_sgn (rhs2) < 0 ? -1 : 0);
+ else
+ {
+ tree lpm1 = build_int_cst (unsigned_type_node,
+ limb_prec - 1);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs2)),
+ RSHIFT_EXPR, rhs2, lpm1);
+ insert_before (g);
+ rhs2 = add_cast (m_limb_type, gimple_assign_lhs (g));
+ }
+ }
+ }
+ tree rhs = handle_plus_minus (code, rhs1, rhs2, idx);
+ if (ovf != boolean_false_node)
+ {
+ if (tree_fits_uhwi_p (idx))
+ {
+ unsigned limb = tree_to_uhwi (idx);
+ if (limb >= startlimb && limb <= endlimb)
+ {
+ tree l = arith_overflow_extract_bits (start, end, rhs,
+ limb, check_zero);
+ tree this_ovf = make_ssa_name (boolean_type_node);
+ if (ovf == NULL_TREE && !check_zero)
+ {
+ cmp = l;
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ PLUS_EXPR, l,
+ build_int_cst (m_limb_type, 1));
+ insert_before (g);
+ g = gimple_build_assign (this_ovf, GT_EXPR,
+ gimple_assign_lhs (g),
+ build_int_cst (m_limb_type, 1));
+ }
+ else
+ g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
+ insert_before (g);
+ if (ovf == NULL_TREE)
+ ovf = this_ovf;
+ else
+ {
+ tree b = make_ssa_name (boolean_type_node);
+ g = gimple_build_assign (b, BIT_IOR_EXPR, ovf, this_ovf);
+ insert_before (g);
+ ovf = b;
+ }
+ }
+ }
+ else if (startlimb < fin)
+ {
+ if (m_first && startlimb + 2 < fin)
+ {
+ tree data_out;
+ ovf = prepare_data_in_out (boolean_false_node, idx, &data_out);
+ ovf_out = m_data.pop ();
+ m_data.pop ();
+ if (!check_zero)
+ {
+ cmp = prepare_data_in_out (cmp, idx, &data_out);
+ cmp_out = m_data.pop ();
+ m_data.pop ();
+ }
+ }
+ if (i != 0 || startlimb != fin - 1)
+ {
+ tree_code cmp_code;
+ bool single_comparison
+ = (startlimb + 2 >= fin || (startlimb & 1) != (i & 1));
+ if (!single_comparison)
+ {
+ cmp_code = GE_EXPR;
+ if (!check_zero && (start % limb_prec) == 0)
+ single_comparison = true;
+ }
+ else if ((startlimb & 1) == (i & 1))
+ cmp_code = EQ_EXPR;
+ else
+ cmp_code = GT_EXPR;
+ g = gimple_build_cond (cmp_code, idx, size_int (startlimb),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ edge e4 = NULL;
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ if (!single_comparison)
+ {
+ m_gsi = gsi_after_labels (e1->dest);
+ g = gimple_build_cond (EQ_EXPR, idx,
+ size_int (startlimb), NULL_TREE,
+ NULL_TREE);
+ insert_before (g);
+ e2 = split_block (gsi_bb (m_gsi), g);
+ basic_block bb = create_empty_bb (e2->dest);
+ add_bb_to_loop (bb, e2->dest->loop_father);
+ e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
+ set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
+ e4->probability = profile_probability::unlikely ();
+ e2->flags = EDGE_FALSE_VALUE;
+ e2->probability = e4->probability.invert ();
+ e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
+ e2 = find_edge (e2->dest, e3->dest);
+ }
+ m_gsi = gsi_after_labels (e2->src);
+ unsigned tidx = startlimb + (cmp_code == GT_EXPR);
+ tree l = arith_overflow_extract_bits (start, end, rhs, tidx,
+ check_zero);
+ tree this_ovf = make_ssa_name (boolean_type_node);
+ if (cmp_code != GT_EXPR && !check_zero)
+ {
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ PLUS_EXPR, l,
+ build_int_cst (m_limb_type, 1));
+ insert_before (g);
+ g = gimple_build_assign (this_ovf, GT_EXPR,
+ gimple_assign_lhs (g),
+ build_int_cst (m_limb_type, 1));
+ }
+ else
+ g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
+ insert_before (g);
+ if (cmp_code == GT_EXPR)
+ {
+ tree t = make_ssa_name (boolean_type_node);
+ g = gimple_build_assign (t, BIT_IOR_EXPR, ovf, this_ovf);
+ insert_before (g);
+ this_ovf = t;
+ }
+ tree this_ovf2 = NULL_TREE;
+ if (!single_comparison)
+ {
+ m_gsi = gsi_after_labels (e4->src);
+ tree t = make_ssa_name (boolean_type_node);
+ g = gimple_build_assign (t, NE_EXPR, rhs, cmp);
+ insert_before (g);
+ this_ovf2 = make_ssa_name (boolean_type_node);
+ g = gimple_build_assign (this_ovf2, BIT_IOR_EXPR,
+ ovf, t);
+ insert_before (g);
+ }
+ m_gsi = gsi_after_labels (e2->dest);
+ tree t;
+ if (i == 1 && ovf_out)
+ t = ovf_out;
+ else
+ t = make_ssa_name (boolean_type_node);
+ gphi *phi = create_phi_node (t, e2->dest);
+ add_phi_arg (phi, this_ovf, e2, UNKNOWN_LOCATION);
+ add_phi_arg (phi, ovf ? ovf
+ : boolean_false_node, e3,
+ UNKNOWN_LOCATION);
+ if (e4)
+ add_phi_arg (phi, this_ovf2, e4, UNKNOWN_LOCATION);
+ ovf = t;
+ if (!check_zero && cmp_code != GT_EXPR)
+ {
+ t = cmp_out ? cmp_out : make_ssa_name (m_limb_type);
+ phi = create_phi_node (t, e2->dest);
+ add_phi_arg (phi, l, e2, UNKNOWN_LOCATION);
+ add_phi_arg (phi, cmp, e3, UNKNOWN_LOCATION);
+ if (e4)
+ add_phi_arg (phi, cmp, e4, UNKNOWN_LOCATION);
+ cmp = t;
+ }
+ }
+ }
+ }
+
+ if (var || obj)
+ {
+ if (tree_fits_uhwi_p (idx) && tree_to_uhwi (idx) >= prec_limbs)
+ ;
+ else if (!tree_fits_uhwi_p (idx)
+ && (unsigned) prec < (fin - (i == 0)) * limb_prec)
+ {
+ bool single_comparison
+ = (((unsigned) prec % limb_prec) == 0
+ || prec_limbs + 1 >= fin
+ || (prec_limbs & 1) == (i & 1));
+ g = gimple_build_cond (LE_EXPR, idx, size_int (prec_limbs - 1),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ edge e4 = NULL;
+ e3->probability = profile_probability::unlikely ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ if (!single_comparison)
+ {
+ m_gsi = gsi_after_labels (e1->dest);
+ g = gimple_build_cond (LT_EXPR, idx,
+ size_int (prec_limbs - 1),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ e2 = split_block (gsi_bb (m_gsi), g);
+ basic_block bb = create_empty_bb (e2->dest);
+ add_bb_to_loop (bb, e2->dest->loop_father);
+ e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
+ set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
+ e4->probability = profile_probability::unlikely ();
+ e2->flags = EDGE_FALSE_VALUE;
+ e2->probability = e4->probability.invert ();
+ e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
+ e2 = find_edge (e2->dest, e3->dest);
+ }
+ m_gsi = gsi_after_labels (e2->src);
+ tree l = limb_access (type, var ? var : obj, idx, true);
+ g = gimple_build_assign (l, rhs);
+ insert_before (g);
+ if (!single_comparison)
+ {
+ m_gsi = gsi_after_labels (e4->src);
+ l = limb_access (type, var ? var : obj,
+ size_int (prec_limbs - 1), true);
+ if (!useless_type_conversion_p (TREE_TYPE (l),
+ TREE_TYPE (rhs)))
+ rhs = add_cast (TREE_TYPE (l), rhs);
+ g = gimple_build_assign (l, rhs);
+ insert_before (g);
+ }
+ m_gsi = gsi_after_labels (e2->dest);
+ }
+ else
+ {
+ tree l = limb_access (type, var ? var : obj, idx, true);
+ if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs)))
+ rhs = add_cast (TREE_TYPE (l), rhs);
+ g = gimple_build_assign (l, rhs);
+ insert_before (g);
+ }
+ }
+ m_first = false;
+ if (kind == bitint_prec_huge && i <= 1)
+ {
+ if (i == 0)
+ {
+ idx = make_ssa_name (sizetype);
+ g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
+ size_one_node);
+ insert_before (g);
+ }
+ else
+ {
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
+ size_int (2));
+ insert_before (g);
+ g = gimple_build_cond (NE_EXPR, idx_next, size_int (fin),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ m_gsi = gsi_for_stmt (final_stmt);
+ }
+ }
+ }
+
+ finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, code);
+}
+
+/* Lower a .MUL_OVERFLOW call with at least one large/huge _BitInt
+ argument or return type _Complex large/huge _BitInt. */
+
+void
+bitint_large_huge::lower_mul_overflow (tree obj, gimple *stmt)
+{
+ tree arg0 = gimple_call_arg (stmt, 0);
+ tree arg1 = gimple_call_arg (stmt, 1);
+ tree lhs = gimple_call_lhs (stmt);
+ if (!lhs)
+ {
+ gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+ gsi_remove (&gsi, true);
+ return;
+ }
+ gimple *final_stmt = gsi_stmt (m_gsi);
+ tree type = TREE_TYPE (lhs);
+ if (TREE_CODE (type) == COMPLEX_TYPE)
+ type = TREE_TYPE (type);
+ int prec = TYPE_PRECISION (type), prec0, prec1;
+ arg0 = handle_operand_addr (arg0, stmt, NULL, &prec0);
+ arg1 = handle_operand_addr (arg1, stmt, NULL, &prec1);
+ int prec2 = ((prec0 < 0 ? -prec0 : prec0)
+ + (prec1 < 0 ? -prec1 : prec1)
+ + ((prec0 < 0) != (prec1 < 0)));
+ tree var = NULL_TREE;
+ tree orig_obj = obj;
+ bool force_var = false;
+ if (obj == NULL_TREE
+ && TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) >= bitint_prec_large
+ && m_names
+ && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
+ {
+ int part = var_to_partition (m_map, lhs);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ obj = m_vars[part];
+ if (TREE_TYPE (lhs) == type)
+ orig_obj = obj;
+ }
+ else if (obj != NULL_TREE && DECL_P (obj))
+ {
+ for (int i = 0; i < 2; ++i)
+ {
+ tree arg = i ? arg1 : arg0;
+ if (TREE_CODE (arg) == ADDR_EXPR)
+ arg = TREE_OPERAND (arg, 0);
+ if (get_base_address (arg) == obj)
+ {
+ force_var = true;
+ break;
+ }
+ }
+ }
+ if (obj == NULL_TREE
+ || force_var
+ || TREE_CODE (type) != BITINT_TYPE
+ || bitint_precision_kind (type) < bitint_prec_large
+ || prec2 > (CEIL (prec, limb_prec) * limb_prec * (orig_obj ? 1 : 2)))
+ {
+ unsigned HOST_WIDE_INT nelts = CEIL (MAX (prec, prec2), limb_prec);
+ tree atype = build_array_type_nelts (m_limb_type, nelts);
+ var = create_tmp_var (atype);
+ }
+ tree addr = build_fold_addr_expr (var ? var : obj);
+ addr = force_gimple_operand_gsi (&m_gsi, addr, true,
+ NULL_TREE, true, GSI_SAME_STMT);
+ tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
+ gimple *g
+ = gimple_build_call_internal (IFN_MULBITINT, 6,
+ addr, build_int_cst (sitype,
+ MAX (prec2, prec)),
+ arg0, build_int_cst (sitype, prec0),
+ arg1, build_int_cst (sitype, prec1));
+ insert_before (g);
+
+ unsigned start, end;
+ bool check_zero;
+ tree ovf = arith_overflow (MULT_EXPR, type, prec, prec0, prec1, prec2,
+ &start, &end, &check_zero);
+ if (ovf == NULL_TREE)
+ {
+ unsigned startlimb = start / limb_prec;
+ unsigned endlimb = (end - 1) / limb_prec;
+ unsigned cnt;
+ bool use_loop = false;
+ if (startlimb == endlimb)
+ cnt = 1;
+ else if (startlimb + 1 == endlimb)
+ cnt = 2;
+ else if ((end % limb_prec) == 0)
+ {
+ cnt = 2;
+ use_loop = true;
+ }
+ else
+ {
+ cnt = 3;
+ use_loop = startlimb + 2 < endlimb;
+ }
+ if (cnt == 1)
+ {
+ tree l = limb_access (NULL_TREE, var ? var : obj,
+ size_int (startlimb), true);
+ g = gimple_build_assign (make_ssa_name (m_limb_type), l);
+ insert_before (g);
+ l = arith_overflow_extract_bits (start, end, gimple_assign_lhs (g),
+ startlimb, check_zero);
+ ovf = make_ssa_name (boolean_type_node);
+ if (check_zero)
+ g = gimple_build_assign (ovf, NE_EXPR, l,
+ build_zero_cst (m_limb_type));
+ else
+ {
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ PLUS_EXPR, l,
+ build_int_cst (m_limb_type, 1));
+ insert_before (g);
+ g = gimple_build_assign (ovf, GT_EXPR, gimple_assign_lhs (g),
+ build_int_cst (m_limb_type, 1));
+ }
+ insert_before (g);
+ }
+ else
+ {
+ basic_block edge_bb = NULL;
+ gimple_stmt_iterator gsi = m_gsi;
+ gsi_prev (&gsi);
+ edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
+ edge_bb = e->src;
+ m_gsi = gsi_last_bb (edge_bb);
+ if (!gsi_end_p (m_gsi))
+ gsi_next (&m_gsi);
+
+ tree cmp = build_zero_cst (m_limb_type);
+ for (unsigned i = 0; i < cnt; i++)
+ {
+ tree idx, idx_next = NULL_TREE;
+ if (i == 0)
+ idx = size_int (startlimb);
+ else if (i == 2)
+ idx = size_int (endlimb);
+ else if (use_loop)
+ idx = create_loop (size_int (startlimb + 1), &idx_next);
+ else
+ idx = size_int (startlimb + 1);
+ tree l = limb_access (NULL_TREE, var ? var : obj, idx, true);
+ g = gimple_build_assign (make_ssa_name (m_limb_type), l);
+ insert_before (g);
+ l = gimple_assign_lhs (g);
+ if (i == 0 || i == 2)
+ l = arith_overflow_extract_bits (start, end, l,
+ tree_to_uhwi (idx),
+ check_zero);
+ if (i == 0 && !check_zero)
+ {
+ cmp = l;
+ g = gimple_build_assign (make_ssa_name (m_limb_type),
+ PLUS_EXPR, l,
+ build_int_cst (m_limb_type, 1));
+ insert_before (g);
+ g = gimple_build_cond (GT_EXPR, gimple_assign_lhs (g),
+ build_int_cst (m_limb_type, 1),
+ NULL_TREE, NULL_TREE);
+ }
+ else
+ g = gimple_build_cond (NE_EXPR, l, cmp, NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge e1 = split_block (gsi_bb (m_gsi), g);
+ e1->flags = EDGE_FALSE_VALUE;
+ edge e2 = make_edge (e1->src, gimple_bb (final_stmt),
+ EDGE_TRUE_VALUE);
+ e1->probability = profile_probability::likely ();
+ e2->probability = e1->probability.invert ();
+ if (i == 0)
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
+ m_gsi = gsi_after_labels (e1->dest);
+ if (i == 1 && use_loop)
+ {
+ g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
+ size_one_node);
+ insert_before (g);
+ g = gimple_build_cond (NE_EXPR, idx_next,
+ size_int (endlimb + (cnt == 2)),
+ NULL_TREE, NULL_TREE);
+ insert_before (g);
+ edge true_edge, false_edge;
+ extract_true_false_edges_from_block (gsi_bb (m_gsi),
+ &true_edge,
+ &false_edge);
+ m_gsi = gsi_after_labels (false_edge->dest);
+ }
+ }
+
+ ovf = make_ssa_name (boolean_type_node);
+ basic_block bb = gimple_bb (final_stmt);
+ gphi *phi = create_phi_node (ovf, bb);
+ edge e1 = find_edge (gsi_bb (m_gsi), bb);
+ edge_iterator ei;
+ FOR_EACH_EDGE (e, ei, bb->preds)
+ {
+ tree val = e == e1 ? boolean_false_node : boolean_true_node;
+ add_phi_arg (phi, val, e, UNKNOWN_LOCATION);
+ }
+ m_gsi = gsi_for_stmt (final_stmt);
+ }
+ }
+
+ finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, MULT_EXPR);
+}
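As a side note for reviewers, the overflow condition the code above materializes can be modeled outside of GIMPLE. This is a hand-written sketch, not the patch's code: the limb width and the exact start-bit choice are assumptions for illustration.

```python
LIMB = 64  # assumed limb width; the pass uses the target's limb mode

def mul_overflows(a, b, prec, unsigned_result, prec2):
    # IFN_MULBITINT computes the exact prec2-bit product; the emitted
    # straight line code or loop then inspects the bits above the
    # result precision.  check_zero (unsigned result): they must all
    # be zero.  Signed result: together with the sign bit they must
    # be all zeros or all ones (tested per limb as limb + 1 <= 1).
    prod = (a * b) & ((1 << prec2) - 1)   # two's complement, prec2 bits
    if unsigned_result:
        return (prod >> prec) != 0
    top = prod >> (prec - 1)              # sign bit upwards
    return top != 0 and top != (1 << (prec2 - prec + 1)) - 1
```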
+
+/* Lower REALPART_EXPR or IMAGPART_EXPR stmt extracting part of result from
+ .{ADD,SUB,MUL}_OVERFLOW call. */
+
+void
+bitint_large_huge::lower_cplxpart_stmt (tree obj, gimple *stmt)
+{
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ rhs1 = TREE_OPERAND (rhs1, 0);
+ if (obj == NULL_TREE)
+ {
+ int part = var_to_partition (m_map, gimple_assign_lhs (stmt));
+ gcc_assert (m_vars[part] != NULL_TREE);
+ obj = m_vars[part];
+ }
+ if (TREE_CODE (rhs1) == SSA_NAME
+ && (m_names == NULL
+ || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
+ {
+ lower_call (obj, SSA_NAME_DEF_STMT (rhs1));
+ return;
+ }
+ int part = var_to_partition (m_map, rhs1);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ tree var = m_vars[part];
+ unsigned HOST_WIDE_INT nelts
+ = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
+ tree atype = build_array_type_nelts (m_limb_type, nelts);
+ if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
+ obj = build1 (VIEW_CONVERT_EXPR, atype, obj);
+ tree off = build_int_cst (build_pointer_type (TREE_TYPE (var)),
+ gimple_assign_rhs_code (stmt) == REALPART_EXPR
+ ? 0 : nelts * m_limb_size);
+ tree v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), off);
+ gimple *g = gimple_build_assign (obj, v2);
+ insert_before (g);
+}
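The byte offsets used here follow from the storage layout of _Complex large/huge _BitInt: 2*nelts limbs with the real part first. A small model of that access (the limb size is assumed, not taken from any particular target):

```python
LIMB_BYTES = 8  # assumed 64-bit limbs

def cplx_part(buf, nelts, imag):
    # REALPART_EXPR copies nelts limbs from byte offset 0,
    # IMAGPART_EXPR from byte offset nelts * limb_size, matching the
    # MEM_REF offset built above.
    off = nelts * LIMB_BYTES if imag else 0
    return buf[off : off + nelts * LIMB_BYTES]
```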
+
+/* Lower COMPLEX_EXPR stmt. */
+
+void
+bitint_large_huge::lower_complexexpr_stmt (gimple *stmt)
+{
+ tree lhs = gimple_assign_lhs (stmt);
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ tree rhs2 = gimple_assign_rhs2 (stmt);
+ int part = var_to_partition (m_map, lhs);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ lhs = m_vars[part];
+ unsigned HOST_WIDE_INT nelts
+ = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (rhs1))) / limb_prec;
+ tree atype = build_array_type_nelts (m_limb_type, nelts);
+ tree zero = build_zero_cst (build_pointer_type (TREE_TYPE (lhs)));
+ tree v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), zero);
+ tree v2;
+ if (TREE_CODE (rhs1) == SSA_NAME)
+ {
+ part = var_to_partition (m_map, rhs1);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ v2 = m_vars[part];
+ }
+ else if (integer_zerop (rhs1))
+ v2 = build_zero_cst (atype);
+ else
+ v2 = tree_output_constant_def (rhs1);
+ if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
+ v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
+ gimple *g = gimple_build_assign (v1, v2);
+ insert_before (g);
+ tree off = fold_convert (build_pointer_type (TREE_TYPE (lhs)),
+ TYPE_SIZE_UNIT (atype));
+ v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), off);
+ if (TREE_CODE (rhs2) == SSA_NAME)
+ {
+ part = var_to_partition (m_map, rhs2);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ v2 = m_vars[part];
+ }
+ else if (integer_zerop (rhs2))
+ v2 = build_zero_cst (atype);
+ else
+ v2 = tree_output_constant_def (rhs2);
+ if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
+ v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
+ g = gimple_build_assign (v1, v2);
+ insert_before (g);
+}
+
+/* Lower a call statement with one or more large/huge _BitInt
+ arguments or large/huge _BitInt return value. */
+
+void
+bitint_large_huge::lower_call (tree obj, gimple *stmt)
+{
+ gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+ unsigned int nargs = gimple_call_num_args (stmt);
+ if (gimple_call_internal_p (stmt))
+ switch (gimple_call_internal_fn (stmt))
+ {
+ case IFN_ADD_OVERFLOW:
+ case IFN_SUB_OVERFLOW:
+ case IFN_UBSAN_CHECK_ADD:
+ case IFN_UBSAN_CHECK_SUB:
+ lower_addsub_overflow (obj, stmt);
+ return;
+ case IFN_MUL_OVERFLOW:
+ case IFN_UBSAN_CHECK_MUL:
+ lower_mul_overflow (obj, stmt);
+ return;
+ default:
+ break;
+ }
+ for (unsigned int i = 0; i < nargs; ++i)
+ {
+ tree arg = gimple_call_arg (stmt, i);
+ if (TREE_CODE (arg) != SSA_NAME
+ || TREE_CODE (TREE_TYPE (arg)) != BITINT_TYPE
+ || bitint_precision_kind (TREE_TYPE (arg)) <= bitint_prec_middle)
+ continue;
+ int p = var_to_partition (m_map, arg);
+ tree v = m_vars[p];
+ gcc_assert (v != NULL_TREE);
+ if (!types_compatible_p (TREE_TYPE (arg), TREE_TYPE (v)))
+ v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (arg), v);
+ arg = make_ssa_name (TREE_TYPE (arg));
+ gimple *g = gimple_build_assign (arg, v);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_call_set_arg (stmt, i, arg);
+ if (m_preserved == NULL)
+ m_preserved = BITMAP_ALLOC (NULL);
+ bitmap_set_bit (m_preserved, SSA_NAME_VERSION (arg));
+ }
+ tree lhs = gimple_call_lhs (stmt);
+ if (lhs
+ && TREE_CODE (lhs) == SSA_NAME
+ && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
+ {
+ int p = var_to_partition (m_map, lhs);
+ tree v = m_vars[p];
+ gcc_assert (v != NULL_TREE);
+ if (!types_compatible_p (TREE_TYPE (lhs), TREE_TYPE (v)))
+ v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (lhs), v);
+ gimple_call_set_lhs (stmt, v);
+ SSA_NAME_DEF_STMT (lhs) = gimple_build_nop ();
+ }
+ update_stmt (stmt);
+}
+
+/* Lower __asm STMT which involves large/huge _BitInt values. */
+
+void
+bitint_large_huge::lower_asm (gimple *stmt)
+{
+ gasm *g = as_a <gasm *> (stmt);
+ unsigned noutputs = gimple_asm_noutputs (g);
+ unsigned ninputs = gimple_asm_ninputs (g);
+
+ for (unsigned i = 0; i < noutputs; ++i)
+ {
+ tree t = gimple_asm_output_op (g, i);
+ tree s = TREE_VALUE (t);
+ if (TREE_CODE (s) == SSA_NAME
+ && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
+ {
+ int part = var_to_partition (m_map, s);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ TREE_VALUE (t) = m_vars[part];
+ }
+ }
+ for (unsigned i = 0; i < ninputs; ++i)
+ {
+ tree t = gimple_asm_input_op (g, i);
+ tree s = TREE_VALUE (t);
+ if (TREE_CODE (s) == SSA_NAME
+ && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
+ {
+ int part = var_to_partition (m_map, s);
+ gcc_assert (m_vars[part] != NULL_TREE);
+ TREE_VALUE (t) = m_vars[part];
+ }
+ }
+ update_stmt (stmt);
+}
+
+/* Lower statement STMT which involves large/huge _BitInt values
+ into code accessing individual limbs. */
+
+void
+bitint_large_huge::lower_stmt (gimple *stmt)
+{
+ m_first = true;
+ m_lhs = NULL_TREE;
+ m_data.truncate (0);
+ m_data_cnt = 0;
+ m_gsi = gsi_for_stmt (stmt);
+ m_after_stmt = NULL;
+ m_bb = NULL;
+ m_init_gsi = m_gsi;
+ gsi_prev (&m_init_gsi);
+ m_preheader_bb = NULL;
+ m_upwards_2limb = 0;
+ m_var_msb = false;
+ m_loc = gimple_location (stmt);
+ if (is_gimple_call (stmt))
+ {
+ lower_call (NULL_TREE, stmt);
+ return;
+ }
+ if (gimple_code (stmt) == GIMPLE_ASM)
+ {
+ lower_asm (stmt);
+ return;
+ }
+ tree lhs = NULL_TREE, cmp_op1 = NULL_TREE, cmp_op2 = NULL_TREE;
+ tree_code cmp_code = comparison_op (stmt, &cmp_op1, &cmp_op2);
+ bool eq_p = (cmp_code == EQ_EXPR || cmp_code == NE_EXPR);
+ bool mergeable_cast_p = false;
+ bool final_cast_p = false;
+ if (gimple_assign_cast_p (stmt))
+ {
+ lhs = gimple_assign_lhs (stmt);
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
+ && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
+ mergeable_cast_p = true;
+ else if (TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
+ && INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
+ {
+ final_cast_p = true;
+ if (TREE_CODE (rhs1) == SSA_NAME
+ && (m_names == NULL
+ || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
+ {
+ gimple *g = SSA_NAME_DEF_STMT (rhs1);
+ if (is_gimple_assign (g)
+ && gimple_assign_rhs_code (g) == IMAGPART_EXPR)
+ {
+ tree rhs2 = TREE_OPERAND (gimple_assign_rhs1 (g), 0);
+ if (TREE_CODE (rhs2) == SSA_NAME
+ && (m_names == NULL
+ || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs2))))
+ {
+ g = SSA_NAME_DEF_STMT (rhs2);
+ int ovf = optimizable_arith_overflow (g);
+ if (ovf == 2)
+ /* If .{ADD,SUB,MUL}_OVERFLOW has both REALPART_EXPR
+ and IMAGPART_EXPR uses, where the latter is cast to
+ non-_BitInt, it will be optimized when handling
+ the REALPART_EXPR. */
+ return;
+ if (ovf == 1)
+ {
+ lower_call (NULL_TREE, g);
+ return;
+ }
+ }
+ }
+ }
+ }
+ }
+ if (gimple_store_p (stmt))
+ {
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ if (TREE_CODE (rhs1) == SSA_NAME
+ && (m_names == NULL
+ || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
+ {
+ gimple *g = SSA_NAME_DEF_STMT (rhs1);
+ m_loc = gimple_location (g);
+ lhs = gimple_assign_lhs (stmt);
+ if (is_gimple_assign (g) && !mergeable_op (g))
+ switch (gimple_assign_rhs_code (g))
+ {
+ case LSHIFT_EXPR:
+ case RSHIFT_EXPR:
+ lower_shift_stmt (lhs, g);
+ handled:
+ m_gsi = gsi_for_stmt (stmt);
+ unlink_stmt_vdef (stmt);
+ release_ssa_name (gimple_vdef (stmt));
+ gsi_remove (&m_gsi, true);
+ return;
+ case MULT_EXPR:
+ case TRUNC_DIV_EXPR:
+ case TRUNC_MOD_EXPR:
+ lower_muldiv_stmt (lhs, g);
+ goto handled;
+ case FIX_TRUNC_EXPR:
+ lower_float_conv_stmt (lhs, g);
+ goto handled;
+ case REALPART_EXPR:
+ case IMAGPART_EXPR:
+ lower_cplxpart_stmt (lhs, g);
+ goto handled;
+ default:
+ break;
+ }
+ else if (optimizable_arith_overflow (g) == 3)
+ {
+ lower_call (lhs, g);
+ goto handled;
+ }
+ m_loc = gimple_location (stmt);
+ }
+ }
+ if (mergeable_op (stmt)
+ || gimple_store_p (stmt)
+ || gimple_assign_load_p (stmt)
+ || eq_p
+ || mergeable_cast_p)
+ {
+ lhs = lower_mergeable_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
+ if (!eq_p)
+ return;
+ }
+ else if (cmp_code != ERROR_MARK)
+ lhs = lower_comparison_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
+ if (cmp_code != ERROR_MARK)
+ {
+ if (gimple_code (stmt) == GIMPLE_COND)
+ {
+ gcond *cstmt = as_a <gcond *> (stmt);
+ gimple_cond_set_lhs (cstmt, lhs);
+ gimple_cond_set_rhs (cstmt, boolean_false_node);
+ gimple_cond_set_code (cstmt, cmp_code);
+ update_stmt (stmt);
+ return;
+ }
+ if (gimple_assign_rhs_code (stmt) == COND_EXPR)
+ {
+ tree cond = build2 (cmp_code, boolean_type_node, lhs,
+ boolean_false_node);
+ gimple_assign_set_rhs1 (stmt, cond);
+ lhs = gimple_assign_lhs (stmt);
+ gcc_assert (TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
+ || (bitint_precision_kind (TREE_TYPE (lhs))
+ <= bitint_prec_middle));
+ update_stmt (stmt);
+ return;
+ }
+ gimple_assign_set_rhs1 (stmt, lhs);
+ gimple_assign_set_rhs2 (stmt, boolean_false_node);
+ gimple_assign_set_rhs_code (stmt, cmp_code);
+ update_stmt (stmt);
+ return;
+ }
+ if (final_cast_p)
+ {
+ tree lhs_type = TREE_TYPE (lhs);
+ /* Add support for 3 or more limbs filled in from normal integral
+ type if this assert fails. If no target chooses limb mode smaller
+ than half of largest supported normal integral type, this will not
+ be needed. */
+ gcc_assert (TYPE_PRECISION (lhs_type) <= 2 * limb_prec);
+ gimple *g;
+ if (TREE_CODE (lhs_type) == BITINT_TYPE
+ && bitint_precision_kind (lhs_type) == bitint_prec_middle)
+ lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (lhs_type),
+ TYPE_UNSIGNED (lhs_type));
+ m_data_cnt = 0;
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ tree r1 = handle_operand (rhs1, size_int (0));
+ if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
+ r1 = add_cast (lhs_type, r1);
+ if (TYPE_PRECISION (lhs_type) > limb_prec)
+ {
+ m_data_cnt = 0;
+ m_first = false;
+ tree r2 = handle_operand (rhs1, size_int (1));
+ r2 = add_cast (lhs_type, r2);
+ g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
+ build_int_cst (unsigned_type_node,
+ limb_prec));
+ insert_before (g);
+ g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
+ gimple_assign_lhs (g));
+ insert_before (g);
+ r1 = gimple_assign_lhs (g);
+ }
+ if (lhs_type != TREE_TYPE (lhs))
+ g = gimple_build_assign (lhs, NOP_EXPR, r1);
+ else
+ g = gimple_build_assign (lhs, r1);
+ gsi_replace (&m_gsi, g, true);
+ return;
+ }
+ if (is_gimple_assign (stmt))
+ switch (gimple_assign_rhs_code (stmt))
+ {
+ case LSHIFT_EXPR:
+ case RSHIFT_EXPR:
+ lower_shift_stmt (NULL_TREE, stmt);
+ return;
+ case MULT_EXPR:
+ case TRUNC_DIV_EXPR:
+ case TRUNC_MOD_EXPR:
+ lower_muldiv_stmt (NULL_TREE, stmt);
+ return;
+ case FIX_TRUNC_EXPR:
+ case FLOAT_EXPR:
+ lower_float_conv_stmt (NULL_TREE, stmt);
+ return;
+ case REALPART_EXPR:
+ case IMAGPART_EXPR:
+ lower_cplxpart_stmt (NULL_TREE, stmt);
+ return;
+ case COMPLEX_EXPR:
+ lower_complexexpr_stmt (stmt);
+ return;
+ default:
+ break;
+ }
+ gcc_unreachable ();
+}
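The final_cast_p path above reads at most two limbs and recombines them. In plain arithmetic (limb width assumed, helper name invented):

```python
LIMB = 64  # assumed limb precision

def final_cast(limb0, limb1, lhs_prec, lhs_unsigned=True):
    # A large _BitInt cast to a normal integral type of up to two
    # limbs is rebuilt as r1 | (r2 << limb_prec), truncated (and for
    # signed destinations sign extended) to the destination precision.
    v = (limb0 | (limb1 << LIMB)) & ((1 << lhs_prec) - 1)
    if not lhs_unsigned and v >= (1 << (lhs_prec - 1)):
        v -= 1 << lhs_prec
    return v
```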
+
+/* Helper for walk_non_aliased_vuses. Determine if we arrived at
+ the desired memory state. */
+
+void *
+vuse_eq (ao_ref *, tree vuse1, void *data)
+{
+ tree vuse2 = (tree) data;
+ if (vuse1 == vuse2)
+ return data;
+
+ return NULL;
+}
+
+/* Dominator walker used to discover which large/huge _BitInt
+ loads could be sunk into all their uses. */
+
+class bitint_dom_walker : public dom_walker
+{
+public:
+ bitint_dom_walker (bitmap names, bitmap loads)
+ : dom_walker (CDI_DOMINATORS), m_names (names), m_loads (loads) {}
+
+ edge before_dom_children (basic_block) final override;
+
+private:
+ bitmap m_names, m_loads;
+};
+
+edge
+bitint_dom_walker::before_dom_children (basic_block bb)
+{
+ gphi *phi = get_virtual_phi (bb);
+ tree vop;
+ if (phi)
+ vop = gimple_phi_result (phi);
+ else if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
+ vop = NULL_TREE;
+ else
+ vop = (tree) get_immediate_dominator (CDI_DOMINATORS, bb)->aux;
+
+ auto_vec<tree, 16> worklist;
+ for (gimple_stmt_iterator gsi = gsi_start_bb (bb);
+ !gsi_end_p (gsi); gsi_next (&gsi))
+ {
+ gimple *stmt = gsi_stmt (gsi);
+ if (is_gimple_debug (stmt))
+ continue;
+
+ if (!vop && gimple_vuse (stmt))
+ vop = gimple_vuse (stmt);
+
+ tree cvop = vop;
+ if (gimple_vdef (stmt))
+ vop = gimple_vdef (stmt);
+
+ tree lhs = gimple_get_lhs (stmt);
+ if (lhs
+ && TREE_CODE (lhs) == SSA_NAME
+ && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
+ && !bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
+ /* If lhs of stmt is large/huge _BitInt SSA_NAME not in m_names,
+ it means it will be handled in a loop or straight line code
+ at the location of its (ultimate) immediate use, so for
+ vop checking purposes check these only at the ultimate
+ immediate use. */
+ continue;
+
+ ssa_op_iter oi;
+ use_operand_p use_p;
+ FOR_EACH_SSA_USE_OPERAND (use_p, stmt, oi, SSA_OP_USE)
+ {
+ tree s = USE_FROM_PTR (use_p);
+ if (TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
+ worklist.safe_push (s);
+ }
+
+ while (worklist.length () > 0)
+ {
+ tree s = worklist.pop ();
+
+ if (!bitmap_bit_p (m_names, SSA_NAME_VERSION (s)))
+ {
+ FOR_EACH_SSA_USE_OPERAND (use_p, SSA_NAME_DEF_STMT (s),
+ oi, SSA_OP_USE)
+ {
+ tree s2 = USE_FROM_PTR (use_p);
+ if (TREE_CODE (TREE_TYPE (s2)) == BITINT_TYPE
+ && (bitint_precision_kind (TREE_TYPE (s2))
+ >= bitint_prec_large))
+ worklist.safe_push (s2);
+ }
+ continue;
+ }
+ if (!SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
+ && gimple_assign_cast_p (SSA_NAME_DEF_STMT (s)))
+ {
+ tree rhs = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
+ if (TREE_CODE (rhs) == SSA_NAME
+ && bitmap_bit_p (m_loads, SSA_NAME_VERSION (rhs)))
+ s = rhs;
+ else
+ continue;
+ }
+ else if (!bitmap_bit_p (m_loads, SSA_NAME_VERSION (s)))
+ continue;
+
+ ao_ref ref;
+ ao_ref_init (&ref, gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)));
+ tree lvop = gimple_vuse (SSA_NAME_DEF_STMT (s));
+ unsigned limit = 64;
+ tree vuse = cvop;
+ if (vop != cvop
+ && is_gimple_assign (stmt)
+ && gimple_store_p (stmt)
+ && !operand_equal_p (lhs,
+ gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)),
+ 0))
+ vuse = vop;
+ if (vuse != lvop
+ && walk_non_aliased_vuses (&ref, vuse, false, vuse_eq,
+ NULL, NULL, limit, lvop) == NULL)
+ bitmap_clear_bit (m_loads, SSA_NAME_VERSION (s));
+ }
+ }
+
+ bb->aux = (void *) vop;
+ return NULL;
+}
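What the walker decides, in effect, is whether a load can be replayed at its ultimate use point. A simplified model of that legality check (the statement representation and the may_alias predicate are invented for illustration; the real code walks the virtual use-def chain via walk_non_aliased_vuses):

```python
def can_sink_load(stmts, load_idx, use_idx, ref, may_alias):
    # A large/huge _BitInt load may be sunk to its ultimate use only
    # if walking from the use back to the load (here a linear scan)
    # meets no potentially aliasing store in between; otherwise
    # bitint_dom_walker clears the bit in m_loads.
    return not any(may_alias(s, ref) for s in stmts[load_idx + 1 : use_idx])
```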
+
+}
+
+/* Replacement for normal processing of STMT in tree-ssa-coalesce.cc
+ build_ssa_conflict_graph.
+ The differences are:
+ 1) don't process assignments with large/huge _BitInt lhs not in NAMES
+ 2) for large/huge _BitInt multiplication/division/modulo process def
+ only after processing uses rather than before to make uses conflict
+ with the definition
+ 3) for large/huge _BitInt uses not in NAMES mark the uses of their
+ SSA_NAME_DEF_STMT (recursively), because those uses will be sunk into
+ the final statement. */
+
+void
+build_bitint_stmt_ssa_conflicts (gimple *stmt, live_track *live,
+ ssa_conflicts *graph, bitmap names,
+ void (*def) (live_track *, tree,
+ ssa_conflicts *),
+ void (*use) (live_track *, tree))
+{
+ bool muldiv_p = false;
+ tree lhs = NULL_TREE;
+ if (is_gimple_assign (stmt))
+ {
+ lhs = gimple_assign_lhs (stmt);
+ if (TREE_CODE (lhs) == SSA_NAME
+ && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
+ {
+ if (!bitmap_bit_p (names, SSA_NAME_VERSION (lhs)))
+ return;
+ switch (gimple_assign_rhs_code (stmt))
+ {
+ case MULT_EXPR:
+ case TRUNC_DIV_EXPR:
+ case TRUNC_MOD_EXPR:
+ muldiv_p = true;
+ default:
+ break;
+ }
+ }
+ }
+
+ ssa_op_iter iter;
+ tree var;
+ if (!muldiv_p)
+ {
+ /* For stmts with more than one SSA_NAME definition pretend all the
+ SSA_NAME outputs but the first one are live at this point, so
+ that conflicts are added in between all those even when they are
+ actually not really live after the asm, because expansion might
+ copy those into pseudos after the asm and if multiple outputs
+ share the same partition, it might overwrite those that should
+ be live. E.g.
+ asm volatile (".." : "=r" (a), "=r" (b) : "0" (a), "1" (a));
+ return a;
+ See PR70593. */
+ bool first = true;
+ FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
+ if (first)
+ first = false;
+ else
+ use (live, var);
+
+ FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
+ def (live, var, graph);
+ }
+
+ auto_vec<tree, 16> worklist;
+ FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_USE)
+ if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
+ {
+ if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
+ use (live, var);
+ else
+ worklist.safe_push (var);
+ }
+
+ while (worklist.length () > 0)
+ {
+ tree s = worklist.pop ();
+ FOR_EACH_SSA_TREE_OPERAND (var, SSA_NAME_DEF_STMT (s), iter, SSA_OP_USE)
+ if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
+ {
+ if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
+ use (live, var);
+ else
+ worklist.safe_push (var);
+ }
+ }
+
+ if (muldiv_p)
+ def (live, lhs, graph);
+}
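A minimal model of point 2 in the comment above, to make the ordering difference concrete (the live-tracking here is deliberately tiny and hypothetical, not the tree-ssa-coalesce.cc implementation):

```python
def conflicts_for_stmt(d, uses, muldiv, live):
    # Recording a def makes it conflict with everything currently
    # live.  Processing the uses first (the mul/div/mod case)
    # therefore forces the def into a different partition from its
    # operands, since the lowered code writes result limbs before all
    # operand limbs are consumed.
    conflicts = set()

    def record_def(v):
        conflicts.update((v, l) for l in live if l != v)
        live.discard(v)

    def record_use(v):
        live.add(v)

    if muldiv:
        for u in uses:
            record_use(u)
        record_def(d)
    else:
        record_def(d)
        for u in uses:
            record_use(u)
    return conflicts
```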
+
+/* Entry point for _BitInt(N) operation lowering during optimization. */
+
+static unsigned int
+gimple_lower_bitint (void)
+{
+ small_max_prec = mid_min_prec = large_min_prec = huge_min_prec = 0;
+ limb_prec = 0;
+
+ unsigned int i;
+ tree vop = gimple_vop (cfun);
+ for (i = 0; i < num_ssa_names; ++i)
+ {
+ tree s = ssa_name (i);
+ if (s == NULL)
+ continue;
+ tree type = TREE_TYPE (s);
+ if (TREE_CODE (type) == COMPLEX_TYPE)
+ type = TREE_TYPE (type);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) != bitint_prec_small)
+ break;
+ /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
+ into memory. Such functions could have no large/huge SSA_NAMEs. */
+ if (vop && SSA_NAME_VAR (s) == vop)
+ {
+ gimple *g = SSA_NAME_DEF_STMT (s);
+ if (is_gimple_assign (g) && gimple_store_p (g))
+ {
+ tree t = gimple_assign_rhs1 (g);
+ if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE
+ && (bitint_precision_kind (TREE_TYPE (t))
+ >= bitint_prec_large))
+ break;
+ }
+ }
+ }
+ if (i == num_ssa_names)
+ return 0;
+
+ basic_block bb;
+ auto_vec<gimple *, 4> switch_statements;
+ FOR_EACH_BB_FN (bb, cfun)
+ {
+ if (gswitch *swtch = safe_dyn_cast <gswitch *> (*gsi_last_bb (bb)))
+ {
+ tree idx = gimple_switch_index (swtch);
+ if (TREE_CODE (TREE_TYPE (idx)) != BITINT_TYPE
+ || bitint_precision_kind (TREE_TYPE (idx)) < bitint_prec_large)
+ continue;
+
+ if (optimize)
+ group_case_labels_stmt (swtch);
+ switch_statements.safe_push (swtch);
+ }
+ }
+
+ if (!switch_statements.is_empty ())
+ {
+ bool expanded = false;
+ gimple *stmt;
+ unsigned int j;
+ i = 0;
+ FOR_EACH_VEC_ELT (switch_statements, j, stmt)
+ {
+ gswitch *swtch = as_a<gswitch *> (stmt);
+ tree_switch_conversion::switch_decision_tree dt (swtch);
+ expanded |= dt.analyze_switch_statement ();
+ }
+
+ if (expanded)
+ {
+ free_dominance_info (CDI_DOMINATORS);
+ free_dominance_info (CDI_POST_DOMINATORS);
+ mark_virtual_operands_for_renaming (cfun);
+ cleanup_tree_cfg (TODO_update_ssa);
+ }
+ }
+
+ struct bitint_large_huge large_huge;
+ bool has_large_huge_parm_result = false;
+ bool has_large_huge = false;
+ unsigned int ret = 0, first_large_huge = ~0U;
+ bool edge_insertions = false;
+ for (; i < num_ssa_names; ++i)
+ {
+ tree s = ssa_name (i);
+ if (s == NULL)
+ continue;
+ tree type = TREE_TYPE (s);
+ if (TREE_CODE (type) == COMPLEX_TYPE)
+ type = TREE_TYPE (type);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) >= bitint_prec_large)
+ {
+ if (first_large_huge == ~0U)
+ first_large_huge = i;
+ gimple *stmt = SSA_NAME_DEF_STMT (s), *g;
+ gimple_stmt_iterator gsi;
+ tree_code rhs_code;
+ /* Unoptimize certain constructs to simpler alternatives to
+ avoid having to lower all of them. */
+ if (is_gimple_assign (stmt))
+ switch (rhs_code = gimple_assign_rhs_code (stmt))
+ {
+ default:
+ break;
+ case LROTATE_EXPR:
+ case RROTATE_EXPR:
+ {
+ first_large_huge = 0;
+ location_t loc = gimple_location (stmt);
+ gsi = gsi_for_stmt (stmt);
+ tree rhs1 = gimple_assign_rhs1 (stmt);
+ tree type = TREE_TYPE (rhs1);
+ tree n = gimple_assign_rhs2 (stmt), m;
+ tree p = build_int_cst (TREE_TYPE (n),
+ TYPE_PRECISION (type));
+ if (TREE_CODE (n) == INTEGER_CST)
+ m = fold_build2 (MINUS_EXPR, TREE_TYPE (n), p, n);
+ else
+ {
+ m = make_ssa_name (TREE_TYPE (n));
+ g = gimple_build_assign (m, MINUS_EXPR, p, n);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_set_location (g, loc);
+ }
+ if (!TYPE_UNSIGNED (type))
+ {
+ tree utype = build_bitint_type (TYPE_PRECISION (type),
+ 1);
+ if (TREE_CODE (rhs1) == INTEGER_CST)
+ rhs1 = fold_convert (utype, rhs1);
+ else
+ {
+ tree t = make_ssa_name (utype);
+ g = gimple_build_assign (t, NOP_EXPR, rhs1);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_set_location (g, loc);
+ rhs1 = t;
+ }
+ }
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
+ rhs_code == LROTATE_EXPR
+ ? LSHIFT_EXPR : RSHIFT_EXPR,
+ rhs1, n);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_set_location (g, loc);
+ tree op1 = gimple_assign_lhs (g);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
+ rhs_code == LROTATE_EXPR
+ ? RSHIFT_EXPR : LSHIFT_EXPR,
+ rhs1, m);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_set_location (g, loc);
+ tree op2 = gimple_assign_lhs (g);
+ tree lhs = gimple_assign_lhs (stmt);
+ if (!TYPE_UNSIGNED (type))
+ {
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (op1)),
+ BIT_IOR_EXPR, op1, op2);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_set_location (g, loc);
+ g = gimple_build_assign (lhs, NOP_EXPR,
+ gimple_assign_lhs (g));
+ }
+ else
+ g = gimple_build_assign (lhs, BIT_IOR_EXPR, op1, op2);
+ gsi_replace (&gsi, g, true);
+ gimple_set_location (g, loc);
+ }
+ break;
+ case ABS_EXPR:
+ case ABSU_EXPR:
+ case MIN_EXPR:
+ case MAX_EXPR:
+ case COND_EXPR:
+ first_large_huge = 0;
+ gsi = gsi_for_stmt (stmt);
+ tree lhs = gimple_assign_lhs (stmt);
+ tree rhs1 = gimple_assign_rhs1 (stmt), rhs2 = NULL_TREE;
+ location_t loc = gimple_location (stmt);
+ if (rhs_code == ABS_EXPR)
+ g = gimple_build_cond (LT_EXPR, rhs1,
+ build_zero_cst (TREE_TYPE (rhs1)),
+ NULL_TREE, NULL_TREE);
+ else if (rhs_code == ABSU_EXPR)
+ {
+ rhs2 = make_ssa_name (TREE_TYPE (lhs));
+ g = gimple_build_assign (rhs2, NOP_EXPR, rhs1);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_set_location (g, loc);
+ g = gimple_build_cond (LT_EXPR, rhs1,
+ build_zero_cst (TREE_TYPE (rhs1)),
+ NULL_TREE, NULL_TREE);
+ rhs1 = rhs2;
+ }
+ else if (rhs_code == MIN_EXPR || rhs_code == MAX_EXPR)
+ {
+ rhs2 = gimple_assign_rhs2 (stmt);
+ if (TREE_CODE (rhs1) == INTEGER_CST)
+ std::swap (rhs1, rhs2);
+ g = gimple_build_cond (LT_EXPR, rhs1, rhs2,
+ NULL_TREE, NULL_TREE);
+ if (rhs_code == MAX_EXPR)
+ std::swap (rhs1, rhs2);
+ }
+ else
+ {
+ g = gimple_build_cond (TREE_CODE (rhs1),
+ TREE_OPERAND (rhs1, 0),
+ TREE_OPERAND (rhs1, 1),
+ NULL_TREE, NULL_TREE);
+ rhs1 = gimple_assign_rhs2 (stmt);
+ rhs2 = gimple_assign_rhs3 (stmt);
+ }
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_set_location (g, loc);
+ edge e1 = split_block (gsi_bb (gsi), g);
+ edge e2 = split_block (e1->dest, (gimple *) NULL);
+ edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
+ e3->probability = profile_probability::even ();
+ e1->flags = EDGE_TRUE_VALUE;
+ e1->probability = e3->probability.invert ();
+ if (dom_info_available_p (CDI_DOMINATORS))
+ set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
+ if (rhs_code == ABS_EXPR || rhs_code == ABSU_EXPR)
+ {
+ gsi = gsi_after_labels (e1->dest);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
+ NEGATE_EXPR, rhs1);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_set_location (g, loc);
+ rhs2 = gimple_assign_lhs (g);
+ std::swap (rhs1, rhs2);
+ }
+ gsi = gsi_for_stmt (stmt);
+ gsi_remove (&gsi, true);
+ gphi *phi = create_phi_node (lhs, e2->dest);
+ add_phi_arg (phi, rhs1, e2, UNKNOWN_LOCATION);
+ add_phi_arg (phi, rhs2, e3, UNKNOWN_LOCATION);
+ break;
+ }
+ }
+ /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
+ into memory. Such functions could have no large/huge SSA_NAMEs. */
+ else if (vop && SSA_NAME_VAR (s) == vop)
+ {
+ gimple *g = SSA_NAME_DEF_STMT (s);
+ if (is_gimple_assign (g) && gimple_store_p (g))
+ {
+ tree t = gimple_assign_rhs1 (g);
+ if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE
+ && (bitint_precision_kind (TREE_TYPE (t))
+ >= bitint_prec_large))
+ has_large_huge = true;
+ }
+ }
+ }
+ for (i = first_large_huge; i < num_ssa_names; ++i)
+ {
+ tree s = ssa_name (i);
+ if (s == NULL)
+ continue;
+ tree type = TREE_TYPE (s);
+ if (TREE_CODE (type) == COMPLEX_TYPE)
+ type = TREE_TYPE (type);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) >= bitint_prec_large)
+ {
+ use_operand_p use_p;
+ gimple *use_stmt;
+ has_large_huge = true;
+ if (optimize
+ && optimizable_arith_overflow (SSA_NAME_DEF_STMT (s)))
+ continue;
+ /* Ignore large/huge _BitInt SSA_NAMEs which have single use in
+ the same bb and could be handled in the same loop with the
+ immediate use. */
+ if (optimize
+ && !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
+ && single_imm_use (s, &use_p, &use_stmt)
+ && gimple_bb (SSA_NAME_DEF_STMT (s)) == gimple_bb (use_stmt))
+ {
+ if (mergeable_op (SSA_NAME_DEF_STMT (s)))
+ {
+ if (mergeable_op (use_stmt))
+ continue;
+ tree_code cmp_code = comparison_op (use_stmt, NULL, NULL);
+ if (cmp_code == EQ_EXPR || cmp_code == NE_EXPR)
+ continue;
+ if (gimple_assign_cast_p (use_stmt))
+ {
+ tree lhs = gimple_assign_lhs (use_stmt);
+ if (INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
+ continue;
+ }
+ else if (gimple_store_p (use_stmt)
+ && is_gimple_assign (use_stmt)
+ && !gimple_has_volatile_ops (use_stmt)
+ && !stmt_ends_bb_p (use_stmt))
+ continue;
+ }
+ if (gimple_assign_cast_p (SSA_NAME_DEF_STMT (s)))
+ {
+ tree rhs1 = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
+ if (INTEGRAL_TYPE_P (TREE_TYPE (rhs1))
+ && ((is_gimple_assign (use_stmt)
+ && (gimple_assign_rhs_code (use_stmt)
+ != COMPLEX_EXPR))
+ || gimple_code (use_stmt) == GIMPLE_COND)
+ && (!gimple_store_p (use_stmt)
+ || (is_gimple_assign (use_stmt)
+ && !gimple_has_volatile_ops (use_stmt)
+ && !stmt_ends_bb_p (use_stmt)))
+ && (TREE_CODE (rhs1) != SSA_NAME
+ || !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (rhs1)))
+ {
+ if (TREE_CODE (TREE_TYPE (rhs1)) != BITINT_TYPE
+ || (bitint_precision_kind (TREE_TYPE (rhs1))
+ < bitint_prec_large)
+ || (TYPE_PRECISION (TREE_TYPE (rhs1))
+ >= TYPE_PRECISION (TREE_TYPE (s)))
+ || mergeable_op (SSA_NAME_DEF_STMT (s)))
+ continue;
+ /* Prevent merging a widening non-mergeable cast
+ on result of some narrower mergeable op
+ together with later mergeable operations. E.g.
+ result of _BitInt(223) addition shouldn't be
+ sign-extended to _BitInt(513) and have another
+ _BitInt(513) added to it, as handle_plus_minus
+ with its PHI node handling inside of handle_cast
+ will not work correctly. An exception is if
+ use_stmt is a store, this is handled directly
+ in lower_mergeable_stmt. */
+ if (TREE_CODE (rhs1) != SSA_NAME
+ || !has_single_use (rhs1)
+ || (gimple_bb (SSA_NAME_DEF_STMT (rhs1))
+ != gimple_bb (SSA_NAME_DEF_STMT (s)))
+ || !mergeable_op (SSA_NAME_DEF_STMT (rhs1))
+ || gimple_store_p (use_stmt))
+ continue;
+ if (gimple_assign_cast_p (SSA_NAME_DEF_STMT (rhs1)))
+ {
+ /* Another exception is if the widening cast is
+ from mergeable same precision cast from something
+ not mergeable. */
+ tree rhs2
+ = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (rhs1));
+ if (TREE_CODE (TREE_TYPE (rhs2)) == BITINT_TYPE
+ && (TYPE_PRECISION (TREE_TYPE (rhs1))
+ == TYPE_PRECISION (TREE_TYPE (rhs2))))
+ {
+ if (TREE_CODE (rhs2) != SSA_NAME
+ || !has_single_use (rhs2)
+ || (gimple_bb (SSA_NAME_DEF_STMT (rhs2))
+ != gimple_bb (SSA_NAME_DEF_STMT (s)))
+ || !mergeable_op (SSA_NAME_DEF_STMT (rhs2)))
+ continue;
+ }
+ }
+ }
+ }
+ if (is_gimple_assign (SSA_NAME_DEF_STMT (s)))
+ switch (gimple_assign_rhs_code (SSA_NAME_DEF_STMT (s)))
+ {
+ case IMAGPART_EXPR:
+ {
+ tree rhs1 = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
+ rhs1 = TREE_OPERAND (rhs1, 0);
+ if (TREE_CODE (rhs1) == SSA_NAME)
+ {
+ gimple *g = SSA_NAME_DEF_STMT (rhs1);
+ if (optimizable_arith_overflow (g))
+ continue;
+ }
+ }
+ /* FALLTHRU */
+ case LSHIFT_EXPR:
+ case RSHIFT_EXPR:
+ case MULT_EXPR:
+ case TRUNC_DIV_EXPR:
+ case TRUNC_MOD_EXPR:
+ case FIX_TRUNC_EXPR:
+ case REALPART_EXPR:
+ if (gimple_store_p (use_stmt)
+ && is_gimple_assign (use_stmt)
+ && !gimple_has_volatile_ops (use_stmt)
+ && !stmt_ends_bb_p (use_stmt))
+ continue;
+ default:
+ break;
+ }
+ }
+
+ /* Also ignore uninitialized uses. */
+ if (SSA_NAME_IS_DEFAULT_DEF (s)
+ && (!SSA_NAME_VAR (s) || VAR_P (SSA_NAME_VAR (s))))
+ continue;
+
+ if (!large_huge.m_names)
+ large_huge.m_names = BITMAP_ALLOC (NULL);
+ bitmap_set_bit (large_huge.m_names, SSA_NAME_VERSION (s));
+ if (has_single_use (s))
+ {
+ if (!large_huge.m_single_use_names)
+ large_huge.m_single_use_names = BITMAP_ALLOC (NULL);
+ bitmap_set_bit (large_huge.m_single_use_names,
+ SSA_NAME_VERSION (s));
+ }
+ if (SSA_NAME_VAR (s)
+ && ((TREE_CODE (SSA_NAME_VAR (s)) == PARM_DECL
+ && SSA_NAME_IS_DEFAULT_DEF (s))
+ || TREE_CODE (SSA_NAME_VAR (s)) == RESULT_DECL))
+ has_large_huge_parm_result = true;
+ if (optimize
+ && !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
+ && gimple_assign_load_p (SSA_NAME_DEF_STMT (s))
+ && !gimple_has_volatile_ops (SSA_NAME_DEF_STMT (s))
+ && !stmt_ends_bb_p (SSA_NAME_DEF_STMT (s)))
+ {
+ use_operand_p use_p;
+ imm_use_iterator iter;
+ bool optimizable_load = true;
+ FOR_EACH_IMM_USE_FAST (use_p, iter, s)
+ {
+ gimple *use_stmt = USE_STMT (use_p);
+ if (is_gimple_debug (use_stmt))
+ continue;
+ if (gimple_code (use_stmt) == GIMPLE_PHI
+ || is_gimple_call (use_stmt))
+ {
+ optimizable_load = false;
+ break;
+ }
+ }
+
+ ssa_op_iter oi;
+ FOR_EACH_SSA_USE_OPERAND (use_p, SSA_NAME_DEF_STMT (s),
+ oi, SSA_OP_USE)
+ {
+ tree s2 = USE_FROM_PTR (use_p);
+ if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s2))
+ {
+ optimizable_load = false;
+ break;
+ }
+ }
+
+ if (optimizable_load && !stmt_ends_bb_p (SSA_NAME_DEF_STMT (s)))
+ {
+ if (!large_huge.m_loads)
+ large_huge.m_loads = BITMAP_ALLOC (NULL);
+ bitmap_set_bit (large_huge.m_loads, SSA_NAME_VERSION (s));
+ }
+ }
+ }
+ /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
+ into memory. Such functions could have no large/huge SSA_NAMEs. */
+ else if (vop && SSA_NAME_VAR (s) == vop)
+ {
+ gimple *g = SSA_NAME_DEF_STMT (s);
+ if (is_gimple_assign (g) && gimple_store_p (g))
+ {
+ tree t = gimple_assign_rhs1 (g);
+ if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE
+ && bitint_precision_kind (TREE_TYPE (t)) >= bitint_prec_large)
+ has_large_huge = true;
+ }
+ }
+ }
+
+ if (large_huge.m_names || has_large_huge)
+ {
+ ret = TODO_update_ssa_only_virtuals | TODO_cleanup_cfg;
+ calculate_dominance_info (CDI_DOMINATORS);
+ if (optimize)
+ enable_ranger (cfun);
+ if (large_huge.m_loads)
+ {
+ basic_block entry = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+ entry->aux = NULL;
+ bitint_dom_walker (large_huge.m_names,
+ large_huge.m_loads).walk (entry);
+ bitmap_and_compl_into (large_huge.m_names, large_huge.m_loads);
+ clear_aux_for_blocks ();
+ BITMAP_FREE (large_huge.m_loads);
+ }
+ large_huge.m_limb_type = build_nonstandard_integer_type (limb_prec, 1);
+ large_huge.m_limb_size
+ = tree_to_uhwi (TYPE_SIZE_UNIT (large_huge.m_limb_type));
+ }
+ if (large_huge.m_names)
+ {
+ large_huge.m_map
+ = init_var_map (num_ssa_names, NULL, large_huge.m_names);
+ coalesce_ssa_name (large_huge.m_map);
+ partition_view_normal (large_huge.m_map);
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ {
+ fprintf (dump_file, "After Coalescing:\n");
+ dump_var_map (dump_file, large_huge.m_map);
+ }
+ large_huge.m_vars
+ = XCNEWVEC (tree, num_var_partitions (large_huge.m_map));
+ bitmap_iterator bi;
+ if (has_large_huge_parm_result)
+ EXECUTE_IF_SET_IN_BITMAP (large_huge.m_names, 0, i, bi)
+ {
+ tree s = ssa_name (i);
+ if (SSA_NAME_VAR (s)
+ && ((TREE_CODE (SSA_NAME_VAR (s)) == PARM_DECL
+ && SSA_NAME_IS_DEFAULT_DEF (s))
+ || TREE_CODE (SSA_NAME_VAR (s)) == RESULT_DECL))
+ {
+ int p = var_to_partition (large_huge.m_map, s);
+ if (large_huge.m_vars[p] == NULL_TREE)
+ {
+ large_huge.m_vars[p] = SSA_NAME_VAR (s);
+ mark_addressable (SSA_NAME_VAR (s));
+ }
+ }
+ }
+ tree atype = NULL_TREE;
+ EXECUTE_IF_SET_IN_BITMAP (large_huge.m_names, 0, i, bi)
+ {
+ tree s = ssa_name (i);
+ int p = var_to_partition (large_huge.m_map, s);
+ if (large_huge.m_vars[p] != NULL_TREE)
+ continue;
+ if (atype == NULL_TREE
+ || !tree_int_cst_equal (TYPE_SIZE (atype),
+ TYPE_SIZE (TREE_TYPE (s))))
+ {
+ unsigned HOST_WIDE_INT nelts
+ = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (s))) / limb_prec;
+ atype = build_array_type_nelts (large_huge.m_limb_type, nelts);
+ }
+ large_huge.m_vars[p] = create_tmp_var (atype, "bitint");
+ mark_addressable (large_huge.m_vars[p]);
+ }
+ }
+
+ FOR_EACH_BB_REVERSE_FN (bb, cfun)
+ {
+ gimple_stmt_iterator prev;
+ for (gimple_stmt_iterator gsi = gsi_last_bb (bb); !gsi_end_p (gsi);
+ gsi = prev)
+ {
+ prev = gsi;
+ gsi_prev (&prev);
+ ssa_op_iter iter;
+ gimple *stmt = gsi_stmt (gsi);
+ if (is_gimple_debug (stmt))
+ continue;
+ bitint_prec_kind kind = bitint_prec_small;
+ tree t;
+ FOR_EACH_SSA_TREE_OPERAND (t, stmt, iter, SSA_OP_ALL_OPERANDS)
+ if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE)
+ {
+ bitint_prec_kind this_kind
+ = bitint_precision_kind (TREE_TYPE (t));
+ if (this_kind > kind)
+ kind = this_kind;
+ }
+ if (is_gimple_assign (stmt) && gimple_store_p (stmt))
+ {
+ t = gimple_assign_rhs1 (stmt);
+ if (TREE_CODE (TREE_TYPE (t)) == BITINT_TYPE)
+ {
+ bitint_prec_kind this_kind
+ = bitint_precision_kind (TREE_TYPE (t));
+ if (this_kind > kind)
+ kind = this_kind;
+ }
+ }
+ if (is_gimple_call (stmt))
+ {
+ t = gimple_call_lhs (stmt);
+ if (t
+ && TREE_CODE (TREE_TYPE (t)) == COMPLEX_TYPE
+ && TREE_CODE (TREE_TYPE (TREE_TYPE (t))) == BITINT_TYPE)
+ {
+ bitint_prec_kind this_kind
+ = bitint_precision_kind (TREE_TYPE (TREE_TYPE (t)));
+ if (this_kind > kind)
+ kind = this_kind;
+ }
+ }
+ if (kind == bitint_prec_small)
+ continue;
+ switch (gimple_code (stmt))
+ {
+ case GIMPLE_CALL:
+ /* For now. We'll need to handle some internal functions and
+ perhaps some builtins. */
+ if (kind == bitint_prec_middle)
+ continue;
+ break;
+ case GIMPLE_ASM:
+ if (kind == bitint_prec_middle)
+ continue;
+ break;
+ case GIMPLE_RETURN:
+ continue;
+ case GIMPLE_ASSIGN:
+ if (gimple_clobber_p (stmt))
+ continue;
+ if (kind >= bitint_prec_large)
+ break;
+ if (gimple_assign_single_p (stmt))
+ /* No need to lower copies, loads or stores. */
+ continue;
+ if (gimple_assign_cast_p (stmt))
+ {
+ tree lhs = gimple_assign_lhs (stmt);
+ tree rhs = gimple_assign_rhs1 (stmt);
+ if (INTEGRAL_TYPE_P (TREE_TYPE (lhs))
+ && INTEGRAL_TYPE_P (TREE_TYPE (rhs))
+ && (TYPE_PRECISION (TREE_TYPE (lhs))
+ == TYPE_PRECISION (TREE_TYPE (rhs))))
+ /* No need to lower casts to same precision. */
+ continue;
+ }
+ break;
+ default:
+ break;
+ }
+
+ if (kind == bitint_prec_middle)
+ {
+ tree type = NULL_TREE;
+ /* Middle _BitInt(N) is rewritten to casts to INTEGER_TYPEs
+ with the same precision and back. */
+ if (tree lhs = gimple_get_lhs (stmt))
+ if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
+ && (bitint_precision_kind (TREE_TYPE (lhs))
+ == bitint_prec_middle))
+ {
+ int prec = TYPE_PRECISION (TREE_TYPE (lhs));
+ int uns = TYPE_UNSIGNED (TREE_TYPE (lhs));
+ type = build_nonstandard_integer_type (prec, uns);
+ tree lhs2 = make_ssa_name (type);
+ gimple *g = gimple_build_assign (lhs, NOP_EXPR, lhs2);
+ gsi_insert_after (&gsi, g, GSI_SAME_STMT);
+ gimple_set_lhs (stmt, lhs2);
+ }
+ unsigned int nops = gimple_num_ops (stmt);
+ for (unsigned int i = 0; i < nops; ++i)
+ if (tree op = gimple_op (stmt, i))
+ {
+ tree nop = maybe_cast_middle_bitint (&gsi, op, type);
+ if (nop != op)
+ gimple_set_op (stmt, i, nop);
+ else if (COMPARISON_CLASS_P (op))
+ {
+ TREE_OPERAND (op, 0)
+ = maybe_cast_middle_bitint (&gsi,
+ TREE_OPERAND (op, 0),
+ type);
+ TREE_OPERAND (op, 1)
+ = maybe_cast_middle_bitint (&gsi,
+ TREE_OPERAND (op, 1),
+ type);
+ }
+ else if (TREE_CODE (op) == CASE_LABEL_EXPR)
+ {
+ CASE_LOW (op)
+ = maybe_cast_middle_bitint (&gsi, CASE_LOW (op),
+ type);
+ CASE_HIGH (op)
+ = maybe_cast_middle_bitint (&gsi, CASE_HIGH (op),
+ type);
+ }
+ }
+ update_stmt (stmt);
+ continue;
+ }
+
+ if (tree lhs = gimple_get_lhs (stmt))
+ if (TREE_CODE (lhs) == SSA_NAME)
+ {
+ tree type = TREE_TYPE (lhs);
+ if (TREE_CODE (type) == COMPLEX_TYPE)
+ type = TREE_TYPE (type);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) >= bitint_prec_large
+ && (large_huge.m_names == NULL
+ || !bitmap_bit_p (large_huge.m_names,
+ SSA_NAME_VERSION (lhs))))
+ continue;
+ }
+
+ large_huge.lower_stmt (stmt);
+ }
+
+ tree atype = NULL_TREE;
+ for (gphi_iterator gsi = gsi_start_phis (bb); !gsi_end_p (gsi);
+ gsi_next (&gsi))
+ {
+ gphi *phi = gsi.phi ();
+ tree lhs = gimple_phi_result (phi);
+ if (TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
+ || bitint_precision_kind (TREE_TYPE (lhs)) < bitint_prec_large)
+ continue;
+ int p1 = var_to_partition (large_huge.m_map, lhs);
+ gcc_assert (large_huge.m_vars[p1] != NULL_TREE);
+ tree v1 = large_huge.m_vars[p1];
+ for (unsigned i = 0; i < gimple_phi_num_args (phi); ++i)
+ {
+ tree arg = gimple_phi_arg_def (phi, i);
+ edge e = gimple_phi_arg_edge (phi, i);
+ gimple *g;
+ switch (TREE_CODE (arg))
+ {
+ case INTEGER_CST:
+ if (integer_zerop (arg) && VAR_P (v1))
+ {
+ tree zero = build_zero_cst (TREE_TYPE (v1));
+ g = gimple_build_assign (v1, zero);
+ gsi_insert_on_edge (e, g);
+ edge_insertions = true;
+ break;
+ }
+ int ext;
+ unsigned int min_prec, prec, rem;
+ tree c;
+ prec = TYPE_PRECISION (TREE_TYPE (arg));
+ rem = prec % (2 * limb_prec);
+ min_prec = bitint_min_cst_precision (arg, ext);
+ if (min_prec > prec - rem - 2 * limb_prec
+ && min_prec > (unsigned) limb_prec)
+ /* A constant which has enough significant bits that it
+ isn't worth trying to save .rodata space by extending
+ it from a smaller number. */
+ min_prec = prec;
+ else
+ min_prec = CEIL (min_prec, limb_prec) * limb_prec;
+ if (min_prec == 0)
+ c = NULL_TREE;
+ else if (min_prec == prec)
+ c = tree_output_constant_def (arg);
+ else if (min_prec == (unsigned) limb_prec)
+ c = fold_convert (large_huge.m_limb_type, arg);
+ else
+ {
+ tree ctype = build_bitint_type (min_prec, 1);
+ c = tree_output_constant_def (fold_convert (ctype, arg));
+ }
+ if (c)
+ {
+ if (VAR_P (v1) && min_prec == prec)
+ {
+ tree v2 = build1 (VIEW_CONVERT_EXPR,
+ TREE_TYPE (v1), c);
+ g = gimple_build_assign (v1, v2);
+ gsi_insert_on_edge (e, g);
+ edge_insertions = true;
+ break;
+ }
+ if (TREE_CODE (TREE_TYPE (c)) == INTEGER_TYPE)
+ g = gimple_build_assign (build1 (VIEW_CONVERT_EXPR,
+ TREE_TYPE (c), v1),
+ c);
+ else
+ {
+ unsigned HOST_WIDE_INT nelts
+ = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (c)))
+ / limb_prec;
+ tree vtype
+ = build_array_type_nelts (large_huge.m_limb_type,
+ nelts);
+ g = gimple_build_assign (build1 (VIEW_CONVERT_EXPR,
+ vtype, v1),
+ build1 (VIEW_CONVERT_EXPR,
+ vtype, c));
+ }
+ gsi_insert_on_edge (e, g);
+ }
+ if (ext == 0)
+ {
+ unsigned HOST_WIDE_INT nelts
+ = (tree_to_uhwi (TYPE_SIZE (TREE_TYPE (v1)))
+ - min_prec) / limb_prec;
+ tree vtype
+ = build_array_type_nelts (large_huge.m_limb_type,
+ nelts);
+ tree ptype = build_pointer_type (TREE_TYPE (v1));
+ tree off = fold_convert (ptype,
+ TYPE_SIZE_UNIT (TREE_TYPE (c)));
+ tree vd = build2 (MEM_REF, vtype,
+ build_fold_addr_expr (v1), off);
+ g = gimple_build_assign (vd, build_zero_cst (vtype));
+ }
+ else
+ {
+ tree vd = v1;
+ if (c)
+ {
+ tree ptype = build_pointer_type (TREE_TYPE (v1));
+ tree off
+ = fold_convert (ptype,
+ TYPE_SIZE_UNIT (TREE_TYPE (c)));
+ vd = build2 (MEM_REF, large_huge.m_limb_type,
+ build_fold_addr_expr (v1), off);
+ }
+ vd = build_fold_addr_expr (vd);
+ unsigned HOST_WIDE_INT nbytes
+ = tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (v1)));
+ if (c)
+ nbytes
+ -= tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (c)));
+ tree fn = builtin_decl_implicit (BUILT_IN_MEMSET);
+ g = gimple_build_call (fn, 3, vd,
+ integer_minus_one_node,
+ build_int_cst (sizetype,
+ nbytes));
+ }
+ gsi_insert_on_edge (e, g);
+ edge_insertions = true;
+ break;
+ default:
+ gcc_unreachable ();
+ case SSA_NAME:
+ if (gimple_code (SSA_NAME_DEF_STMT (arg)) == GIMPLE_NOP)
+ {
+ if (large_huge.m_names == NULL
+ || !bitmap_bit_p (large_huge.m_names,
+ SSA_NAME_VERSION (arg)))
+ continue;
+ }
+ int p2 = var_to_partition (large_huge.m_map, arg);
+ if (p1 == p2)
+ continue;
+ gcc_assert (large_huge.m_vars[p2] != NULL_TREE);
+ tree v2 = large_huge.m_vars[p2];
+ if (VAR_P (v1) && VAR_P (v2))
+ g = gimple_build_assign (v1, v2);
+ else if (VAR_P (v1))
+ g = gimple_build_assign (v1, build1 (VIEW_CONVERT_EXPR,
+ TREE_TYPE (v1), v2));
+ else if (VAR_P (v2))
+ g = gimple_build_assign (build1 (VIEW_CONVERT_EXPR,
+ TREE_TYPE (v2), v1), v2);
+ else
+ {
+ if (atype == NULL_TREE
+ || !tree_int_cst_equal (TYPE_SIZE (atype),
+ TYPE_SIZE (TREE_TYPE (lhs))))
+ {
+ unsigned HOST_WIDE_INT nelts
+ = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (lhs)))
+ / limb_prec;
+ atype
+ = build_array_type_nelts (large_huge.m_limb_type,
+ nelts);
+ }
+ g = gimple_build_assign (build1 (VIEW_CONVERT_EXPR,
+ atype, v1),
+ build1 (VIEW_CONVERT_EXPR,
+ atype, v2));
+ }
+ gsi_insert_on_edge (e, g);
+ edge_insertions = true;
+ break;
+ }
+ }
+ }
+ }
+
+ if (large_huge.m_names || has_large_huge)
+ {
+ gimple *nop = NULL;
+ for (i = 0; i < num_ssa_names; ++i)
+ {
+ tree s = ssa_name (i);
+ if (s == NULL_TREE)
+ continue;
+ tree type = TREE_TYPE (s);
+ if (TREE_CODE (type) == COMPLEX_TYPE)
+ type = TREE_TYPE (type);
+ if (TREE_CODE (type) == BITINT_TYPE
+ && bitint_precision_kind (type) >= bitint_prec_large)
+ {
+ if (large_huge.m_preserved
+ && bitmap_bit_p (large_huge.m_preserved,
+ SSA_NAME_VERSION (s)))
+ continue;
+ gimple *g = SSA_NAME_DEF_STMT (s);
+ if (gimple_code (g) == GIMPLE_NOP)
+ {
+ if (SSA_NAME_VAR (s))
+ set_ssa_default_def (cfun, SSA_NAME_VAR (s), NULL_TREE);
+ release_ssa_name (s);
+ continue;
+ }
+ if (gimple_code (g) != GIMPLE_ASM)
+ {
+ gimple_stmt_iterator gsi = gsi_for_stmt (g);
+ bool save_vta = flag_var_tracking_assignments;
+ flag_var_tracking_assignments = false;
+ gsi_remove (&gsi, true);
+ flag_var_tracking_assignments = save_vta;
+ }
+ if (nop == NULL)
+ nop = gimple_build_nop ();
+ SSA_NAME_DEF_STMT (s) = nop;
+ release_ssa_name (s);
+ }
+ }
+ if (optimize)
+ disable_ranger (cfun);
+ }
+
+ if (edge_insertions)
+ gsi_commit_edge_inserts ();
+
+ return ret;
+}
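[Editorial note: the rotate unoptimization near the top of gimple_lower_bitint rewrites LROTATE_EXPR/RROTATE_EXPR into a pair of shifts plus a BIT_IOR_EXPR, doing the shifts in the unsigned type so the right shift is logical. A source-level sketch of the same transformation, with uint64_t standing in for the unsigned _BitInt type (the helper name is made up for illustration):]

```c
#include <stdint.h>

/* Sketch of the rotate lowering: rotate left by N becomes
   (x << n) | (x >> (prec - n)), computed in the unsigned type so the
   right shift is a logical shift.  N is assumed to be in 1..prec-1,
   matching the shift counts the pass emits.  */
static uint64_t
lrotate_via_shifts (uint64_t x, unsigned int n)
{
  unsigned int prec = 64;
  unsigned int m = prec - n;	/* The MINUS_EXPR the pass inserts.  */
  uint64_t op1 = x << n;	/* LSHIFT_EXPR.  */
  uint64_t op2 = x >> m;	/* RSHIFT_EXPR.  */
  return op1 | op2;		/* BIT_IOR_EXPR.  */
}
```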
+
+namespace {
+
+const pass_data pass_data_lower_bitint =
+{
+ GIMPLE_PASS, /* type */
+ "bitintlower", /* name */
+ OPTGROUP_NONE, /* optinfo_flags */
+ TV_NONE, /* tv_id */
+ PROP_ssa, /* properties_required */
+ PROP_gimple_lbitint, /* properties_provided */
+ 0, /* properties_destroyed */
+ 0, /* todo_flags_start */
+ 0, /* todo_flags_finish */
+};
+
+class pass_lower_bitint : public gimple_opt_pass
+{
+public:
+ pass_lower_bitint (gcc::context *ctxt)
+ : gimple_opt_pass (pass_data_lower_bitint, ctxt)
+ {}
+
+ /* opt_pass methods: */
+ opt_pass * clone () final override { return new pass_lower_bitint (m_ctxt); }
+ unsigned int execute (function *) final override
+ {
+ return gimple_lower_bitint ();
+ }
+
+}; // class pass_lower_bitint
+
+} // anon namespace
+
+gimple_opt_pass *
+make_pass_lower_bitint (gcc::context *ctxt)
+{
+ return new pass_lower_bitint (ctxt);
+}
+
+
+namespace {
+
+const pass_data pass_data_lower_bitint_O0 =
+{
+ GIMPLE_PASS, /* type */
+ "bitintlower0", /* name */
+ OPTGROUP_NONE, /* optinfo_flags */
+ TV_NONE, /* tv_id */
+ PROP_cfg, /* properties_required */
+ PROP_gimple_lbitint, /* properties_provided */
+ 0, /* properties_destroyed */
+ 0, /* todo_flags_start */
+ 0, /* todo_flags_finish */
+};
+
+class pass_lower_bitint_O0 : public gimple_opt_pass
+{
+public:
+ pass_lower_bitint_O0 (gcc::context *ctxt)
+ : gimple_opt_pass (pass_data_lower_bitint_O0, ctxt)
+ {}
+
+ /* opt_pass methods: */
+ bool gate (function *fun) final override
+ {
+ /* With errors, normal optimization passes are not run. If we don't
+ lower bitint operations at all, rtl expansion will abort. */
+ return !(fun->curr_properties & PROP_gimple_lbitint);
+ }
+
+ unsigned int execute (function *) final override
+ {
+ return gimple_lower_bitint ();
+ }
+
+}; // class pass_lower_bitint_O0
+
+} // anon namespace
+
+gimple_opt_pass *
+make_pass_lower_bitint_O0 (gcc::context *ctxt)
+{
+ return new pass_lower_bitint_O0 (ctxt);
+}
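[Editorial note: similarly, ABS_EXPR/ABSU_EXPR/MIN_EXPR/MAX_EXPR/COND_EXPR on large/huge _BitInt are unoptimized in the pass into an explicit GIMPLE_COND plus a PHI, so only comparisons and negation need real multi-limb lowering. In source terms (a sketch; int64_t stands in for the wide type and the helper names are invented):]

```c
#include <stdint.h>

/* ABS_EXPR becomes: if (x < 0) t = -x; then a PHI of x and t.  */
static int64_t
abs_via_branch (int64_t x)
{
  if (x < 0)
    return -x;
  return x;
}

/* MIN_EXPR becomes an LT_EXPR condition plus a PHI of the two
   operands; MAX_EXPR just swaps the PHI arguments.  */
static int64_t
min_via_branch (int64_t a, int64_t b)
{
  if (a < b)
    return a;
  return b;
}
```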
@@ -0,0 +1,31 @@
+/* Header file for gimple-lower-bitint.cc exports.
+ Copyright (C) 2023 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+#ifndef GCC_GIMPLE_LOWER_BITINT_H
+#define GCC_GIMPLE_LOWER_BITINT_H
+
+class live_track;
+struct ssa_conflicts;
+extern void build_bitint_stmt_ssa_conflicts (gimple *, live_track *,
+ ssa_conflicts *, bitmap,
+ void (*) (live_track *, tree,
+ ssa_conflicts *),
+ void (*) (live_track *, tree));
+
+#endif /* GCC_GIMPLE_LOWER_BITINT_H */
@@ -981,8 +981,38 @@ expand_arith_overflow_result_store (tree
/* Helper for expand_*_overflow. Store RES into TARGET. */
static void
-expand_ubsan_result_store (rtx target, rtx res)
+expand_ubsan_result_store (tree lhs, rtx target, scalar_int_mode mode,
+ rtx res, rtx_code_label *do_error)
{
+ if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
+ && TYPE_PRECISION (TREE_TYPE (lhs)) < GET_MODE_PRECISION (mode))
+ {
+ int uns = TYPE_UNSIGNED (TREE_TYPE (lhs));
+ int prec = TYPE_PRECISION (TREE_TYPE (lhs));
+ int tgtprec = GET_MODE_PRECISION (mode);
+ rtx resc = gen_reg_rtx (mode), lres;
+ emit_move_insn (resc, res);
+ if (uns)
+ {
+ rtx mask
+ = immed_wide_int_const (wi::shifted_mask (0, prec, false, tgtprec),
+ mode);
+ lres = expand_simple_binop (mode, AND, res, mask, NULL_RTX,
+ true, OPTAB_LIB_WIDEN);
+ }
+ else
+ {
+ lres = expand_shift (LSHIFT_EXPR, mode, res, tgtprec - prec,
+ NULL_RTX, 1);
+ lres = expand_shift (RSHIFT_EXPR, mode, lres, tgtprec - prec,
+ NULL_RTX, 0);
+ }
+ if (lres != res)
+ emit_move_insn (res, lres);
+ do_compare_rtx_and_jump (res, resc,
+ NE, true, mode, NULL_RTX, NULL, do_error,
+ profile_probability::very_unlikely ());
+ }
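[Editorial note: the added code above checks whether a _BitInt result narrower than the mode fits in its precision by masking (unsigned) or sign-extending (signed) within the mode and comparing against the original value; a mismatch branches to do_error. The signed branch in plain C, with int64_t standing in for the mode and a hypothetical 40-bit _BitInt:]

```c
#include <stdint.h>

/* Returns nonzero if the signed value RES does not fit in PREC bits,
   using the same shift-left/shift-right trick as the signed branch of
   expand_ubsan_result_store (tgtprec = 64 here).  */
static int
overflows_bitint_prec (int64_t res, int prec)
{
  int tgtprec = 64;
  /* Sign-extend from PREC bits; the left shift is done in the
     unsigned type to avoid undefined behavior.  */
  int64_t lres = (int64_t) ((uint64_t) res << (tgtprec - prec))
		 >> (tgtprec - prec);
  return lres != res;
}
```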
if (GET_CODE (target) == SUBREG && SUBREG_PROMOTED_VAR_P (target))
/* If this is a scalar in a register that is stored in a wider mode
than the declared mode, compute the result into its declared mode
@@ -1431,7 +1461,7 @@ expand_addsub_overflow (location_t loc,
if (lhs)
{
if (is_ubsan)
- expand_ubsan_result_store (target, res);
+ expand_ubsan_result_store (lhs, target, mode, res, do_error);
else
{
if (do_xor)
@@ -1528,7 +1558,7 @@ expand_neg_overflow (location_t loc, tre
if (lhs)
{
if (is_ubsan)
- expand_ubsan_result_store (target, res);
+ expand_ubsan_result_store (lhs, target, mode, res, do_error);
else
expand_arith_overflow_result_store (lhs, target, mode, res);
}
@@ -1646,6 +1676,12 @@ expand_mul_overflow (location_t loc, tre
int pos_neg0 = get_range_pos_neg (arg0);
int pos_neg1 = get_range_pos_neg (arg1);
+ /* Unsigned types with precision smaller than the mode's, even if they
+ have the most significant bit set, are still zero-extended. */
+ if (uns0_p && TYPE_PRECISION (TREE_TYPE (arg0)) < GET_MODE_PRECISION (mode))
+ pos_neg0 = 1;
+ if (uns1_p && TYPE_PRECISION (TREE_TYPE (arg1)) < GET_MODE_PRECISION (mode))
+ pos_neg1 = 1;
/* s1 * u2 -> ur */
if (!uns0_p && uns1_p && unsr_p)
@@ -2414,7 +2450,7 @@ expand_mul_overflow (location_t loc, tre
if (lhs)
{
if (is_ubsan)
- expand_ubsan_result_store (target, res);
+ expand_ubsan_result_store (lhs, target, mode, res, do_error);
else
expand_arith_overflow_result_store (lhs, target, mode, res);
}
@@ -4899,3 +4935,76 @@ expand_MASK_CALL (internal_fn, gcall *)
/* This IFN should only exist between ifcvt and vect passes. */
gcc_unreachable ();
}
+
+void
+expand_MULBITINT (internal_fn, gcall *stmt)
+{
+ rtx_mode_t args[6];
+ for (int i = 0; i < 6; i++)
+ args[i] = rtx_mode_t (expand_normal (gimple_call_arg (stmt, i)),
+ (i & 1) ? SImode : ptr_mode);
+ rtx fun = init_one_libfunc ("__mulbitint3");
+ emit_library_call_value_1 (0, fun, NULL_RTX, LCT_NORMAL, VOIDmode, 6, args);
+}
+
+void
+expand_DIVMODBITINT (internal_fn, gcall *stmt)
+{
+ rtx_mode_t args[8];
+ for (int i = 0; i < 8; i++)
+ args[i] = rtx_mode_t (expand_normal (gimple_call_arg (stmt, i)),
+ (i & 1) ? SImode : ptr_mode);
+ rtx fun = init_one_libfunc ("__divmodbitint4");
+ emit_library_call_value_1 (0, fun, NULL_RTX, LCT_NORMAL, VOIDmode, 8, args);
+}
+
+void
+expand_FLOATTOBITINT (internal_fn, gcall *stmt)
+{
+ machine_mode mode = TYPE_MODE (TREE_TYPE (gimple_call_arg (stmt, 2)));
+ rtx arg0 = expand_normal (gimple_call_arg (stmt, 0));
+ rtx arg1 = expand_normal (gimple_call_arg (stmt, 1));
+ rtx arg2 = expand_normal (gimple_call_arg (stmt, 2));
+ const char *mname = GET_MODE_NAME (mode);
+ unsigned mname_len = strlen (mname);
+ int len = 12 + mname_len;
+ char *libfunc_name = XALLOCAVEC (char, len);
+ char *p = libfunc_name;
+ const char *q;
+ memcpy (p, "__fix", 5);
+ p += 5;
+ for (q = mname; *q; q++)
+ *p++ = TOLOWER (*q);
+ memcpy (p, "bitint", 7);
+ rtx fun = init_one_libfunc (libfunc_name);
+ emit_library_call (fun, LCT_NORMAL, VOIDmode, arg0, ptr_mode, arg1,
+ SImode, arg2, mode);
+}
+
+void
+expand_BITINTTOFLOAT (internal_fn, gcall *stmt)
+{
+ tree lhs = gimple_call_lhs (stmt);
+ if (!lhs)
+ return;
+ machine_mode mode = TYPE_MODE (TREE_TYPE (lhs));
+ rtx arg0 = expand_normal (gimple_call_arg (stmt, 0));
+ rtx arg1 = expand_normal (gimple_call_arg (stmt, 1));
+ const char *mname = GET_MODE_NAME (mode);
+ unsigned mname_len = strlen (mname);
+ int len = 14 + mname_len;
+ char *libfunc_name = XALLOCAVEC (char, len);
+ char *p = libfunc_name;
+ const char *q;
+ memcpy (p, "__floatbitint", 13);
+ p += 13;
+ for (q = mname; *q; q++)
+ *p++ = TOLOWER (*q);
+ *p = '\0';
+ rtx fun = init_one_libfunc (libfunc_name);
+ rtx target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
+ rtx val = emit_library_call_value (fun, target, LCT_PURE, mode,
+ arg0, ptr_mode, arg1, SImode);
+ if (val != target)
+ emit_move_insn (target, val);
+}
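[Editorial note: the two conversion expanders above synthesize the libgcc entry point name from the scalar mode name, e.g. __fixdfbitint and __floatbitintdf for DFmode. A standalone sketch of the name construction in plain C, with "DF" standing in for GET_MODE_NAME (mode) and an invented helper name:]

```c
#include <ctype.h>
#include <string.h>

/* Mirror of the name building in expand_FLOATTOBITINT: "__fix", the
   lowercased mode name, then "bitint" (whose 7-byte memcpy also copies
   the terminating NUL).  BUF must have room for
   12 + strlen (mname) bytes, matching the XALLOCAVEC size there.  */
static void
float_to_bitint_libfunc_name (const char *mname, char *buf)
{
  char *p = buf;
  memcpy (p, "__fix", 5);
  p += 5;
  for (const char *q = mname; *q; q++)
    *p++ = tolower ((unsigned char) *q);
  memcpy (p, "bitint", 7);
}
```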
@@ -559,6 +559,12 @@ DEF_INTERNAL_FN (ASSUME, ECF_CONST | ECF
/* For if-conversion of inbranch SIMD clones. */
DEF_INTERNAL_FN (MASK_CALL, ECF_NOVOPS, NULL)
+/* _BitInt support. */
+DEF_INTERNAL_FN (MULBITINT, ECF_LEAF | ECF_NOTHROW, ". O . R . R . ")
+DEF_INTERNAL_FN (DIVMODBITINT, ECF_LEAF, ". O . O . R . R . ")
+DEF_INTERNAL_FN (FLOATTOBITINT, ECF_LEAF | ECF_NOTHROW, ". O . . ")
+DEF_INTERNAL_FN (BITINTTOFLOAT, ECF_PURE | ECF_LEAF, ". R . ")
+
#undef DEF_INTERNAL_INT_FN
#undef DEF_INTERNAL_FLT_FN
#undef DEF_INTERNAL_FLT_FLOATN_FN
@@ -256,6 +256,10 @@ extern void expand_SPACESHIP (internal_f
extern void expand_TRAP (internal_fn, gcall *);
extern void expand_ASSUME (internal_fn, gcall *);
extern void expand_MASK_CALL (internal_fn, gcall *);
+extern void expand_MULBITINT (internal_fn, gcall *);
+extern void expand_DIVMODBITINT (internal_fn, gcall *);
+extern void expand_FLOATTOBITINT (internal_fn, gcall *);
+extern void expand_BITINTTOFLOAT (internal_fn, gcall *);
extern bool vectorized_internal_fn_supported_p (internal_fn, tree);
@@ -1888,7 +1888,7 @@ lto_input_tree_1 (class lto_input_block
for (i = 0; i < len; i++)
a[i] = streamer_read_hwi (ib);
- gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
+ gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION);
result = wide_int_to_tree (type, wide_int::from_array
(a, len, TYPE_PRECISION (type)));
streamer_tree_cache_append (data_in->reader_cache, result, hash);
@@ -1453,6 +1453,7 @@ OBJS = \
gimple-loop-jam.o \
gimple-loop-versioning.o \
gimple-low.o \
+ gimple-lower-bitint.o \
gimple-predicate-analysis.o \
gimple-pretty-print.o \
gimple-range.o \
@@ -6433,6 +6433,7 @@ (define_operator_list SYNC_FETCH_AND_AND
- 1)); }))))
(if (wi::to_wide (cst) == signed_max
&& TYPE_UNSIGNED (arg1_type)
+ && TYPE_MODE (arg1_type) != BLKmode
/* We will flip the signedness of the comparison operator
associated with the mode of @1, so the sign bit is
specified by this mode. Check that @1 is the signed
@@ -237,6 +237,7 @@ along with GCC; see the file COPYING3.
NEXT_PASS (pass_tail_recursion);
NEXT_PASS (pass_ch);
NEXT_PASS (pass_lower_complex);
+ NEXT_PASS (pass_lower_bitint);
NEXT_PASS (pass_sra);
/* The dom pass will also resolve all __builtin_constant_p calls
that are still there to 0. This has to be done after some
@@ -386,6 +387,7 @@ along with GCC; see the file COPYING3.
NEXT_PASS (pass_strip_predict_hints, false /* early_p */);
/* Lower remaining pieces of GIMPLE. */
NEXT_PASS (pass_lower_complex);
+ NEXT_PASS (pass_lower_bitint);
NEXT_PASS (pass_lower_vector_ssa);
NEXT_PASS (pass_lower_switch);
/* Perform simple scalar cleanup which is constant/copy propagation. */
@@ -429,6 +431,7 @@ along with GCC; see the file COPYING3.
NEXT_PASS (pass_lower_vaarg);
NEXT_PASS (pass_lower_vector);
NEXT_PASS (pass_lower_complex_O0);
+ NEXT_PASS (pass_lower_bitint_O0);
NEXT_PASS (pass_sancov_O0);
NEXT_PASS (pass_lower_switch_O0);
NEXT_PASS (pass_asan_O0);
@@ -336,8 +336,23 @@ pp_get_prefix (const pretty_printer *pp)
#define pp_wide_int(PP, W, SGN) \
do \
{ \
- print_dec (W, pp_buffer (PP)->digit_buffer, SGN); \
- pp_string (PP, pp_buffer (PP)->digit_buffer); \
+ const wide_int_ref &pp_wide_int_ref = (W); \
+ unsigned int pp_wide_int_prec \
+ = pp_wide_int_ref.get_precision (); \
+ if ((pp_wide_int_prec + 3) / 4 \
+ > sizeof (pp_buffer (PP)->digit_buffer) - 3) \
+ { \
+ char *pp_wide_int_buf \
+ = XALLOCAVEC (char, (pp_wide_int_prec + 3) / 4 + 3);\
+ print_dec (pp_wide_int_ref, pp_wide_int_buf, SGN); \
+ pp_string (PP, pp_wide_int_buf); \
+ } \
+ else \
+ { \
+ print_dec (pp_wide_int_ref, \
+ pp_buffer (PP)->digit_buffer, SGN); \
+ pp_string (PP, pp_buffer (PP)->digit_buffer); \
+ } \
} \
while (0)
#define pp_vrange(PP, R) \
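As an aside (not part of the patch): the buffer-size computation in the pp_wide_int change relies on print_dec falling back to a hex representation for values that don't fit a HOST_WIDE_INT, so one hex digit per 4 bits plus 3 bytes for a sign/prefix and the NUL terminator bounds the output. A standalone sketch of that arithmetic, with the helper names being illustrative only:

```cpp
#include <cassert>
#include <cstddef>

// Worst-case buffer size for printing a PREC-bit value as hex:
// (prec + 3) / 4 hex digits plus 3 bytes for "-"/"0x" prefix and NUL,
// mirroring the (prec + 3) / 4 + 3 computation in the patch.
static size_t wide_int_print_buf_size (unsigned prec)
{
  return (prec + 3) / 4 + 3;
}

// Mirrors the macro's decision: use the fixed digit_buffer unless the
// precision demands more space than it provides.
static bool needs_heap_buffer (unsigned prec, size_t digit_buffer_size)
{
  return (prec + 3) / 4 > digit_buffer_size - 3;
}
```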
@@ -2393,6 +2393,64 @@ layout_type (tree type)
break;
}
+ case BITINT_TYPE:
+ {
+ struct bitint_info info;
+ int cnt;
+ gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type), &info));
+ scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
+ if (TYPE_PRECISION (type) <= GET_MODE_PRECISION (limb_mode))
+ {
+ SET_TYPE_MODE (type, limb_mode);
+ cnt = 1;
+ }
+ else
+ {
+ SET_TYPE_MODE (type, BLKmode);
+ cnt = CEIL (TYPE_PRECISION (type), GET_MODE_PRECISION (limb_mode));
+ }
+ TYPE_SIZE (type) = bitsize_int (cnt * GET_MODE_BITSIZE (limb_mode));
+ TYPE_SIZE_UNIT (type) = size_int (cnt * GET_MODE_SIZE (limb_mode));
+ SET_TYPE_ALIGN (type, GET_MODE_ALIGNMENT (limb_mode));
+ if (cnt > 1)
+ {
+	  /* Use the same mode as compute_record_mode would use for a
+	     structure containing cnt limb_mode elements.  */
+	  /* Use the same mode as compute_record_mode would use for a
+	     structure containing cnt limb_mode elements.  */
+ machine_mode mode = mode_for_size_tree (TYPE_SIZE (type),
+ MODE_INT, 1).else_blk ();
+ if (mode == BLKmode)
+ break;
+ finalize_type_size (type);
+ SET_TYPE_MODE (type, mode);
+ if (STRICT_ALIGNMENT
+ && !(TYPE_ALIGN (type) >= BIGGEST_ALIGNMENT
+ || TYPE_ALIGN (type) >= GET_MODE_ALIGNMENT (mode)))
+ {
+ /* If this is the only reason this type is BLKmode, then
+ don't force containing types to be BLKmode. */
+ TYPE_NO_FORCE_BLK (type) = 1;
+ SET_TYPE_MODE (type, BLKmode);
+ }
+ if (TYPE_NEXT_VARIANT (type) || type != TYPE_MAIN_VARIANT (type))
+ for (tree variant = TYPE_MAIN_VARIANT (type);
+ variant != NULL_TREE;
+ variant = TYPE_NEXT_VARIANT (variant))
+ {
+ SET_TYPE_MODE (variant, mode);
+ if (STRICT_ALIGNMENT
+ && !(TYPE_ALIGN (variant) >= BIGGEST_ALIGNMENT
+ || (TYPE_ALIGN (variant)
+ >= GET_MODE_ALIGNMENT (mode))))
+ {
+ TYPE_NO_FORCE_BLK (variant) = 1;
+ SET_TYPE_MODE (variant, BLKmode);
+ }
+ }
+ return;
+ }
+ break;
+ }
+
case REAL_TYPE:
{
/* Allow the caller to choose the type mode, which is how decimal
@@ -2417,6 +2475,18 @@ layout_type (tree type)
case COMPLEX_TYPE:
TYPE_UNSIGNED (type) = TYPE_UNSIGNED (TREE_TYPE (type));
+ if (TYPE_MODE (TREE_TYPE (type)) == BLKmode)
+ {
+ gcc_checking_assert (TREE_CODE (TREE_TYPE (type)) == BITINT_TYPE);
+ SET_TYPE_MODE (type, BLKmode);
+ TYPE_SIZE (type)
+ = int_const_binop (MULT_EXPR, TYPE_SIZE (TREE_TYPE (type)),
+ bitsize_int (2));
+ TYPE_SIZE_UNIT (type)
+ = int_const_binop (MULT_EXPR, TYPE_SIZE_UNIT (TREE_TYPE (type)),
+ bitsize_int (2));
+ break;
+ }
SET_TYPE_MODE (type,
GET_MODE_COMPLEX_MODE (TYPE_MODE (TREE_TYPE (type))));
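To illustrate the BITINT_TYPE case of layout_type above (not part of the patch): assuming the target hook picks a 64-bit limb_mode, a _BitInt(N) wider than one limb occupies CEIL (N, 64) limbs, is limb-aligned, and its size is the limb count times the limb size:

```cpp
#include <cassert>

// Sketch of the BITINT_TYPE layout rule, assuming a 64-bit limb
// (the actual limb_mode comes from targetm.c.bitint_type_info).
struct bitint_layout { int limbs; int size_bits; int align_bits; };

static bitint_layout layout_bitint (int n, int limb_bits)
{
  bitint_layout l;
  l.limbs = (n + limb_bits - 1) / limb_bits;   // CEIL (N, limb precision)
  l.size_bits = l.limbs * limb_bits;           // TYPE_SIZE
  l.align_bits = limb_bits;                    // GET_MODE_ALIGNMENT (limb_mode)
  return l;
}
```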
@@ -6241,6 +6241,15 @@ when @var{type} is @code{EXCESS_PRECISIO
enum flt_eval_method, (enum excess_precision_type type),
default_excess_precision)
+/* Return true if _BitInt(N) is supported and fill details about it into
+ *INFO. */
+DEFHOOK
+(bitint_type_info,
+ "This target hook returns true if _BitInt(N) is supported and provides some\n\
+details on it.",
+ bool, (int n, struct bitint_info *info),
+ default_bitint_type_info)
+
HOOK_VECTOR_END (c)
/* Functions specific to the C++ frontend. */
@@ -68,6 +68,20 @@ union cumulative_args_t { void *p; };
#endif /* !CHECKING_P */
+/* Target properties of the _BitInt(N) type.  _BitInt(N) is represented
+   as a series of CEIL (N, GET_MODE_PRECISION (limb_mode)) limbs of mode
+   limb_mode, ordered from least significant to most significant if
+   !big_endian, otherwise from most significant to least significant.
+   If extended is false, the bits at position N and above are undefined
+   when stored in a register or in memory; otherwise they are zero or
+   sign extended, depending on whether the type is unsigned _BitInt(N)
+   or (signed) _BitInt(N).  */
+
+struct bitint_info {
+ machine_mode limb_mode;
+ bool big_endian;
+ bool extended;
+};
+
/* Types of memory operation understood by the "by_pieces" infrastructure.
Used by the TARGET_USE_BY_PIECES_INFRASTRUCTURE_P target hook and
internally by the functions in expr.cc. */
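A small aside on the limb ordering described for struct bitint_info (illustrative only, assuming 64-bit limbs): with !big_endian, limb 0 is the least significant; with big_endian, limb 0 is the most significant. A sketch of which memory limb holds a given bit:

```cpp
#include <cassert>

// Returns the memory index of the limb holding bit BIT of an N-bit
// _BitInt split into 64-bit limbs.  With !big_endian limb 0 is least
// significant; with big_endian the order is reversed.
static int limb_index (int n, int bit, bool big_endian)
{
  int cnt = (n + 63) / 64;       // number of limbs
  int le_idx = bit / 64;         // little-endian limb number
  return big_endian ? cnt - 1 - le_idx : le_idx;
}
```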
@@ -2595,6 +2595,14 @@ default_excess_precision (enum excess_pr
return FLT_EVAL_METHOD_PROMOTE_TO_FLOAT;
}
+/* Return true if _BitInt(N) is supported and fill details about it into
+ *INFO. */
+bool
+default_bitint_type_info (int, struct bitint_info *)
+{
+ return false;
+}
+
/* Default implementation for
TARGET_STACK_CLASH_PROTECTION_ALLOCA_PROBE_RANGE. */
HOST_WIDE_INT
@@ -284,6 +284,7 @@ extern unsigned int default_min_arithmet
extern enum flt_eval_method
default_excess_precision (enum excess_precision_type ATTRIBUTE_UNUSED);
+extern bool default_bitint_type_info (int, struct bitint_info *);
extern HOST_WIDE_INT default_stack_clash_protection_alloca_probe_range (void);
extern void default_select_early_remat_modes (sbitmap);
extern tree default_preferred_else_value (unsigned, tree, unsigned, tree *);
@@ -229,6 +229,7 @@ protected:
have completed. */
#define PROP_assumptions_done (1 << 19) /* Assume function kept
around. */
+#define PROP_gimple_lbitint (1 << 20) /* lowered large _BitInt */
#define PROP_gimple \
(PROP_gimple_any | PROP_gimple_lcf | PROP_gimple_leh | PROP_gimple_lomp)
@@ -420,6 +421,8 @@ extern gimple_opt_pass *make_pass_strip_
extern gimple_opt_pass *make_pass_rebuild_frequencies (gcc::context *ctxt);
extern gimple_opt_pass *make_pass_lower_complex_O0 (gcc::context *ctxt);
extern gimple_opt_pass *make_pass_lower_complex (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_lower_bitint_O0 (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_lower_bitint (gcc::context *ctxt);
extern gimple_opt_pass *make_pass_lower_switch (gcc::context *ctxt);
extern gimple_opt_pass *make_pass_lower_switch_O0 (gcc::context *ctxt);
extern gimple_opt_pass *make_pass_lower_vector (gcc::context *ctxt);
@@ -1924,6 +1924,7 @@ dump_generic_node (pretty_printer *pp, t
case VECTOR_TYPE:
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
+ case BITINT_TYPE:
case OPAQUE_TYPE:
{
unsigned int quals = TYPE_QUALS (node);
@@ -2038,6 +2039,14 @@ dump_generic_node (pretty_printer *pp, t
pp_decimal_int (pp, TYPE_PRECISION (node));
pp_greater (pp);
}
+ else if (TREE_CODE (node) == BITINT_TYPE)
+ {
+ if (TYPE_UNSIGNED (node))
+ pp_string (pp, "unsigned ");
+ pp_string (pp, "_BitInt(");
+ pp_decimal_int (pp, TYPE_PRECISION (node));
+ pp_right_paren (pp);
+ }
else if (TREE_CODE (node) == VOID_TYPE)
pp_string (pp, "void");
else
@@ -2234,8 +2243,18 @@ dump_generic_node (pretty_printer *pp, t
pp_minus (pp);
val = -val;
}
- print_hex (val, pp_buffer (pp)->digit_buffer);
- pp_string (pp, pp_buffer (pp)->digit_buffer);
+ unsigned int prec = val.get_precision ();
+ if ((prec + 3) / 4 > sizeof (pp_buffer (pp)->digit_buffer) - 3)
+ {
+ char *buf = XALLOCAVEC (char, (prec + 3) / 4 + 3);
+ print_hex (val, buf);
+ pp_string (pp, buf);
+ }
+ else
+ {
+ print_hex (val, pp_buffer (pp)->digit_buffer);
+ pp_string (pp, pp_buffer (pp)->digit_buffer);
+ }
}
if ((flags & TDF_GIMPLE)
&& ! (POINTER_TYPE_P (TREE_TYPE (node))
@@ -38,6 +38,7 @@ along with GCC; see the file COPYING3.
#include "explow.h"
#include "tree-dfa.h"
#include "stor-layout.h"
+#include "gimple-lower-bitint.h"
/* This set of routines implements a coalesce_list. This is an object which
is used to track pairs of ssa_names which are desirable to coalesce
@@ -914,6 +915,14 @@ build_ssa_conflict_graph (tree_live_info
else if (is_gimple_debug (stmt))
continue;
+ if (map->bitint)
+ {
+ build_bitint_stmt_ssa_conflicts (stmt, live, graph, map->bitint,
+ live_track_process_def,
+ live_track_process_use);
+ continue;
+ }
+
/* For stmts with more than one SSA_NAME definition pretend all the
SSA_NAME outputs but the first one are live at this point, so
that conflicts are added in between all those even when they are
@@ -1058,6 +1067,8 @@ create_coalesce_list_for_region (var_map
if (virtual_operand_p (res))
continue;
ver = SSA_NAME_VERSION (res);
+ if (map->bitint && !bitmap_bit_p (map->bitint, ver))
+ continue;
/* Register ssa_names and coalesces between the args and the result
of all PHI. */
@@ -1106,6 +1117,8 @@ create_coalesce_list_for_region (var_map
{
v1 = SSA_NAME_VERSION (lhs);
v2 = SSA_NAME_VERSION (rhs1);
+ if (map->bitint && !bitmap_bit_p (map->bitint, v1))
+ break;
cost = coalesce_cost_bb (bb);
add_coalesce (cl, v1, v2, cost);
bitmap_set_bit (used_in_copy, v1);
@@ -1124,12 +1137,16 @@ create_coalesce_list_for_region (var_map
if (!rhs1)
break;
tree lhs = ssa_default_def (cfun, res);
+ if (map->bitint && !lhs)
+ break;
gcc_assert (lhs);
if (TREE_CODE (rhs1) == SSA_NAME
&& gimple_can_coalesce_p (lhs, rhs1))
{
v1 = SSA_NAME_VERSION (lhs);
v2 = SSA_NAME_VERSION (rhs1);
+ if (map->bitint && !bitmap_bit_p (map->bitint, v1))
+ break;
cost = coalesce_cost_bb (bb);
add_coalesce (cl, v1, v2, cost);
bitmap_set_bit (used_in_copy, v1);
@@ -1177,6 +1194,8 @@ create_coalesce_list_for_region (var_map
v1 = SSA_NAME_VERSION (outputs[match]);
v2 = SSA_NAME_VERSION (input);
+ if (map->bitint && !bitmap_bit_p (map->bitint, v1))
+ continue;
if (gimple_can_coalesce_p (outputs[match], input))
{
@@ -1651,6 +1670,33 @@ compute_optimized_partition_bases (var_m
}
}
+ if (map->bitint
+ && flag_tree_coalesce_vars
+ && (optimize > 1 || parts < 500))
+ for (i = 0; i < (unsigned) parts; ++i)
+ {
+ tree s1 = partition_to_var (map, i);
+ int p1 = partition_find (tentative, i);
+ for (unsigned j = i + 1; j < (unsigned) parts; ++j)
+ {
+ tree s2 = partition_to_var (map, j);
+ if (s1 == s2)
+ continue;
+ if (tree_int_cst_equal (TYPE_SIZE (TREE_TYPE (s1)),
+ TYPE_SIZE (TREE_TYPE (s2))))
+ {
+ int p2 = partition_find (tentative, j);
+
+ if (p1 == p2)
+ continue;
+
+ partition_union (tentative, p1, p2);
+ if (partition_find (tentative, i) != p1)
+ break;
+ }
+ }
+ }
+
map->partition_to_base_index = XCNEWVEC (int, parts);
auto_vec<unsigned int> index_map (parts);
if (parts)
@@ -1692,6 +1738,101 @@ compute_optimized_partition_bases (var_m
partition_delete (tentative);
}
+/* For the bitint lowering pass, try harder.  Partitions which contain
+   the SSA_NAME default def of a PARM_DECL or are based on a RESULT_DECL
+   need to have compatible types, because they will use that RESULT_DECL
+   or PARM_DECL.  Other partitions may have incompatible _BitInt types,
+   as long as they have the same size; those will use VAR_DECLs which
+   are just arrays of the limbs.  */
+
+static void
+coalesce_bitint (var_map map, ssa_conflicts *graph)
+{
+ unsigned n = num_var_partitions (map);
+ if (optimize <= 1 && n > 500)
+ return;
+
+ bool try_same_size = false;
+ FILE *debug_file = (dump_flags & TDF_DETAILS) ? dump_file : NULL;
+ for (unsigned i = 0; i < n; ++i)
+ {
+ tree s1 = partition_to_var (map, i);
+ if ((unsigned) var_to_partition (map, s1) != i)
+ continue;
+ int v1 = SSA_NAME_VERSION (s1);
+ for (unsigned j = i + 1; j < n; ++j)
+ {
+ tree s2 = partition_to_var (map, j);
+ if (s1 == s2 || (unsigned) var_to_partition (map, s2) != j)
+ continue;
+ if (!types_compatible_p (TREE_TYPE (s1), TREE_TYPE (s2)))
+ {
+ if (!try_same_size
+ && tree_int_cst_equal (TYPE_SIZE (TREE_TYPE (s1)),
+ TYPE_SIZE (TREE_TYPE (s2))))
+ try_same_size = true;
+ continue;
+ }
+ int v2 = SSA_NAME_VERSION (s2);
+ if (attempt_coalesce (map, graph, v1, v2, debug_file)
+ && partition_to_var (map, i) != s1)
+ break;
+ }
+ }
+
+ if (!try_same_size)
+ return;
+
+ unsigned i;
+ bitmap_iterator bi;
+ bitmap same_type = NULL;
+
+ EXECUTE_IF_SET_IN_BITMAP (map->bitint, 0, i, bi)
+ {
+ tree s = ssa_name (i);
+ if (!SSA_NAME_VAR (s))
+ continue;
+ if (TREE_CODE (SSA_NAME_VAR (s)) != RESULT_DECL
+ && (TREE_CODE (SSA_NAME_VAR (s)) != PARM_DECL
+ || !SSA_NAME_IS_DEFAULT_DEF (s)))
+ continue;
+ if (same_type == NULL)
+ same_type = BITMAP_ALLOC (NULL);
+ int p = var_to_partition (map, s);
+ bitmap_set_bit (same_type, p);
+ }
+
+ for (i = 0; i < n; ++i)
+ {
+ if (same_type && bitmap_bit_p (same_type, i))
+ continue;
+ tree s1 = partition_to_var (map, i);
+ if ((unsigned) var_to_partition (map, s1) != i)
+ continue;
+ int v1 = SSA_NAME_VERSION (s1);
+ for (unsigned j = i + 1; j < n; ++j)
+ {
+ if (same_type && bitmap_bit_p (same_type, j))
+ continue;
+
+ tree s2 = partition_to_var (map, j);
+ if (s1 == s2 || (unsigned) var_to_partition (map, s2) != j)
+ continue;
+
+ if (!tree_int_cst_equal (TYPE_SIZE (TREE_TYPE (s1)),
+ TYPE_SIZE (TREE_TYPE (s2))))
+ continue;
+
+ int v2 = SSA_NAME_VERSION (s2);
+ if (attempt_coalesce (map, graph, v1, v2, debug_file)
+ && partition_to_var (map, i) != s1)
+ break;
+ }
+ }
+
+ BITMAP_FREE (same_type);
+}
+
/* Given an initial var_map MAP, coalesce variables and return a partition map
with the resulting coalesce. Note that this function is called in either
live range computation context or out-of-ssa context, indicated by MAP. */
@@ -1709,6 +1850,8 @@ coalesce_ssa_name (var_map map)
if (map->outofssa_p)
populate_coalesce_list_for_outofssa (cl, used_in_copies);
bitmap_list_view (used_in_copies);
+ if (map->bitint)
+ bitmap_ior_into (used_in_copies, map->bitint);
if (dump_file && (dump_flags & TDF_DETAILS))
dump_var_map (dump_file, map);
@@ -1756,6 +1899,9 @@ coalesce_ssa_name (var_map map)
((dump_flags & TDF_DETAILS) ? dump_file : NULL));
delete_coalesce_list (cl);
+
+ if (map->bitint && flag_tree_coalesce_vars)
+ coalesce_bitint (map, graph);
+
ssa_conflicts_delete (graph);
}
-
@@ -76,10 +76,11 @@ var_map_base_fini (var_map map)
}
/* Create a variable partition map of SIZE for region, initialize and return
it. Region is a loop if LOOP is non-NULL, otherwise is the current
- function. */
+ function. If BITINT is non-NULL, only SSA_NAMEs from that bitmap
+ will be coalesced. */
var_map
-init_var_map (int size, class loop *loop)
+init_var_map (int size, class loop *loop, bitmap bitint)
{
var_map map;
@@ -108,7 +109,8 @@ init_var_map (int size, class loop *loop
else
{
map->bmp_bbs = NULL;
- map->outofssa_p = true;
+ map->outofssa_p = bitint == NULL;
+ map->bitint = bitint;
basic_block bb;
FOR_EACH_BB_FN (bb, cfun)
map->vec_bbs.safe_push (bb);
@@ -70,6 +70,10 @@ typedef struct _var_map
/* Vector of basic block in the region. */
vec<basic_block> vec_bbs;
+  /* If non-NULL, only coalesce SSA_NAMEs from this bitmap, and try harder
+     for those (for the bitint lowering pass).  */
+ bitmap bitint;
+
/* True if this map is for out-of-ssa, otherwise for live range
computation. When for out-of-ssa, it also means the var map is computed
for whole current function. */
@@ -80,7 +84,7 @@ typedef struct _var_map
/* Value used to represent no partition number. */
#define NO_PARTITION -1
-extern var_map init_var_map (int, class loop* = NULL);
+extern var_map init_var_map (int, class loop * = NULL, bitmap = NULL);
extern void delete_var_map (var_map);
extern int var_union (var_map, tree, tree);
extern void partition_view_normal (var_map);
@@ -100,7 +104,7 @@ inline bool
region_contains_p (var_map map, basic_block bb)
{
/* It's possible that the function is called with ENTRY_BLOCK/EXIT_BLOCK. */
- if (map->outofssa_p)
+ if (map->outofssa_p || map->bitint)
return (bb->index != ENTRY_BLOCK && bb->index != EXIT_BLOCK);
return bitmap_bit_p (map->bmp_bbs, bb->index);
@@ -74,6 +74,7 @@ along with GCC; see the file COPYING3.
#include "ipa-modref-tree.h"
#include "ipa-modref.h"
#include "tree-ssa-sccvn.h"
+#include "target.h"
/* This algorithm is based on the SCC algorithm presented by Keith
Cooper and L. Taylor Simpson in "SCC-Based Value numbering"
@@ -6969,8 +6970,14 @@ eliminate_dom_walker::eliminate_stmt (ba
|| !DECL_BIT_FIELD_TYPE (TREE_OPERAND (lhs, 1)))
&& !type_has_mode_precision_p (TREE_TYPE (lhs)))
{
- if (TREE_CODE (lhs) == COMPONENT_REF
- || TREE_CODE (lhs) == MEM_REF)
+ if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
+ && (TYPE_PRECISION (TREE_TYPE (lhs))
+ > (targetm.scalar_mode_supported_p (TImode)
+ ? GET_MODE_PRECISION (TImode)
+ : GET_MODE_PRECISION (DImode))))
+ lookup_lhs = NULL_TREE;
+ else if (TREE_CODE (lhs) == COMPONENT_REF
+ || TREE_CODE (lhs) == MEM_REF)
{
tree ltype = build_nonstandard_integer_type
(TREE_INT_CST_LOW (TYPE_SIZE (TREE_TYPE (lhs))),
@@ -1143,32 +1143,93 @@ jump_table_cluster::emit (tree index_exp
tree default_label_expr, basic_block default_bb,
location_t loc)
{
- unsigned HOST_WIDE_INT range = get_range (get_low (), get_high ());
+ tree low = get_low ();
+ unsigned HOST_WIDE_INT range = get_range (low, get_high ());
unsigned HOST_WIDE_INT nondefault_range = 0;
+ bool bitint = false;
+ gimple_stmt_iterator gsi = gsi_start_bb (m_case_bb);
+
+  /* For large/huge _BitInt, subtract low from index_expr, cast to unsigned
+     DImode type (get_range doesn't support ranges larger than 64 bits)
+     and subtract low from all case values as well.  */
+ if (TREE_CODE (TREE_TYPE (index_expr)) == BITINT_TYPE
+ && TYPE_PRECISION (TREE_TYPE (index_expr)) > GET_MODE_PRECISION (DImode))
+ {
+ bitint = true;
+ tree this_low = low, type;
+ gimple *g;
+ if (!TYPE_OVERFLOW_WRAPS (TREE_TYPE (index_expr)))
+ {
+ type = unsigned_type_for (TREE_TYPE (index_expr));
+ g = gimple_build_assign (make_ssa_name (type), NOP_EXPR, index_expr);
+ gimple_set_location (g, loc);
+ gsi_insert_after (&gsi, g, GSI_NEW_STMT);
+ index_expr = gimple_assign_lhs (g);
+ this_low = fold_convert (type, this_low);
+ }
+ this_low = const_unop (NEGATE_EXPR, TREE_TYPE (this_low), this_low);
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (index_expr)),
+ PLUS_EXPR, index_expr, this_low);
+ gimple_set_location (g, loc);
+ gsi_insert_after (&gsi, g, GSI_NEW_STMT);
+ index_expr = gimple_assign_lhs (g);
+ type = build_nonstandard_integer_type (GET_MODE_PRECISION (DImode), 1);
+ g = gimple_build_cond (GT_EXPR, index_expr,
+ fold_convert (TREE_TYPE (index_expr),
+ TYPE_MAX_VALUE (type)),
+ NULL_TREE, NULL_TREE);
+ gimple_set_location (g, loc);
+ gsi_insert_after (&gsi, g, GSI_NEW_STMT);
+ edge e1 = split_block (m_case_bb, g);
+ e1->flags = EDGE_FALSE_VALUE;
+ e1->probability = profile_probability::likely ();
+ edge e2 = make_edge (e1->src, default_bb, EDGE_TRUE_VALUE);
+ e2->probability = e1->probability.invert ();
+ gsi = gsi_start_bb (e1->dest);
+ g = gimple_build_assign (make_ssa_name (type), NOP_EXPR, index_expr);
+ gimple_set_location (g, loc);
+ gsi_insert_after (&gsi, g, GSI_NEW_STMT);
+ index_expr = gimple_assign_lhs (g);
+ }
/* For jump table we just emit a new gswitch statement that will
be latter lowered to jump table. */
auto_vec <tree> labels;
labels.create (m_cases.length ());
- make_edge (m_case_bb, default_bb, 0);
+ basic_block case_bb = gsi_bb (gsi);
+ make_edge (case_bb, default_bb, 0);
for (unsigned i = 0; i < m_cases.length (); i++)
{
- labels.quick_push (unshare_expr (m_cases[i]->m_case_label_expr));
- make_edge (m_case_bb, m_cases[i]->m_case_bb, 0);
+ tree lab = unshare_expr (m_cases[i]->m_case_label_expr);
+ if (bitint)
+ {
+ CASE_LOW (lab)
+ = fold_convert (TREE_TYPE (index_expr),
+ const_binop (MINUS_EXPR,
+ TREE_TYPE (CASE_LOW (lab)),
+ CASE_LOW (lab), low));
+ if (CASE_HIGH (lab))
+ CASE_HIGH (lab)
+ = fold_convert (TREE_TYPE (index_expr),
+ const_binop (MINUS_EXPR,
+ TREE_TYPE (CASE_HIGH (lab)),
+ CASE_HIGH (lab), low));
+ }
+ labels.quick_push (lab);
+ make_edge (case_bb, m_cases[i]->m_case_bb, 0);
}
gswitch *s = gimple_build_switch (index_expr,
unshare_expr (default_label_expr), labels);
gimple_set_location (s, loc);
- gimple_stmt_iterator gsi = gsi_start_bb (m_case_bb);
gsi_insert_after (&gsi, s, GSI_NEW_STMT);
/* Set up even probabilities for all cases. */
for (unsigned i = 0; i < m_cases.length (); i++)
{
simple_cluster *sc = static_cast<simple_cluster *> (m_cases[i]);
- edge case_edge = find_edge (m_case_bb, sc->m_case_bb);
+ edge case_edge = find_edge (case_bb, sc->m_case_bb);
unsigned HOST_WIDE_INT case_range
= sc->get_range (sc->get_low (), sc->get_high ());
nondefault_range += case_range;
@@ -1184,7 +1245,7 @@ jump_table_cluster::emit (tree index_exp
for (unsigned i = 0; i < m_cases.length (); i++)
{
simple_cluster *sc = static_cast<simple_cluster *> (m_cases[i]);
- edge case_edge = find_edge (m_case_bb, sc->m_case_bb);
+ edge case_edge = find_edge (case_bb, sc->m_case_bb);
case_edge->probability
= profile_probability::always ().apply_scale ((intptr_t)case_edge->aux,
range);
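To make the jump_table_cluster::emit transformation above concrete (illustrative only; unsigned __int128 stands in for a large _BitInt, and the case values are made up): the index is biased by the cluster's low bound with wrapping unsigned arithmetic, anything above the 64-bit cluster range goes to the default label, and the switch dispatches on the low 64 bits:

```cpp
#include <cassert>
#include <cstdint>

// Dispatch on a wide index the way the patched emit arranges it:
// bias by LOW (PLUS_EXPR with negated low), send anything above the
// 64-bit cluster range to the default case (the GT_EXPR guard), and
// switch on the DImode truncation of the biased value.
static int dispatch (unsigned __int128 idx, unsigned __int128 low)
{
  unsigned __int128 biased = idx - low;   // wraps for idx < low
  if (biased > UINT64_MAX)                // guard -> default label
    return -1;
  switch ((uint64_t) biased)              // gswitch on 64-bit value
    {
    case 0: return 0;
    case 5: return 5;
    default: return -1;
    }
}
```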
@@ -37,7 +37,8 @@ enum type_class
function_type_class, method_type_class,
record_type_class, union_type_class,
array_type_class, string_type_class,
- lang_type_class, opaque_type_class
+ lang_type_class, opaque_type_class,
+ bitint_type_class
};
#endif /* GCC_TYPECLASS_H */
@@ -50,6 +50,8 @@ along with GCC; see the file COPYING3.
#include "gimple-fold.h"
#include "varasm.h"
#include "realmpfr.h"
+#include "target.h"
+#include "langhooks.h"
/* Map from a tree to a VAR_DECL tree. */
@@ -125,6 +127,25 @@ tree
ubsan_encode_value (tree t, enum ubsan_encode_value_phase phase)
{
tree type = TREE_TYPE (t);
+ if (TREE_CODE (type) == BITINT_TYPE)
+ {
+ if (TYPE_PRECISION (type) <= POINTER_SIZE)
+ {
+ type = pointer_sized_int_node;
+ t = fold_build1 (NOP_EXPR, type, t);
+ }
+ else
+ {
+ scalar_int_mode arith_mode
+ = (targetm.scalar_mode_supported_p (TImode) ? TImode : DImode);
+ if (TYPE_PRECISION (type) > GET_MODE_PRECISION (arith_mode))
+ return build_zero_cst (pointer_sized_int_node);
+ type
+ = build_nonstandard_integer_type (GET_MODE_PRECISION (arith_mode),
+ TYPE_UNSIGNED (type));
+ t = fold_build1 (NOP_EXPR, type, t);
+ }
+ }
scalar_mode mode = SCALAR_TYPE_MODE (type);
const unsigned int bitsize = GET_MODE_BITSIZE (mode);
if (bitsize <= POINTER_SIZE)
@@ -355,14 +376,32 @@ ubsan_type_descriptor (tree type, enum u
{
/* See through any typedefs. */
type = TYPE_MAIN_VARIANT (type);
+ tree type3 = type;
+ if (pstyle == UBSAN_PRINT_FORCE_INT)
+ {
+      /* Temporary hack for -fsanitize=shift with _BitInt(129) and larger.
+	 libubsan crashes if the type is not TK_Integer.  */
+ if (TREE_CODE (type) == BITINT_TYPE)
+ {
+ scalar_int_mode arith_mode
+ = (targetm.scalar_mode_supported_p (TImode)
+ ? TImode : DImode);
+ if (TYPE_PRECISION (type) > GET_MODE_PRECISION (arith_mode))
+ type3 = build_qualified_type (type, TYPE_QUAL_CONST);
+ }
+ if (type3 == type)
+ pstyle = UBSAN_PRINT_NORMAL;
+ }
- tree decl = decl_for_type_lookup (type);
+ tree decl = decl_for_type_lookup (type3);
/* It is possible that some of the earlier created DECLs were found
unused, in that case they weren't emitted and varpool_node::get
returns NULL node on them. But now we really need them. Thus,
renew them here. */
if (decl != NULL_TREE && varpool_node::get (decl))
    return build_fold_addr_expr (decl);
tree dtype = ubsan_get_type_descriptor_type ();
tree type2 = type;
@@ -370,6 +409,7 @@ ubsan_type_descriptor (tree type, enum u
pretty_printer pretty_name;
unsigned char deref_depth = 0;
unsigned short tkind, tinfo;
+ char tname_bitint[sizeof ("unsigned _BitInt(2147483647)")];
/* Get the name of the type, or the name of the pointer type. */
if (pstyle == UBSAN_PRINT_POINTER)
@@ -403,8 +443,18 @@ ubsan_type_descriptor (tree type, enum u
}
if (tname == NULL)
- /* We weren't able to determine the type name. */
- tname = "<unknown>";
+ {
+ if (TREE_CODE (type2) == BITINT_TYPE)
+ {
+ snprintf (tname_bitint, sizeof (tname_bitint),
+ "%s_BitInt(%d)", TYPE_UNSIGNED (type2) ? "unsigned " : "",
+ TYPE_PRECISION (type2));
+ tname = tname_bitint;
+ }
+ else
+ /* We weren't able to determine the type name. */
+ tname = "<unknown>";
+ }
pp_quote (&pretty_name);
@@ -472,6 +522,18 @@ ubsan_type_descriptor (tree type, enum u
case INTEGER_TYPE:
tkind = 0x0000;
break;
+ case BITINT_TYPE:
+ {
+ /* FIXME: libubsan right now only supports _BitInts which
+ fit into DImode or TImode. */
+ scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
+ ? TImode : DImode);
+ if (TYPE_PRECISION (eltype) <= GET_MODE_PRECISION (arith_mode))
+ tkind = 0x0000;
+ else
+ tkind = 0xffff;
+ }
+ break;
case REAL_TYPE:
/* FIXME: libubsan right now only supports float, double and
long double type formats. */
@@ -486,7 +548,17 @@ ubsan_type_descriptor (tree type, enum u
tkind = 0xffff;
break;
}
- tinfo = get_ubsan_type_info_for_type (eltype);
+ tinfo = tkind == 0xffff ? 0 : get_ubsan_type_info_for_type (eltype);
+
+ if (pstyle == UBSAN_PRINT_FORCE_INT)
+ {
+ tkind = 0x0000;
+ scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
+ ? TImode : DImode);
+ tree t = lang_hooks.types.type_for_mode (arith_mode,
+ TYPE_UNSIGNED (eltype));
+ tinfo = get_ubsan_type_info_for_type (t);
+ }
/* Create a new VAR_DECL of type descriptor. */
const char *tmp = pp_formatted_text (&pretty_name);
@@ -522,7 +594,7 @@ ubsan_type_descriptor (tree type, enum u
varpool_node::finalize_decl (decl);
/* Save the VAR_DECL into the hash table. */
- decl_for_type_insert (type, decl);
+ decl_for_type_insert (type3, decl);
return build_fold_addr_expr (decl);
}
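The tname_bitint formatting above sizes its buffer for the worst case, sizeof ("unsigned _BitInt(2147483647)"), which covers any precision an int can express. A standalone sketch (the helper name is illustrative, not from the patch):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Formats a _BitInt type name into a fixed buffer sized for the
// largest possible precision, as the ubsan_type_descriptor hunk does.
static std::string bitint_type_name (int prec, bool is_unsigned)
{
  char buf[sizeof ("unsigned _BitInt(2147483647)")];
  snprintf (buf, sizeof buf, "%s_BitInt(%d)",
	    is_unsigned ? "unsigned " : "", prec);
  return buf;
}
```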
@@ -1604,8 +1676,9 @@ instrument_si_overflow (gimple_stmt_iter
Also punt on bit-fields. */
if (!INTEGRAL_TYPE_P (lhsinner)
|| TYPE_OVERFLOW_WRAPS (lhsinner)
- || maybe_ne (GET_MODE_BITSIZE (TYPE_MODE (lhsinner)),
- TYPE_PRECISION (lhsinner)))
+ || (TREE_CODE (lhsinner) != BITINT_TYPE
+ && maybe_ne (GET_MODE_BITSIZE (TYPE_MODE (lhsinner)),
+ TYPE_PRECISION (lhsinner))))
return;
switch (code)
@@ -39,7 +39,8 @@ enum ubsan_null_ckind {
enum ubsan_print_style {
UBSAN_PRINT_NORMAL,
UBSAN_PRINT_POINTER,
- UBSAN_PRINT_ARRAY
+ UBSAN_PRINT_ARRAY,
+ UBSAN_PRINT_FORCE_INT
};
/* This controls ubsan_encode_value behavior. */
@@ -5281,6 +5281,61 @@ output_constant (tree exp, unsigned HOST
reverse, false);
break;
+ case BITINT_TYPE:
+ if (TREE_CODE (exp) != INTEGER_CST)
+ error ("initializer for %<_BitInt(%d)%> value is not an integer "
+ "constant", TYPE_PRECISION (TREE_TYPE (exp)));
+ else
+ {
+ struct bitint_info info;
+ tree type = TREE_TYPE (exp);
+ gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
+ &info));
+ scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
+ if (TYPE_PRECISION (type) <= GET_MODE_PRECISION (limb_mode))
+ {
+ cst = expand_expr (exp, NULL_RTX, VOIDmode, EXPAND_INITIALIZER);
+ if (reverse)
+ cst = flip_storage_order (TYPE_MODE (TREE_TYPE (exp)), cst);
+ if (!assemble_integer (cst, MIN (size, thissize), align, 0))
+ error ("initializer for integer/fixed-point value is too "
+ "complicated");
+ break;
+ }
+ int prec = GET_MODE_PRECISION (limb_mode);
+ int cnt = CEIL (TYPE_PRECISION (type), prec);
+ tree limb_type = build_nonstandard_integer_type (prec, 1);
+ int elt_size = GET_MODE_SIZE (limb_mode);
+ unsigned int nalign = MIN (align, GET_MODE_ALIGNMENT (limb_mode));
+ thissize = 0;
+ if (prec == HOST_BITS_PER_WIDE_INT)
+ for (int i = 0; i < cnt; i++)
+ {
+ int idx = (info.big_endian ^ reverse) ? cnt - 1 - i : i;
+ tree c;
+ if (idx >= TREE_INT_CST_EXT_NUNITS (exp))
+ c = build_int_cst (limb_type,
+ tree_int_cst_sgn (exp) < 0 ? -1 : 0);
+ else
+ c = build_int_cst (limb_type,
+ TREE_INT_CST_ELT (exp, idx));
+ output_constant (c, elt_size, nalign, reverse, false);
+ thissize += elt_size;
+ }
+ else
+ for (int i = 0; i < cnt; i++)
+ {
+ int idx = (info.big_endian ^ reverse) ? cnt - 1 - i : i;
+ wide_int w = wi::rshift (wi::to_wide (exp), idx * prec,
+ TYPE_SIGN (TREE_TYPE (exp)));
+ tree c = wide_int_to_tree (limb_type,
+ wide_int::from (w, prec, UNSIGNED));
+ output_constant (c, elt_size, nalign, reverse, false);
+ thissize += elt_size;
+ }
+ }
+ break;
+
case ARRAY_TYPE:
case VECTOR_TYPE:
switch (TREE_CODE (exp))
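The output_constant hunk above emits a large _BitInt initializer limb by limb, filling limbs beyond the constant's own representation with 0 or -1 according to its sign. A sketch of that decomposition, using __int128 as a stand-in for the wide constant (an assumption; the real code walks TREE_INT_CST elements):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Splits VAL into CEIL (N, 64) little-endian 64-bit limbs; limbs past
// the value's 128-bit representation are sign-filled, matching the
// tree_int_cst_sgn (exp) < 0 ? -1 : 0 fill in the patch.
static std::vector<uint64_t> bitint_limbs (__int128 val, int n)
{
  int cnt = (n + 63) / 64;
  std::vector<uint64_t> limbs;
  for (int i = 0; i < cnt; i++)
    {
      if (i < 2)                                        // within __int128
	limbs.push_back ((uint64_t) (val >> (64 * i)));
      else
	limbs.push_back (val < 0 ? (uint64_t) -1 : 0);  // sign fill
    }
  return limbs;
}
```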
@@ -111,21 +111,21 @@ check_for_binary_op_overflow (range_quer
{
/* So far we found that there is an overflow on the boundaries.
That doesn't prove that there is an overflow even for all values
- in between the boundaries. For that compute widest_int range
+ in between the boundaries. For that compute widest2_int range
of the result and see if it doesn't overlap the range of
type. */
- widest_int wmin, wmax;
- widest_int w[4];
+ widest2_int wmin, wmax;
+ widest2_int w[4];
int i;
signop sign0 = TYPE_SIGN (TREE_TYPE (op0));
signop sign1 = TYPE_SIGN (TREE_TYPE (op1));
- w[0] = widest_int::from (vr0.lower_bound (), sign0);
- w[1] = widest_int::from (vr0.upper_bound (), sign0);
- w[2] = widest_int::from (vr1.lower_bound (), sign1);
- w[3] = widest_int::from (vr1.upper_bound (), sign1);
+ w[0] = widest2_int::from (vr0.lower_bound (), sign0);
+ w[1] = widest2_int::from (vr0.upper_bound (), sign0);
+ w[2] = widest2_int::from (vr1.lower_bound (), sign1);
+ w[3] = widest2_int::from (vr1.upper_bound (), sign1);
for (i = 0; i < 4; i++)
{
- widest_int wt;
+ widest2_int wt;
switch (subcode)
{
case PLUS_EXPR:
@@ -153,10 +153,10 @@ check_for_binary_op_overflow (range_quer
}
/* The result of op0 CODE op1 is known to be in range
[wmin, wmax]. */
- widest_int wtmin
- = widest_int::from (irange_val_min (type), TYPE_SIGN (type));
- widest_int wtmax
- = widest_int::from (irange_val_max (type), TYPE_SIGN (type));
+ widest2_int wtmin
+ = widest2_int::from (irange_val_min (type), TYPE_SIGN (type));
+ widest2_int wtmax
+ = widest2_int::from (irange_val_max (type), TYPE_SIGN (type));
/* If all values in [wmin, wmax] are smaller than
[wtmin, wtmax] or all are larger than [wtmin, wtmax],
the arithmetic operation will always overflow. */
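The widest2_int change above computes the exact result range in a type twice as wide as the operands and compares it against the result type's range. A minimal sketch of the idea for PLUS_EXPR, using __int128 as the double-width type for int64_t operand ranges (the real code evaluates all four corner combinations per opcode):

```cpp
#include <cassert>
#include <cstdint>

// Returns true iff op0 + op1 overflows int64_t for *every* value in
// [min0, max0] x [min1, max1]: compute the result range in a wider
// type and check it lies entirely outside [INT64_MIN, INT64_MAX].
static bool plus_always_overflows (int64_t min0, int64_t max0,
				   int64_t min1, int64_t max1)
{
  __int128 wmin = (__int128) min0 + min1;   // PLUS_EXPR is monotone,
  __int128 wmax = (__int128) max0 + max1;   // so corners suffice here
  return wmax < INT64_MIN || wmin > INT64_MAX;
}
```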
@@ -1717,12 +1717,11 @@ simplify_using_ranges::simplify_internal
g = gimple_build_assign (gimple_call_lhs (stmt), subcode, op0, op1);
else
{
- int prec = TYPE_PRECISION (type);
tree utype = type;
if (ovf
|| !useless_type_conversion_p (type, TREE_TYPE (op0))
|| !useless_type_conversion_p (type, TREE_TYPE (op1)))
- utype = build_nonstandard_integer_type (prec, 1);
+ utype = unsigned_type_for (type);
if (TREE_CODE (op0) == INTEGER_CST)
op0 = fold_convert (utype, op0);
else if (!useless_type_conversion_p (utype, TREE_TYPE (op0)))