LGTM. I must say the number of modifications is smaller than I expected :)
And it's a really big move for the RVV implementation!
On Thu, Jul 20, 2023 at 07:22, Juzhe-Zhong <juzhe.zhong@rivai.ai> wrote:
> The current machine mode layout is hard to maintain, read, and understand.
>
> For an LMUL = 1 SI vector mode:
> 1. VNx1SI mode when TARGET_MIN_VLEN = 32.
> 2. VNx2SI mode when TARGET_MIN_VLEN = 64.
> 3. VNx4SI mode when TARGET_MIN_VLEN = 128.
>
> Such an implementation produces redundant machine modes and thus redundant
> machine description patterns.
>
> Now, this patch refactors the machine modes into the following 3 formats:
>
> 1. mask modes: RVVMF64BImode, RVVMF32BImode, ..., RVVM1BImode.
> RVVMF64BImode means the mask mode occupies 1/64 of an RVV M1
> register, while an RVVM1BImode mask occupies a whole LMUL = 1 register.
> 2. non-tuple vector modes: RVV<LMUL><BASE_MODE>.
> E.g., RVVMF8QImode means SEW = 8 and LMUL = MF8.
> 3. tuple vector modes: RVV<LMUL>x<NF><BASE_MODE>.
>
> For example, for SEW = 16 and LMUL = MF2, the integer mode is always
> RVVMF2HImode; its size is then adjusted according to TARGET_MIN_VLEN,
> as the sketch below illustrates.
>
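> A minimal sketch of the new encoding (an illustration, not code from
> this patch; the helper name is hypothetical): the mode name pins down
> SEW and LMUL, so only the element count depends on TARGET_MIN_VLEN.
>
>   #include <stdio.h>
>
>   /* nunits = MIN_VLEN * LMUL / SEW, evaluated at the minimum VLEN.  */
>   static int
>   sketch_rvv_nunits (int min_vlen, int sew, int lmul_num, int lmul_den)
>   {
>     return (min_vlen * lmul_num) / (lmul_den * sew);
>   }
>
>   int
>   main (void)
>   {
>     /* RVVMF2HImode: SEW = 16, LMUL = 1/2, for every TARGET_MIN_VLEN.  */
>     printf ("MIN_VLEN=64:  %d elements\n", sketch_rvv_nunits (64, 16, 1, 2));
>     printf ("MIN_VLEN=128: %d elements\n", sketch_rvv_nunits (128, 16, 1, 2));
>     return 0;
>   }
>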
> Before this patch, the number of machine description patterns: 17551.
> After this patch, the number of machine description patterns: 14132,
> i.e. a reduction of more than 3,000 patterns.
>
> Regression tests of gcc/g++ on rv32/rv64 all passed.
>
> Ok for trunk?
>
> gcc/ChangeLog:
>
> * config/riscv/autovec.md
> (len_mask_gather_load<VNX16_QHSD:mode><VNX16_QHSDI:mode>): Refactor RVV
> machine modes.
> (len_mask_gather_load<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
> (len_mask_gather_load<VNX32_QHS:mode><VNX32_QHSI:mode>): Ditto.
> (len_mask_gather_load<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
> (len_mask_gather_load<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
> (len_mask_gather_load<mode><mode>): Ditto.
> (len_mask_gather_load<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
> (len_mask_scatter_store<VNX16_QHSD:mode><VNX16_QHSDI:mode>): Ditto.
> (len_mask_scatter_store<VNX32_QHS:mode><VNX32_QHSI:mode>): Ditto.
> (len_mask_scatter_store<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
> (len_mask_scatter_store<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
> (len_mask_scatter_store<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
> (len_mask_scatter_store<mode><mode>): Ditto.
> (len_mask_scatter_store<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
> * config/riscv/riscv-modes.def (VECTOR_BOOL_MODE): Ditto.
> (ADJUST_NUNITS): Ditto.
> (ADJUST_ALIGNMENT): Ditto.
> (ADJUST_BYTESIZE): Ditto.
> (ADJUST_PRECISION): Ditto.
> (RVV_MODES): Ditto.
> (RVV_WHOLE_MODES): Ditto.
> (RVV_FRACT_MODE): Ditto.
> (RVV_NF8_MODES): Ditto.
> (RVV_NF4_MODES): Ditto.
> (VECTOR_MODES_WITH_PREFIX): Ditto.
> (VECTOR_MODE_WITH_PREFIX): Ditto.
> (RVV_TUPLE_MODES): Ditto.
> (RVV_NF2_MODES): Ditto.
> (RVV_TUPLE_PARTIAL_MODES): Ditto.
> * config/riscv/riscv-v.cc (struct mode_vtype_group): Ditto.
> (ENTRY): Ditto.
> (TUPLE_ENTRY): Ditto.
> (get_vlmul): Ditto.
> (get_nf): Ditto.
> (get_ratio): Ditto.
> (preferred_simd_mode): Ditto.
> (autovectorize_vector_modes): Ditto.
> * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TYPE): Ditto.
> * config/riscv/riscv-vector-builtins.def (DEF_RVV_TYPE): Ditto.
> (vbool64_t): Ditto.
> (vbool32_t): Ditto.
> (vbool16_t): Ditto.
> (vbool8_t): Ditto.
> (vbool4_t): Ditto.
> (vbool2_t): Ditto.
> (vbool1_t): Ditto.
> (vint8mf8_t): Ditto.
> (vuint8mf8_t): Ditto.
> (vint8mf4_t): Ditto.
> (vuint8mf4_t): Ditto.
> (vint8mf2_t): Ditto.
> (vuint8mf2_t): Ditto.
> (vint8m1_t): Ditto.
> (vuint8m1_t): Ditto.
> (vint8m2_t): Ditto.
> (vuint8m2_t): Ditto.
> (vint8m4_t): Ditto.
> (vuint8m4_t): Ditto.
> (vint8m8_t): Ditto.
> (vuint8m8_t): Ditto.
> (vint16mf4_t): Ditto.
> (vuint16mf4_t): Ditto.
> (vint16mf2_t): Ditto.
> (vuint16mf2_t): Ditto.
> (vint16m1_t): Ditto.
> (vuint16m1_t): Ditto.
> (vint16m2_t): Ditto.
> (vuint16m2_t): Ditto.
> (vint16m4_t): Ditto.
> (vuint16m4_t): Ditto.
> (vint16m8_t): Ditto.
> (vuint16m8_t): Ditto.
> (vint32mf2_t): Ditto.
> (vuint32mf2_t): Ditto.
> (vint32m1_t): Ditto.
> (vuint32m1_t): Ditto.
> (vint32m2_t): Ditto.
> (vuint32m2_t): Ditto.
> (vint32m4_t): Ditto.
> (vuint32m4_t): Ditto.
> (vint32m8_t): Ditto.
> (vuint32m8_t): Ditto.
> (vint64m1_t): Ditto.
> (vuint64m1_t): Ditto.
> (vint64m2_t): Ditto.
> (vuint64m2_t): Ditto.
> (vint64m4_t): Ditto.
> (vuint64m4_t): Ditto.
> (vint64m8_t): Ditto.
> (vuint64m8_t): Ditto.
> (vfloat16mf4_t): Ditto.
> (vfloat16mf2_t): Ditto.
> (vfloat16m1_t): Ditto.
> (vfloat16m2_t): Ditto.
> (vfloat16m4_t): Ditto.
> (vfloat16m8_t): Ditto.
> (vfloat32mf2_t): Ditto.
> (vfloat32m1_t): Ditto.
> (vfloat32m2_t): Ditto.
> (vfloat32m4_t): Ditto.
> (vfloat32m8_t): Ditto.
> (vfloat64m1_t): Ditto.
> (vfloat64m2_t): Ditto.
> (vfloat64m4_t): Ditto.
> (vfloat64m8_t): Ditto.
> * config/riscv/riscv-vector-switch.def (ENTRY): Ditto.
> (TUPLE_ENTRY): Ditto.
> * config/riscv/riscv-vsetvl.cc (change_insn): Ditto.
> * config/riscv/riscv.cc (riscv_valid_lo_sum_p): Ditto.
> (riscv_v_adjust_nunits): Ditto.
> (riscv_v_adjust_bytesize): Ditto.
> (riscv_v_adjust_precision): Ditto.
> (riscv_convert_vector_bits): Ditto.
> * config/riscv/riscv.h (riscv_v_adjust_nunits): Ditto.
> * config/riscv/riscv.md: Ditto.
> * config/riscv/vector-iterators.md: Ditto.
> * config/riscv/vector.md
> (@pred_indexed_<order>store<VNX16_QHSD:mode><VNX16_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
> (@pred_indexed_<order>store<VNX32_QHS:mode><VNX32_QHSI:mode>): Ditto.
> (@pred_indexed_<order>store<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
> (@pred_indexed_<order>store<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
> (@pred_indexed_<order>store<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
> (@pred_indexed_<order>store<VNX128_Q:mode><VNX128_Q:mode>): Ditto.
> (@pred_indexed_<order>load<V1T:mode><V1I:mode>): Ditto.
> (@pred_indexed_<order>load<V1T:mode><VNX1_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>load<V2T:mode><V2I:mode>): Ditto.
> (@pred_indexed_<order>load<V2T:mode><VNX2_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>load<V4T:mode><V4I:mode>): Ditto.
> (@pred_indexed_<order>load<V4T:mode><VNX4_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>load<V8T:mode><V8I:mode>): Ditto.
> (@pred_indexed_<order>load<V8T:mode><VNX8_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>load<V16T:mode><V16I:mode>): Ditto.
> (@pred_indexed_<order>load<V16T:mode><VNX16_QHSI:mode>): Ditto.
> (@pred_indexed_<order>load<V32T:mode><V32I:mode>): Ditto.
> (@pred_indexed_<order>load<V32T:mode><VNX32_QHI:mode>): Ditto.
> (@pred_indexed_<order>load<V64T:mode><V64I:mode>): Ditto.
> (@pred_indexed_<order>store<V1T:mode><V1I:mode>): Ditto.
> (@pred_indexed_<order>store<V1T:mode><VNX1_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<V2T:mode><V2I:mode>): Ditto.
> (@pred_indexed_<order>store<V2T:mode><VNX2_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<V4T:mode><V4I:mode>): Ditto.
> (@pred_indexed_<order>store<V4T:mode><VNX4_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<V8T:mode><V8I:mode>): Ditto.
> (@pred_indexed_<order>store<V8T:mode><VNX8_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<V16T:mode><V16I:mode>): Ditto.
> (@pred_indexed_<order>store<V16T:mode><VNX16_QHSI:mode>): Ditto.
> (@pred_indexed_<order>store<V32T:mode><V32I:mode>): Ditto.
> (@pred_indexed_<order>store<V32T:mode><VNX32_QHI:mode>): Ditto.
> (@pred_indexed_<order>store<V64T:mode><V64I:mode>): Ditto.
>
> gcc/testsuite/ChangeLog:
>
> * gcc.target/riscv/rvv/autovec/gather-scatter/gather_load_run-7.c:
> Adapt test.
> * gcc.target/riscv/rvv/autovec/gather-scatter/gather_load_run-8.c:
> Ditto.
> * gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store-9.c:
> Ditto.
> * gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-8.c:
> Ditto.
> * gcc.target/riscv/rvv/autovec/gather-scatter/scatter_store_run-8.c:
> Ditto.
>
> ---
> gcc/config/riscv/autovec.md | 198 +-
> gcc/config/riscv/riscv-modes.def | 570 ++---
> gcc/config/riscv/riscv-v.cc | 67 +-
> gcc/config/riscv/riscv-vector-builtins.cc | 15 +-
> gcc/config/riscv/riscv-vector-builtins.def | 361 ++-
> gcc/config/riscv/riscv-vector-switch.def | 581 ++---
> gcc/config/riscv/riscv-vsetvl.cc | 24 +-
> gcc/config/riscv/riscv.cc | 87 +-
> gcc/config/riscv/riscv.h | 1 +
> gcc/config/riscv/riscv.md | 87 +-
> gcc/config/riscv/vector-iterators.md | 2236 ++++++++---------
> gcc/config/riscv/vector.md | 826 +++---
> .../gather-scatter/gather_load_run-7.c | 6 +-
> .../gather-scatter/gather_load_run-8.c | 6 +-
> .../gather-scatter/mask_scatter_store-9.c | 5 +
> .../gather-scatter/mask_scatter_store_run-8.c | 6 +-
> .../gather-scatter/scatter_store_run-8.c | 6 +-
> 17 files changed, 2496 insertions(+), 2586 deletions(-)
>
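> To see the shape of the simplification before reading the diff: each
> new mode encodes its own LMUL, so the per-mode VTYPE lookup collapses
> from three TARGET_MIN_VLEN-specific tables into one.  A toy sketch
> with illustrative entries (not the real tables):
>
>   #include <stdint.h>
>
>   enum vlmul_type { LMUL_1, LMUL_2, LMUL_F2 };
>
>   /* Before: one (vlmul, ratio) pair per TARGET_MIN_VLEN per mode.
>      After: a single pair per mode, as in the riscv-v.cc hunk below.  */
>   struct mode_vtype
>   {
>     enum vlmul_type vlmul;
>     uint8_t ratio;                /* SEW/LMUL.  */
>   };
>
>   static const struct mode_vtype sketch_table[] = {
>     { LMUL_1,  32 },  /* RVVM1SI:  SEW = 32, LMUL = 1   -> ratio 32.  */
>     { LMUL_F2, 64 },  /* RVVMF2SI: SEW = 32, LMUL = 1/2 -> ratio 64.  */
>   };
>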
> diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
> index cd5b19457f8..00947207f3f 100644
> --- a/gcc/config/riscv/autovec.md
> +++ b/gcc/config/riscv/autovec.md
> @@ -61,105 +61,90 @@
> ;; == Gather Load
> ;;
> =========================================================================
>
> -(define_expand "len_mask_gather_load<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
> - [(match_operand:VNX1_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO64:mode><RATIO64I:mode>"
> + [(match_operand:RATIO64 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX1_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX1_QHSD:gs_extension>")
> - (match_operand 4 "<VNX1_QHSD:gs_scale>")
> + (match_operand:RATIO64I 2 "register_operand")
> + (match_operand 3 "<RATIO64:gs_extension>")
> + (match_operand 4 "<RATIO64:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX1_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO64:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
> - [(match_operand:VNX2_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO32:mode><RATIO32I:mode>"
> + [(match_operand:RATIO32 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX2_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX2_QHSD:gs_extension>")
> - (match_operand 4 "<VNX2_QHSD:gs_scale>")
> + (match_operand:RATIO32I 2 "register_operand")
> + (match_operand 3 "<RATIO32:gs_extension>")
> + (match_operand 4 "<RATIO32:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX2_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO32:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
> - [(match_operand:VNX4_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO16:mode><RATIO16I:mode>"
> + [(match_operand:RATIO16 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX4_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX4_QHSD:gs_extension>")
> - (match_operand 4 "<VNX4_QHSD:gs_scale>")
> + (match_operand:RATIO16I 2 "register_operand")
> + (match_operand 3 "<RATIO16:gs_extension>")
> + (match_operand 4 "<RATIO16:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX4_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO16:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
> - [(match_operand:VNX8_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO8:mode><RATIO8I:mode>"
> + [(match_operand:RATIO8 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX8_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX8_QHSD:gs_extension>")
> - (match_operand 4 "<VNX8_QHSD:gs_scale>")
> + (match_operand:RATIO8I 2 "register_operand")
> + (match_operand 3 "<RATIO8:gs_extension>")
> + (match_operand 4 "<RATIO8:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX8_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO8:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
> - [(match_operand:VNX16_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO4:mode><RATIO4I:mode>"
> + [(match_operand:RATIO4 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX16_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX16_QHSD:gs_extension>")
> - (match_operand 4 "<VNX16_QHSD:gs_scale>")
> + (match_operand:RATIO4I 2 "register_operand")
> + (match_operand 3 "<RATIO4:gs_extension>")
> + (match_operand 4 "<RATIO4:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX16_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO4:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX32_QHS:mode><VNX32_QHSI:mode>"
> - [(match_operand:VNX32_QHS 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO2:mode><RATIO2I:mode>"
> + [(match_operand:RATIO2 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX32_QHSI 2 "register_operand")
> - (match_operand 3 "<VNX32_QHS:gs_extension>")
> - (match_operand 4 "<VNX32_QHS:gs_scale>")
> + (match_operand:RATIO2I 2 "register_operand")
> + (match_operand 3 "<RATIO2:gs_extension>")
> + (match_operand 4 "<RATIO2:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX32_QHS:VM> 7 "vector_mask_operand")]
> - "TARGET_VECTOR"
> -{
> - riscv_vector::expand_gather_scatter (operands, true);
> - DONE;
> -})
> -
> -(define_expand "len_mask_gather_load<VNX64_QH:mode><VNX64_QHI:mode>"
> - [(match_operand:VNX64_QH 0 "register_operand")
> - (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX64_QHI 2 "register_operand")
> - (match_operand 3 "<VNX64_QH:gs_extension>")
> - (match_operand 4 "<VNX64_QH:gs_scale>")
> - (match_operand 5 "autovec_length_operand")
> - (match_operand 6 "const_0_operand")
> - (match_operand:<VNX64_QH:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO2:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> @@ -170,15 +155,15 @@
> ;; larger SEW. Since RVV indexed load/store support zero extend
> ;; implicitly and not support scaling, we should only allow
> ;; operands[3] and operands[4] to be const_1_operand.
> -(define_expand "len_mask_gather_load<mode><mode>"
> - [(match_operand:VNX128_Q 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO1:mode><RATIO1:mode>"
> + [(match_operand:RATIO1 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX128_Q 2 "register_operand")
> - (match_operand 3 "const_1_operand")
> - (match_operand 4 "const_1_operand")
> + (match_operand:RATIO1 2 "register_operand")
> + (match_operand 3 "<RATIO1:gs_extension>")
> + (match_operand 4 "<RATIO1:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO1:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> @@ -189,105 +174,90 @@
> ;; == Scatter Store
> ;;
> =========================================================================
>
> -(define_expand "len_mask_scatter_store<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
> - [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX1_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX1_QHSD:gs_extension>")
> - (match_operand 3 "<VNX1_QHSD:gs_scale>")
> - (match_operand:VNX1_QHSD 4 "register_operand")
> - (match_operand 5 "autovec_length_operand")
> - (match_operand 6 "const_0_operand")
> - (match_operand:<VNX1_QHSD:VM> 7 "vector_mask_operand")]
> - "TARGET_VECTOR"
> -{
> - riscv_vector::expand_gather_scatter (operands, false);
> - DONE;
> -})
> -
> -(define_expand "len_mask_scatter_store<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO64:mode><RATIO64I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX2_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX2_QHSD:gs_extension>")
> - (match_operand 3 "<VNX2_QHSD:gs_scale>")
> - (match_operand:VNX2_QHSD 4 "register_operand")
> + (match_operand:RATIO64I 1 "register_operand")
> + (match_operand 2 "<RATIO64:gs_extension>")
> + (match_operand 3 "<RATIO64:gs_scale>")
> + (match_operand:RATIO64 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX2_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO64:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO32:mode><RATIO32I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX4_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX4_QHSD:gs_extension>")
> - (match_operand 3 "<VNX4_QHSD:gs_scale>")
> - (match_operand:VNX4_QHSD 4 "register_operand")
> + (match_operand:RATIO32I 1 "register_operand")
> + (match_operand 2 "<RATIO32:gs_extension>")
> + (match_operand 3 "<RATIO32:gs_scale>")
> + (match_operand:RATIO32 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX4_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO32:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO16:mode><RATIO16I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX8_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX8_QHSD:gs_extension>")
> - (match_operand 3 "<VNX8_QHSD:gs_scale>")
> - (match_operand:VNX8_QHSD 4 "register_operand")
> + (match_operand:RATIO16I 1 "register_operand")
> + (match_operand 2 "<RATIO16:gs_extension>")
> + (match_operand 3 "<RATIO16:gs_scale>")
> + (match_operand:RATIO16 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX8_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO16:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO8:mode><RATIO8I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX16_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX16_QHSD:gs_extension>")
> - (match_operand 3 "<VNX16_QHSD:gs_scale>")
> - (match_operand:VNX16_QHSD 4 "register_operand")
> + (match_operand:RATIO8I 1 "register_operand")
> + (match_operand 2 "<RATIO8:gs_extension>")
> + (match_operand 3 "<RATIO8:gs_scale>")
> + (match_operand:RATIO8 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX16_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO8:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX32_QHS:mode><VNX32_QHSI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO4:mode><RATIO4I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX32_QHSI 1 "register_operand")
> - (match_operand 2 "<VNX32_QHS:gs_extension>")
> - (match_operand 3 "<VNX32_QHS:gs_scale>")
> - (match_operand:VNX32_QHS 4 "register_operand")
> + (match_operand:RATIO4I 1 "register_operand")
> + (match_operand 2 "<RATIO4:gs_extension>")
> + (match_operand 3 "<RATIO4:gs_scale>")
> + (match_operand:RATIO4 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX32_QHS:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO4:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX64_QH:mode><VNX64_QHI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO2:mode><RATIO2I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX64_QHI 1 "register_operand")
> - (match_operand 2 "<VNX64_QH:gs_extension>")
> - (match_operand 3 "<VNX64_QH:gs_scale>")
> - (match_operand:VNX64_QH 4 "register_operand")
> + (match_operand:RATIO2I 1 "register_operand")
> + (match_operand 2 "<RATIO2:gs_extension>")
> + (match_operand 3 "<RATIO2:gs_scale>")
> + (match_operand:RATIO2 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX64_QH:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO2:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> @@ -298,15 +268,15 @@
> ;; larger SEW. Since RVV indexed load/store support zero extend
> ;; implicitly and not support scaling, we should only allow
> ;; operands[3] and operands[4] to be const_1_operand.
> -(define_expand "len_mask_scatter_store<mode><mode>"
> +(define_expand "len_mask_scatter_store<RATIO1:mode><RATIO1:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX128_Q 1 "register_operand")
> - (match_operand 2 "const_1_operand")
> - (match_operand 3 "const_1_operand")
> - (match_operand:VNX128_Q 4 "register_operand")
> + (match_operand:RATIO1 1 "register_operand")
> + (match_operand 2 "<RATIO1:gs_extension>")
> + (match_operand 3 "<RATIO1:gs_scale>")
> + (match_operand:RATIO1 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO1:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> diff --git a/gcc/config/riscv/riscv-modes.def
> b/gcc/config/riscv/riscv-modes.def
> index 1d152709ddc..d6b90e9e304 100644
> --- a/gcc/config/riscv/riscv-modes.def
> +++ b/gcc/config/riscv/riscv-modes.def
> @@ -27,311 +27,287 @@ FLOAT_MODE (TF, 16, ieee_quad_format);
> /* Encode the ratio of SEW/LMUL into the mask types. There are the
> following
> * mask types. */
>
> -/* | Mode | MIN_VLEN = 32 | MIN_VLEN = 64 | MIN_VLEN = 128 |
> - | | SEW/LMUL | SEW/LMUL | SEW/LMUL |
> - | VNx1BI | 32 | 64 | 128 |
> - | VNx2BI | 16 | 32 | 64 |
> - | VNx4BI | 8 | 16 | 32 |
> - | VNx8BI | 4 | 8 | 16 |
> - | VNx16BI | 2 | 4 | 8 |
> - | VNx32BI | 1 | 2 | 4 |
> - | VNx64BI | N/A | 1 | 2 |
> - | VNx128BI | N/A | N/A | 1 | */
> +/* Encode the ratio of SEW/LMUL into the mask types.
> + There are the following mask types.
> +
> + n = SEW/LMUL
> +
> + |Modes| n = 1 | n = 2 | n = 4 | n = 8 | n = 16 | n = 32 | n = 64 |
> + |BI |RVVM1BI |RVVMF2BI |RVVMF4BI |RVVMF8BI |RVVMF16BI |RVVMF32BI
> |RVVMF64BI | */
>
> /* For RVV modes, each boolean value occupies 1-bit.
> 4th argument is specify the minmial possible size of the vector mode,
> and will adjust to the right size by ADJUST_BYTESIZE. */
> -VECTOR_BOOL_MODE (VNx1BI, 1, BI, 1);
> -VECTOR_BOOL_MODE (VNx2BI, 2, BI, 1);
> -VECTOR_BOOL_MODE (VNx4BI, 4, BI, 1);
> -VECTOR_BOOL_MODE (VNx8BI, 8, BI, 1);
> -VECTOR_BOOL_MODE (VNx16BI, 16, BI, 2);
> -VECTOR_BOOL_MODE (VNx32BI, 32, BI, 4);
> -VECTOR_BOOL_MODE (VNx64BI, 64, BI, 8);
> -VECTOR_BOOL_MODE (VNx128BI, 128, BI, 16);
> -
> -ADJUST_NUNITS (VNx1BI, riscv_v_adjust_nunits (VNx1BImode, 1));
> -ADJUST_NUNITS (VNx2BI, riscv_v_adjust_nunits (VNx2BImode, 2));
> -ADJUST_NUNITS (VNx4BI, riscv_v_adjust_nunits (VNx4BImode, 4));
> -ADJUST_NUNITS (VNx8BI, riscv_v_adjust_nunits (VNx8BImode, 8));
> -ADJUST_NUNITS (VNx16BI, riscv_v_adjust_nunits (VNx16BImode, 16));
> -ADJUST_NUNITS (VNx32BI, riscv_v_adjust_nunits (VNx32BImode, 32));
> -ADJUST_NUNITS (VNx64BI, riscv_v_adjust_nunits (VNx64BImode, 64));
> -ADJUST_NUNITS (VNx128BI, riscv_v_adjust_nunits (VNx128BImode, 128));
> -
> -ADJUST_ALIGNMENT (VNx1BI, 1);
> -ADJUST_ALIGNMENT (VNx2BI, 1);
> -ADJUST_ALIGNMENT (VNx4BI, 1);
> -ADJUST_ALIGNMENT (VNx8BI, 1);
> -ADJUST_ALIGNMENT (VNx16BI, 1);
> -ADJUST_ALIGNMENT (VNx32BI, 1);
> -ADJUST_ALIGNMENT (VNx64BI, 1);
> -ADJUST_ALIGNMENT (VNx128BI, 1);
> -
> -ADJUST_BYTESIZE (VNx1BI, riscv_v_adjust_bytesize (VNx1BImode, 1));
> -ADJUST_BYTESIZE (VNx2BI, riscv_v_adjust_bytesize (VNx2BImode, 1));
> -ADJUST_BYTESIZE (VNx4BI, riscv_v_adjust_bytesize (VNx4BImode, 1));
> -ADJUST_BYTESIZE (VNx8BI, riscv_v_adjust_bytesize (VNx8BImode, 1));
> -ADJUST_BYTESIZE (VNx16BI, riscv_v_adjust_bytesize (VNx16BImode, 2));
> -ADJUST_BYTESIZE (VNx32BI, riscv_v_adjust_bytesize (VNx32BImode, 4));
> -ADJUST_BYTESIZE (VNx64BI, riscv_v_adjust_bytesize (VNx64BImode, 8));
> -ADJUST_BYTESIZE (VNx128BI, riscv_v_adjust_bytesize (VNx128BImode, 16));
> -
> -ADJUST_PRECISION (VNx1BI, riscv_v_adjust_precision (VNx1BImode, 1));
> -ADJUST_PRECISION (VNx2BI, riscv_v_adjust_precision (VNx2BImode, 2));
> -ADJUST_PRECISION (VNx4BI, riscv_v_adjust_precision (VNx4BImode, 4));
> -ADJUST_PRECISION (VNx8BI, riscv_v_adjust_precision (VNx8BImode, 8));
> -ADJUST_PRECISION (VNx16BI, riscv_v_adjust_precision (VNx16BImode, 16));
> -ADJUST_PRECISION (VNx32BI, riscv_v_adjust_precision (VNx32BImode, 32));
> -ADJUST_PRECISION (VNx64BI, riscv_v_adjust_precision (VNx64BImode, 64));
> -ADJUST_PRECISION (VNx128BI, riscv_v_adjust_precision (VNx128BImode, 128));
> -
> -/*
> - | Mode | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64
> | MIN_VLEN=128 | MIN_VLEN=128 |
> - | | LMUL | SEW/LMUL | LMUL | SEW/LMUL
> | LMUL | SEW/LMUL |
> - | VNx1QI | MF4 | 32 | MF8 | 64
> | N/A | N/A |
> - | VNx2QI | MF2 | 16 | MF4 | 32
> | MF8 | 64 |
> - | VNx4QI | M1 | 8 | MF2 | 16
> | MF4 | 32 |
> - | VNx8QI | M2 | 4 | M1 | 8
> | MF2 | 16 |
> - | VNx16QI | M4 | 2 | M2 | 4
> | M1 | 8 |
> - | VNx32QI | M8 | 1 | M4 | 2
> | M2 | 4 |
> - | VNx64QI | N/A | N/A | M8 | 1
> | M4 | 2 |
> - | VNx128QI | N/A | N/A | N/A | N/A
> | M8 | 1 |
> - | VNx1(HI|HF) | MF2 | 32 | MF4 | 64
> | N/A | N/A |
> - | VNx2(HI|HF) | M1 | 16 | MF2 | 32
> | MF4 | 64 |
> - | VNx4(HI|HF) | M2 | 8 | M1 | 16
> | MF2 | 32 |
> - | VNx8(HI|HF) | M4 | 4 | M2 | 8
> | M1 | 16 |
> - | VNx16(HI|HF)| M8 | 2 | M4 | 4
> | M2 | 8 |
> - | VNx32(HI|HF)| N/A | N/A | M8 | 2
> | M4 | 4 |
> - | VNx64(HI|HF)| N/A | N/A | N/A | N/A
> | M8 | 2 |
> - | VNx1(SI|SF) | M1 | 32 | MF2 | 64
> | MF2 | 64 |
> - | VNx2(SI|SF) | M2 | 16 | M1 | 32
> | M1 | 32 |
> - | VNx4(SI|SF) | M4 | 8 | M2 | 16
> | M2 | 16 |
> - | VNx8(SI|SF) | M8 | 4 | M4 | 8
> | M4 | 8 |
> - | VNx16(SI|SF)| N/A | N/A | M8 | 4
> | M8 | 4 |
> - | VNx1(DI|DF) | N/A | N/A | M1 | 64
> | N/A | N/A |
> - | VNx2(DI|DF) | N/A | N/A | M2 | 32
> | M1 | 64 |
> - | VNx4(DI|DF) | N/A | N/A | M4 | 16
> | M2 | 32 |
> - | VNx8(DI|DF) | N/A | N/A | M8 | 8
> | M4 | 16 |
> - | VNx16(DI|DF)| N/A | N/A | N/A | N/A
> | M8 | 8 |
> -*/
> -
> -/* Define RVV modes whose sizes are multiples of 64-bit chunks. */
> -#define RVV_MODES(NVECS, VB, VH, VS, VD)
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 8 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, INT, HI, 4 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, HF, 4 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, INT, SI, 2 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, SF, 2 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, INT, DI, NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, DF, NVECS, 0);
> \
> +VECTOR_BOOL_MODE (RVVM1BI, 64, BI, 8);
> +VECTOR_BOOL_MODE (RVVMF2BI, 32, BI, 4);
> +VECTOR_BOOL_MODE (RVVMF4BI, 16, BI, 2);
> +VECTOR_BOOL_MODE (RVVMF8BI, 8, BI, 1);
> +VECTOR_BOOL_MODE (RVVMF16BI, 4, BI, 1);
> +VECTOR_BOOL_MODE (RVVMF32BI, 2, BI, 1);
> +VECTOR_BOOL_MODE (RVVMF64BI, 1, BI, 1);
> +
> +ADJUST_NUNITS (RVVM1BI, riscv_v_adjust_nunits (RVVM1BImode, 64));
> +ADJUST_NUNITS (RVVMF2BI, riscv_v_adjust_nunits (RVVMF2BImode, 32));
> +ADJUST_NUNITS (RVVMF4BI, riscv_v_adjust_nunits (RVVMF4BImode, 16));
> +ADJUST_NUNITS (RVVMF8BI, riscv_v_adjust_nunits (RVVMF8BImode, 8));
> +ADJUST_NUNITS (RVVMF16BI, riscv_v_adjust_nunits (RVVMF16BImode, 4));
> +ADJUST_NUNITS (RVVMF32BI, riscv_v_adjust_nunits (RVVMF32BImode, 2));
> +ADJUST_NUNITS (RVVMF64BI, riscv_v_adjust_nunits (RVVMF64BImode, 1));
> +
> +ADJUST_ALIGNMENT (RVVM1BI, 1);
> +ADJUST_ALIGNMENT (RVVMF2BI, 1);
> +ADJUST_ALIGNMENT (RVVMF4BI, 1);
> +ADJUST_ALIGNMENT (RVVMF8BI, 1);
> +ADJUST_ALIGNMENT (RVVMF16BI, 1);
> +ADJUST_ALIGNMENT (RVVMF32BI, 1);
> +ADJUST_ALIGNMENT (RVVMF64BI, 1);
> +
> +ADJUST_PRECISION (RVVM1BI, riscv_v_adjust_precision (RVVM1BImode, 64));
> +ADJUST_PRECISION (RVVMF2BI, riscv_v_adjust_precision (RVVMF2BImode, 32));
> +ADJUST_PRECISION (RVVMF4BI, riscv_v_adjust_precision (RVVMF4BImode, 16));
> +ADJUST_PRECISION (RVVMF8BI, riscv_v_adjust_precision (RVVMF8BImode, 8));
> +ADJUST_PRECISION (RVVMF16BI, riscv_v_adjust_precision (RVVMF16BImode, 4));
> +ADJUST_PRECISION (RVVMF32BI, riscv_v_adjust_precision (RVVMF32BImode, 2));
> +ADJUST_PRECISION (RVVMF64BI, riscv_v_adjust_precision (RVVMF64BImode, 1));
> +
> +ADJUST_BYTESIZE (RVVM1BI, riscv_v_adjust_bytesize (RVVM1BImode, 8));
> +ADJUST_BYTESIZE (RVVMF2BI, riscv_v_adjust_bytesize (RVVMF2BImode, 4));
> +ADJUST_BYTESIZE (RVVMF4BI, riscv_v_adjust_bytesize (RVVMF4BImode, 2));
> +ADJUST_BYTESIZE (RVVMF8BI, riscv_v_adjust_bytesize (RVVMF8BImode, 1));
> +ADJUST_BYTESIZE (RVVMF16BI, riscv_v_adjust_bytesize (RVVMF16BImode, 1));
> +ADJUST_BYTESIZE (RVVMF32BI, riscv_v_adjust_bytesize (RVVMF32BImode, 1));
> +ADJUST_BYTESIZE (RVVMF64BI, riscv_v_adjust_bytesize (RVVMF64BImode, 1));
> +
> +/* Encode SEW and LMUL into data types.
> + We enforce the constraint LMUL ≥ SEW/ELEN in the implementation.
> + There are the following data types for ELEN = 64.
> +
> + |Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
> + |DI |RVVM1DI|RVVM2DI|RVVM4DI|RVVM8DI|N/A |N/A |N/A |
> + |SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|RVVMF2SI|N/A |N/A |
> + |HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|RVVMF4HI|N/A |
> + |QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|RVVMF8QI|
> + |DF |RVVM1DF|RVVM2DF|RVVM4DF|RVVM8DF|N/A |N/A |N/A |
> + |SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|RVVMF2SF|N/A |N/A |
> + |HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|RVVMF4HF|N/A |
> +
> +There are the following data types for ELEN = 32.
> +
> + |Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
> + |SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|N/A |N/A |N/A |
> + |HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|N/A |N/A |
> + |QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|N/A |
> + |SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|N/A |N/A |N/A |
> + |HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|N/A |N/A | */
> +
> +#define RVV_WHOLE_MODES(LMUL)
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, INT, QI, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, INT, HI, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, HF, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, INT, SI, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, SF, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, INT, DI, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, DF, LMUL, 0);
> \
> +
> \
> + ADJUST_NUNITS (RVVM##LMUL##QI,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##QImode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##HI,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##HImode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##SI,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##SImode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##DI,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##DImode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##HF,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##HFmode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##SF,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##SFmode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##DF,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##DFmode, false, LMUL,
> 1)); \
> +
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##QI, 1);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##HI, 2);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##SI, 4);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##DI, 8);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##HF, 2);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##SF, 4);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##DF, 8);
> +
> +RVV_WHOLE_MODES (1)
> +RVV_WHOLE_MODES (2)
> +RVV_WHOLE_MODES (4)
> +RVV_WHOLE_MODES (8)
> +
> +#define RVV_FRACT_MODE(TYPE, MODE, LMUL, ALIGN)
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF, TYPE, MODE, LMUL, 0);
> \
> + ADJUST_NUNITS (RVVMF##LMUL##MODE,
> \
> + riscv_v_adjust_nunits (RVVMF##LMUL##MODE##mode, true,
> LMUL, \
> + 1));
> \
> +
> \
> + ADJUST_ALIGNMENT (RVVMF##LMUL##MODE, ALIGN);
> +
> +RVV_FRACT_MODE (INT, QI, 2, 1)
> +RVV_FRACT_MODE (INT, QI, 4, 1)
> +RVV_FRACT_MODE (INT, QI, 8, 1)
> +RVV_FRACT_MODE (INT, HI, 2, 2)
> +RVV_FRACT_MODE (INT, HI, 4, 2)
> +RVV_FRACT_MODE (FLOAT, HF, 2, 2)
> +RVV_FRACT_MODE (FLOAT, HF, 4, 2)
> +RVV_FRACT_MODE (INT, SI, 2, 4)
> +RVV_FRACT_MODE (FLOAT, SF, 2, 4)
> +
> +/* Tuple modes for segment loads/stores according to NF.
> +
> + Tuple modes format: RVV<LMUL>x<NF><BASEMODE>
> +
> + When LMUL is MF8/MF4/MF2/M1, NF can be 2 ~ 8.
> + When LMUL is M2, NF can be 2 ~ 4.
> + When LMUL is M4, NF can be 4. */
> +
> +#define RVV_NF8_MODES(NF)
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF8x, INT, QI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF4x, INT, QI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, QI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, QI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF4x, INT, HI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, HI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, HI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF4x, FLOAT, HF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF2x, FLOAT, HF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, HF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, SI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, SI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF2x, FLOAT, SF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, SF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, DI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, DF, NF, 1);
> \
> +
> \
> + ADJUST_NUNITS (RVVMF8x##NF##QI,
> \
> + riscv_v_adjust_nunits (RVVMF8x##NF##QImode, true, 8,
> NF)); \
> + ADJUST_NUNITS (RVVMF4x##NF##QI,
> \
> + riscv_v_adjust_nunits (RVVMF4x##NF##QImode, true, 4,
> NF)); \
> + ADJUST_NUNITS (RVVMF2x##NF##QI,
> \
> + riscv_v_adjust_nunits (RVVMF2x##NF##QImode, true, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM1x##NF##QI,
> \
> + riscv_v_adjust_nunits (RVVM1x##NF##QImode, false, 1,
> NF)); \
> + ADJUST_NUNITS (RVVMF4x##NF##HI,
> \
> + riscv_v_adjust_nunits (RVVMF4x##NF##HImode, true, 4,
> NF)); \
> + ADJUST_NUNITS (RVVMF2x##NF##HI,
> \
> + riscv_v_adjust_nunits (RVVMF2x##NF##HImode, true, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM1x##NF##HI,
> \
> + riscv_v_adjust_nunits (RVVM1x##NF##HImode, false, 1,
> NF)); \
> + ADJUST_NUNITS (RVVMF4x##NF##HF,
> \
> + riscv_v_adjust_nunits (RVVMF4x##NF##HFmode, true, 4,
> NF)); \
> + ADJUST_NUNITS (RVVMF2x##NF##HF,
> \
> + riscv_v_adjust_nunits (RVVMF2x##NF##HFmode, true, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM1x##NF##HF,
> \
> + riscv_v_adjust_nunits (RVVM1x##NF##HFmode, false, 1,
> NF)); \
> + ADJUST_NUNITS (RVVMF2x##NF##SI,
> \
> + riscv_v_adjust_nunits (RVVMF2x##NF##SImode, true, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM1x##NF##SI,
> \
> + riscv_v_adjust_nunits (RVVM1x##NF##SImode, false, 1,
> NF)); \
> + ADJUST_NUNITS (RVVMF2x##NF##SF,
> \
> + riscv_v_adjust_nunits (RVVMF2x##NF##SFmode, true, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM1x##NF##SF,
> \
> + riscv_v_adjust_nunits (RVVM1x##NF##SFmode, false, 1,
> NF)); \
> + ADJUST_NUNITS (RVVM1x##NF##DI,
> \
> + riscv_v_adjust_nunits (RVVM1x##NF##DImode, false, 1,
> NF)); \
> + ADJUST_NUNITS (RVVM1x##NF##DF,
> \
> + riscv_v_adjust_nunits (RVVM1x##NF##DFmode, false, 1,
> NF)); \
> +
> \
> + ADJUST_ALIGNMENT (RVVMF8x##NF##QI, 1);
> \
> + ADJUST_ALIGNMENT (RVVMF4x##NF##QI, 1);
> \
> + ADJUST_ALIGNMENT (RVVMF2x##NF##QI, 1);
> \
> + ADJUST_ALIGNMENT (RVVM1x##NF##QI, 1);
> \
> + ADJUST_ALIGNMENT (RVVMF4x##NF##HI, 2);
> \
> + ADJUST_ALIGNMENT (RVVMF2x##NF##HI, 2);
> \
> + ADJUST_ALIGNMENT (RVVM1x##NF##HI, 2);
> \
> + ADJUST_ALIGNMENT (RVVMF4x##NF##HF, 2);
> \
> + ADJUST_ALIGNMENT (RVVMF2x##NF##HF, 2);
> \
> + ADJUST_ALIGNMENT (RVVM1x##NF##HF, 2);
> \
> + ADJUST_ALIGNMENT (RVVMF2x##NF##SI, 4);
> \
> + ADJUST_ALIGNMENT (RVVM1x##NF##SI, 4);
> \
> + ADJUST_ALIGNMENT (RVVMF2x##NF##SF, 4);
> \
> + ADJUST_ALIGNMENT (RVVM1x##NF##SF, 4);
> \
> + ADJUST_ALIGNMENT (RVVM1x##NF##DI, 8);
> \
> + ADJUST_ALIGNMENT (RVVM1x##NF##DF, 8);
> +
> +RVV_NF8_MODES (8)
> +RVV_NF8_MODES (7)
> +RVV_NF8_MODES (6)
> +RVV_NF8_MODES (5)
> +RVV_NF8_MODES (4)
> +RVV_NF8_MODES (3)
> +RVV_NF8_MODES (2)
> +
> +#define RVV_NF4_MODES(NF)
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, QI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, HI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, HF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, SI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, SF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, DI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, DF, NF, 1);
> \
>
> \
> - ADJUST_NUNITS (VB##QI, riscv_v_adjust_nunits (VB##QI##mode, NVECS *
> 8)); \
> - ADJUST_NUNITS (VH##HI, riscv_v_adjust_nunits (VH##HI##mode, NVECS *
> 4)); \
> - ADJUST_NUNITS (VS##SI, riscv_v_adjust_nunits (VS##SI##mode, NVECS *
> 2)); \
> - ADJUST_NUNITS (VD##DI, riscv_v_adjust_nunits (VD##DI##mode, NVECS));
> \
> - ADJUST_NUNITS (VH##HF, riscv_v_adjust_nunits (VH##HF##mode, NVECS *
> 4)); \
> - ADJUST_NUNITS (VS##SF, riscv_v_adjust_nunits (VS##SF##mode, NVECS *
> 2)); \
> - ADJUST_NUNITS (VD##DF, riscv_v_adjust_nunits (VD##DF##mode, NVECS));
> \
> + ADJUST_NUNITS (RVVM2x##NF##QI,
> \
> + riscv_v_adjust_nunits (RVVM2x##NF##QImode, false, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM2x##NF##HI,
> \
> + riscv_v_adjust_nunits (RVVM2x##NF##HImode, false, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM2x##NF##HF,
> \
> + riscv_v_adjust_nunits (RVVM2x##NF##HFmode, false, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM2x##NF##SI,
> \
> + riscv_v_adjust_nunits (RVVM2x##NF##SImode, false, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM2x##NF##SF,
> \
> + riscv_v_adjust_nunits (RVVM2x##NF##SFmode, false, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM2x##NF##DI,
> \
> + riscv_v_adjust_nunits (RVVM2x##NF##DImode, false, 2,
> NF)); \
> + ADJUST_NUNITS (RVVM2x##NF##DF,
> \
> + riscv_v_adjust_nunits (RVVM2x##NF##DFmode, false, 2,
> NF)); \
>
> \
> - ADJUST_ALIGNMENT (VB##QI, 1);
> \
> - ADJUST_ALIGNMENT (VH##HI, 2);
> \
> - ADJUST_ALIGNMENT (VS##SI, 4);
> \
> - ADJUST_ALIGNMENT (VD##DI, 8);
> \
> - ADJUST_ALIGNMENT (VH##HF, 2);
> \
> - ADJUST_ALIGNMENT (VS##SF, 4);
> \
> - ADJUST_ALIGNMENT (VD##DF, 8);
> -
> -RVV_MODES (1, VNx8, VNx4, VNx2, VNx1)
> -RVV_MODES (2, VNx16, VNx8, VNx4, VNx2)
> -RVV_MODES (4, VNx32, VNx16, VNx8, VNx4)
> -RVV_MODES (8, VNx64, VNx32, VNx16, VNx8)
> -RVV_MODES (16, VNx128, VNx64, VNx32, VNx16)
> -
> -VECTOR_MODES_WITH_PREFIX (VNx, INT, 4, 0);
> -VECTOR_MODES_WITH_PREFIX (VNx, FLOAT, 4, 0);
> -ADJUST_NUNITS (VNx4QI, riscv_v_adjust_nunits (VNx4QImode, 4));
> -ADJUST_NUNITS (VNx2HI, riscv_v_adjust_nunits (VNx2HImode, 2));
> -ADJUST_NUNITS (VNx2HF, riscv_v_adjust_nunits (VNx2HFmode, 2));
> -ADJUST_ALIGNMENT (VNx4QI, 1);
> -ADJUST_ALIGNMENT (VNx2HI, 2);
> -ADJUST_ALIGNMENT (VNx2HF, 2);
> -
> -/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
> - So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1SImode and
> VNx1SFmode. */
> -VECTOR_MODE_WITH_PREFIX (VNx, INT, SI, 1, 0);
> -VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, SF, 1, 0);
> -ADJUST_NUNITS (VNx1SI, riscv_v_adjust_nunits (VNx1SImode, 1));
> -ADJUST_NUNITS (VNx1SF, riscv_v_adjust_nunits (VNx1SFmode, 1));
> -ADJUST_ALIGNMENT (VNx1SI, 4);
> -ADJUST_ALIGNMENT (VNx1SF, 4);
> -
> -VECTOR_MODES_WITH_PREFIX (VNx, INT, 2, 0);
> -ADJUST_NUNITS (VNx2QI, riscv_v_adjust_nunits (VNx2QImode, 2));
> -ADJUST_ALIGNMENT (VNx2QI, 1);
> -
> -/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
> - So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1HImode and
> VNx1HFmode. */
> -VECTOR_MODE_WITH_PREFIX (VNx, INT, HI, 1, 0);
> -VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, HF, 1, 0);
> -ADJUST_NUNITS (VNx1HI, riscv_v_adjust_nunits (VNx1HImode, 1));
> -ADJUST_NUNITS (VNx1HF, riscv_v_adjust_nunits (VNx1HFmode, 1));
> -ADJUST_ALIGNMENT (VNx1HI, 2);
> -ADJUST_ALIGNMENT (VNx1HF, 2);
> -
> -/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
> - So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1QImode. */
> -VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 1, 0);
> -ADJUST_NUNITS (VNx1QI, riscv_v_adjust_nunits (VNx1QImode, 1));
> -ADJUST_ALIGNMENT (VNx1QI, 1);
> -
> -/* Tuple modes for segment loads/stores according to NF, NF value can be
> 2 ~ 8. */
> -
> -/*
> - | Mode | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 |
> MIN_VLEN=64 | MIN_VLEN=128 | MIN_VLEN=128 |
> - | | LMUL | SEW/LMUL | LMUL | SEW/LMUL
> | LMUL | SEW/LMUL |
> - | VNxNFx1QI | MF4 | 32 | MF8 | 64
> | N/A | N/A |
> - | VNxNFx2QI | MF2 | 16 | MF4 | 32
> | MF8 | 64 |
> - | VNxNFx4QI | M1 | 8 | MF2 | 16
> | MF4 | 32 |
> - | VNxNFx8QI | M2 | 4 | M1 | 8
> | MF2 | 16 |
> - | VNxNFx16QI | M4 | 2 | M2 | 4
> | M1 | 8 |
> - | VNxNFx32QI | M8 | 1 | M4 | 2
> | M2 | 4 |
> - | VNxNFx64QI | N/A | N/A | M8 | 1
> | M4 | 2 |
> - | VNxNFx128QI | N/A | N/A | N/A | N/A
> | M8 | 1 |
> - | VNxNFx1(HI|HF) | MF2 | 32 | MF4 | 64
> | N/A | N/A |
> - | VNxNFx2(HI|HF) | M1 | 16 | MF2 | 32
> | MF4 | 64 |
> - | VNxNFx4(HI|HF) | M2 | 8 | M1 | 16
> | MF2 | 32 |
> - | VNxNFx8(HI|HF) | M4 | 4 | M2 | 8
> | M1 | 16 |
> - | VNxNFx16(HI|HF)| M8 | 2 | M4 | 4
> | M2 | 8 |
> - | VNxNFx32(HI|HF)| N/A | N/A | M8 | 2
> | M4 | 4 |
> - | VNxNFx64(HI|HF)| N/A | N/A | N/A | N/A
> | M8 | 2 |
> - | VNxNFx1(SI|SF) | M1 | 32 | MF2 | 64
> | MF2 | 64 |
> - | VNxNFx2(SI|SF) | M2 | 16 | M1 | 32
> | M1 | 32 |
> - | VNxNFx4(SI|SF) | M4 | 8 | M2 | 16
> | M2 | 16 |
> - | VNxNFx8(SI|SF) | M8 | 4 | M4 | 8
> | M4 | 8 |
> - | VNxNFx16(SI|SF)| N/A | N/A | M8 | 4
> | M8 | 4 |
> - | VNxNFx1(DI|DF) | N/A | N/A | M1 | 64
> | N/A | N/A |
> - | VNxNFx2(DI|DF) | N/A | N/A | M2 | 32
> | M1 | 64 |
> - | VNxNFx4(DI|DF) | N/A | N/A | M4 | 16
> | M2 | 32 |
> - | VNxNFx8(DI|DF) | N/A | N/A | M8 | 8
> | M4 | 16 |
> - | VNxNFx16(DI|DF)| N/A | N/A | N/A | N/A
> | M8 | 8 |
> -*/
> -
> -#define RVV_TUPLE_MODES(NBYTES, NSUBPARTS, VB, VH, VS, VD)
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, NBYTES, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, NBYTES / 2, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, NBYTES / 2, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, NBYTES / 4, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, NBYTES / 4, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, DI, NBYTES / 8, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, DF, NBYTES / 8, 1);
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x##VB##QI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VB##QI##mode,
> \
> - VB * NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HI##mode,
> \
> - VH * NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SI##mode,
> \
> - VS * NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DI##mode,
> \
> - VD * NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HF,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HF##mode,
> \
> - VH * NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SF,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SF##mode,
> \
> - VS * NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DF,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DF##mode,
> \
> - VD * NSUBPARTS));
> \
> + ADJUST_ALIGNMENT (RVVM2x##NF##QI, 1);
> \
> + ADJUST_ALIGNMENT (RVVM2x##NF##HI, 2);
> \
> + ADJUST_ALIGNMENT (RVVM2x##NF##HF, 2);
> \
> + ADJUST_ALIGNMENT (RVVM2x##NF##SI, 4);
> \
> + ADJUST_ALIGNMENT (RVVM2x##NF##SF, 4);
> \
> + ADJUST_ALIGNMENT (RVVM2x##NF##DI, 8);
> \
> + ADJUST_ALIGNMENT (RVVM2x##NF##DF, 8);
> +
> +RVV_NF4_MODES (2)
> +RVV_NF4_MODES (3)
> +RVV_NF4_MODES (4)
> +
> +#define RVV_NF2_MODES(NF)
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, QI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, HI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, HF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, SI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, SF, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, DI, NF, 1);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, DF, NF, 1);
> \
>
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VB##QI, 1);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HI, 2);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SI, 4);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DI, 8);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HF, 2);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SF, 4);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DF, 8);
> -
> -RVV_TUPLE_MODES (8, 2, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 3, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 4, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 5, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 6, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 7, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 8, 8, 4, 2, 1)
> -
> -RVV_TUPLE_MODES (16, 2, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 3, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 4, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 5, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 6, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 7, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 8, 16, 8, 4, 2)
> -
> -RVV_TUPLE_MODES (32, 2, 32, 16, 8, 4)
> -RVV_TUPLE_MODES (32, 3, 32, 16, 8, 4)
> -RVV_TUPLE_MODES (32, 4, 32, 16, 8, 4)
> -
> -RVV_TUPLE_MODES (64, 2, 64, 32, 16, 8)
> -
> -#define RVV_TUPLE_PARTIAL_MODES(NSUBPARTS)
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 1, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 1, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, 1, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, 1, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, 1, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 2, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 2, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, 2, 1);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 4, 1);
> \
> + ADJUST_NUNITS (RVVM4x##NF##QI,
> \
> + riscv_v_adjust_nunits (RVVM4x##NF##QImode, false, 4,
> NF)); \
> + ADJUST_NUNITS (RVVM4x##NF##HI,
> \
> + riscv_v_adjust_nunits (RVVM4x##NF##HImode, false, 4,
> NF)); \
> + ADJUST_NUNITS (RVVM4x##NF##HF,
> \
> + riscv_v_adjust_nunits (RVVM4x##NF##HFmode, false, 4,
> NF)); \
> + ADJUST_NUNITS (RVVM4x##NF##SI,
> \
> + riscv_v_adjust_nunits (RVVM4x##NF##SImode, false, 4,
> NF)); \
> + ADJUST_NUNITS (RVVM4x##NF##SF,
> \
> + riscv_v_adjust_nunits (RVVM4x##NF##SFmode, false, 4,
> NF)); \
> + ADJUST_NUNITS (RVVM4x##NF##DI,
> \
> + riscv_v_adjust_nunits (RVVM4x##NF##DImode, false, 4,
> NF)); \
> + ADJUST_NUNITS (RVVM4x##NF##DF,
> \
> + riscv_v_adjust_nunits (RVVM4x##NF##DFmode, false, 4,
> NF)); \
>
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x1QI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x1QI##mode,
> \
> - NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x1HI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HI##mode,
> \
> - NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x1HF,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HF##mode,
> \
> - NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x1SI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SI##mode,
> \
> - NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x1SF,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SF##mode,
> \
> - NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x2QI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x2QI##mode,
> \
> - 2 * NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x2HI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HI##mode,
> \
> - 2 * NSUBPARTS));
> \
> -ADJUST_NUNITS (VNx##NSUBPARTS##x2HF,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HF##mode,
> \
> - 2 * NSUBPARTS));
> \
> - ADJUST_NUNITS (VNx##NSUBPARTS##x4QI,
> \
> - riscv_v_adjust_nunits (VNx##NSUBPARTS##x4QI##mode,
> \
> - 4 * NSUBPARTS));
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1QI, 1);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HI, 2);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HF, 2);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SI, 4);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SF, 4);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2QI, 1);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HI, 2);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HF, 2);
> \
> - ADJUST_ALIGNMENT (VNx##NSUBPARTS##x4QI, 1);
> -
> -RVV_TUPLE_PARTIAL_MODES (2)
> -RVV_TUPLE_PARTIAL_MODES (3)
> -RVV_TUPLE_PARTIAL_MODES (4)
> -RVV_TUPLE_PARTIAL_MODES (5)
> -RVV_TUPLE_PARTIAL_MODES (6)
> -RVV_TUPLE_PARTIAL_MODES (7)
> -RVV_TUPLE_PARTIAL_MODES (8)
> + ADJUST_ALIGNMENT (RVVM4x##NF##QI, 1);
> \
> + ADJUST_ALIGNMENT (RVVM4x##NF##HI, 2);
> \
> + ADJUST_ALIGNMENT (RVVM4x##NF##HF, 2);
> \
> + ADJUST_ALIGNMENT (RVVM4x##NF##SI, 4);
> \
> + ADJUST_ALIGNMENT (RVVM4x##NF##SF, 4);
> \
> + ADJUST_ALIGNMENT (RVVM4x##NF##DI, 8);
> \
> + ADJUST_ALIGNMENT (RVVM4x##NF##DF, 8);
> +
> +RVV_NF2_MODES (2)
>
> /* TODO: According to RISC-V 'V' ISA spec, the maximun vector length can
> be 65536 for a single vector register which means the vector mode in
> diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
> index ff1e682f6d0..53088edf909 100644
> --- a/gcc/config/riscv/riscv-v.cc
> +++ b/gcc/config/riscv/riscv-v.cc
> @@ -1550,37 +1550,20 @@ legitimize_move (rtx dest, rtx src)
> /* VTYPE information for machine_mode. */
> struct mode_vtype_group
> {
> - enum vlmul_type vlmul_for_min_vlen32[NUM_MACHINE_MODES];
> - uint8_t ratio_for_min_vlen32[NUM_MACHINE_MODES];
> - enum vlmul_type vlmul_for_min_vlen64[NUM_MACHINE_MODES];
> - uint8_t ratio_for_min_vlen64[NUM_MACHINE_MODES];
> - enum vlmul_type vlmul_for_for_vlen128[NUM_MACHINE_MODES];
> - uint8_t ratio_for_for_vlen128[NUM_MACHINE_MODES];
> + enum vlmul_type vlmul[NUM_MACHINE_MODES];
> + uint8_t ratio[NUM_MACHINE_MODES];
> machine_mode subpart_mode[NUM_MACHINE_MODES];
> uint8_t nf[NUM_MACHINE_MODES];
> mode_vtype_group ()
> {
> -#define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32,
> RATIO_FOR_MIN_VLEN32, \
> - VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64,
> \
> - VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128)
> \
> - vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32;
> \
> - ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32;
> \
> - vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64;
> \
> - ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64;
> \
> - vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128;
> \
> - ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
> -#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF,
> VLMUL_FOR_MIN_VLEN32, \
> - RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64,
> \
> - RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128,
> \
> - RATIO_FOR_MIN_VLEN128)
> \
> +#define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO)
> \
> + vlmul[MODE##mode] = VLMUL;
> \
> + ratio[MODE##mode] = RATIO;
> +#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL, RATIO)
> \
> subpart_mode[MODE##mode] = SUBPART_MODE##mode;
> \
> nf[MODE##mode] = NF;
> \
> - vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32;
> \
> - ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32;
> \
> - vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64;
> \
> - ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64;
> \
> - vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128;
> \
> - ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
> + vlmul[MODE##mode] = VLMUL;
> \
> + ratio[MODE##mode] = RATIO;
> #include "riscv-vector-switch.def"
> #undef ENTRY
> #undef TUPLE_ENTRY
> @@ -1593,12 +1576,7 @@ static mode_vtype_group mode_vtype_infos;
> enum vlmul_type
> get_vlmul (machine_mode mode)
> {
> - if (TARGET_MIN_VLEN >= 128)
> - return mode_vtype_infos.vlmul_for_for_vlen128[mode];
> - else if (TARGET_MIN_VLEN == 32)
> - return mode_vtype_infos.vlmul_for_min_vlen32[mode];
> - else
> - return mode_vtype_infos.vlmul_for_min_vlen64[mode];
> + return mode_vtype_infos.vlmul[mode];
> }
>
> /* Return the NF value of the corresponding mode. */
> @@ -1610,8 +1588,8 @@ get_nf (machine_mode mode)
> return mode_vtype_infos.nf[mode];
> }
>
> -/* Return the subpart mode of the tuple mode. For VNx2x1SImode,
> - the subpart mode is VNx1SImode. This will help to build
> +/* Return the subpart mode of the tuple mode. For RVVM2x2SImode,
> + the subpart mode is RVVM2SImode. This will help to build
> array/struct type in builtins. */
> machine_mode
> get_subpart_mode (machine_mode mode)
> @@ -1625,12 +1603,7 @@ get_subpart_mode (machine_mode mode)
> unsigned int
> get_ratio (machine_mode mode)
> {
> - if (TARGET_MIN_VLEN >= 128)
> - return mode_vtype_infos.ratio_for_for_vlen128[mode];
> - else if (TARGET_MIN_VLEN == 32)
> - return mode_vtype_infos.ratio_for_min_vlen32[mode];
> - else
> - return mode_vtype_infos.ratio_for_min_vlen64[mode];
> + return mode_vtype_infos.ratio[mode];
> }
>
> /* Get ta according to operand[tail_op_idx]. */
> @@ -2171,12 +2144,12 @@ preferred_simd_mode (scalar_mode mode)
> /* We will disable auto-vectorization when TARGET_MIN_VLEN < 128 &&
> riscv_autovec_lmul < RVV_M2. Since GCC loop vectorizer report ICE
> when we
> enable -march=rv64gc_zve32* and -march=rv32gc_zve64*. in the
> - 'can_duplicate_and_interleave_p' of tree-vect-slp.cc. Since we have
> - VNx1SImode in -march=*zve32* and VNx1DImode in -march=*zve64*, they
> are
> - enabled in targetm. vector_mode_supported_p and SLP vectorizer will
> try to
> - use them. Currently, we can support auto-vectorization in
> - -march=rv32_zve32x_zvl128b. Wheras, -march=rv32_zve32x_zvl32b or
> - -march=rv32_zve32x_zvl64b are disabled. */
> + 'can_duplicate_and_interleave_p' of tree-vect-slp.cc. Since both
> + RVVM1SImode in -march=*zve32*_zvl32b and RVVM1DImode in
> + -march=*zve64*_zvl64b are NUNITS = poly (1, 1), they will cause ICE
> in loop
> + vectorizer when we enable them in this target hook. Currently, we can
> + support auto-vectorization in -march=rv32_zve32x_zvl128b. Wheras,
> + -march=rv32_zve32x_zvl32b or -march=rv32_zve32x_zvl64b are
> disabled. */
> if (autovec_use_vlmax_p ())
> {
> if (TARGET_MIN_VLEN < 128 && riscv_autovec_lmul < RVV_M2)
> @@ -2371,9 +2344,9 @@ autovectorize_vector_modes (vector_modes *modes,
> bool)
> poly_uint64 full_size
> = BYTES_PER_RISCV_VECTOR * ((int) riscv_autovec_lmul);
>
> - /* Start with a VNxYYQImode where YY is the number of units that
> + /* Start with a RVV<LMUL>QImode where LMUL is the number of units
> that
> fit a whole vector.
> - Then try YY = nunits / 2, nunits / 4 and nunits / 8 which
> + Then try LMUL = nunits / 2, nunits / 4 and nunits / 8 which
> is guided by the extensions we have available (vf2, vf4 and vf8).
>
> - full_size: Try using full vectors for all element types.
> diff --git a/gcc/config/riscv/riscv-vector-builtins.cc
> b/gcc/config/riscv/riscv-vector-builtins.cc
> index 3a53b56effa..528dca7ae85 100644
> --- a/gcc/config/riscv/riscv-vector-builtins.cc
> +++ b/gcc/config/riscv/riscv-vector-builtins.cc
> @@ -109,10 +109,8 @@ const char *const operand_suffixes[NUM_OP_TYPES] = {
>
> /* Static information about type suffix for each RVV type. */
> const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
> -#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE,
> \
> - VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64,
> \
> - VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX,
> SCALAR_SUFFIX, \
> - VSETVL_SUFFIX)
> \
> +#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,
> \
> + VECTOR_SUFFIX, SCALAR_SUFFIX, VSETVL_SUFFIX)
> \
> {#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
> #define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE,
> SCALAR_TYPE, \
> NF, VECTOR_SUFFIX)
> \
> @@ -2802,12 +2800,9 @@ register_builtin_types ()
> tree int64_type_node = get_typenode_from_name (INT64_TYPE);
>
> machine_mode mode;
> -#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
> - VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
> - VECTOR_MODE_MIN_VLEN_32, ARGS...) \
> - mode = TARGET_MIN_VLEN >= 128 ? VECTOR_MODE_MIN_VLEN_128##mode \
> - : TARGET_MIN_VLEN >= 64 ? VECTOR_MODE_MIN_VLEN_64##mode \
> - : VECTOR_MODE_MIN_VLEN_32##mode; \
> +#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
> + ARGS...) \
> + mode = VECTOR_MODE##mode; \
> register_builtin_type (VECTOR_TYPE_##NAME, SCALAR_TYPE##_type_node,
> mode);
> #define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE, \
> NF, VECTOR_SUFFIX) \
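
Carrying the same illustrative vint32m1_t definition through
register_builtin_types, the macro body now reduces to an unconditional mode
assignment (a sketch of the expansion, not verbatim preprocessor output):

  mode = RVVM1SImode;
  register_builtin_type (VECTOR_TYPE_vint32m1_t, int32_type_node, mode);
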
> diff --git a/gcc/config/riscv/riscv-vector-builtins.def
> b/gcc/config/riscv/riscv-vector-builtins.def
> index 1e9457953f8..0e49480703b 100644
> --- a/gcc/config/riscv/riscv-vector-builtins.def
> +++ b/gcc/config/riscv/riscv-vector-builtins.def
> @@ -28,24 +28,19 @@ along with GCC; see the file COPYING3. If not see
> "build_vector_type_for_mode". For "vint32m1_t", we use
> "intSI_type_node" in
> RV64. Otherwise, we use "long_integer_type_node".
> 5.The 'VECTOR_MODE' is the machine modes of corresponding RVV type used
> - in "build_vector_type_for_mode" when TARGET_MIN_VLEN > 32.
> - For example: VECTOR_MODE = VNx2SI for "vint32m1_t".
> - 6.The 'VECTOR_MODE_MIN_VLEN_32' is the machine modes of corresponding RVV
> - type used in "build_vector_type_for_mode" when TARGET_MIN_VLEN = 32. For
> - example: VECTOR_MODE_MIN_VLEN_32 = VNx1SI for "vint32m1_t".
> - 7.The 'VECTOR_SUFFIX' define mode suffix for vector type.
> + in "build_vector_type_for_mode".
> + For example: VECTOR_MODE = RVVM1SImode for "vint32m1_t".
> + 6.The 'VECTOR_SUFFIX' defines the mode suffix for the vector type.
> For example: type_suffixes[VECTOR_TYPE_vint32m1_t].vector = i32m1.
> - 8.The 'SCALAR_SUFFIX' define mode suffix for scalar type.
> + 7.The 'SCALAR_SUFFIX' defines the mode suffix for the scalar type.
> For example: type_suffixes[VECTOR_TYPE_vint32m1_t].scalar = i32.
> - 9.The 'VSETVL_SUFFIX' define mode suffix for vsetvli instruction.
> + 8.The 'VSETVL_SUFFIX' defines the mode suffix for the vsetvli instruction.
> For example: type_suffixes[VECTOR_TYPE_vint32m1_t].vsetvl = e32m1.
> */
>
> #ifndef DEF_RVV_TYPE
> -#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
> - VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
> - VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX, \
> - VSETVL_SUFFIX)
> +#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
> + VECTOR_SUFFIX, SCALAR_SUFFIX, VSETVL_SUFFIX)
> #endif
>
> #ifndef DEF_RVV_TUPLE_TYPE
> @@ -101,47 +96,34 @@ along with GCC; see the file COPYING3. If not see
>
> /* SEW/LMUL = 64:
> Only enable when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1BImode when TARGET_MIN_VLEN < 128.
> - Machine mode = VNx2BImode when TARGET_MIN_VLEN >= 128. */
> -DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx2BI, VNx1BI,
> VOID, _b64, , )
> + Machine mode = RVVMF64BImode. */
> +DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, RVVMF64BI, _b64, , )
> /* SEW/LMUL = 32:
> - Machine mode = VNx2BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx4BI, VNx2BI,
> VNx1BI, _b32, , )
> + Machine mode = RVVMF32BImode. */
> +DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, RVVMF32BI, _b32, , )
> /* SEW/LMUL = 16:
> - Machine mode = VNx8BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx2BImode when TARGET_MIN_VLEN = 32.
> - Machine mode = VNx4BImode when TARGET_MIN_VLEN > 32. */
> -DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx8BI, VNx4BI,
> VNx2BI, _b16, , )
> + Machine mode = RVVMF16BImode. */
> +DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, RVVMF16BI, _b16, , )
> /* SEW/LMUL = 8:
> - Machine mode = VNx16BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx8BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx4BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx16BI, VNx8BI,
> VNx4BI, _b8, , )
> + Machine mode = RVVMF8BImode. */
> +DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, RVVMF8BI, _b8, , )
> /* SEW/LMUL = 4:
> - Machine mode = VNx32BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx16BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx8BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx32BI, VNx16BI,
> VNx8BI, _b4, , )
> + Machine mode = RVVMF4BImode. */
> +DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, RVVMF4BI, _b4, , )
> /* SEW/LMUL = 2:
> - Machine mode = VNx64BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx32BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx16BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx64BI, VNx32BI,
> VNx16BI, _b2, , )
> + Machine mode = RVVMF2BImode. */
> +DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, RVVMF2BI, _b2, , )
> /* SEW/LMUL = 1:
> - Machine mode = VNx128BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx64BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx32BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx128BI, VNx64BI,
> VNx32BI, _b1, , )
> + Machine mode = RVVM1BImode. */
> +DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, RVVM1BI, _b1, , )
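
A quick cross-check of the naming scheme (arithmetic mine): a compare of
vint16mf4_t values has SEW/LMUL = 16 / (1/4) = 64, so its mask type is
vbool64_t and its machine mode is RVVMF64BImode, i.e. the mask occupies
1/64 of an M1 register, matching the comments above.
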
>
> /* LMUL = 1/8:
> Only enable when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1QImode when TARGET_MIN_VLEN < 128.
> - Machine mode = VNx2QImode when TARGET_MIN_VLEN >= 128. */
> -DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, VNx2QI, VNx1QI,
> VOID, _i8mf8, _i8,
> + Machine mode = RVVMF8QImode. */
> +DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, RVVMF8QI, _i8mf8,
> _i8,
> + _e8mf8)
> +DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, RVVMF8QI, _u8mf8,
> _u8,
> _e8mf8)
> -DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, VNx2QI, VNx1QI,
> VOID, _u8mf8,
> - _u8, _e8mf8)
> /* Define tuple types for SEW = 8, LMUL = MF8. */
> DEF_RVV_TUPLE_TYPE (vint8mf8x2_t, 17, __rvv_int8mf8x2_t, vint8mf8_t,
> int8, 2, _i8mf8x2)
> DEF_RVV_TUPLE_TYPE (vuint8mf8x2_t, 18, __rvv_uint8mf8x2_t, vuint8mf8_t,
> uint8, 2, _u8mf8x2)
> @@ -158,13 +140,11 @@ DEF_RVV_TUPLE_TYPE (vuint8mf8x7_t, 18,
> __rvv_uint8mf8x7_t, vuint8mf8_t, uint8, 7
> DEF_RVV_TUPLE_TYPE (vint8mf8x8_t, 17, __rvv_int8mf8x8_t, vint8mf8_t,
> int8, 8, _i8mf8x8)
> DEF_RVV_TUPLE_TYPE (vuint8mf8x8_t, 18, __rvv_uint8mf8x8_t, vuint8mf8_t,
> uint8, 8, _u8mf8x8)
> /* LMUL = 1/4:
> - Machine mode = VNx4QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, VNx4QI, VNx2QI,
> VNx1QI, _i8mf4,
> - _i8, _e8mf4)
> -DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, VNx4QI, VNx2QI,
> VNx1QI, _u8mf4,
> - _u8, _e8mf4)
> + Machine mode = RVVMF4QImode. */
> +DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, RVVMF4QI, _i8mf4,
> _i8,
> + _e8mf4)
> +DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, RVVMF4QI, _u8mf4,
> _u8,
> + _e8mf4)
> /* Define tuple types for SEW = 8, LMUL = MF4. */
> DEF_RVV_TUPLE_TYPE (vint8mf4x2_t, 17, __rvv_int8mf4x2_t, vint8mf4_t,
> int8, 2, _i8mf4x2)
> DEF_RVV_TUPLE_TYPE (vuint8mf4x2_t, 18, __rvv_uint8mf4x2_t, vuint8mf4_t,
> uint8, 2, _u8mf4x2)
> @@ -181,13 +161,11 @@ DEF_RVV_TUPLE_TYPE (vuint8mf4x7_t, 18,
> __rvv_uint8mf4x7_t, vuint8mf4_t, uint8, 7
> DEF_RVV_TUPLE_TYPE (vint8mf4x8_t, 17, __rvv_int8mf4x8_t, vint8mf4_t,
> int8, 8, _i8mf4x8)
> DEF_RVV_TUPLE_TYPE (vuint8mf4x8_t, 18, __rvv_uint8mf4x8_t, vuint8mf4_t,
> uint8, 8, _u8mf4x8)
> /* LMUL = 1/2:
> - Machine mode = VNx8QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, VNx8QI, VNx4QI,
> VNx2QI, _i8mf2,
> - _i8, _e8mf2)
> -DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, VNx8QI, VNx4QI,
> VNx2QI, _u8mf2,
> - _u8, _e8mf2)
> + Machine mode = RVVMF2QImode. */
> +DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, RVVMF2QI, _i8mf2,
> _i8,
> + _e8mf2)
> +DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, RVVMF2QI, _u8mf2,
> _u8,
> + _e8mf2)
> /* Define tuple types for SEW = 8, LMUL = MF2. */
> DEF_RVV_TUPLE_TYPE (vint8mf2x2_t, 17, __rvv_int8mf2x2_t, vint8mf2_t,
> int8, 2, _i8mf2x2)
> DEF_RVV_TUPLE_TYPE (vuint8mf2x2_t, 18, __rvv_uint8mf2x2_t, vuint8mf2_t,
> uint8, 2, _u8mf2x2)
> @@ -204,13 +182,10 @@ DEF_RVV_TUPLE_TYPE (vuint8mf2x7_t, 18,
> __rvv_uint8mf2x7_t, vuint8mf2_t, uint8, 7
> DEF_RVV_TUPLE_TYPE (vint8mf2x8_t, 17, __rvv_int8mf2x8_t, vint8mf2_t,
> int8, 8, _i8mf2x8)
> DEF_RVV_TUPLE_TYPE (vuint8mf2x8_t, 18, __rvv_uint8mf2x8_t, vuint8mf2_t,
> uint8, 8, _u8mf2x8)
> /* LMUL = 1:
> - Machine mode = VNx16QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, VNx16QI, VNx8QI,
> VNx4QI, _i8m1, _i8,
> + Machine mode = RVVM1QImode. */
> +DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, RVVM1QI, _i8m1, _i8,
> _e8m1)
> +DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, RVVM1QI, _u8m1, _u8,
> _e8m1)
> -DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, VNx16QI, VNx8QI,
> VNx4QI, _u8m1,
> - _u8, _e8m1)
> /* Define tuple types for SEW = 8, LMUL = M1. */
> DEF_RVV_TUPLE_TYPE (vint8m1x2_t, 16, __rvv_int8m1x2_t, vint8m1_t, int8,
> 2, _i8m1x2)
> DEF_RVV_TUPLE_TYPE (vuint8m1x2_t, 17, __rvv_uint8m1x2_t, vuint8m1_t,
> uint8, 2, _u8m1x2)
> @@ -227,13 +202,10 @@ DEF_RVV_TUPLE_TYPE (vuint8m1x7_t, 17,
> __rvv_uint8m1x7_t, vuint8m1_t, uint8, 7, _
> DEF_RVV_TUPLE_TYPE (vint8m1x8_t, 16, __rvv_int8m1x8_t, vint8m1_t, int8,
> 8, _i8m1x8)
> DEF_RVV_TUPLE_TYPE (vuint8m1x8_t, 17, __rvv_uint8m1x8_t, vuint8m1_t,
> uint8, 8, _u8m1x8)
> /* LMUL = 2:
> - Machine mode = VNx32QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, VNx32QI, VNx16QI,
> VNx8QI, _i8m2, _i8,
> + Machine mode = RVVM2QImode. */
> +DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, RVVM2QI, _i8m2, _i8,
> _e8m2)
> +DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, RVVM2QI, _u8m2, _u8,
> _e8m2)
> -DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, VNx32QI, VNx16QI,
> VNx8QI, _u8m2,
> - _u8, _e8m2)
> /* Define tuple types for SEW = 8, LMUL = M2. */
> DEF_RVV_TUPLE_TYPE (vint8m2x2_t, 16, __rvv_int8m2x2_t, vint8m2_t, int8,
> 2, _i8m2x2)
> DEF_RVV_TUPLE_TYPE (vuint8m2x2_t, 17, __rvv_uint8m2x2_t, vuint8m2_t,
> uint8, 2, _u8m2x2)
> @@ -242,33 +214,26 @@ DEF_RVV_TUPLE_TYPE (vuint8m2x3_t, 17,
> __rvv_uint8m2x3_t, vuint8m2_t, uint8, 3, _
> DEF_RVV_TUPLE_TYPE (vint8m2x4_t, 16, __rvv_int8m2x4_t, vint8m2_t, int8,
> 4, _i8m2x4)
> DEF_RVV_TUPLE_TYPE (vuint8m2x4_t, 17, __rvv_uint8m2x4_t, vuint8m2_t,
> uint8, 4, _u8m2x4)
> /* LMUL = 4:
> - Machine mode = VNx64QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, VNx64QI, VNx32QI,
> VNx16QI, _i8m4, _i8,
> + Machine mode = RVVM4QImode. */
> +DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, RVVM4QI, _i8m4, _i8,
> _e8m4)
> +DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, RVVM4QI, _u8m4, _u8,
> _e8m4)
> -DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, VNx64QI, VNx32QI,
> VNx16QI, _u8m4,
> - _u8, _e8m4)
> /* Define tuple types for SEW = 8, LMUL = M4. */
> DEF_RVV_TUPLE_TYPE (vint8m4x2_t, 16, __rvv_int8m4x2_t, vint8m4_t, int8,
> 2, _i8m4x2)
> DEF_RVV_TUPLE_TYPE (vuint8m4x2_t, 17, __rvv_uint8m4x2_t, vuint8m4_t,
> uint8, 2, _u8m4x2)
> /* LMUL = 8:
> - Machine mode = VNx128QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, VNx128QI, VNx64QI,
> VNx32QI, _i8m8, _i8,
> + Machine mode = RVVM8QImode. */
> +DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, RVVM8QI, _i8m8, _i8,
> _e8m8)
> +DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, RVVM8QI, _u8m8, _u8,
> _e8m8)
> -DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, VNx128QI, VNx64QI,
> VNx32QI, _u8m8,
> - _u8, _e8m8)
>
> /* LMUL = 1/4:
> Only enable when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1HImode when TARGET_MIN_VLEN < 128.
> - Machine mode = VNx2HImode when TARGET_MIN_VLEN >= 128. */
> -DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, VNx2HI, VNx1HI,
> VOID, _i16mf4,
> - _i16, _e16mf4)
> -DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, VNx2HI,
> VNx1HI, VOID,
> - _u16mf4, _u16, _e16mf4)
> + Machine mode = RVVMF4HImode. */
> +DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, RVVMF4HI,
> _i16mf4, _i16,
> + _e16mf4)
> +DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, RVVMF4HI,
> _u16mf4,
> + _u16, _e16mf4)
> /* Define tuple types for SEW = 16, LMUL = MF4. */
> DEF_RVV_TUPLE_TYPE (vint16mf4x2_t, 18, __rvv_int16mf4x2_t, vint16mf4_t,
> int16, 2, _i16mf4x2)
> DEF_RVV_TUPLE_TYPE (vuint16mf4x2_t, 19, __rvv_uint16mf4x2_t,
> vuint16mf4_t, uint16, 2, _u16mf4x2)
> @@ -285,13 +250,11 @@ DEF_RVV_TUPLE_TYPE (vuint16mf4x7_t, 19,
> __rvv_uint16mf4x7_t, vuint16mf4_t, uint1
> DEF_RVV_TUPLE_TYPE (vint16mf4x8_t, 18, __rvv_int16mf4x8_t, vint16mf4_t,
> int16, 8, _i16mf4x8)
> DEF_RVV_TUPLE_TYPE (vuint16mf4x8_t, 19, __rvv_uint16mf4x8_t,
> vuint16mf4_t, uint16, 8, _u16mf4x8)
> /* LMUL = 1/2:
> - Machine mode = VNx4HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, VNx4HI, VNx2HI,
> VNx1HI, _i16mf2,
> - _i16, _e16mf2)
> -DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, VNx4HI,
> VNx2HI, VNx1HI,
> - _u16mf2, _u16, _e16mf2)
> + Machine mode = RVVMF2HImode. */
> +DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, RVVMF2HI,
> _i16mf2, _i16,
> + _e16mf2)
> +DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, RVVMF2HI,
> _u16mf2,
> + _u16, _e16mf2)
> /* Define tuple types for SEW = 16, LMUL = MF2. */
> DEF_RVV_TUPLE_TYPE (vint16mf2x2_t, 18, __rvv_int16mf2x2_t, vint16mf2_t,
> int16, 2, _i16mf2x2)
> DEF_RVV_TUPLE_TYPE (vuint16mf2x2_t, 19, __rvv_uint16mf2x2_t,
> vuint16mf2_t, uint16, 2, _u16mf2x2)
> @@ -308,13 +271,11 @@ DEF_RVV_TUPLE_TYPE (vuint16mf2x7_t, 19,
> __rvv_uint16mf2x7_t, vuint16mf2_t, uint1
> DEF_RVV_TUPLE_TYPE (vint16mf2x8_t, 18, __rvv_int16mf2x8_t, vint16mf2_t,
> int16, 8, _i16mf2x8)
> DEF_RVV_TUPLE_TYPE (vuint16mf2x8_t, 19, __rvv_uint16mf2x8_t,
> vuint16mf2_t, uint16, 8, _u16mf2x8)
> /* LMUL = 1:
> - Machine mode = VNx8HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, VNx8HI, VNx4HI,
> VNx2HI, _i16m1,
> - _i16, _e16m1)
> -DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, VNx8HI, VNx4HI,
> VNx2HI, _u16m1,
> - _u16, _e16m1)
> + Machine mode = RVVM1HImode. */
> +DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, RVVM1HI, _i16m1,
> _i16,
> + _e16m1)
> +DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, RVVM1HI, _u16m1,
> _u16,
> + _e16m1)
> /* Define tuple types for SEW = 16, LMUL = M1. */
> DEF_RVV_TUPLE_TYPE (vint16m1x2_t, 17, __rvv_int16m1x2_t, vint16m1_t,
> int16, 2, _i16m1x2)
> DEF_RVV_TUPLE_TYPE (vuint16m1x2_t, 18, __rvv_uint16m1x2_t, vuint16m1_t,
> uint16, 2, _u16m1x2)
> @@ -331,13 +292,11 @@ DEF_RVV_TUPLE_TYPE (vuint16m1x7_t, 18,
> __rvv_uint16m1x7_t, vuint16m1_t, uint16,
> DEF_RVV_TUPLE_TYPE (vint16m1x8_t, 17, __rvv_int16m1x8_t, vint16m1_t,
> int16, 8, _i16m1x8)
> DEF_RVV_TUPLE_TYPE (vuint16m1x8_t, 18, __rvv_uint16m1x8_t, vuint16m1_t,
> uint16, 8, _u16m1x8)
> /* LMUL = 2:
> - Machine mode = VNx16HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, VNx16HI, VNx8HI,
> VNx4HI, _i16m2,
> - _i16, _e16m2)
> -DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, VNx16HI, VNx8HI,
> VNx4HI, _u16m2,
> - _u16, _e16m2)
> + Machine mode = RVVM2HImode. */
> +DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, RVVM2HI, _i16m2,
> _i16,
> + _e16m2)
> +DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, RVVM2HI, _u16m2,
> _u16,
> + _e16m2)
> /* Define tuple types for SEW = 16, LMUL = M2. */
> DEF_RVV_TUPLE_TYPE (vint16m2x2_t, 17, __rvv_int16m2x2_t, vint16m2_t,
> int16, 2, _i16m2x2)
> DEF_RVV_TUPLE_TYPE (vuint16m2x2_t, 18, __rvv_uint16m2x2_t, vuint16m2_t,
> uint16, 2, _u16m2x2)
> @@ -346,33 +305,28 @@ DEF_RVV_TUPLE_TYPE (vuint16m2x3_t, 18,
> __rvv_uint16m2x3_t, vuint16m2_t, uint16,
> DEF_RVV_TUPLE_TYPE (vint16m2x4_t, 17, __rvv_int16m2x4_t, vint16m2_t,
> int16, 4, _i16m2x4)
> DEF_RVV_TUPLE_TYPE (vuint16m2x4_t, 18, __rvv_uint16m2x4_t, vuint16m2_t,
> uint16, 4, _u16m2x4)
> /* LMUL = 4:
> - Machine mode = VNx32HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, VNx32HI, VNx16HI,
> VNx8HI, _i16m4,
> - _i16, _e16m4)
> -DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, VNx32HI,
> VNx16HI, VNx8HI,
> - _u16m4, _u16, _e16m4)
> + Machine mode = RVVM4HImode. */
> +DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, RVVM4HI, _i16m4,
> _i16,
> + _e16m4)
> +DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, RVVM4HI, _u16m4,
> _u16,
> + _e16m4)
> /* Define tuple types for SEW = 16, LMUL = M4. */
> DEF_RVV_TUPLE_TYPE (vint16m4x2_t, 17, __rvv_int16m4x2_t, vint16m4_t,
> int16, 2, _i16m4x2)
> DEF_RVV_TUPLE_TYPE (vuint16m4x2_t, 18, __rvv_uint16m4x2_t, vuint16m4_t,
> uint16, 2, _u16m4x2)
> /* LMUL = 8:
> - Machine mode = VNx64HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, VNx64HI, VNx32HI,
> VNx16HI, _i16m8,
> - _i16, _e16m8)
> -DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, VNx64HI,
> VNx32HI, VNx16HI,
> - _u16m8, _u16, _e16m8)
> + Machine mode = RVVM8HImode. */
> +DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, RVVM8HI, _i16m8,
> _i16,
> + _e16m8)
> +DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, RVVM8HI, _u16m8,
> _u16,
> + _e16m8)
>
> /* LMUL = 1/2:
> Only enable when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1SImode when TARGET_MIN_VLEN < 128.
> - Machine mode = VNx2SImode when TARGET_MIN_VLEN >= 128. */
> -DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx2SI, VNx1SI,
> VOID, _i32mf2,
> - _i32, _e32mf2)
> -DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, VNx2SI,
> VNx1SI, VOID,
> - _u32mf2, _u32, _e32mf2)
> + Machine mode = RVVMF2SImode. */
> +DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, RVVMF2SI,
> _i32mf2, _i32,
> + _e32mf2)
> +DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, RVVMF2SI,
> _u32mf2,
> + _u32, _e32mf2)
> /* Define tuple types for SEW = 32, LMUL = MF2. */
> DEF_RVV_TUPLE_TYPE (vint32mf2x2_t, 18, __rvv_int32mf2x2_t, vint32mf2_t,
> int32, 2, _i32mf2x2)
> DEF_RVV_TUPLE_TYPE (vuint32mf2x2_t, 19, __rvv_uint32mf2x2_t,
> vuint32mf2_t, uint32, 2, _u32mf2x2)
> @@ -389,13 +343,11 @@ DEF_RVV_TUPLE_TYPE (vuint32mf2x7_t, 19,
> __rvv_uint32mf2x7_t, vuint32mf2_t, uint3
> DEF_RVV_TUPLE_TYPE (vint32mf2x8_t, 18, __rvv_int32mf2x8_t, vint32mf2_t,
> int32, 8, _i32mf2x8)
> DEF_RVV_TUPLE_TYPE (vuint32mf2x8_t, 19, __rvv_uint32mf2x8_t,
> vuint32mf2_t, uint32, 8, _u32mf2x8)
> /* LMUL = 1:
> - Machine mode = VNx4SImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
>
Committed, thanks Kito.
Pan
LGTM, I must say the number of modifications is less than my expect :)
And it's a really big move for RVV implementation!
Juzhe-Zhong <juzhe.zhong@rivai.ai> 於 2023年7月20日 週四 07:22 寫道:
> Current machine modes layout is hard to maintain && read && understand.
>
> For a LMUL = 1 SI vector mode:
> 1. VNx1SI mode when TARGET_MIN_VLEN = 32.
> 2. VNx2SI mode when TARGET_MIN_VLEN = 64.
> 3. VNx4SI mode when TARGET_MIN_VLEN = 128.
>
> Such implementation produces redundant machine modes and thus redudant
> machine description patterns.
>
> Now, this patch refactor machine modes into 3 follow formats:
>
> 1. mask mode: RVVMF64BImode, RVVMF32BImode, ...., RVVM1BImode.
> RVVMF64BImode means such mask mode occupy 1/64 of a RVV M1
> reg.
> RVVM1BImode size = LMUL = 1 reg.
> 2. non-tuple vector modes:
> RVV<LMUL><BASE_MODE>: E.g. RVVMF8QImode = SEW = 8 && LMUL
> = MF8
> 3. tuple vector modes:
> RVV<LMUL>x<NF><BASE_MODE>.
>
> For example, for SEW = 16, LMUL = MF2 , int mode is always RVVMF4HImode,
> then adjust its size according to TARGET_MIN_VLEN.
>
> Before this patch, the machine description patterns: 17551
> After this patch, the machine description patterns: 14132 =====> reduce
> 3K+ patterns.
>
> Regression of gcc/g++ rv32/rv64 all passed.
>
> Ok for trunk?
>
> gcc/ChangeLog:
>
> * config/riscv/autovec.md
> (len_mask_gather_load<VNX16_QHSD:mode><VNX16_QHSDI:mode>): Refactor RVV
> machine modes.
> (len_mask_gather_load<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
> (len_mask_gather_load<VNX32_QHS:mode><VNX32_QHSI:mode>): Ditto.
> (len_mask_gather_load<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
> (len_mask_gather_load<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
> (len_mask_gather_load<mode><mode>): Ditto.
> (len_mask_gather_load<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
> (len_mask_scatter_store<VNX16_QHSD:mode><VNX16_QHSDI:mode>): Ditto.
> (len_mask_scatter_store<VNX32_QHS:mode><VNX32_QHSI:mode>): Ditto.
> (len_mask_scatter_store<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
> (len_mask_scatter_store<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
> (len_mask_scatter_store<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
> (len_mask_scatter_store<mode><mode>): Ditto.
> (len_mask_scatter_store<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
> * config/riscv/riscv-modes.def (VECTOR_BOOL_MODE): Ditto.
> (ADJUST_NUNITS): Ditto.
> (ADJUST_ALIGNMENT): Ditto.
> (ADJUST_BYTESIZE): Ditto.
> (ADJUST_PRECISION): Ditto.
> (RVV_MODES): Ditto.
> (RVV_WHOLE_MODES): Ditto.
> (RVV_FRACT_MODE): Ditto.
> (RVV_NF8_MODES): Ditto.
> (RVV_NF4_MODES): Ditto.
> (VECTOR_MODES_WITH_PREFIX): Ditto.
> (VECTOR_MODE_WITH_PREFIX): Ditto.
> (RVV_TUPLE_MODES): Ditto.
> (RVV_NF2_MODES): Ditto.
> (RVV_TUPLE_PARTIAL_MODES): Ditto.
> * config/riscv/riscv-v.cc (struct mode_vtype_group): Ditto.
> (ENTRY): Ditto.
> (TUPLE_ENTRY): Ditto.
> (get_vlmul): Ditto.
> (get_nf): Ditto.
> (get_ratio): Ditto.
> (preferred_simd_mode): Ditto.
> (autovectorize_vector_modes): Ditto.
> * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TYPE): Ditto.
> * config/riscv/riscv-vector-builtins.def (DEF_RVV_TYPE): Ditto.
> (vbool64_t): Ditto.
> (vbool32_t): Ditto.
> (vbool16_t): Ditto.
> (vbool8_t): Ditto.
> (vbool4_t): Ditto.
> (vbool2_t): Ditto.
> (vbool1_t): Ditto.
> (vint8mf8_t): Ditto.
> (vuint8mf8_t): Ditto.
> (vint8mf4_t): Ditto.
> (vuint8mf4_t): Ditto.
> (vint8mf2_t): Ditto.
> (vuint8mf2_t): Ditto.
> (vint8m1_t): Ditto.
> (vuint8m1_t): Ditto.
> (vint8m2_t): Ditto.
> (vuint8m2_t): Ditto.
> (vint8m4_t): Ditto.
> (vuint8m4_t): Ditto.
> (vint8m8_t): Ditto.
> (vuint8m8_t): Ditto.
> (vint16mf4_t): Ditto.
> (vuint16mf4_t): Ditto.
> (vint16mf2_t): Ditto.
> (vuint16mf2_t): Ditto.
> (vint16m1_t): Ditto.
> (vuint16m1_t): Ditto.
> (vint16m2_t): Ditto.
> (vuint16m2_t): Ditto.
> (vint16m4_t): Ditto.
> (vuint16m4_t): Ditto.
> (vint16m8_t): Ditto.
> (vuint16m8_t): Ditto.
> (vint32mf2_t): Ditto.
> (vuint32mf2_t): Ditto.
> (vint32m1_t): Ditto.
> (vuint32m1_t): Ditto.
> (vint32m2_t): Ditto.
> (vuint32m2_t): Ditto.
> (vint32m4_t): Ditto.
> (vuint32m4_t): Ditto.
> (vint32m8_t): Ditto.
> (vuint32m8_t): Ditto.
> (vint64m1_t): Ditto.
> (vuint64m1_t): Ditto.
> (vint64m2_t): Ditto.
> (vuint64m2_t): Ditto.
> (vint64m4_t): Ditto.
> (vuint64m4_t): Ditto.
> (vint64m8_t): Ditto.
> (vuint64m8_t): Ditto.
> (vfloat16mf4_t): Ditto.
> (vfloat16mf2_t): Ditto.
> (vfloat16m1_t): Ditto.
> (vfloat16m2_t): Ditto.
> (vfloat16m4_t): Ditto.
> (vfloat16m8_t): Ditto.
> (vfloat32mf2_t): Ditto.
> (vfloat32m1_t): Ditto.
> (vfloat32m2_t): Ditto.
> (vfloat32m4_t): Ditto.
> (vfloat32m8_t): Ditto.
> (vfloat64m1_t): Ditto.
> (vfloat64m2_t): Ditto.
> (vfloat64m4_t): Ditto.
> (vfloat64m8_t): Ditto.
> * config/riscv/riscv-vector-switch.def (ENTRY): Ditto.
> (TUPLE_ENTRY): Ditto.
> * config/riscv/riscv-vsetvl.cc (change_insn): Ditto.
> * config/riscv/riscv.cc (riscv_valid_lo_sum_p): Ditto.
> (riscv_v_adjust_nunits): Ditto.
> (riscv_v_adjust_bytesize): Ditto.
> (riscv_v_adjust_precision): Ditto.
> (riscv_convert_vector_bits): Ditto.
> * config/riscv/riscv.h (riscv_v_adjust_nunits): Ditto.
> * config/riscv/riscv.md: Ditto.
> * config/riscv/vector-iterators.md: Ditto.
> * config/riscv/vector.md
> (@pred_indexed_<order>store<VNX16_QHSD:mode><VNX16_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<VNX16_QHS:mode><VNX16_QHSI:mode>):
> Ditto.
> (@pred_indexed_<order>store<VNX32_QHS:mode><VNX32_QHSI:mode>):
> Ditto.
> (@pred_indexed_<order>store<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
> (@pred_indexed_<order>store<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
> (@pred_indexed_<order>store<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
> (@pred_indexed_<order>store<VNX128_Q:mode><VNX128_Q:mode>): Ditto.
> (@pred_indexed_<order>load<V1T:mode><V1I:mode>): Ditto.
> (@pred_indexed_<order>load<V1T:mode><VNX1_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>load<V2T:mode><V2I:mode>): Ditto.
> (@pred_indexed_<order>load<V2T:mode><VNX2_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>load<V4T:mode><V4I:mode>): Ditto.
> (@pred_indexed_<order>load<V4T:mode><VNX4_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>load<V8T:mode><V8I:mode>): Ditto.
> (@pred_indexed_<order>load<V8T:mode><VNX8_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>load<V16T:mode><V16I:mode>): Ditto.
> (@pred_indexed_<order>load<V16T:mode><VNX16_QHSI:mode>): Ditto.
> (@pred_indexed_<order>load<V32T:mode><V32I:mode>): Ditto.
> (@pred_indexed_<order>load<V32T:mode><VNX32_QHI:mode>): Ditto.
> (@pred_indexed_<order>load<V64T:mode><V64I:mode>): Ditto.
> (@pred_indexed_<order>store<V1T:mode><V1I:mode>): Ditto.
> (@pred_indexed_<order>store<V1T:mode><VNX1_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<V2T:mode><V2I:mode>): Ditto.
> (@pred_indexed_<order>store<V2T:mode><VNX2_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<V4T:mode><V4I:mode>): Ditto.
> (@pred_indexed_<order>store<V4T:mode><VNX4_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<V8T:mode><V8I:mode>): Ditto.
> (@pred_indexed_<order>store<V8T:mode><VNX8_QHSDI:mode>): Ditto.
> (@pred_indexed_<order>store<V16T:mode><V16I:mode>): Ditto.
> (@pred_indexed_<order>store<V16T:mode><VNX16_QHSI:mode>): Ditto.
> (@pred_indexed_<order>store<V32T:mode><V32I:mode>): Ditto.
> (@pred_indexed_<order>store<V32T:mode><VNX32_QHI:mode>): Ditto.
> (@pred_indexed_<order>store<V64T:mode><V64I:mode>): Ditto.
>
> gcc/testsuite/ChangeLog:
>
> * gcc.target/riscv/rvv/autovec/gather-scatter/gather_load_run-7.c:
> Adapt test.
> * gcc.target/riscv/rvv/autovec/gather-scatter/gather_load_run-8.c:
> Ditto.
> *
> gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store-9.c: Ditto.
> *
> gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-8.c:
> Ditto.
> *
> gcc.target/riscv/rvv/autovec/gather-scatter/scatter_store_run-8.c: Ditto.
>
> ---
> gcc/config/riscv/autovec.md | 198 +-
> gcc/config/riscv/riscv-modes.def | 570 ++---
> gcc/config/riscv/riscv-v.cc | 67 +-
> gcc/config/riscv/riscv-vector-builtins.cc | 15 +-
> gcc/config/riscv/riscv-vector-builtins.def | 361 ++-
> gcc/config/riscv/riscv-vector-switch.def | 581 ++---
> gcc/config/riscv/riscv-vsetvl.cc | 24 +-
> gcc/config/riscv/riscv.cc | 87 +-
> gcc/config/riscv/riscv.h | 1 +
> gcc/config/riscv/riscv.md | 87 +-
> gcc/config/riscv/vector-iterators.md | 2236 ++++++++---------
> gcc/config/riscv/vector.md | 826 +++---
> .../gather-scatter/gather_load_run-7.c | 6 +-
> .../gather-scatter/gather_load_run-8.c | 6 +-
> .../gather-scatter/mask_scatter_store-9.c | 5 +
> .../gather-scatter/mask_scatter_store_run-8.c | 6 +-
> .../gather-scatter/scatter_store_run-8.c | 6 +-
> 17 files changed, 2496 insertions(+), 2586 deletions(-)
>
> diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
> index cd5b19457f8..00947207f3f 100644
> --- a/gcc/config/riscv/autovec.md
> +++ b/gcc/config/riscv/autovec.md
> @@ -61,105 +61,90 @@
> ;; == Gather Load
> ;;
> =========================================================================
>
> -(define_expand "len_mask_gather_load<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
> - [(match_operand:VNX1_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO64:mode><RATIO64I:mode>"
> + [(match_operand:RATIO64 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX1_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX1_QHSD:gs_extension>")
> - (match_operand 4 "<VNX1_QHSD:gs_scale>")
> + (match_operand:RATIO64I 2 "register_operand")
> + (match_operand 3 "<RATIO64:gs_extension>")
> + (match_operand 4 "<RATIO64:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX1_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO64:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
> - [(match_operand:VNX2_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO32:mode><RATIO32I:mode>"
> + [(match_operand:RATIO32 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX2_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX2_QHSD:gs_extension>")
> - (match_operand 4 "<VNX2_QHSD:gs_scale>")
> + (match_operand:RATIO32I 2 "register_operand")
> + (match_operand 3 "<RATIO32:gs_extension>")
> + (match_operand 4 "<RATIO32:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX2_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO32:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
> - [(match_operand:VNX4_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO16:mode><RATIO16I:mode>"
> + [(match_operand:RATIO16 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX4_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX4_QHSD:gs_extension>")
> - (match_operand 4 "<VNX4_QHSD:gs_scale>")
> + (match_operand:RATIO16I 2 "register_operand")
> + (match_operand 3 "<RATIO16:gs_extension>")
> + (match_operand 4 "<RATIO16:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX4_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO16:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
> - [(match_operand:VNX8_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO8:mode><RATIO8I:mode>"
> + [(match_operand:RATIO8 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX8_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX8_QHSD:gs_extension>")
> - (match_operand 4 "<VNX8_QHSD:gs_scale>")
> + (match_operand:RATIO8I 2 "register_operand")
> + (match_operand 3 "<RATIO8:gs_extension>")
> + (match_operand 4 "<RATIO8:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX8_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO8:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
> - [(match_operand:VNX16_QHSD 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO4:mode><RATIO4I:mode>"
> + [(match_operand:RATIO4 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX16_QHSDI 2 "register_operand")
> - (match_operand 3 "<VNX16_QHSD:gs_extension>")
> - (match_operand 4 "<VNX16_QHSD:gs_scale>")
> + (match_operand:RATIO4I 2 "register_operand")
> + (match_operand 3 "<RATIO4:gs_extension>")
> + (match_operand 4 "<RATIO4:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX16_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO4:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> DONE;
> })
>
> -(define_expand "len_mask_gather_load<VNX32_QHS:mode><VNX32_QHSI:mode>"
> - [(match_operand:VNX32_QHS 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO2:mode><RATIO2I:mode>"
> + [(match_operand:RATIO2 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX32_QHSI 2 "register_operand")
> - (match_operand 3 "<VNX32_QHS:gs_extension>")
> - (match_operand 4 "<VNX32_QHS:gs_scale>")
> + (match_operand:RATIO2I 2 "register_operand")
> + (match_operand 3 "<RATIO2:gs_extension>")
> + (match_operand 4 "<RATIO2:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX32_QHS:VM> 7 "vector_mask_operand")]
> - "TARGET_VECTOR"
> -{
> - riscv_vector::expand_gather_scatter (operands, true);
> - DONE;
> -})
> -
> -(define_expand "len_mask_gather_load<VNX64_QH:mode><VNX64_QHI:mode>"
> - [(match_operand:VNX64_QH 0 "register_operand")
> - (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX64_QHI 2 "register_operand")
> - (match_operand 3 "<VNX64_QH:gs_extension>")
> - (match_operand 4 "<VNX64_QH:gs_scale>")
> - (match_operand 5 "autovec_length_operand")
> - (match_operand 6 "const_0_operand")
> - (match_operand:<VNX64_QH:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO2:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> @@ -170,15 +155,15 @@
> ;; larger SEW. Since RVV indexed load/store support zero extend
> ;; implicitly and not support scaling, we should only allow
> ;; operands[3] and operands[4] to be const_1_operand.
> -(define_expand "len_mask_gather_load<mode><mode>"
> - [(match_operand:VNX128_Q 0 "register_operand")
> +(define_expand "len_mask_gather_load<RATIO1:mode><RATIO1:mode>"
> + [(match_operand:RATIO1 0 "register_operand")
> (match_operand 1 "pmode_reg_or_0_operand")
> - (match_operand:VNX128_Q 2 "register_operand")
> - (match_operand 3 "const_1_operand")
> - (match_operand 4 "const_1_operand")
> + (match_operand:RATIO1 2 "register_operand")
> + (match_operand 3 "<RATIO1:gs_extension>")
> + (match_operand 4 "<RATIO1:gs_scale>")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO1:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, true);
> @@ -189,105 +174,90 @@
> ;; == Scatter Store
> ;;
> =========================================================================
>
> -(define_expand "len_mask_scatter_store<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
> - [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX1_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX1_QHSD:gs_extension>")
> - (match_operand 3 "<VNX1_QHSD:gs_scale>")
> - (match_operand:VNX1_QHSD 4 "register_operand")
> - (match_operand 5 "autovec_length_operand")
> - (match_operand 6 "const_0_operand")
> - (match_operand:<VNX1_QHSD:VM> 7 "vector_mask_operand")]
> - "TARGET_VECTOR"
> -{
> - riscv_vector::expand_gather_scatter (operands, false);
> - DONE;
> -})
> -
> -(define_expand "len_mask_scatter_store<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO64:mode><RATIO64I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX2_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX2_QHSD:gs_extension>")
> - (match_operand 3 "<VNX2_QHSD:gs_scale>")
> - (match_operand:VNX2_QHSD 4 "register_operand")
> + (match_operand:RATIO64I 1 "register_operand")
> + (match_operand 2 "<RATIO64:gs_extension>")
> + (match_operand 3 "<RATIO64:gs_scale>")
> + (match_operand:RATIO64 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX2_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO64:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO32:mode><RATIO32I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX4_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX4_QHSD:gs_extension>")
> - (match_operand 3 "<VNX4_QHSD:gs_scale>")
> - (match_operand:VNX4_QHSD 4 "register_operand")
> + (match_operand:RATIO32I 1 "register_operand")
> + (match_operand 2 "<RATIO32:gs_extension>")
> + (match_operand 3 "<RATIO32:gs_scale>")
> + (match_operand:RATIO32 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX4_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO32:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO16:mode><RATIO16I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX8_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX8_QHSD:gs_extension>")
> - (match_operand 3 "<VNX8_QHSD:gs_scale>")
> - (match_operand:VNX8_QHSD 4 "register_operand")
> + (match_operand:RATIO16I 1 "register_operand")
> + (match_operand 2 "<RATIO16:gs_extension>")
> + (match_operand 3 "<RATIO16:gs_scale>")
> + (match_operand:RATIO16 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX8_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO16:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO8:mode><RATIO8I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX16_QHSDI 1 "register_operand")
> - (match_operand 2 "<VNX16_QHSD:gs_extension>")
> - (match_operand 3 "<VNX16_QHSD:gs_scale>")
> - (match_operand:VNX16_QHSD 4 "register_operand")
> + (match_operand:RATIO8I 1 "register_operand")
> + (match_operand 2 "<RATIO8:gs_extension>")
> + (match_operand 3 "<RATIO8:gs_scale>")
> + (match_operand:RATIO8 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX16_QHSD:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO8:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX32_QHS:mode><VNX32_QHSI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO4:mode><RATIO4I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX32_QHSI 1 "register_operand")
> - (match_operand 2 "<VNX32_QHS:gs_extension>")
> - (match_operand 3 "<VNX32_QHS:gs_scale>")
> - (match_operand:VNX32_QHS 4 "register_operand")
> + (match_operand:RATIO4I 1 "register_operand")
> + (match_operand 2 "<RATIO4:gs_extension>")
> + (match_operand 3 "<RATIO4:gs_scale>")
> + (match_operand:RATIO4 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX32_QHS:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO4:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> DONE;
> })
>
> -(define_expand "len_mask_scatter_store<VNX64_QH:mode><VNX64_QHI:mode>"
> +(define_expand "len_mask_scatter_store<RATIO2:mode><RATIO2I:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX64_QHI 1 "register_operand")
> - (match_operand 2 "<VNX64_QH:gs_extension>")
> - (match_operand 3 "<VNX64_QH:gs_scale>")
> - (match_operand:VNX64_QH 4 "register_operand")
> + (match_operand:RATIO2I 1 "register_operand")
> + (match_operand 2 "<RATIO2:gs_extension>")
> + (match_operand 3 "<RATIO2:gs_scale>")
> + (match_operand:RATIO2 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VNX64_QH:VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO2:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> @@ -298,15 +268,15 @@
> ;; larger SEW. Since RVV indexed load/store support zero extend
> ;; implicitly and not support scaling, we should only allow
> ;; operands[3] and operands[4] to be const_1_operand.
> -(define_expand "len_mask_scatter_store<mode><mode>"
> +(define_expand "len_mask_scatter_store<RATIO1:mode><RATIO1:mode>"
> [(match_operand 0 "pmode_reg_or_0_operand")
> - (match_operand:VNX128_Q 1 "register_operand")
> - (match_operand 2 "const_1_operand")
> - (match_operand 3 "const_1_operand")
> - (match_operand:VNX128_Q 4 "register_operand")
> + (match_operand:RATIO1 1 "register_operand")
> + (match_operand 2 "<RATIO1:gs_extension>")
> + (match_operand 3 "<RATIO1:gs_scale>")
> + (match_operand:RATIO1 4 "register_operand")
> (match_operand 5 "autovec_length_operand")
> (match_operand 6 "const_0_operand")
> - (match_operand:<VM> 7 "vector_mask_operand")]
> + (match_operand:<RATIO1:VM> 7 "vector_mask_operand")]
> "TARGET_VECTOR"
> {
> riscv_vector::expand_gather_scatter (operands, false);
> diff --git a/gcc/config/riscv/riscv-modes.def
> b/gcc/config/riscv/riscv-modes.def
> index 1d152709ddc..d6b90e9e304 100644
> --- a/gcc/config/riscv/riscv-modes.def
> +++ b/gcc/config/riscv/riscv-modes.def
> @@ -27,311 +27,287 @@ FLOAT_MODE (TF, 16, ieee_quad_format);
> /* Encode the ratio of SEW/LMUL into the mask types. There are the
> following
> * mask types. */
>
> -/* | Mode | MIN_VLEN = 32 | MIN_VLEN = 64 | MIN_VLEN = 128 |
> - | | SEW/LMUL | SEW/LMUL | SEW/LMUL |
> - | VNx1BI | 32 | 64 | 128 |
> - | VNx2BI | 16 | 32 | 64 |
> - | VNx4BI | 8 | 16 | 32 |
> - | VNx8BI | 4 | 8 | 16 |
> - | VNx16BI | 2 | 4 | 8 |
> - | VNx32BI | 1 | 2 | 4 |
> - | VNx64BI | N/A | 1 | 2 |
> - | VNx128BI | N/A | N/A | 1 | */
> +/* Encode the ratio of SEW/LMUL into the mask types.
> + There are the following mask types.
> +
> + n = SEW/LMUL
> +
> + |Modes| n = 1 | n = 2 | n = 4 | n = 8 | n = 16 | n = 32 | n = 64 |
> + |BI |RVVM1BI |RVVMF2BI |RVVMF4BI |RVVMF8BI |RVVMF16BI |RVVMF32BI
> |RVVMF64BI | */
>
> /* For RVV modes, each boolean value occupies 1-bit.
> 4th argument is specify the minmial possible size of the vector mode,
> and will adjust to the right size by ADJUST_BYTESIZE. */
> -VECTOR_BOOL_MODE (VNx1BI, 1, BI, 1);
> -VECTOR_BOOL_MODE (VNx2BI, 2, BI, 1);
> -VECTOR_BOOL_MODE (VNx4BI, 4, BI, 1);
> -VECTOR_BOOL_MODE (VNx8BI, 8, BI, 1);
> -VECTOR_BOOL_MODE (VNx16BI, 16, BI, 2);
> -VECTOR_BOOL_MODE (VNx32BI, 32, BI, 4);
> -VECTOR_BOOL_MODE (VNx64BI, 64, BI, 8);
> -VECTOR_BOOL_MODE (VNx128BI, 128, BI, 16);
> -
> -ADJUST_NUNITS (VNx1BI, riscv_v_adjust_nunits (VNx1BImode, 1));
> -ADJUST_NUNITS (VNx2BI, riscv_v_adjust_nunits (VNx2BImode, 2));
> -ADJUST_NUNITS (VNx4BI, riscv_v_adjust_nunits (VNx4BImode, 4));
> -ADJUST_NUNITS (VNx8BI, riscv_v_adjust_nunits (VNx8BImode, 8));
> -ADJUST_NUNITS (VNx16BI, riscv_v_adjust_nunits (VNx16BImode, 16));
> -ADJUST_NUNITS (VNx32BI, riscv_v_adjust_nunits (VNx32BImode, 32));
> -ADJUST_NUNITS (VNx64BI, riscv_v_adjust_nunits (VNx64BImode, 64));
> -ADJUST_NUNITS (VNx128BI, riscv_v_adjust_nunits (VNx128BImode, 128));
> -
> -ADJUST_ALIGNMENT (VNx1BI, 1);
> -ADJUST_ALIGNMENT (VNx2BI, 1);
> -ADJUST_ALIGNMENT (VNx4BI, 1);
> -ADJUST_ALIGNMENT (VNx8BI, 1);
> -ADJUST_ALIGNMENT (VNx16BI, 1);
> -ADJUST_ALIGNMENT (VNx32BI, 1);
> -ADJUST_ALIGNMENT (VNx64BI, 1);
> -ADJUST_ALIGNMENT (VNx128BI, 1);
> -
> -ADJUST_BYTESIZE (VNx1BI, riscv_v_adjust_bytesize (VNx1BImode, 1));
> -ADJUST_BYTESIZE (VNx2BI, riscv_v_adjust_bytesize (VNx2BImode, 1));
> -ADJUST_BYTESIZE (VNx4BI, riscv_v_adjust_bytesize (VNx4BImode, 1));
> -ADJUST_BYTESIZE (VNx8BI, riscv_v_adjust_bytesize (VNx8BImode, 1));
> -ADJUST_BYTESIZE (VNx16BI, riscv_v_adjust_bytesize (VNx16BImode, 2));
> -ADJUST_BYTESIZE (VNx32BI, riscv_v_adjust_bytesize (VNx32BImode, 4));
> -ADJUST_BYTESIZE (VNx64BI, riscv_v_adjust_bytesize (VNx64BImode, 8));
> -ADJUST_BYTESIZE (VNx128BI, riscv_v_adjust_bytesize (VNx128BImode, 16));
> -
> -ADJUST_PRECISION (VNx1BI, riscv_v_adjust_precision (VNx1BImode, 1));
> -ADJUST_PRECISION (VNx2BI, riscv_v_adjust_precision (VNx2BImode, 2));
> -ADJUST_PRECISION (VNx4BI, riscv_v_adjust_precision (VNx4BImode, 4));
> -ADJUST_PRECISION (VNx8BI, riscv_v_adjust_precision (VNx8BImode, 8));
> -ADJUST_PRECISION (VNx16BI, riscv_v_adjust_precision (VNx16BImode, 16));
> -ADJUST_PRECISION (VNx32BI, riscv_v_adjust_precision (VNx32BImode, 32));
> -ADJUST_PRECISION (VNx64BI, riscv_v_adjust_precision (VNx64BImode, 64));
> -ADJUST_PRECISION (VNx128BI, riscv_v_adjust_precision (VNx128BImode, 128));
> -
> -/*
> - | Mode | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64
> | MIN_VLEN=128 | MIN_VLEN=128 |
> - | | LMUL | SEW/LMUL | LMUL | SEW/LMUL
> | LMUL | SEW/LMUL |
> - | VNx1QI | MF4 | 32 | MF8 | 64
> | N/A | N/A |
> - | VNx2QI | MF2 | 16 | MF4 | 32
> | MF8 | 64 |
> - | VNx4QI | M1 | 8 | MF2 | 16
> | MF4 | 32 |
> - | VNx8QI | M2 | 4 | M1 | 8
> | MF2 | 16 |
> - | VNx16QI | M4 | 2 | M2 | 4
> | M1 | 8 |
> - | VNx32QI | M8 | 1 | M4 | 2
> | M2 | 4 |
> - | VNx64QI | N/A | N/A | M8 | 1
> | M4 | 2 |
> - | VNx128QI | N/A | N/A | N/A | N/A
> | M8 | 1 |
> - | VNx1(HI|HF) | MF2 | 32 | MF4 | 64
> | N/A | N/A |
> - | VNx2(HI|HF) | M1 | 16 | MF2 | 32
> | MF4 | 64 |
> - | VNx4(HI|HF) | M2 | 8 | M1 | 16
> | MF2 | 32 |
> - | VNx8(HI|HF) | M4 | 4 | M2 | 8
> | M1 | 16 |
> - | VNx16(HI|HF)| M8 | 2 | M4 | 4
> | M2 | 8 |
> - | VNx32(HI|HF)| N/A | N/A | M8 | 2
> | M4 | 4 |
> - | VNx64(HI|HF)| N/A | N/A | N/A | N/A
> | M8 | 2 |
> - | VNx1(SI|SF) | M1 | 32 | MF2 | 64
> | MF2 | 64 |
> - | VNx2(SI|SF) | M2 | 16 | M1 | 32
> | M1 | 32 |
> - | VNx4(SI|SF) | M4 | 8 | M2 | 16
> | M2 | 16 |
> - | VNx8(SI|SF) | M8 | 4 | M4 | 8
> | M4 | 8 |
> - | VNx16(SI|SF)| N/A | N/A | M8 | 4
> | M8 | 4 |
> - | VNx1(DI|DF) | N/A | N/A | M1 | 64
> | N/A | N/A |
> - | VNx2(DI|DF) | N/A | N/A | M2 | 32
> | M1 | 64 |
> - | VNx4(DI|DF) | N/A | N/A | M4 | 16
> | M2 | 32 |
> - | VNx8(DI|DF) | N/A | N/A | M8 | 8
> | M4 | 16 |
> - | VNx16(DI|DF)| N/A | N/A | N/A | N/A
> | M8 | 8 |
> -*/
> -
> -/* Define RVV modes whose sizes are multiples of 64-bit chunks. */
> -#define RVV_MODES(NVECS, VB, VH, VS, VD)
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 8 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, INT, HI, 4 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, HF, 4 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, INT, SI, 2 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, SF, 2 * NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, INT, DI, NVECS, 0);
> \
> - VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, DF, NVECS, 0);
> \
> +VECTOR_BOOL_MODE (RVVM1BI, 64, BI, 8);
> +VECTOR_BOOL_MODE (RVVMF2BI, 32, BI, 4);
> +VECTOR_BOOL_MODE (RVVMF4BI, 16, BI, 2);
> +VECTOR_BOOL_MODE (RVVMF8BI, 8, BI, 1);
> +VECTOR_BOOL_MODE (RVVMF16BI, 4, BI, 1);
> +VECTOR_BOOL_MODE (RVVMF32BI, 2, BI, 1);
> +VECTOR_BOOL_MODE (RVVMF64BI, 1, BI, 1);
> +
> +ADJUST_NUNITS (RVVM1BI, riscv_v_adjust_nunits (RVVM1BImode, 64));
> +ADJUST_NUNITS (RVVMF2BI, riscv_v_adjust_nunits (RVVMF2BImode, 32));
> +ADJUST_NUNITS (RVVMF4BI, riscv_v_adjust_nunits (RVVMF4BImode, 16));
> +ADJUST_NUNITS (RVVMF8BI, riscv_v_adjust_nunits (RVVMF8BImode, 8));
> +ADJUST_NUNITS (RVVMF16BI, riscv_v_adjust_nunits (RVVMF16BImode, 4));
> +ADJUST_NUNITS (RVVMF32BI, riscv_v_adjust_nunits (RVVMF32BImode, 2));
> +ADJUST_NUNITS (RVVMF64BI, riscv_v_adjust_nunits (RVVMF64BImode, 1));
> +
> +ADJUST_ALIGNMENT (RVVM1BI, 1);
> +ADJUST_ALIGNMENT (RVVMF2BI, 1);
> +ADJUST_ALIGNMENT (RVVMF4BI, 1);
> +ADJUST_ALIGNMENT (RVVMF8BI, 1);
> +ADJUST_ALIGNMENT (RVVMF16BI, 1);
> +ADJUST_ALIGNMENT (RVVMF32BI, 1);
> +ADJUST_ALIGNMENT (RVVMF64BI, 1);
> +
> +ADJUST_PRECISION (RVVM1BI, riscv_v_adjust_precision (RVVM1BImode, 64));
> +ADJUST_PRECISION (RVVMF2BI, riscv_v_adjust_precision (RVVMF2BImode, 32));
> +ADJUST_PRECISION (RVVMF4BI, riscv_v_adjust_precision (RVVMF4BImode, 16));
> +ADJUST_PRECISION (RVVMF8BI, riscv_v_adjust_precision (RVVMF8BImode, 8));
> +ADJUST_PRECISION (RVVMF16BI, riscv_v_adjust_precision (RVVMF16BImode, 4));
> +ADJUST_PRECISION (RVVMF32BI, riscv_v_adjust_precision (RVVMF32BImode, 2));
> +ADJUST_PRECISION (RVVMF64BI, riscv_v_adjust_precision (RVVMF64BImode, 1));
> +
> +ADJUST_BYTESIZE (RVVM1BI, riscv_v_adjust_bytesize (RVVM1BImode, 8));
> +ADJUST_BYTESIZE (RVVMF2BI, riscv_v_adjust_bytesize (RVVMF2BImode, 4));
> +ADJUST_BYTESIZE (RVVMF4BI, riscv_v_adjust_bytesize (RVVMF4BImode, 2));
> +ADJUST_BYTESIZE (RVVMF8BI, riscv_v_adjust_bytesize (RVVMF8BImode, 1));
> +ADJUST_BYTESIZE (RVVMF16BI, riscv_v_adjust_bytesize (RVVMF16BImode, 1));
> +ADJUST_BYTESIZE (RVVMF32BI, riscv_v_adjust_bytesize (RVVMF32BImode, 1));
> +ADJUST_BYTESIZE (RVVMF64BI, riscv_v_adjust_bytesize (RVVMF64BImode, 1));
> +
> +/* Encode SEW and LMUL into data types.
> + We enforce the constraint LMUL ≥ SEW/ELEN in the implementation.
> + There are the following data types for ELEN = 64.
> +
> + |Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
> + |DI |RVVM1DI|RVVM2DI|RVVM4DI|RVVM8DI|N/A |N/A |N/A |
> + |SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|RVVMF2SI|N/A |N/A |
> + |HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|RVVMF4HI|N/A |
> + |QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|RVVMF8QI|
> + |DF |RVVM1DF|RVVM2DF|RVVM4DF|RVVM8DF|N/A |N/A |N/A |
> + |SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|RVVMF2SF|N/A |N/A |
> + |HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|RVVMF4HF|N/A |
> +
> +There are the following data types for ELEN = 32.
> +
> + |Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
> + |SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|N/A |N/A |N/A |
> + |HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|N/A |N/A |
> + |QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|N/A |
> + |SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|N/A |N/A |N/A |
> + |HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|N/A |N/A | */
> +
> +#define RVV_WHOLE_MODES(LMUL)
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, INT, QI, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, INT, HI, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, HF, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, INT, SI, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, SF, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, INT, DI, LMUL, 0);
> \
> + VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, DF, LMUL, 0);
> \
> +
> \
> + ADJUST_NUNITS (RVVM##LMUL##QI,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##QImode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##HI,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##HImode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##SI,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##SImode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##DI,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##DImode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##HF,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##HFmode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##SF,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##SFmode, false, LMUL,
> 1)); \
> + ADJUST_NUNITS (RVVM##LMUL##DF,
> \
> + riscv_v_adjust_nunits (RVVM##LMUL##DFmode, false, LMUL,
> 1)); \
> +
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##QI, 1);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##HI, 2);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##SI, 4);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##DI, 8);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##HF, 2);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##SF, 4);
> \
> + ADJUST_ALIGNMENT (RVVM##LMUL##DF, 8);
> +
> +RVV_WHOLE_MODES (1)
> +RVV_WHOLE_MODES (2)
> +RVV_WHOLE_MODES (4)
> +RVV_WHOLE_MODES (8)
> +
> +#define RVV_FRACT_MODE(TYPE, MODE, LMUL, ALIGN)
> \
> + VECTOR_MODE_WITH_PREFIX (RVVMF, TYPE, MODE, LMUL, 0);
> \
> + ADJUST_NUNITS (RVVMF##LMUL##MODE,
> \
> + riscv_v_adjust_nunits (RVVMF##LMUL##MODE##mode, true,
> LMUL, \
> + 1));
> \
> +
> \
> + ADJUST_ALIGNMENT (RVVMF##LMUL##MODE, ALIGN);
> +
> +RVV_FRACT_MODE (INT, QI, 2, 1)
> +RVV_FRACT_MODE (INT, QI, 4, 1)
> +RVV_FRACT_MODE (INT, QI, 8, 1)
> +RVV_FRACT_MODE (INT, HI, 2, 2)
> +RVV_FRACT_MODE (INT, HI, 4, 2)
> +RVV_FRACT_MODE (FLOAT, HF, 2, 2)
> +RVV_FRACT_MODE (FLOAT, HF, 4, 2)
> +RVV_FRACT_MODE (INT, SI, 2, 4)
> +RVV_FRACT_MODE (FLOAT, SF, 2, 4)
> +
> +/* Tuple modes for segment loads/stores according to NF.
> +
> + Tuple modes format: RVV<LMUL>x<NF><BASEMODE>
> +
> + When LMUL is MF8/MF4/MF2/M1, NF can be 2 ~ 8.
> + When LMUL is M2, NF can be 2 ~ 4.
> + When LMUL is M4, NF can be 4. */
> +
> +#define RVV_NF8_MODES(NF) \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF8x, INT, QI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF4x, INT, QI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, QI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, QI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF4x, INT, HI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, HI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, HI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF4x, FLOAT, HF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF2x, FLOAT, HF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, HF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, SI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, SI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVMF2x, FLOAT, SF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, SF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, DI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, DF, NF, 1); \
> + \
> +  ADJUST_NUNITS (RVVMF8x##NF##QI, riscv_v_adjust_nunits (RVVMF8x##NF##QImode, true, 8, NF)); \
> +  ADJUST_NUNITS (RVVMF4x##NF##QI, riscv_v_adjust_nunits (RVVMF4x##NF##QImode, true, 4, NF)); \
> +  ADJUST_NUNITS (RVVMF2x##NF##QI, riscv_v_adjust_nunits (RVVMF2x##NF##QImode, true, 2, NF)); \
> +  ADJUST_NUNITS (RVVM1x##NF##QI, riscv_v_adjust_nunits (RVVM1x##NF##QImode, false, 1, NF)); \
> +  ADJUST_NUNITS (RVVMF4x##NF##HI, riscv_v_adjust_nunits (RVVMF4x##NF##HImode, true, 4, NF)); \
> +  ADJUST_NUNITS (RVVMF2x##NF##HI, riscv_v_adjust_nunits (RVVMF2x##NF##HImode, true, 2, NF)); \
> +  ADJUST_NUNITS (RVVM1x##NF##HI, riscv_v_adjust_nunits (RVVM1x##NF##HImode, false, 1, NF)); \
> +  ADJUST_NUNITS (RVVMF4x##NF##HF, riscv_v_adjust_nunits (RVVMF4x##NF##HFmode, true, 4, NF)); \
> +  ADJUST_NUNITS (RVVMF2x##NF##HF, riscv_v_adjust_nunits (RVVMF2x##NF##HFmode, true, 2, NF)); \
> +  ADJUST_NUNITS (RVVM1x##NF##HF, riscv_v_adjust_nunits (RVVM1x##NF##HFmode, false, 1, NF)); \
> +  ADJUST_NUNITS (RVVMF2x##NF##SI, riscv_v_adjust_nunits (RVVMF2x##NF##SImode, true, 2, NF)); \
> +  ADJUST_NUNITS (RVVM1x##NF##SI, riscv_v_adjust_nunits (RVVM1x##NF##SImode, false, 1, NF)); \
> +  ADJUST_NUNITS (RVVMF2x##NF##SF, riscv_v_adjust_nunits (RVVMF2x##NF##SFmode, true, 2, NF)); \
> +  ADJUST_NUNITS (RVVM1x##NF##SF, riscv_v_adjust_nunits (RVVM1x##NF##SFmode, false, 1, NF)); \
> +  ADJUST_NUNITS (RVVM1x##NF##DI, riscv_v_adjust_nunits (RVVM1x##NF##DImode, false, 1, NF)); \
> +  ADJUST_NUNITS (RVVM1x##NF##DF, riscv_v_adjust_nunits (RVVM1x##NF##DFmode, false, 1, NF)); \
> + \
> +  ADJUST_ALIGNMENT (RVVMF8x##NF##QI, 1); \
> +  ADJUST_ALIGNMENT (RVVMF4x##NF##QI, 1); \
> +  ADJUST_ALIGNMENT (RVVMF2x##NF##QI, 1); \
> +  ADJUST_ALIGNMENT (RVVM1x##NF##QI, 1); \
> +  ADJUST_ALIGNMENT (RVVMF4x##NF##HI, 2); \
> +  ADJUST_ALIGNMENT (RVVMF2x##NF##HI, 2); \
> +  ADJUST_ALIGNMENT (RVVM1x##NF##HI, 2); \
> +  ADJUST_ALIGNMENT (RVVMF4x##NF##HF, 2); \
> +  ADJUST_ALIGNMENT (RVVMF2x##NF##HF, 2); \
> +  ADJUST_ALIGNMENT (RVVM1x##NF##HF, 2); \
> +  ADJUST_ALIGNMENT (RVVMF2x##NF##SI, 4); \
> +  ADJUST_ALIGNMENT (RVVM1x##NF##SI, 4); \
> +  ADJUST_ALIGNMENT (RVVMF2x##NF##SF, 4); \
> +  ADJUST_ALIGNMENT (RVVM1x##NF##SF, 4); \
> +  ADJUST_ALIGNMENT (RVVM1x##NF##DI, 8); \
> +  ADJUST_ALIGNMENT (RVVM1x##NF##DF, 8);
> +
> +RVV_NF8_MODES (8)
> +RVV_NF8_MODES (7)
> +RVV_NF8_MODES (6)
> +RVV_NF8_MODES (5)
> +RVV_NF8_MODES (4)
> +RVV_NF8_MODES (3)
> +RVV_NF8_MODES (2)
> +
> +#define RVV_NF4_MODES(NF) \
> +  VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, QI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, HI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, HF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, SI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, SF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, DI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, DF, NF, 1); \
>   \
> -  ADJUST_NUNITS (VB##QI, riscv_v_adjust_nunits (VB##QI##mode, NVECS * 8)); \
> -  ADJUST_NUNITS (VH##HI, riscv_v_adjust_nunits (VH##HI##mode, NVECS * 4)); \
> -  ADJUST_NUNITS (VS##SI, riscv_v_adjust_nunits (VS##SI##mode, NVECS * 2)); \
> -  ADJUST_NUNITS (VD##DI, riscv_v_adjust_nunits (VD##DI##mode, NVECS)); \
> -  ADJUST_NUNITS (VH##HF, riscv_v_adjust_nunits (VH##HF##mode, NVECS * 4)); \
> -  ADJUST_NUNITS (VS##SF, riscv_v_adjust_nunits (VS##SF##mode, NVECS * 2)); \
> -  ADJUST_NUNITS (VD##DF, riscv_v_adjust_nunits (VD##DF##mode, NVECS)); \
> +  ADJUST_NUNITS (RVVM2x##NF##QI, riscv_v_adjust_nunits (RVVM2x##NF##QImode, false, 2, NF)); \
> +  ADJUST_NUNITS (RVVM2x##NF##HI, riscv_v_adjust_nunits (RVVM2x##NF##HImode, false, 2, NF)); \
> +  ADJUST_NUNITS (RVVM2x##NF##HF, riscv_v_adjust_nunits (RVVM2x##NF##HFmode, false, 2, NF)); \
> +  ADJUST_NUNITS (RVVM2x##NF##SI, riscv_v_adjust_nunits (RVVM2x##NF##SImode, false, 2, NF)); \
> +  ADJUST_NUNITS (RVVM2x##NF##SF, riscv_v_adjust_nunits (RVVM2x##NF##SFmode, false, 2, NF)); \
> +  ADJUST_NUNITS (RVVM2x##NF##DI, riscv_v_adjust_nunits (RVVM2x##NF##DImode, false, 2, NF)); \
> +  ADJUST_NUNITS (RVVM2x##NF##DF, riscv_v_adjust_nunits (RVVM2x##NF##DFmode, false, 2, NF)); \
>   \
> -  ADJUST_ALIGNMENT (VB##QI, 1); \
> -  ADJUST_ALIGNMENT (VH##HI, 2); \
> -  ADJUST_ALIGNMENT (VS##SI, 4); \
> -  ADJUST_ALIGNMENT (VD##DI, 8); \
> -  ADJUST_ALIGNMENT (VH##HF, 2); \
> -  ADJUST_ALIGNMENT (VS##SF, 4); \
> -  ADJUST_ALIGNMENT (VD##DF, 8);
> -
> -RVV_MODES (1, VNx8, VNx4, VNx2, VNx1)
> -RVV_MODES (2, VNx16, VNx8, VNx4, VNx2)
> -RVV_MODES (4, VNx32, VNx16, VNx8, VNx4)
> -RVV_MODES (8, VNx64, VNx32, VNx16, VNx8)
> -RVV_MODES (16, VNx128, VNx64, VNx32, VNx16)
> -
> -VECTOR_MODES_WITH_PREFIX (VNx, INT, 4, 0);
> -VECTOR_MODES_WITH_PREFIX (VNx, FLOAT, 4, 0);
> -ADJUST_NUNITS (VNx4QI, riscv_v_adjust_nunits (VNx4QImode, 4));
> -ADJUST_NUNITS (VNx2HI, riscv_v_adjust_nunits (VNx2HImode, 2));
> -ADJUST_NUNITS (VNx2HF, riscv_v_adjust_nunits (VNx2HFmode, 2));
> -ADJUST_ALIGNMENT (VNx4QI, 1);
> -ADJUST_ALIGNMENT (VNx2HI, 2);
> -ADJUST_ALIGNMENT (VNx2HF, 2);
> -
> -/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
> -   So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1SImode and VNx1SFmode. */
> -VECTOR_MODE_WITH_PREFIX (VNx, INT, SI, 1, 0);
> -VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, SF, 1, 0);
> -ADJUST_NUNITS (VNx1SI, riscv_v_adjust_nunits (VNx1SImode, 1));
> -ADJUST_NUNITS (VNx1SF, riscv_v_adjust_nunits (VNx1SFmode, 1));
> -ADJUST_ALIGNMENT (VNx1SI, 4);
> -ADJUST_ALIGNMENT (VNx1SF, 4);
> -
> -VECTOR_MODES_WITH_PREFIX (VNx, INT, 2, 0);
> -ADJUST_NUNITS (VNx2QI, riscv_v_adjust_nunits (VNx2QImode, 2));
> -ADJUST_ALIGNMENT (VNx2QI, 1);
> -
> -/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
> -   So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1HImode and VNx1HFmode. */
> -VECTOR_MODE_WITH_PREFIX (VNx, INT, HI, 1, 0);
> -VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, HF, 1, 0);
> -ADJUST_NUNITS (VNx1HI, riscv_v_adjust_nunits (VNx1HImode, 1));
> -ADJUST_NUNITS (VNx1HF, riscv_v_adjust_nunits (VNx1HFmode, 1));
> -ADJUST_ALIGNMENT (VNx1HI, 2);
> -ADJUST_ALIGNMENT (VNx1HF, 2);
> -
> -/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
> - So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1QImode. */
> -VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 1, 0);
> -ADJUST_NUNITS (VNx1QI, riscv_v_adjust_nunits (VNx1QImode, 1));
> -ADJUST_ALIGNMENT (VNx1QI, 1);
> -
> -/* Tuple modes for segment loads/stores according to NF, NF value can be 2 ~ 8. */
> -
> -/*
> -   | Mode           | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64 | MIN_VLEN=128 | MIN_VLEN=128 |
> -   |                | LMUL        | SEW/LMUL    | LMUL        | SEW/LMUL    | LMUL         | SEW/LMUL     |
> -   | VNxNFx1QI      | MF4         | 32          | MF8         | 64          | N/A          | N/A          |
> -   | VNxNFx2QI      | MF2         | 16          | MF4         | 32          | MF8          | 64           |
> -   | VNxNFx4QI      | M1          | 8           | MF2         | 16          | MF4          | 32           |
> -   | VNxNFx8QI      | M2          | 4           | M1          | 8           | MF2          | 16           |
> -   | VNxNFx16QI     | M4          | 2           | M2          | 4           | M1           | 8            |
> -   | VNxNFx32QI     | M8          | 1           | M4          | 2           | M2           | 4            |
> -   | VNxNFx64QI     | N/A         | N/A         | M8          | 1           | M4           | 2            |
> -   | VNxNFx128QI    | N/A         | N/A         | N/A         | N/A         | M8           | 1            |
> -   | VNxNFx1(HI|HF) | MF2         | 32          | MF4         | 64          | N/A          | N/A          |
> -   | VNxNFx2(HI|HF) | M1          | 16          | MF2         | 32          | MF4          | 64           |
> -   | VNxNFx4(HI|HF) | M2          | 8           | M1          | 16          | MF2          | 32           |
> -   | VNxNFx8(HI|HF) | M4          | 4           | M2          | 8           | M1           | 16           |
> -   | VNxNFx16(HI|HF)| M8          | 2           | M4          | 4           | M2           | 8            |
> -   | VNxNFx32(HI|HF)| N/A         | N/A         | M8          | 2           | M4           | 4            |
> -   | VNxNFx64(HI|HF)| N/A         | N/A         | N/A         | N/A         | M8           | 2            |
> -   | VNxNFx1(SI|SF) | M1          | 32          | MF2         | 64          | MF2          | 64           |
> -   | VNxNFx2(SI|SF) | M2          | 16          | M1          | 32          | M1           | 32           |
> -   | VNxNFx4(SI|SF) | M4          | 8           | M2          | 16          | M2           | 16           |
> -   | VNxNFx8(SI|SF) | M8          | 4           | M4          | 8           | M4           | 8            |
> -   | VNxNFx16(SI|SF)| N/A         | N/A         | M8          | 4           | M8           | 4            |
> -   | VNxNFx1(DI|DF) | N/A         | N/A         | M1          | 64          | N/A          | N/A          |
> -   | VNxNFx2(DI|DF) | N/A         | N/A         | M2          | 32          | M1           | 64           |
> -   | VNxNFx4(DI|DF) | N/A         | N/A         | M4          | 16          | M2           | 32           |
> -   | VNxNFx8(DI|DF) | N/A         | N/A         | M8          | 8           | M4           | 16           |
> -   | VNxNFx16(DI|DF)| N/A         | N/A         | N/A         | N/A         | M8           | 8            |
> -*/
> -
> -#define RVV_TUPLE_MODES(NBYTES, NSUBPARTS, VB, VH, VS, VD) \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, NBYTES, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, NBYTES / 2, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, NBYTES / 2, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, NBYTES / 4, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, NBYTES / 4, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, DI, NBYTES / 8, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, DF, NBYTES / 8, 1); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x##VB##QI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VB##QI##mode, VB * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HI##mode, VH * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SI##mode, VS * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DI##mode, VD * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HF, riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HF##mode, VH * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SF, riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SF##mode, VS * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DF, riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DF##mode, VD * NSUBPARTS)); \
> +  ADJUST_ALIGNMENT (RVVM2x##NF##QI, 1); \
> +  ADJUST_ALIGNMENT (RVVM2x##NF##HI, 2); \
> +  ADJUST_ALIGNMENT (RVVM2x##NF##HF, 2); \
> +  ADJUST_ALIGNMENT (RVVM2x##NF##SI, 4); \
> +  ADJUST_ALIGNMENT (RVVM2x##NF##SF, 4); \
> +  ADJUST_ALIGNMENT (RVVM2x##NF##DI, 8); \
> +  ADJUST_ALIGNMENT (RVVM2x##NF##DF, 8);
> +
> +RVV_NF4_MODES (2)
> +RVV_NF4_MODES (3)
> +RVV_NF4_MODES (4)
> +
> +#define RVV_NF2_MODES(NF) \
> +  VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, QI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, HI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, HF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, SI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, SF, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, DI, NF, 1); \
> +  VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, DF, NF, 1); \
>   \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VB##QI, 1); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HI, 2); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SI, 4); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DI, 8); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HF, 2); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SF, 4); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DF, 8);
> -
> -RVV_TUPLE_MODES (8, 2, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 3, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 4, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 5, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 6, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 7, 8, 4, 2, 1)
> -RVV_TUPLE_MODES (8, 8, 8, 4, 2, 1)
> -
> -RVV_TUPLE_MODES (16, 2, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 3, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 4, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 5, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 6, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 7, 16, 8, 4, 2)
> -RVV_TUPLE_MODES (16, 8, 16, 8, 4, 2)
> -
> -RVV_TUPLE_MODES (32, 2, 32, 16, 8, 4)
> -RVV_TUPLE_MODES (32, 3, 32, 16, 8, 4)
> -RVV_TUPLE_MODES (32, 4, 32, 16, 8, 4)
> -
> -RVV_TUPLE_MODES (64, 2, 64, 32, 16, 8)
> -
> -#define RVV_TUPLE_PARTIAL_MODES(NSUBPARTS) \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 1, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 1, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, 1, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, 1, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, 1, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 2, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 2, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, 2, 1); \
> -  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 4, 1); \
> +  ADJUST_NUNITS (RVVM4x##NF##QI, riscv_v_adjust_nunits (RVVM4x##NF##QImode, false, 4, NF)); \
> +  ADJUST_NUNITS (RVVM4x##NF##HI, riscv_v_adjust_nunits (RVVM4x##NF##HImode, false, 4, NF)); \
> +  ADJUST_NUNITS (RVVM4x##NF##HF, riscv_v_adjust_nunits (RVVM4x##NF##HFmode, false, 4, NF)); \
> +  ADJUST_NUNITS (RVVM4x##NF##SI, riscv_v_adjust_nunits (RVVM4x##NF##SImode, false, 4, NF)); \
> +  ADJUST_NUNITS (RVVM4x##NF##SF, riscv_v_adjust_nunits (RVVM4x##NF##SFmode, false, 4, NF)); \
> +  ADJUST_NUNITS (RVVM4x##NF##DI, riscv_v_adjust_nunits (RVVM4x##NF##DImode, false, 4, NF)); \
> +  ADJUST_NUNITS (RVVM4x##NF##DF, riscv_v_adjust_nunits (RVVM4x##NF##DFmode, false, 4, NF)); \
>   \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x1QI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x1QI##mode, NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x1HI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HI##mode, NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x1HF, riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HF##mode, NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x1SI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SI##mode, NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x1SF, riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SF##mode, NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x2QI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x2QI##mode, 2 * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x2HI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HI##mode, 2 * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x2HF, riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HF##mode, 2 * NSUBPARTS)); \
> -  ADJUST_NUNITS (VNx##NSUBPARTS##x4QI, riscv_v_adjust_nunits (VNx##NSUBPARTS##x4QI##mode, 4 * NSUBPARTS)); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1QI, 1); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HI, 2); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HF, 2); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SI, 4); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SF, 4); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2QI, 1); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HI, 2); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HF, 2); \
> -  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x4QI, 1);
> -
> -RVV_TUPLE_PARTIAL_MODES (2)
> -RVV_TUPLE_PARTIAL_MODES (3)
> -RVV_TUPLE_PARTIAL_MODES (4)
> -RVV_TUPLE_PARTIAL_MODES (5)
> -RVV_TUPLE_PARTIAL_MODES (6)
> -RVV_TUPLE_PARTIAL_MODES (7)
> -RVV_TUPLE_PARTIAL_MODES (8)
> +  ADJUST_ALIGNMENT (RVVM4x##NF##QI, 1); \
> +  ADJUST_ALIGNMENT (RVVM4x##NF##HI, 2); \
> +  ADJUST_ALIGNMENT (RVVM4x##NF##HF, 2); \
> +  ADJUST_ALIGNMENT (RVVM4x##NF##SI, 4); \
> +  ADJUST_ALIGNMENT (RVVM4x##NF##SF, 4); \
> +  ADJUST_ALIGNMENT (RVVM4x##NF##DI, 8); \
> +  ADJUST_ALIGNMENT (RVVM4x##NF##DF, 8);
> +
> +RVV_NF2_MODES (2)
>
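All the ADJUST_NUNITS hooks above funnel into riscv_v_adjust_nunits, which
now takes (mode, fractional_p, lmul, nf). As I read it, the element count it
encodes is, for a concrete VLEN, roughly the following (my paraphrase of the
scheme, not the patch's exact poly_int code):

  /* Elements of RVV<LMUL>[x<NF>]<MODE>: a whole-LMUL mode multiplies the
     per-register element count, a fractional mode (RVVMF<n>) divides it.  */
  static int
  rvv_nunits (int sew, bool fractional_p, int lmul, int nf, int vlen)
  {
    int per_reg = vlen / sew;	/* elements in one vector register */
    return fractional_p ? per_reg / lmul * nf : per_reg * lmul * nf;
  }

E.g. rvv_nunits (16, true, 4, 1, 128) == 2 for RVVMF4HI, and the adjustment
hook is what rescales this with TARGET_MIN_VLEN instead of minting a new mode
name per configuration.
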
> /* TODO: According to RISC-V 'V' ISA spec, the maximum vector length can
> be 65536 for a single vector register which means the vector mode in
> diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
> index ff1e682f6d0..53088edf909 100644
> --- a/gcc/config/riscv/riscv-v.cc
> +++ b/gcc/config/riscv/riscv-v.cc
> @@ -1550,37 +1550,20 @@ legitimize_move (rtx dest, rtx src)
> /* VTYPE information for machine_mode. */
> struct mode_vtype_group
> {
> - enum vlmul_type vlmul_for_min_vlen32[NUM_MACHINE_MODES];
> - uint8_t ratio_for_min_vlen32[NUM_MACHINE_MODES];
> - enum vlmul_type vlmul_for_min_vlen64[NUM_MACHINE_MODES];
> - uint8_t ratio_for_min_vlen64[NUM_MACHINE_MODES];
> - enum vlmul_type vlmul_for_for_vlen128[NUM_MACHINE_MODES];
> - uint8_t ratio_for_for_vlen128[NUM_MACHINE_MODES];
> + enum vlmul_type vlmul[NUM_MACHINE_MODES];
> + uint8_t ratio[NUM_MACHINE_MODES];
> machine_mode subpart_mode[NUM_MACHINE_MODES];
> uint8_t nf[NUM_MACHINE_MODES];
> mode_vtype_group ()
> {
> -#define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32, RATIO_FOR_MIN_VLEN32, \
> -	      VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64, \
> -	      VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128) \
> -  vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32; \
> -  ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32; \
> -  vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64; \
> -  ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64; \
> -  vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128; \
> -  ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
> -#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
> -		    RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64, \
> -		    RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128, \
> -		    RATIO_FOR_MIN_VLEN128) \
> +#define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO) \
> +  vlmul[MODE##mode] = VLMUL; \
> +  ratio[MODE##mode] = RATIO;
> +#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL, RATIO) \
>    subpart_mode[MODE##mode] = SUBPART_MODE##mode; \
>    nf[MODE##mode] = NF; \
> -  vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32; \
> -  ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32; \
> -  vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64; \
> -  ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64; \
> -  vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128; \
> -  ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
> +  vlmul[MODE##mode] = VLMUL; \
> +  ratio[MODE##mode] = RATIO;
> #include "riscv-vector-switch.def"
> #undef ENTRY
> #undef TUPLE_ENTRY
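The collapse from three (vlmul, ratio) column pairs to one is the heart of the
cleanup: a mode name like RVVM1SI now pins down LMUL by itself, so its VTYPE
data no longer depends on TARGET_MIN_VLEN. For illustration, a row in
riscv-vector-switch.def presumably shrinks along these lines (placeholder
values, shown only to make the shape concrete):

  /* Before: one (vlmul, ratio) pair per MIN_VLEN configuration.  */
  ENTRY (VNx2SI, ..., <vlmul_32>, <ratio_32>, <vlmul_64>, <ratio_64>, <vlmul_128>, <ratio_128>)
  /* After: a single pair; RVVM1SI is LMUL = 1 by name, so ratio = 32.  */
  ENTRY (RVVM1SI, ..., LMUL_1, 32)
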
> @@ -1593,12 +1576,7 @@ static mode_vtype_group mode_vtype_infos;
> enum vlmul_type
> get_vlmul (machine_mode mode)
> {
> - if (TARGET_MIN_VLEN >= 128)
> - return mode_vtype_infos.vlmul_for_for_vlen128[mode];
> - else if (TARGET_MIN_VLEN == 32)
> - return mode_vtype_infos.vlmul_for_min_vlen32[mode];
> - else
> - return mode_vtype_infos.vlmul_for_min_vlen64[mode];
> + return mode_vtype_infos.vlmul[mode];
> }
>
> /* Return the NF value of the corresponding mode. */
> @@ -1610,8 +1588,8 @@ get_nf (machine_mode mode)
> return mode_vtype_infos.nf[mode];
> }
>
> -/* Return the subpart mode of the tuple mode. For VNx2x1SImode,
> - the subpart mode is VNx1SImode. This will help to build
> +/* Return the subpart mode of the tuple mode. For RVVM2x2SImode,
> + the subpart mode is RVVM2SImode. This will help to build
> array/struct type in builtins. */
> machine_mode
> get_subpart_mode (machine_mode mode)
> @@ -1625,12 +1603,7 @@ get_subpart_mode (machine_mode mode)
> unsigned int
> get_ratio (machine_mode mode)
> {
> - if (TARGET_MIN_VLEN >= 128)
> - return mode_vtype_infos.ratio_for_for_vlen128[mode];
> - else if (TARGET_MIN_VLEN == 32)
> - return mode_vtype_infos.ratio_for_min_vlen32[mode];
> - else
> - return mode_vtype_infos.ratio_for_min_vlen64[mode];
> + return mode_vtype_infos.ratio[mode];
> }
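With the per-MIN_VLEN variants gone, the ratio is just SEW/LMUL folded in at
table-generation time. A small sketch of the arithmetic (mine, not from the
patch), which also shows why e.g. RVVMF4HI lands in the ratio-64 bucket:

  /* ratio = SEW / LMUL, with a fractional LMUL written as num/den.
     RVVMF4HI: SEW = 16, LMUL = 1/4 -> ratio = 64.
     RVVM1SI:  SEW = 32, LMUL = 1   -> ratio = 32.  */
  static int
  sew_lmul_ratio (int sew, int lmul_num, int lmul_den)
  {
    return sew * lmul_den / lmul_num;
  }
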
>
> /* Get ta according to operand[tail_op_idx]. */
> @@ -2171,12 +2144,12 @@ preferred_simd_mode (scalar_mode mode)
>    /* We will disable auto-vectorization when TARGET_MIN_VLEN < 128 &&
>       riscv_autovec_lmul < RVV_M2. Since GCC loop vectorizer reports an ICE when we
>       enable -march=rv64gc_zve32* and -march=rv32gc_zve64*. in the
> -     'can_duplicate_and_interleave_p' of tree-vect-slp.cc. Since we have
> -     VNx1SImode in -march=*zve32* and VNx1DImode in -march=*zve64*, they are
> -     enabled in targetm. vector_mode_supported_p and SLP vectorizer will try to
> -     use them. Currently, we can support auto-vectorization in
> -     -march=rv32_zve32x_zvl128b. Wheras, -march=rv32_zve32x_zvl32b or
> -     -march=rv32_zve32x_zvl64b are disabled. */
> +     'can_duplicate_and_interleave_p' of tree-vect-slp.cc. Since both
> +     RVVM1SImode in -march=*zve32*_zvl32b and RVVM1DImode in
> +     -march=*zve64*_zvl64b are NUNITS = poly (1, 1), they will cause an ICE in
> +     the loop vectorizer when we enable them in this target hook. Currently,
> +     we can support auto-vectorization in -march=rv32_zve32x_zvl128b. Whereas,
> +     -march=rv32_zve32x_zvl32b or -march=rv32_zve32x_zvl64b are disabled. */
> if (autovec_use_vlmax_p ())
> {
> if (TARGET_MIN_VLEN < 128 && riscv_autovec_lmul < RVV_M2)
> @@ -2371,9 +2344,9 @@ autovectorize_vector_modes (vector_modes *modes,
> bool)
> poly_uint64 full_size
> = BYTES_PER_RISCV_VECTOR * ((int) riscv_autovec_lmul);
>
> -  /* Start with a VNxYYQImode where YY is the number of units that
> +  /* Start with an RVV<LMUL>QImode where LMUL is the number of units that
>       fit a whole vector.
> -     Then try YY = nunits / 2, nunits / 4 and nunits / 8 which
> +     Then try LMUL = nunits / 2, nunits / 4 and nunits / 8 which
>       is guided by the extensions we have available (vf2, vf4 and vf8).
>
> - full_size: Try using full vectors for all element types.
> diff --git a/gcc/config/riscv/riscv-vector-builtins.cc
> b/gcc/config/riscv/riscv-vector-builtins.cc
> index 3a53b56effa..528dca7ae85 100644
> --- a/gcc/config/riscv/riscv-vector-builtins.cc
> +++ b/gcc/config/riscv/riscv-vector-builtins.cc
> @@ -109,10 +109,8 @@ const char *const operand_suffixes[NUM_OP_TYPES] = {
>
> /* Static information about type suffix for each RVV type. */
> const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
> -#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
> -	     VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
> -	     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX, \
> -	     VSETVL_SUFFIX) \
> +#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
> +	     VECTOR_SUFFIX, SCALAR_SUFFIX, VSETVL_SUFFIX) \
>    {#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
>  #define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE, \
>  		   NF, VECTOR_SUFFIX) \
> @@ -2802,12 +2800,9 @@ register_builtin_types ()
> tree int64_type_node = get_typenode_from_name (INT64_TYPE);
>
> machine_mode mode;
> -#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
> -	     VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
> -	     VECTOR_MODE_MIN_VLEN_32, ARGS...) \
> -  mode = TARGET_MIN_VLEN >= 128 ? VECTOR_MODE_MIN_VLEN_128##mode \
> -	 : TARGET_MIN_VLEN >= 64 ? VECTOR_MODE_MIN_VLEN_64##mode \
> -	 : VECTOR_MODE_MIN_VLEN_32##mode; \
> +#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
> +	     ARGS...) \
> +  mode = VECTOR_MODE##mode; \
>    register_builtin_type (VECTOR_TYPE_##NAME, SCALAR_TYPE##_type_node, mode);
>  #define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE, \
>  		   NF, VECTOR_SUFFIX) \
> diff --git a/gcc/config/riscv/riscv-vector-builtins.def
> b/gcc/config/riscv/riscv-vector-builtins.def
> index 1e9457953f8..0e49480703b 100644
> --- a/gcc/config/riscv/riscv-vector-builtins.def
> +++ b/gcc/config/riscv/riscv-vector-builtins.def
> @@ -28,24 +28,19 @@ along with GCC; see the file COPYING3. If not see
>    "build_vector_type_for_mode". For "vint32m1_t", we use "intSI_type_node" in
>    RV64. Otherwise, we use "long_integer_type_node".
>    5.The 'VECTOR_MODE' is the machine mode of the corresponding RVV type used
> -  in "build_vector_type_for_mode" when TARGET_MIN_VLEN > 32.
> -  For example: VECTOR_MODE = VNx2SI for "vint32m1_t".
> -  6.The 'VECTOR_MODE_MIN_VLEN_32' is the machine modes of corresponding RVV
> -  type used in "build_vector_type_for_mode" when TARGET_MIN_VLEN = 32. For
> -  example: VECTOR_MODE_MIN_VLEN_32 = VNx1SI for "vint32m1_t".
> -  7.The 'VECTOR_SUFFIX' define mode suffix for vector type.
> +  in "build_vector_type_for_mode".
> +  For example: VECTOR_MODE = RVVM1SImode for "vint32m1_t".
> +  6.The 'VECTOR_SUFFIX' defines the mode suffix for the vector type.
>    For example: type_suffixes[VECTOR_TYPE_vin32m1_t].vector = i32m1.
> -  8.The 'SCALAR_SUFFIX' define mode suffix for scalar type.
> +  7.The 'SCALAR_SUFFIX' defines the mode suffix for the scalar type.
>    For example: type_suffixes[VECTOR_TYPE_vin32m1_t].scalar = i32.
> -  9.The 'VSETVL_SUFFIX' define mode suffix for vsetvli instruction.
> +  8.The 'VSETVL_SUFFIX' defines the mode suffix for the vsetvli instruction.
>    For example: type_suffixes[VECTOR_TYPE_vin32m1_t].vsetvl = e32m1.
>  */
>
>  #ifndef DEF_RVV_TYPE
> -#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
> -	     VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
> -	     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX, \
> -	     VSETVL_SUFFIX)
> +#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
> +	     VECTOR_SUFFIX, SCALAR_SUFFIX, VSETVL_SUFFIX)
>  #endif
>
> #ifndef DEF_RVV_TUPLE_TYPE
> @@ -101,47 +96,34 @@ along with GCC; see the file COPYING3. If not see
>
> /* SEW/LMUL = 64:
> Only enable when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1BImode when TARGET_MIN_VLEN < 128.
> - Machine mode = VNx2BImode when TARGET_MIN_VLEN >= 128. */
> -DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx2BI, VNx1BI,
> VOID, _b64, , )
> + Machine mode = RVVMF64BImode. */
> +DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, RVVMF64BI, _b64, , )
> /* SEW/LMUL = 32:
> - Machine mode = VNx2BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx4BI, VNx2BI,
> VNx1BI, _b32, , )
> + Machine mode = RVVMF32BImode. */
> +DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, RVVMF32BI, _b32, , )
> /* SEW/LMUL = 16:
> - Machine mode = VNx8BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx2BImode when TARGET_MIN_VLEN = 32.
> - Machine mode = VNx4BImode when TARGET_MIN_VLEN > 32. */
> -DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx8BI, VNx4BI,
> VNx2BI, _b16, , )
> + Machine mode = RVVMF16BImode. */
> +DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, RVVMF16BI, _b16, , )
> /* SEW/LMUL = 8:
> - Machine mode = VNx16BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx8BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx4BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx16BI, VNx8BI,
> VNx4BI, _b8, , )
> + Machine mode = RVVMF8BImode. */
> +DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, RVVMF8BI, _b8, , )
> /* SEW/LMUL = 4:
> - Machine mode = VNx32BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx16BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx8BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx32BI, VNx16BI,
> VNx8BI, _b4, , )
> + Machine mode = RVVMF4BImode. */
> +DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, RVVMF4BI, _b4, , )
> /* SEW/LMUL = 2:
> - Machine mode = VNx64BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx32BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx16BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx64BI, VNx32BI,
> VNx16BI, _b2, , )
> + Machine mode = RVVMF2BImode. */
> +DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, RVVMF2BI, _b2, , )
> /* SEW/LMUL = 1:
> - Machine mode = VNx128BImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx64BImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx32BImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx128BI, VNx64BI,
> VNx32BI, _b1, , )
> + Machine mode = RVVM1BImode. */
> +DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, RVVM1BI, _b1, , )
>
> /* LMUL = 1/8:
>    Only enable when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1QImode when TARGET_MIN_VLEN < 128.
> - Machine mode = VNx2QImode when TARGET_MIN_VLEN >= 128. */
> -DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, VNx2QI, VNx1QI,
> VOID, _i8mf8, _i8,
> + Machine mode = RVVMF8QImode. */
> +DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, RVVMF8QI, _i8mf8,
> _i8,
> + _e8mf8)
> +DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, RVVMF8QI, _u8mf8,
> _u8,
> _e8mf8)
> -DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, VNx2QI, VNx1QI,
> VOID, _u8mf8,
> - _u8, _e8mf8)
> /* Define tuple types for SEW = 8, LMUL = MF8. */
> DEF_RVV_TUPLE_TYPE (vint8mf8x2_t, 17, __rvv_int8mf8x2_t, vint8mf8_t,
> int8, 2, _i8mf8x2)
> DEF_RVV_TUPLE_TYPE (vuint8mf8x2_t, 18, __rvv_uint8mf8x2_t, vuint8mf8_t,
> uint8, 2, _u8mf8x2)
> @@ -158,13 +140,11 @@ DEF_RVV_TUPLE_TYPE (vuint8mf8x7_t, 18,
> __rvv_uint8mf8x7_t, vuint8mf8_t, uint8, 7
> DEF_RVV_TUPLE_TYPE (vint8mf8x8_t, 17, __rvv_int8mf8x8_t, vint8mf8_t,
> int8, 8, _i8mf8x8)
> DEF_RVV_TUPLE_TYPE (vuint8mf8x8_t, 18, __rvv_uint8mf8x8_t, vuint8mf8_t,
> uint8, 8, _u8mf8x8)
> /* LMUL = 1/4:
> - Machine mode = VNx4QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, VNx4QI, VNx2QI,
> VNx1QI, _i8mf4,
> - _i8, _e8mf4)
> -DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, VNx4QI, VNx2QI,
> VNx1QI, _u8mf4,
> - _u8, _e8mf4)
> + Machine mode = RVVMF4QImode. */
> +DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, RVVMF4QI, _i8mf4,
> _i8,
> + _e8mf4)
> +DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, RVVMF4QI, _u8mf4,
> _u8,
> + _e8mf4)
> /* Define tuple types for SEW = 8, LMUL = MF4. */
> DEF_RVV_TUPLE_TYPE (vint8mf4x2_t, 17, __rvv_int8mf4x2_t, vint8mf4_t,
> int8, 2, _i8mf4x2)
> DEF_RVV_TUPLE_TYPE (vuint8mf4x2_t, 18, __rvv_uint8mf4x2_t, vuint8mf4_t,
> uint8, 2, _u8mf4x2)
> @@ -181,13 +161,11 @@ DEF_RVV_TUPLE_TYPE (vuint8mf4x7_t, 18,
> __rvv_uint8mf4x7_t, vuint8mf4_t, uint8, 7
> DEF_RVV_TUPLE_TYPE (vint8mf4x8_t, 17, __rvv_int8mf4x8_t, vint8mf4_t,
> int8, 8, _i8mf4x8)
> DEF_RVV_TUPLE_TYPE (vuint8mf4x8_t, 18, __rvv_uint8mf4x8_t, vuint8mf4_t,
> uint8, 8, _u8mf4x8)
> /* LMUL = 1/2:
> - Machine mode = VNx8QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, VNx8QI, VNx4QI,
> VNx2QI, _i8mf2,
> - _i8, _e8mf2)
> -DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, VNx8QI, VNx4QI,
> VNx2QI, _u8mf2,
> - _u8, _e8mf2)
> + Machine mode = RVVMF2QImode. */
> +DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, RVVMF2QI, _i8mf2,
> _i8,
> + _e8mf2)
> +DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, RVVMF2QI, _u8mf2,
> _u8,
> + _e8mf2)
> /* Define tuple types for SEW = 8, LMUL = MF2. */
> DEF_RVV_TUPLE_TYPE (vint8mf2x2_t, 17, __rvv_int8mf2x2_t, vint8mf2_t,
> int8, 2, _i8mf2x2)
> DEF_RVV_TUPLE_TYPE (vuint8mf2x2_t, 18, __rvv_uint8mf2x2_t, vuint8mf2_t,
> uint8, 2, _u8mf2x2)
> @@ -204,13 +182,10 @@ DEF_RVV_TUPLE_TYPE (vuint8mf2x7_t, 18,
> __rvv_uint8mf2x7_t, vuint8mf2_t, uint8, 7
> DEF_RVV_TUPLE_TYPE (vint8mf2x8_t, 17, __rvv_int8mf2x8_t, vint8mf2_t,
> int8, 8, _i8mf2x8)
> DEF_RVV_TUPLE_TYPE (vuint8mf2x8_t, 18, __rvv_uint8mf2x8_t, vuint8mf2_t,
> uint8, 8, _u8mf2x8)
> /* LMUL = 1:
> - Machine mode = VNx16QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, VNx16QI, VNx8QI,
> VNx4QI, _i8m1, _i8,
> + Machine mode = RVVM1QImode. */
> +DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, RVVM1QI, _i8m1, _i8,
> _e8m1)
> +DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, RVVM1QI, _u8m1, _u8,
> _e8m1)
> -DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, VNx16QI, VNx8QI,
> VNx4QI, _u8m1,
> - _u8, _e8m1)
> /* Define tuple types for SEW = 8, LMUL = M1. */
> DEF_RVV_TUPLE_TYPE (vint8m1x2_t, 16, __rvv_int8m1x2_t, vint8m1_t, int8,
> 2, _i8m1x2)
> DEF_RVV_TUPLE_TYPE (vuint8m1x2_t, 17, __rvv_uint8m1x2_t, vuint8m1_t,
> uint8, 2, _u8m1x2)
> @@ -227,13 +202,10 @@ DEF_RVV_TUPLE_TYPE (vuint8m1x7_t, 17,
> __rvv_uint8m1x7_t, vuint8m1_t, uint8, 7, _
> DEF_RVV_TUPLE_TYPE (vint8m1x8_t, 16, __rvv_int8m1x8_t, vint8m1_t, int8,
> 8, _i8m1x8)
> DEF_RVV_TUPLE_TYPE (vuint8m1x8_t, 17, __rvv_uint8m1x8_t, vuint8m1_t,
> uint8, 8, _u8m1x8)
> /* LMUL = 2:
> - Machine mode = VNx32QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, VNx32QI, VNx16QI,
> VNx8QI, _i8m2, _i8,
> + Machine mode = RVVM2QImode. */
> +DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, RVVM2QI, _i8m2, _i8,
> _e8m2)
> +DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, RVVM2QI, _u8m2, _u8,
> _e8m2)
> -DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, VNx32QI, VNx16QI,
> VNx8QI, _u8m2,
> - _u8, _e8m2)
> /* Define tuple types for SEW = 8, LMUL = M2. */
> DEF_RVV_TUPLE_TYPE (vint8m2x2_t, 16, __rvv_int8m2x2_t, vint8m2_t, int8,
> 2, _i8m2x2)
> DEF_RVV_TUPLE_TYPE (vuint8m2x2_t, 17, __rvv_uint8m2x2_t, vuint8m2_t,
> uint8, 2, _u8m2x2)
> @@ -242,33 +214,26 @@ DEF_RVV_TUPLE_TYPE (vuint8m2x3_t, 17,
> __rvv_uint8m2x3_t, vuint8m2_t, uint8, 3, _
> DEF_RVV_TUPLE_TYPE (vint8m2x4_t, 16, __rvv_int8m2x4_t, vint8m2_t, int8,
> 4, _i8m2x4)
> DEF_RVV_TUPLE_TYPE (vuint8m2x4_t, 17, __rvv_uint8m2x4_t, vuint8m2_t,
> uint8, 4, _u8m2x4)
> /* LMUL = 4:
> - Machine mode = VNx64QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, VNx64QI, VNx32QI,
> VNx16QI, _i8m4, _i8,
> + Machine mode = RVVM4QImode. */
> +DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, RVVM4QI, _i8m4, _i8,
> _e8m4)
> +DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, RVVM4QI, _u8m4, _u8,
> _e8m4)
> -DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, VNx64QI, VNx32QI,
> VNx16QI, _u8m4,
> - _u8, _e8m4)
> /* Define tuple types for SEW = 8, LMUL = M4. */
> DEF_RVV_TUPLE_TYPE (vint8m4x2_t, 16, __rvv_int8m4x2_t, vint8m4_t, int8,
> 2, _i8m4x2)
> DEF_RVV_TUPLE_TYPE (vuint8m4x2_t, 17, __rvv_uint8m4x2_t, vuint8m4_t,
> uint8, 2, _u8m4x2)
> /* LMUL = 8:
> - Machine mode = VNx128QImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, VNx128QI, VNx64QI,
> VNx32QI, _i8m8, _i8,
> + Machine mode = RVVM8QImode. */
> +DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, RVVM8QI, _i8m8, _i8,
> _e8m8)
> +DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, RVVM8QI, _u8m8, _u8,
> _e8m8)
> -DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, VNx128QI, VNx64QI,
> VNx32QI, _u8m8,
> - _u8, _e8m8)
>
> /* LMUL = 1/4:
>    Only enable when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1HImode when TARGET_MIN_VLEN < 128.
> - Machine mode = VNx2HImode when TARGET_MIN_VLEN >= 128. */
> -DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, VNx2HI, VNx1HI,
> VOID, _i16mf4,
> - _i16, _e16mf4)
> -DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, VNx2HI,
> VNx1HI, VOID,
> - _u16mf4, _u16, _e16mf4)
> + Machine mode = RVVMF4HImode. */
> +DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, RVVMF4HI,
> _i16mf4, _i16,
> + _e16mf4)
> +DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, RVVMF4HI,
> _u16mf4,
> + _u16, _e16mf4)
> /* Define tuple types for SEW = 16, LMUL = MF4. */
> DEF_RVV_TUPLE_TYPE (vint16mf4x2_t, 18, __rvv_int16mf4x2_t, vint16mf4_t,
> int16, 2, _i16mf4x2)
> DEF_RVV_TUPLE_TYPE (vuint16mf4x2_t, 19, __rvv_uint16mf4x2_t,
> vuint16mf4_t, uint16, 2, _u16mf4x2)
> @@ -285,13 +250,11 @@ DEF_RVV_TUPLE_TYPE (vuint16mf4x7_t, 19,
> __rvv_uint16mf4x7_t, vuint16mf4_t, uint1
> DEF_RVV_TUPLE_TYPE (vint16mf4x8_t, 18, __rvv_int16mf4x8_t, vint16mf4_t,
> int16, 8, _i16mf4x8)
> DEF_RVV_TUPLE_TYPE (vuint16mf4x8_t, 19, __rvv_uint16mf4x8_t,
> vuint16mf4_t, uint16, 8, _u16mf4x8)
> /* LMUL = 1/2:
> - Machine mode = VNx4HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, VNx4HI, VNx2HI,
> VNx1HI, _i16mf2,
> - _i16, _e16mf2)
> -DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, VNx4HI,
> VNx2HI, VNx1HI,
> - _u16mf2, _u16, _e16mf2)
> + Machine mode = RVVMF2HImode. */
> +DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, RVVMF2HI,
> _i16mf2, _i16,
> + _e16mf2)
> +DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, RVVMF2HI,
> _u16mf2,
> + _u16, _e16mf2)
> /* Define tuple types for SEW = 16, LMUL = MF2. */
> DEF_RVV_TUPLE_TYPE (vint16mf2x2_t, 18, __rvv_int16mf2x2_t, vint16mf2_t,
> int16, 2, _i16mf2x2)
> DEF_RVV_TUPLE_TYPE (vuint16mf2x2_t, 19, __rvv_uint16mf2x2_t,
> vuint16mf2_t, uint16, 2, _u16mf2x2)
> @@ -308,13 +271,11 @@ DEF_RVV_TUPLE_TYPE (vuint16mf2x7_t, 19,
> __rvv_uint16mf2x7_t, vuint16mf2_t, uint1
> DEF_RVV_TUPLE_TYPE (vint16mf2x8_t, 18, __rvv_int16mf2x8_t, vint16mf2_t,
> int16, 8, _i16mf2x8)
> DEF_RVV_TUPLE_TYPE (vuint16mf2x8_t, 19, __rvv_uint16mf2x8_t,
> vuint16mf2_t, uint16, 8, _u16mf2x8)
> /* LMUL = 1:
> - Machine mode = VNx8HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, VNx8HI, VNx4HI,
> VNx2HI, _i16m1,
> - _i16, _e16m1)
> -DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, VNx8HI, VNx4HI,
> VNx2HI, _u16m1,
> - _u16, _e16m1)
> + Machine mode = RVVM1HImode. */
> +DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, RVVM1HI, _i16m1,
> _i16,
> + _e16m1)
> +DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, RVVM1HI, _u16m1,
> _u16,
> + _e16m1)
> /* Define tuple types for SEW = 16, LMUL = M1. */
> DEF_RVV_TUPLE_TYPE (vint16m1x2_t, 17, __rvv_int16m1x2_t, vint16m1_t,
> int16, 2, _i16m1x2)
> DEF_RVV_TUPLE_TYPE (vuint16m1x2_t, 18, __rvv_uint16m1x2_t, vuint16m1_t,
> uint16, 2, _u16m1x2)
> @@ -331,13 +292,11 @@ DEF_RVV_TUPLE_TYPE (vuint16m1x7_t, 18,
> __rvv_uint16m1x7_t, vuint16m1_t, uint16,
> DEF_RVV_TUPLE_TYPE (vint16m1x8_t, 17, __rvv_int16m1x8_t, vint16m1_t,
> int16, 8, _i16m1x8)
> DEF_RVV_TUPLE_TYPE (vuint16m1x8_t, 18, __rvv_uint16m1x8_t, vuint16m1_t,
> uint16, 8, _u16m1x8)
> /* LMUL = 2:
> - Machine mode = VNx16HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, VNx16HI, VNx8HI,
> VNx4HI, _i16m2,
> - _i16, _e16m2)
> -DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, VNx16HI, VNx8HI,
> VNx4HI, _u16m2,
> - _u16, _e16m2)
> +   Machine mode = RVVM2HImode. */
> +DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, RVVM2HI, _i16m2,
> _i16,
> + _e16m2)
> +DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, RVVM2HI, _u16m2,
> _u16,
> + _e16m2)
> /* Define tuple types for SEW = 16, LMUL = M2. */
> DEF_RVV_TUPLE_TYPE (vint16m2x2_t, 17, __rvv_int16m2x2_t, vint16m2_t,
> int16, 2, _i16m2x2)
> DEF_RVV_TUPLE_TYPE (vuint16m2x2_t, 18, __rvv_uint16m2x2_t, vuint16m2_t,
> uint16, 2, _u16m2x2)
> @@ -346,33 +305,28 @@ DEF_RVV_TUPLE_TYPE (vuint16m2x3_t, 18,
> __rvv_uint16m2x3_t, vuint16m2_t, uint16,
> DEF_RVV_TUPLE_TYPE (vint16m2x4_t, 17, __rvv_int16m2x4_t, vint16m2_t,
> int16, 4, _i16m2x4)
> DEF_RVV_TUPLE_TYPE (vuint16m2x4_t, 18, __rvv_uint16m2x4_t, vuint16m2_t,
> uint16, 4, _u16m2x4)
> /* LMUL = 4:
> - Machine mode = VNx32HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, VNx32HI, VNx16HI,
> VNx8HI, _i16m4,
> - _i16, _e16m4)
> -DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, VNx32HI,
> VNx16HI, VNx8HI,
> - _u16m4, _u16, _e16m4)
> + Machine mode = RVVM4HImode. */
> +DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, RVVM4HI, _i16m4,
> _i16,
> + _e16m4)
> +DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, RVVM4HI, _u16m4,
> _u16,
> + _e16m4)
> /* Define tuple types for SEW = 16, LMUL = M4. */
> DEF_RVV_TUPLE_TYPE (vint16m4x2_t, 17, __rvv_int16m4x2_t, vint16m4_t,
> int16, 2, _i16m4x2)
> DEF_RVV_TUPLE_TYPE (vuint16m4x2_t, 18, __rvv_uint16m4x2_t, vuint16m4_t,
> uint16, 2, _u16m4x2)
> /* LMUL = 8:
> - Machine mode = VNx64HImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32. */
> -DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, VNx64HI, VNx32HI,
> VNx16HI, _i16m8,
> - _i16, _e16m8)
> -DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, VNx64HI,
> VNx32HI, VNx16HI,
> - _u16m8, _u16, _e16m8)
> + Machine mode = RVVM8HImode. */
> +DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, RVVM8HI, _i16m8,
> _i16,
> + _e16m8)
> +DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, RVVM8HI, _u16m8,
> _u16,
> + _e16m8)
>
> /* LMUL = 1/2:
>    Only enable when TARGET_MIN_VLEN > 32.
> - Machine mode = VNx1SImode when TARGET_MIN_VLEN < 128.
> - Machine mode = VNx2SImode when TARGET_MIN_VLEN >= 128. */
> -DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx2SI, VNx1SI,
> VOID, _i32mf2,
> - _i32, _e32mf2)
> -DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, VNx2SI,
> VNx1SI, VOID,
> - _u32mf2, _u32, _e32mf2)
> + Machine mode = RVVMF2SImode. */
> +DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, RVVMF2SI,
> _i32mf2, _i32,
> + _e32mf2)
> +DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, RVVMF2SI,
> _u32mf2,
> + _u32, _e32mf2)
> /* Define tuple types for SEW = 32, LMUL = MF2. */
> DEF_RVV_TUPLE_TYPE (vint32mf2x2_t, 18, __rvv_int32mf2x2_t, vint32mf2_t,
> int32, 2, _i32mf2x2)
> DEF_RVV_TUPLE_TYPE (vuint32mf2x2_t, 19, __rvv_uint32mf2x2_t,
> vuint32mf2_t, uint32, 2, _u32mf2x2)
> @@ -389,13 +343,11 @@ DEF_RVV_TUPLE_TYPE (vuint32mf2x7_t, 19,
> __rvv_uint32mf2x7_t, vuint32mf2_t, uint3
> DEF_RVV_TUPLE_TYPE (vint32mf2x8_t, 18, __rvv_int32mf2x8_t, vint32mf2_t,
> int32, 8, _i32mf2x8)
> DEF_RVV_TUPLE_TYPE (vuint32mf2x8_t, 19, __rvv_uint32mf2x8_t,
> vuint32mf2_t, uint32, 8, _u32mf2x8)
> /* LMUL = 1:
> - Machine mode = VNx4SImode when TARGET_MIN_VLEN >= 128.
> - Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
>
@@ -61,105 +61,90 @@
;; == Gather Load
;; =========================================================================
-(define_expand "len_mask_gather_load<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
- [(match_operand:VNX1_QHSD 0 "register_operand")
+(define_expand "len_mask_gather_load<RATIO64:mode><RATIO64I:mode>"
+ [(match_operand:RATIO64 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
- (match_operand:VNX1_QHSDI 2 "register_operand")
- (match_operand 3 "<VNX1_QHSD:gs_extension>")
- (match_operand 4 "<VNX1_QHSD:gs_scale>")
+ (match_operand:RATIO64I 2 "register_operand")
+ (match_operand 3 "<RATIO64:gs_extension>")
+ (match_operand 4 "<RATIO64:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX1_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO64:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
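For anyone decoding the renamed iterators: RATIO<N> groups all modes with
SEW/LMUL = N, so the data and index vectors of a gather are guaranteed to have
the same number of elements. RATIO64, for instance, presumably collects
RVVMF8QI, RVVMF4HI, RVVMF2SI and RVVM1DI (plus the FP variants), since
8/(1/8) = 16/(1/4) = 32/(1/2) = 64/1 = 64; RATIO64I is the matching
integer-index subset.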
-(define_expand "len_mask_gather_load<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
- [(match_operand:VNX2_QHSD 0 "register_operand")
+(define_expand "len_mask_gather_load<RATIO32:mode><RATIO32I:mode>"
+ [(match_operand:RATIO32 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
- (match_operand:VNX2_QHSDI 2 "register_operand")
- (match_operand 3 "<VNX2_QHSD:gs_extension>")
- (match_operand 4 "<VNX2_QHSD:gs_scale>")
+ (match_operand:RATIO32I 2 "register_operand")
+ (match_operand 3 "<RATIO32:gs_extension>")
+ (match_operand 4 "<RATIO32:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX2_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO32:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
-(define_expand "len_mask_gather_load<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
- [(match_operand:VNX4_QHSD 0 "register_operand")
+(define_expand "len_mask_gather_load<RATIO16:mode><RATIO16I:mode>"
+ [(match_operand:RATIO16 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
- (match_operand:VNX4_QHSDI 2 "register_operand")
- (match_operand 3 "<VNX4_QHSD:gs_extension>")
- (match_operand 4 "<VNX4_QHSD:gs_scale>")
+ (match_operand:RATIO16I 2 "register_operand")
+ (match_operand 3 "<RATIO16:gs_extension>")
+ (match_operand 4 "<RATIO16:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX4_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO16:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
-(define_expand "len_mask_gather_load<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
- [(match_operand:VNX8_QHSD 0 "register_operand")
+(define_expand "len_mask_gather_load<RATIO8:mode><RATIO8I:mode>"
+ [(match_operand:RATIO8 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
- (match_operand:VNX8_QHSDI 2 "register_operand")
- (match_operand 3 "<VNX8_QHSD:gs_extension>")
- (match_operand 4 "<VNX8_QHSD:gs_scale>")
+ (match_operand:RATIO8I 2 "register_operand")
+ (match_operand 3 "<RATIO8:gs_extension>")
+ (match_operand 4 "<RATIO8:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX8_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO8:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
-(define_expand "len_mask_gather_load<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
- [(match_operand:VNX16_QHSD 0 "register_operand")
+(define_expand "len_mask_gather_load<RATIO4:mode><RATIO4I:mode>"
+ [(match_operand:RATIO4 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
- (match_operand:VNX16_QHSDI 2 "register_operand")
- (match_operand 3 "<VNX16_QHSD:gs_extension>")
- (match_operand 4 "<VNX16_QHSD:gs_scale>")
+ (match_operand:RATIO4I 2 "register_operand")
+ (match_operand 3 "<RATIO4:gs_extension>")
+ (match_operand 4 "<RATIO4:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX16_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO4:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
-(define_expand "len_mask_gather_load<VNX32_QHS:mode><VNX32_QHSI:mode>"
- [(match_operand:VNX32_QHS 0 "register_operand")
+(define_expand "len_mask_gather_load<RATIO2:mode><RATIO2I:mode>"
+ [(match_operand:RATIO2 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
- (match_operand:VNX32_QHSI 2 "register_operand")
- (match_operand 3 "<VNX32_QHS:gs_extension>")
- (match_operand 4 "<VNX32_QHS:gs_scale>")
+ (match_operand:RATIO2I 2 "register_operand")
+ (match_operand 3 "<RATIO2:gs_extension>")
+ (match_operand 4 "<RATIO2:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX32_QHS:VM> 7 "vector_mask_operand")]
- "TARGET_VECTOR"
-{
- riscv_vector::expand_gather_scatter (operands, true);
- DONE;
-})
-
-(define_expand "len_mask_gather_load<VNX64_QH:mode><VNX64_QHI:mode>"
- [(match_operand:VNX64_QH 0 "register_operand")
- (match_operand 1 "pmode_reg_or_0_operand")
- (match_operand:VNX64_QHI 2 "register_operand")
- (match_operand 3 "<VNX64_QH:gs_extension>")
- (match_operand 4 "<VNX64_QH:gs_scale>")
- (match_operand 5 "autovec_length_operand")
- (match_operand 6 "const_0_operand")
- (match_operand:<VNX64_QH:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO2:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
@@ -170,15 +155,15 @@
;; larger SEW.  Since RVV indexed load/store supports zero extend
;; implicitly and does not support scaling, we should only allow
;; operands[3] and operands[4] to be const_1_operand.
-(define_expand "len_mask_gather_load<mode><mode>"
- [(match_operand:VNX128_Q 0 "register_operand")
+(define_expand "len_mask_gather_load<RATIO1:mode><RATIO1:mode>"
+ [(match_operand:RATIO1 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
- (match_operand:VNX128_Q 2 "register_operand")
- (match_operand 3 "const_1_operand")
- (match_operand 4 "const_1_operand")
+ (match_operand:RATIO1 2 "register_operand")
+ (match_operand 3 "<RATIO1:gs_extension>")
+ (match_operand 4 "<RATIO1:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO1:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
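Note that operands[3]/operands[4] here move from hard-coded const_1_operand to
the <RATIO1:gs_extension>/<RATIO1:gs_scale> attributes. I assume those
attributes are defined to degenerate to const_1_operand for the ratio-1 modes,
along the lines of (hypothetical sketch, not from this patch):

  (define_mode_attr gs_extension [(RVVM8QI "const_1_operand") ...])
  (define_mode_attr gs_scale [(RVVM8QI "const_1_operand") ...])

so the restriction described in the comment above still holds while the
pattern text becomes uniform across all the RATIO iterators.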
@@ -189,105 +174,90 @@
;; == Scatter Store
;; =========================================================================
-(define_expand "len_mask_scatter_store<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
- [(match_operand 0 "pmode_reg_or_0_operand")
- (match_operand:VNX1_QHSDI 1 "register_operand")
- (match_operand 2 "<VNX1_QHSD:gs_extension>")
- (match_operand 3 "<VNX1_QHSD:gs_scale>")
- (match_operand:VNX1_QHSD 4 "register_operand")
- (match_operand 5 "autovec_length_operand")
- (match_operand 6 "const_0_operand")
- (match_operand:<VNX1_QHSD:VM> 7 "vector_mask_operand")]
- "TARGET_VECTOR"
-{
- riscv_vector::expand_gather_scatter (operands, false);
- DONE;
-})
-
-(define_expand "len_mask_scatter_store<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
+(define_expand "len_mask_scatter_store<RATIO64:mode><RATIO64I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
- (match_operand:VNX2_QHSDI 1 "register_operand")
- (match_operand 2 "<VNX2_QHSD:gs_extension>")
- (match_operand 3 "<VNX2_QHSD:gs_scale>")
- (match_operand:VNX2_QHSD 4 "register_operand")
+ (match_operand:RATIO64I 1 "register_operand")
+ (match_operand 2 "<RATIO64:gs_extension>")
+ (match_operand 3 "<RATIO64:gs_scale>")
+ (match_operand:RATIO64 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX2_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO64:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
-(define_expand "len_mask_scatter_store<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
+(define_expand "len_mask_scatter_store<RATIO32:mode><RATIO32I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
- (match_operand:VNX4_QHSDI 1 "register_operand")
- (match_operand 2 "<VNX4_QHSD:gs_extension>")
- (match_operand 3 "<VNX4_QHSD:gs_scale>")
- (match_operand:VNX4_QHSD 4 "register_operand")
+ (match_operand:RATIO32I 1 "register_operand")
+ (match_operand 2 "<RATIO32:gs_extension>")
+ (match_operand 3 "<RATIO32:gs_scale>")
+ (match_operand:RATIO32 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX4_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO32:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
-(define_expand "len_mask_scatter_store<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
+(define_expand "len_mask_scatter_store<RATIO16:mode><RATIO16I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
- (match_operand:VNX8_QHSDI 1 "register_operand")
- (match_operand 2 "<VNX8_QHSD:gs_extension>")
- (match_operand 3 "<VNX8_QHSD:gs_scale>")
- (match_operand:VNX8_QHSD 4 "register_operand")
+ (match_operand:RATIO16I 1 "register_operand")
+ (match_operand 2 "<RATIO16:gs_extension>")
+ (match_operand 3 "<RATIO16:gs_scale>")
+ (match_operand:RATIO16 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX8_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO16:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
-(define_expand "len_mask_scatter_store<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
+(define_expand "len_mask_scatter_store<RATIO8:mode><RATIO8I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
- (match_operand:VNX16_QHSDI 1 "register_operand")
- (match_operand 2 "<VNX16_QHSD:gs_extension>")
- (match_operand 3 "<VNX16_QHSD:gs_scale>")
- (match_operand:VNX16_QHSD 4 "register_operand")
+ (match_operand:RATIO8I 1 "register_operand")
+ (match_operand 2 "<RATIO8:gs_extension>")
+ (match_operand 3 "<RATIO8:gs_scale>")
+ (match_operand:RATIO8 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX16_QHSD:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO8:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
-(define_expand "len_mask_scatter_store<VNX32_QHS:mode><VNX32_QHSI:mode>"
+(define_expand "len_mask_scatter_store<RATIO4:mode><RATIO4I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
- (match_operand:VNX32_QHSI 1 "register_operand")
- (match_operand 2 "<VNX32_QHS:gs_extension>")
- (match_operand 3 "<VNX32_QHS:gs_scale>")
- (match_operand:VNX32_QHS 4 "register_operand")
+ (match_operand:RATIO4I 1 "register_operand")
+ (match_operand 2 "<RATIO4:gs_extension>")
+ (match_operand 3 "<RATIO4:gs_scale>")
+ (match_operand:RATIO4 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX32_QHS:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO4:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
-(define_expand "len_mask_scatter_store<VNX64_QH:mode><VNX64_QHI:mode>"
+(define_expand "len_mask_scatter_store<RATIO2:mode><RATIO2I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
- (match_operand:VNX64_QHI 1 "register_operand")
- (match_operand 2 "<VNX64_QH:gs_extension>")
- (match_operand 3 "<VNX64_QH:gs_scale>")
- (match_operand:VNX64_QH 4 "register_operand")
+ (match_operand:RATIO2I 1 "register_operand")
+ (match_operand 2 "<RATIO2:gs_extension>")
+ (match_operand 3 "<RATIO2:gs_scale>")
+ (match_operand:RATIO2 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VNX64_QH:VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO2:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
@@ -298,15 +268,15 @@
;; larger SEW.  Since RVV indexed load/store supports zero extend
;; implicitly and does not support scaling, we should only allow
;; operands[3] and operands[4] to be const_1_operand.
-(define_expand "len_mask_scatter_store<mode><mode>"
+(define_expand "len_mask_scatter_store<RATIO1:mode><RATIO1:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
- (match_operand:VNX128_Q 1 "register_operand")
- (match_operand 2 "const_1_operand")
- (match_operand 3 "const_1_operand")
- (match_operand:VNX128_Q 4 "register_operand")
+ (match_operand:RATIO1 1 "register_operand")
+ (match_operand 2 "<RATIO1:gs_extension>")
+ (match_operand 3 "<RATIO1:gs_scale>")
+ (match_operand:RATIO1 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
- (match_operand:<VM> 7 "vector_mask_operand")]
+ (match_operand:<RATIO1:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
@@ -27,311 +27,287 @@ FLOAT_MODE (TF, 16, ieee_quad_format);
/* Encode the ratio of SEW/LMUL into the mask types. There are the following
* mask types. */
-/* | Mode | MIN_VLEN = 32 | MIN_VLEN = 64 | MIN_VLEN = 128 |
- | | SEW/LMUL | SEW/LMUL | SEW/LMUL |
- | VNx1BI | 32 | 64 | 128 |
- | VNx2BI | 16 | 32 | 64 |
- | VNx4BI | 8 | 16 | 32 |
- | VNx8BI | 4 | 8 | 16 |
- | VNx16BI | 2 | 4 | 8 |
- | VNx32BI | 1 | 2 | 4 |
- | VNx64BI | N/A | 1 | 2 |
- | VNx128BI | N/A | N/A | 1 | */
+/* Encode the ratio of SEW/LMUL into the mask types.
+ There are the following mask types.
+
+ n = SEW/LMUL
+
+ |Modes| n = 1 | n = 2 | n = 4 | n = 8 | n = 16 | n = 32 | n = 64 |
+ |BI |RVVM1BI |RVVMF2BI |RVVMF4BI |RVVMF8BI |RVVMF16BI |RVVMF32BI |RVVMF64BI | */
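To make the new mask naming concrete: RVVMF<n>BI carries one mask bit per data
element at SEW/LMUL = n, so its element count is VLEN/n. A quick sanity check
(my own arithmetic, with a fixed VLEN for the example):

  #include <cstdio>

  int
  main ()
  {
    const int vlen = 128;	/* assume Zvl128b for the example */
    const int ratios[] = {1, 2, 4, 8, 16, 32, 64};
    for (int n : ratios)
      std::printf ("SEW/LMUL = %2d -> %3d mask elements\n", n, vlen / n);
    /* RVVM1BI gets 128 elements and RVVMF64BI gets 2, matching the
       riscv_v_adjust_nunits calls below scaled by VLEN/64.  */
    return 0;
  }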
/* For RVV modes, each boolean value occupies 1-bit.
   The 4th argument specifies the minimal possible size of the vector mode,
   which will be adjusted to the right size by ADJUST_BYTESIZE.  */
-VECTOR_BOOL_MODE (VNx1BI, 1, BI, 1);
-VECTOR_BOOL_MODE (VNx2BI, 2, BI, 1);
-VECTOR_BOOL_MODE (VNx4BI, 4, BI, 1);
-VECTOR_BOOL_MODE (VNx8BI, 8, BI, 1);
-VECTOR_BOOL_MODE (VNx16BI, 16, BI, 2);
-VECTOR_BOOL_MODE (VNx32BI, 32, BI, 4);
-VECTOR_BOOL_MODE (VNx64BI, 64, BI, 8);
-VECTOR_BOOL_MODE (VNx128BI, 128, BI, 16);
-
-ADJUST_NUNITS (VNx1BI, riscv_v_adjust_nunits (VNx1BImode, 1));
-ADJUST_NUNITS (VNx2BI, riscv_v_adjust_nunits (VNx2BImode, 2));
-ADJUST_NUNITS (VNx4BI, riscv_v_adjust_nunits (VNx4BImode, 4));
-ADJUST_NUNITS (VNx8BI, riscv_v_adjust_nunits (VNx8BImode, 8));
-ADJUST_NUNITS (VNx16BI, riscv_v_adjust_nunits (VNx16BImode, 16));
-ADJUST_NUNITS (VNx32BI, riscv_v_adjust_nunits (VNx32BImode, 32));
-ADJUST_NUNITS (VNx64BI, riscv_v_adjust_nunits (VNx64BImode, 64));
-ADJUST_NUNITS (VNx128BI, riscv_v_adjust_nunits (VNx128BImode, 128));
-
-ADJUST_ALIGNMENT (VNx1BI, 1);
-ADJUST_ALIGNMENT (VNx2BI, 1);
-ADJUST_ALIGNMENT (VNx4BI, 1);
-ADJUST_ALIGNMENT (VNx8BI, 1);
-ADJUST_ALIGNMENT (VNx16BI, 1);
-ADJUST_ALIGNMENT (VNx32BI, 1);
-ADJUST_ALIGNMENT (VNx64BI, 1);
-ADJUST_ALIGNMENT (VNx128BI, 1);
-
-ADJUST_BYTESIZE (VNx1BI, riscv_v_adjust_bytesize (VNx1BImode, 1));
-ADJUST_BYTESIZE (VNx2BI, riscv_v_adjust_bytesize (VNx2BImode, 1));
-ADJUST_BYTESIZE (VNx4BI, riscv_v_adjust_bytesize (VNx4BImode, 1));
-ADJUST_BYTESIZE (VNx8BI, riscv_v_adjust_bytesize (VNx8BImode, 1));
-ADJUST_BYTESIZE (VNx16BI, riscv_v_adjust_bytesize (VNx16BImode, 2));
-ADJUST_BYTESIZE (VNx32BI, riscv_v_adjust_bytesize (VNx32BImode, 4));
-ADJUST_BYTESIZE (VNx64BI, riscv_v_adjust_bytesize (VNx64BImode, 8));
-ADJUST_BYTESIZE (VNx128BI, riscv_v_adjust_bytesize (VNx128BImode, 16));
-
-ADJUST_PRECISION (VNx1BI, riscv_v_adjust_precision (VNx1BImode, 1));
-ADJUST_PRECISION (VNx2BI, riscv_v_adjust_precision (VNx2BImode, 2));
-ADJUST_PRECISION (VNx4BI, riscv_v_adjust_precision (VNx4BImode, 4));
-ADJUST_PRECISION (VNx8BI, riscv_v_adjust_precision (VNx8BImode, 8));
-ADJUST_PRECISION (VNx16BI, riscv_v_adjust_precision (VNx16BImode, 16));
-ADJUST_PRECISION (VNx32BI, riscv_v_adjust_precision (VNx32BImode, 32));
-ADJUST_PRECISION (VNx64BI, riscv_v_adjust_precision (VNx64BImode, 64));
-ADJUST_PRECISION (VNx128BI, riscv_v_adjust_precision (VNx128BImode, 128));
-
-/*
- | Mode | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64 | MIN_VLEN=128 | MIN_VLEN=128 |
- | | LMUL | SEW/LMUL | LMUL | SEW/LMUL | LMUL | SEW/LMUL |
- | VNx1QI | MF4 | 32 | MF8 | 64 | N/A | N/A |
- | VNx2QI | MF2 | 16 | MF4 | 32 | MF8 | 64 |
- | VNx4QI | M1 | 8 | MF2 | 16 | MF4 | 32 |
- | VNx8QI | M2 | 4 | M1 | 8 | MF2 | 16 |
- | VNx16QI | M4 | 2 | M2 | 4 | M1 | 8 |
- | VNx32QI | M8 | 1 | M4 | 2 | M2 | 4 |
- | VNx64QI | N/A | N/A | M8 | 1 | M4 | 2 |
- | VNx128QI | N/A | N/A | N/A | N/A | M8 | 1 |
- | VNx1(HI|HF) | MF2 | 32 | MF4 | 64 | N/A | N/A |
- | VNx2(HI|HF) | M1 | 16 | MF2 | 32 | MF4 | 64 |
- | VNx4(HI|HF) | M2 | 8 | M1 | 16 | MF2 | 32 |
- | VNx8(HI|HF) | M4 | 4 | M2 | 8 | M1 | 16 |
- | VNx16(HI|HF)| M8 | 2 | M4 | 4 | M2 | 8 |
- | VNx32(HI|HF)| N/A | N/A | M8 | 2 | M4 | 4 |
- | VNx64(HI|HF)| N/A | N/A | N/A | N/A | M8 | 2 |
- | VNx1(SI|SF) | M1 | 32 | MF2 | 64 | MF2 | 64 |
- | VNx2(SI|SF) | M2 | 16 | M1 | 32 | M1 | 32 |
- | VNx4(SI|SF) | M4 | 8 | M2 | 16 | M2 | 16 |
- | VNx8(SI|SF) | M8 | 4 | M4 | 8 | M4 | 8 |
- | VNx16(SI|SF)| N/A | N/A | M8 | 4 | M8 | 4 |
- | VNx1(DI|DF) | N/A | N/A | M1 | 64 | N/A | N/A |
- | VNx2(DI|DF) | N/A | N/A | M2 | 32 | M1 | 64 |
- | VNx4(DI|DF) | N/A | N/A | M4 | 16 | M2 | 32 |
- | VNx8(DI|DF) | N/A | N/A | M8 | 8 | M4 | 16 |
- | VNx16(DI|DF)| N/A | N/A | N/A | N/A | M8 | 8 |
-*/
-
-/* Define RVV modes whose sizes are multiples of 64-bit chunks. */
-#define RVV_MODES(NVECS, VB, VH, VS, VD) \
- VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 8 * NVECS, 0); \
- VECTOR_MODE_WITH_PREFIX (VNx, INT, HI, 4 * NVECS, 0); \
- VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, HF, 4 * NVECS, 0); \
- VECTOR_MODE_WITH_PREFIX (VNx, INT, SI, 2 * NVECS, 0); \
- VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, SF, 2 * NVECS, 0); \
- VECTOR_MODE_WITH_PREFIX (VNx, INT, DI, NVECS, 0); \
- VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, DF, NVECS, 0); \
+VECTOR_BOOL_MODE (RVVM1BI, 64, BI, 8);
+VECTOR_BOOL_MODE (RVVMF2BI, 32, BI, 4);
+VECTOR_BOOL_MODE (RVVMF4BI, 16, BI, 2);
+VECTOR_BOOL_MODE (RVVMF8BI, 8, BI, 1);
+VECTOR_BOOL_MODE (RVVMF16BI, 4, BI, 1);
+VECTOR_BOOL_MODE (RVVMF32BI, 2, BI, 1);
+VECTOR_BOOL_MODE (RVVMF64BI, 1, BI, 1);
+
+ADJUST_NUNITS (RVVM1BI, riscv_v_adjust_nunits (RVVM1BImode, 64));
+ADJUST_NUNITS (RVVMF2BI, riscv_v_adjust_nunits (RVVMF2BImode, 32));
+ADJUST_NUNITS (RVVMF4BI, riscv_v_adjust_nunits (RVVMF4BImode, 16));
+ADJUST_NUNITS (RVVMF8BI, riscv_v_adjust_nunits (RVVMF8BImode, 8));
+ADJUST_NUNITS (RVVMF16BI, riscv_v_adjust_nunits (RVVMF16BImode, 4));
+ADJUST_NUNITS (RVVMF32BI, riscv_v_adjust_nunits (RVVMF32BImode, 2));
+ADJUST_NUNITS (RVVMF64BI, riscv_v_adjust_nunits (RVVMF64BImode, 1));
+
+ADJUST_ALIGNMENT (RVVM1BI, 1);
+ADJUST_ALIGNMENT (RVVMF2BI, 1);
+ADJUST_ALIGNMENT (RVVMF4BI, 1);
+ADJUST_ALIGNMENT (RVVMF8BI, 1);
+ADJUST_ALIGNMENT (RVVMF16BI, 1);
+ADJUST_ALIGNMENT (RVVMF32BI, 1);
+ADJUST_ALIGNMENT (RVVMF64BI, 1);
+
+ADJUST_PRECISION (RVVM1BI, riscv_v_adjust_precision (RVVM1BImode, 64));
+ADJUST_PRECISION (RVVMF2BI, riscv_v_adjust_precision (RVVMF2BImode, 32));
+ADJUST_PRECISION (RVVMF4BI, riscv_v_adjust_precision (RVVMF4BImode, 16));
+ADJUST_PRECISION (RVVMF8BI, riscv_v_adjust_precision (RVVMF8BImode, 8));
+ADJUST_PRECISION (RVVMF16BI, riscv_v_adjust_precision (RVVMF16BImode, 4));
+ADJUST_PRECISION (RVVMF32BI, riscv_v_adjust_precision (RVVMF32BImode, 2));
+ADJUST_PRECISION (RVVMF64BI, riscv_v_adjust_precision (RVVMF64BImode, 1));
+
+ADJUST_BYTESIZE (RVVM1BI, riscv_v_adjust_bytesize (RVVM1BImode, 8));
+ADJUST_BYTESIZE (RVVMF2BI, riscv_v_adjust_bytesize (RVVMF2BImode, 4));
+ADJUST_BYTESIZE (RVVMF4BI, riscv_v_adjust_bytesize (RVVMF4BImode, 2));
+ADJUST_BYTESIZE (RVVMF8BI, riscv_v_adjust_bytesize (RVVMF8BImode, 1));
+ADJUST_BYTESIZE (RVVMF16BI, riscv_v_adjust_bytesize (RVVMF16BImode, 1));
+ADJUST_BYTESIZE (RVVMF32BI, riscv_v_adjust_bytesize (RVVMF32BImode, 1));
+ADJUST_BYTESIZE (RVVMF64BI, riscv_v_adjust_bytesize (RVVMF64BImode, 1));
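
In other words, the mask mode is determined purely by the ratio n = SEW/LMUL:
ratio n maps to RVVMF<n>BImode, with n = 1 collapsing to RVVM1BImode. As a
rough stand-alone illustration (the helper name below is made up for this
sketch, not part of the patch):

/* Sketch only: map n = SEW/LMUL to the new mask mode names.  */
#include <stdio.h>

static const char *
mask_mode_for_ratio (unsigned n)
{
  switch (n)
    {
    case 1:  return "RVVM1BImode";
    case 2:  return "RVVMF2BImode";
    case 4:  return "RVVMF4BImode";
    case 8:  return "RVVMF8BImode";
    case 16: return "RVVMF16BImode";
    case 32: return "RVVMF32BImode";
    case 64: return "RVVMF64BImode";
    default: return "no mask mode";
    }
}

int
main (void)
{
  /* SEW = 32, LMUL = 1 gives n = 32, i.e. the mode behind vbool32_t.  */
  printf ("%s\n", mask_mode_for_ratio (32));
  return 0;
}

This matches the DEF_RVV_TYPE entries for vbool64_t ... vbool1_t further below.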
+
+/* Encode SEW and LMUL into data types.
+   We enforce the constraint LMUL >= SEW/ELEN in the implementation.
+ There are the following data types for ELEN = 64.
+
+ |Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
+ |DI |RVVM1DI|RVVM2DI|RVVM4DI|RVVM8DI|N/A |N/A |N/A |
+ |SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|RVVMF2SI|N/A |N/A |
+ |HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|RVVMF4HI|N/A |
+ |QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|RVVMF8QI|
+ |DF |RVVM1DF|RVVM2DF|RVVM4DF|RVVM8DF|N/A |N/A |N/A |
+ |SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|RVVMF2SF|N/A |N/A |
+ |HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|RVVMF4HF|N/A |
+
+   There are the following data types for ELEN = 32.
+
+ |Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
+ |SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|N/A |N/A |N/A |
+ |HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|N/A |N/A |
+ |QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|N/A |
+ |SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|N/A |N/A |N/A |
+ |HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|N/A |N/A | */
+
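
The N/A entries in both tables fall out of the LMUL >= SEW/ELEN constraint
mechanically; a throwaway validity check for the fractional modes (a
hypothetical helper, with fractional LMUL written as 1/lmul_den) could read:

#include <stdbool.h>
#include <stdio.h>

/* Sketch only: a fractional mode with LMUL = 1/lmul_den satisfies
   LMUL >= SEW/ELEN exactly when SEW * lmul_den <= ELEN.  */
static bool
fract_mode_valid_p (unsigned sew, unsigned lmul_den, unsigned elen)
{
  return sew * lmul_den <= elen;
}

int
main (void)
{
  printf ("%d\n", fract_mode_valid_p (32, 2, 64)); /* 1: RVVMF2SI, ELEN = 64 */
  printf ("%d\n", fract_mode_valid_p (32, 2, 32)); /* 0: N/A for ELEN = 32   */
  return 0;
}
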
+#define RVV_WHOLE_MODES(LMUL) \
+ VECTOR_MODE_WITH_PREFIX (RVVM, INT, QI, LMUL, 0); \
+ VECTOR_MODE_WITH_PREFIX (RVVM, INT, HI, LMUL, 0); \
+ VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, HF, LMUL, 0); \
+ VECTOR_MODE_WITH_PREFIX (RVVM, INT, SI, LMUL, 0); \
+ VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, SF, LMUL, 0); \
+ VECTOR_MODE_WITH_PREFIX (RVVM, INT, DI, LMUL, 0); \
+ VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, DF, LMUL, 0); \
+ \
+ ADJUST_NUNITS (RVVM##LMUL##QI, \
+ riscv_v_adjust_nunits (RVVM##LMUL##QImode, false, LMUL, 1)); \
+ ADJUST_NUNITS (RVVM##LMUL##HI, \
+ riscv_v_adjust_nunits (RVVM##LMUL##HImode, false, LMUL, 1)); \
+ ADJUST_NUNITS (RVVM##LMUL##SI, \
+ riscv_v_adjust_nunits (RVVM##LMUL##SImode, false, LMUL, 1)); \
+ ADJUST_NUNITS (RVVM##LMUL##DI, \
+ riscv_v_adjust_nunits (RVVM##LMUL##DImode, false, LMUL, 1)); \
+ ADJUST_NUNITS (RVVM##LMUL##HF, \
+ riscv_v_adjust_nunits (RVVM##LMUL##HFmode, false, LMUL, 1)); \
+ ADJUST_NUNITS (RVVM##LMUL##SF, \
+ riscv_v_adjust_nunits (RVVM##LMUL##SFmode, false, LMUL, 1)); \
+ ADJUST_NUNITS (RVVM##LMUL##DF, \
+ riscv_v_adjust_nunits (RVVM##LMUL##DFmode, false, LMUL, 1)); \
+ \
+ ADJUST_ALIGNMENT (RVVM##LMUL##QI, 1); \
+ ADJUST_ALIGNMENT (RVVM##LMUL##HI, 2); \
+ ADJUST_ALIGNMENT (RVVM##LMUL##SI, 4); \
+ ADJUST_ALIGNMENT (RVVM##LMUL##DI, 8); \
+ ADJUST_ALIGNMENT (RVVM##LMUL##HF, 2); \
+ ADJUST_ALIGNMENT (RVVM##LMUL##SF, 4); \
+ ADJUST_ALIGNMENT (RVVM##LMUL##DF, 8);
+
+RVV_WHOLE_MODES (1)
+RVV_WHOLE_MODES (2)
+RVV_WHOLE_MODES (4)
+RVV_WHOLE_MODES (8)
+
+#define RVV_FRACT_MODE(TYPE, MODE, LMUL, ALIGN) \
+ VECTOR_MODE_WITH_PREFIX (RVVMF, TYPE, MODE, LMUL, 0); \
+ ADJUST_NUNITS (RVVMF##LMUL##MODE, \
+ riscv_v_adjust_nunits (RVVMF##LMUL##MODE##mode, true, LMUL, \
+ 1)); \
+ \
+ ADJUST_ALIGNMENT (RVVMF##LMUL##MODE, ALIGN);
+
+RVV_FRACT_MODE (INT, QI, 2, 1)
+RVV_FRACT_MODE (INT, QI, 4, 1)
+RVV_FRACT_MODE (INT, QI, 8, 1)
+RVV_FRACT_MODE (INT, HI, 2, 2)
+RVV_FRACT_MODE (INT, HI, 4, 2)
+RVV_FRACT_MODE (FLOAT, HF, 2, 2)
+RVV_FRACT_MODE (FLOAT, HF, 4, 2)
+RVV_FRACT_MODE (INT, SI, 2, 4)
+RVV_FRACT_MODE (FLOAT, SF, 2, 4)
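
Each of those lines likewise boils down to the same three declarations, only
with the fractional flag of riscv_v_adjust_nunits set to true;
RVV_FRACT_MODE (INT, SI, 2, 4), for instance, becomes roughly:

VECTOR_MODE_WITH_PREFIX (RVVMF, INT, SI, 2, 0);  /* declares RVVMF2SImode */
ADJUST_NUNITS (RVVMF2SI,
               riscv_v_adjust_nunits (RVVMF2SImode, true, 2, 1));
ADJUST_ALIGNMENT (RVVMF2SI, 4);
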
+
+/* Tuple modes for segment loads/stores according to NF.
+
+ Tuple modes format: RVV<LMUL>x<NF><BASEMODE>
+
+ When LMUL is MF8/MF4/MF2/M1, NF can be 2 ~ 8.
+ When LMUL is M2, NF can be 2 ~ 4.
+   When LMUL is M4, NF can be 2. */
+
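Those NF bounds are just the segment-register budget NF * LMUL <= 8, with NF
itself capped at 8 by the ISA (which is why every fractional LMUL also stops
at NF = 8). A quick stand-alone sketch of the whole-register cases:

#include <stdio.h>

/* Sketch only: largest NF for a whole-register LMUL, from NF * LMUL <= 8.  */
static unsigned
max_nf (unsigned lmul)
{
  unsigned nf = 8 / lmul;
  return nf > 8 ? 8 : nf;   /* The ISA additionally caps NF at 8.  */
}

int
main (void)
{
  printf ("M1: %u\n", max_nf (1)); /* 8 -> RVVM1x2 ... RVVM1x8 */
  printf ("M2: %u\n", max_nf (2)); /* 4 -> RVVM2x2 ... RVVM2x4 */
  printf ("M4: %u\n", max_nf (4)); /* 2 -> RVVM4x2 only        */
  printf ("M8: %u\n", max_nf (8)); /* 1 -> no tuple modes      */
  return 0;
}
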
+#define RVV_NF8_MODES(NF) \
+ VECTOR_MODE_WITH_PREFIX (RVVMF8x, INT, QI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVMF4x, INT, QI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, QI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, QI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVMF4x, INT, HI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, HI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, HI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVMF4x, FLOAT, HF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVMF2x, FLOAT, HF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, HF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, SI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, SI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVMF2x, FLOAT, SF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, SF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, DI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, DF, NF, 1); \
+ \
+ ADJUST_NUNITS (RVVMF8x##NF##QI, \
+ riscv_v_adjust_nunits (RVVMF8x##NF##QImode, true, 8, NF)); \
+ ADJUST_NUNITS (RVVMF4x##NF##QI, \
+ riscv_v_adjust_nunits (RVVMF4x##NF##QImode, true, 4, NF)); \
+ ADJUST_NUNITS (RVVMF2x##NF##QI, \
+ riscv_v_adjust_nunits (RVVMF2x##NF##QImode, true, 2, NF)); \
+ ADJUST_NUNITS (RVVM1x##NF##QI, \
+ riscv_v_adjust_nunits (RVVM1x##NF##QImode, false, 1, NF)); \
+ ADJUST_NUNITS (RVVMF4x##NF##HI, \
+ riscv_v_adjust_nunits (RVVMF4x##NF##HImode, true, 4, NF)); \
+ ADJUST_NUNITS (RVVMF2x##NF##HI, \
+ riscv_v_adjust_nunits (RVVMF2x##NF##HImode, true, 2, NF)); \
+ ADJUST_NUNITS (RVVM1x##NF##HI, \
+ riscv_v_adjust_nunits (RVVM1x##NF##HImode, false, 1, NF)); \
+ ADJUST_NUNITS (RVVMF4x##NF##HF, \
+ riscv_v_adjust_nunits (RVVMF4x##NF##HFmode, true, 4, NF)); \
+ ADJUST_NUNITS (RVVMF2x##NF##HF, \
+ riscv_v_adjust_nunits (RVVMF2x##NF##HFmode, true, 2, NF)); \
+ ADJUST_NUNITS (RVVM1x##NF##HF, \
+ riscv_v_adjust_nunits (RVVM1x##NF##HFmode, false, 1, NF)); \
+ ADJUST_NUNITS (RVVMF2x##NF##SI, \
+ riscv_v_adjust_nunits (RVVMF2x##NF##SImode, true, 2, NF)); \
+ ADJUST_NUNITS (RVVM1x##NF##SI, \
+ riscv_v_adjust_nunits (RVVM1x##NF##SImode, false, 1, NF)); \
+ ADJUST_NUNITS (RVVMF2x##NF##SF, \
+ riscv_v_adjust_nunits (RVVMF2x##NF##SFmode, true, 2, NF)); \
+ ADJUST_NUNITS (RVVM1x##NF##SF, \
+ riscv_v_adjust_nunits (RVVM1x##NF##SFmode, false, 1, NF)); \
+ ADJUST_NUNITS (RVVM1x##NF##DI, \
+ riscv_v_adjust_nunits (RVVM1x##NF##DImode, false, 1, NF)); \
+ ADJUST_NUNITS (RVVM1x##NF##DF, \
+ riscv_v_adjust_nunits (RVVM1x##NF##DFmode, false, 1, NF)); \
+ \
+ ADJUST_ALIGNMENT (RVVMF8x##NF##QI, 1); \
+ ADJUST_ALIGNMENT (RVVMF4x##NF##QI, 1); \
+ ADJUST_ALIGNMENT (RVVMF2x##NF##QI, 1); \
+ ADJUST_ALIGNMENT (RVVM1x##NF##QI, 1); \
+ ADJUST_ALIGNMENT (RVVMF4x##NF##HI, 2); \
+ ADJUST_ALIGNMENT (RVVMF2x##NF##HI, 2); \
+ ADJUST_ALIGNMENT (RVVM1x##NF##HI, 2); \
+ ADJUST_ALIGNMENT (RVVMF4x##NF##HF, 2); \
+ ADJUST_ALIGNMENT (RVVMF2x##NF##HF, 2); \
+ ADJUST_ALIGNMENT (RVVM1x##NF##HF, 2); \
+ ADJUST_ALIGNMENT (RVVMF2x##NF##SI, 4); \
+ ADJUST_ALIGNMENT (RVVM1x##NF##SI, 4); \
+ ADJUST_ALIGNMENT (RVVMF2x##NF##SF, 4); \
+ ADJUST_ALIGNMENT (RVVM1x##NF##SF, 4); \
+ ADJUST_ALIGNMENT (RVVM1x##NF##DI, 8); \
+ ADJUST_ALIGNMENT (RVVM1x##NF##DF, 8);
+
+RVV_NF8_MODES (8)
+RVV_NF8_MODES (7)
+RVV_NF8_MODES (6)
+RVV_NF8_MODES (5)
+RVV_NF8_MODES (4)
+RVV_NF8_MODES (3)
+RVV_NF8_MODES (2)
+
+#define RVV_NF4_MODES(NF) \
+ VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, QI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, HI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, HF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, SI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, SF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, DI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, DF, NF, 1); \
\
- ADJUST_NUNITS (VB##QI, riscv_v_adjust_nunits (VB##QI##mode, NVECS * 8)); \
- ADJUST_NUNITS (VH##HI, riscv_v_adjust_nunits (VH##HI##mode, NVECS * 4)); \
- ADJUST_NUNITS (VS##SI, riscv_v_adjust_nunits (VS##SI##mode, NVECS * 2)); \
- ADJUST_NUNITS (VD##DI, riscv_v_adjust_nunits (VD##DI##mode, NVECS)); \
- ADJUST_NUNITS (VH##HF, riscv_v_adjust_nunits (VH##HF##mode, NVECS * 4)); \
- ADJUST_NUNITS (VS##SF, riscv_v_adjust_nunits (VS##SF##mode, NVECS * 2)); \
- ADJUST_NUNITS (VD##DF, riscv_v_adjust_nunits (VD##DF##mode, NVECS)); \
+ ADJUST_NUNITS (RVVM2x##NF##QI, \
+ riscv_v_adjust_nunits (RVVM2x##NF##QImode, false, 2, NF)); \
+ ADJUST_NUNITS (RVVM2x##NF##HI, \
+ riscv_v_adjust_nunits (RVVM2x##NF##HImode, false, 2, NF)); \
+ ADJUST_NUNITS (RVVM2x##NF##HF, \
+ riscv_v_adjust_nunits (RVVM2x##NF##HFmode, false, 2, NF)); \
+ ADJUST_NUNITS (RVVM2x##NF##SI, \
+ riscv_v_adjust_nunits (RVVM2x##NF##SImode, false, 2, NF)); \
+ ADJUST_NUNITS (RVVM2x##NF##SF, \
+ riscv_v_adjust_nunits (RVVM2x##NF##SFmode, false, 2, NF)); \
+ ADJUST_NUNITS (RVVM2x##NF##DI, \
+ riscv_v_adjust_nunits (RVVM2x##NF##DImode, false, 2, NF)); \
+ ADJUST_NUNITS (RVVM2x##NF##DF, \
+ riscv_v_adjust_nunits (RVVM2x##NF##DFmode, false, 2, NF)); \
\
- ADJUST_ALIGNMENT (VB##QI, 1); \
- ADJUST_ALIGNMENT (VH##HI, 2); \
- ADJUST_ALIGNMENT (VS##SI, 4); \
- ADJUST_ALIGNMENT (VD##DI, 8); \
- ADJUST_ALIGNMENT (VH##HF, 2); \
- ADJUST_ALIGNMENT (VS##SF, 4); \
- ADJUST_ALIGNMENT (VD##DF, 8);
-
-RVV_MODES (1, VNx8, VNx4, VNx2, VNx1)
-RVV_MODES (2, VNx16, VNx8, VNx4, VNx2)
-RVV_MODES (4, VNx32, VNx16, VNx8, VNx4)
-RVV_MODES (8, VNx64, VNx32, VNx16, VNx8)
-RVV_MODES (16, VNx128, VNx64, VNx32, VNx16)
-
-VECTOR_MODES_WITH_PREFIX (VNx, INT, 4, 0);
-VECTOR_MODES_WITH_PREFIX (VNx, FLOAT, 4, 0);
-ADJUST_NUNITS (VNx4QI, riscv_v_adjust_nunits (VNx4QImode, 4));
-ADJUST_NUNITS (VNx2HI, riscv_v_adjust_nunits (VNx2HImode, 2));
-ADJUST_NUNITS (VNx2HF, riscv_v_adjust_nunits (VNx2HFmode, 2));
-ADJUST_ALIGNMENT (VNx4QI, 1);
-ADJUST_ALIGNMENT (VNx2HI, 2);
-ADJUST_ALIGNMENT (VNx2HF, 2);
-
-/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
- So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1SImode and VNx1SFmode. */
-VECTOR_MODE_WITH_PREFIX (VNx, INT, SI, 1, 0);
-VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, SF, 1, 0);
-ADJUST_NUNITS (VNx1SI, riscv_v_adjust_nunits (VNx1SImode, 1));
-ADJUST_NUNITS (VNx1SF, riscv_v_adjust_nunits (VNx1SFmode, 1));
-ADJUST_ALIGNMENT (VNx1SI, 4);
-ADJUST_ALIGNMENT (VNx1SF, 4);
-
-VECTOR_MODES_WITH_PREFIX (VNx, INT, 2, 0);
-ADJUST_NUNITS (VNx2QI, riscv_v_adjust_nunits (VNx2QImode, 2));
-ADJUST_ALIGNMENT (VNx2QI, 1);
-
-/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
- So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1HImode and VNx1HFmode. */
-VECTOR_MODE_WITH_PREFIX (VNx, INT, HI, 1, 0);
-VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, HF, 1, 0);
-ADJUST_NUNITS (VNx1HI, riscv_v_adjust_nunits (VNx1HImode, 1));
-ADJUST_NUNITS (VNx1HF, riscv_v_adjust_nunits (VNx1HFmode, 1));
-ADJUST_ALIGNMENT (VNx1HI, 2);
-ADJUST_ALIGNMENT (VNx1HF, 2);
-
-/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
- So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1QImode. */
-VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 1, 0);
-ADJUST_NUNITS (VNx1QI, riscv_v_adjust_nunits (VNx1QImode, 1));
-ADJUST_ALIGNMENT (VNx1QI, 1);
-
-/* Tuple modes for segment loads/stores according to NF, NF value can be 2 ~ 8. */
-
-/*
- | Mode | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64 | MIN_VLEN=128 | MIN_VLEN=128 |
- | | LMUL | SEW/LMUL | LMUL | SEW/LMUL | LMUL | SEW/LMUL |
- | VNxNFx1QI | MF4 | 32 | MF8 | 64 | N/A | N/A |
- | VNxNFx2QI | MF2 | 16 | MF4 | 32 | MF8 | 64 |
- | VNxNFx4QI | M1 | 8 | MF2 | 16 | MF4 | 32 |
- | VNxNFx8QI | M2 | 4 | M1 | 8 | MF2 | 16 |
- | VNxNFx16QI | M4 | 2 | M2 | 4 | M1 | 8 |
- | VNxNFx32QI | M8 | 1 | M4 | 2 | M2 | 4 |
- | VNxNFx64QI | N/A | N/A | M8 | 1 | M4 | 2 |
- | VNxNFx128QI | N/A | N/A | N/A | N/A | M8 | 1 |
- | VNxNFx1(HI|HF) | MF2 | 32 | MF4 | 64 | N/A | N/A |
- | VNxNFx2(HI|HF) | M1 | 16 | MF2 | 32 | MF4 | 64 |
- | VNxNFx4(HI|HF) | M2 | 8 | M1 | 16 | MF2 | 32 |
- | VNxNFx8(HI|HF) | M4 | 4 | M2 | 8 | M1 | 16 |
- | VNxNFx16(HI|HF)| M8 | 2 | M4 | 4 | M2 | 8 |
- | VNxNFx32(HI|HF)| N/A | N/A | M8 | 2 | M4 | 4 |
- | VNxNFx64(HI|HF)| N/A | N/A | N/A | N/A | M8 | 2 |
- | VNxNFx1(SI|SF) | M1 | 32 | MF2 | 64 | MF2 | 64 |
- | VNxNFx2(SI|SF) | M2 | 16 | M1 | 32 | M1 | 32 |
- | VNxNFx4(SI|SF) | M4 | 8 | M2 | 16 | M2 | 16 |
- | VNxNFx8(SI|SF) | M8 | 4 | M4 | 8 | M4 | 8 |
- | VNxNFx16(SI|SF)| N/A | N/A | M8 | 4 | M8 | 4 |
- | VNxNFx1(DI|DF) | N/A | N/A | M1 | 64 | N/A | N/A |
- | VNxNFx2(DI|DF) | N/A | N/A | M2 | 32 | M1 | 64 |
- | VNxNFx4(DI|DF) | N/A | N/A | M4 | 16 | M2 | 32 |
- | VNxNFx8(DI|DF) | N/A | N/A | M8 | 8 | M4 | 16 |
- | VNxNFx16(DI|DF)| N/A | N/A | N/A | N/A | M8 | 8 |
-*/
-
-#define RVV_TUPLE_MODES(NBYTES, NSUBPARTS, VB, VH, VS, VD) \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, NBYTES, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, NBYTES / 2, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, NBYTES / 2, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, NBYTES / 4, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, NBYTES / 4, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, DI, NBYTES / 8, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, DF, NBYTES / 8, 1); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x##VB##QI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VB##QI##mode, \
- VB * NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HI##mode, \
- VH * NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SI##mode, \
- VS * NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DI##mode, \
- VD * NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HF, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HF##mode, \
- VH * NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SF, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SF##mode, \
- VS * NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DF, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DF##mode, \
- VD * NSUBPARTS)); \
+ ADJUST_ALIGNMENT (RVVM2x##NF##QI, 1); \
+ ADJUST_ALIGNMENT (RVVM2x##NF##HI, 2); \
+ ADJUST_ALIGNMENT (RVVM2x##NF##HF, 2); \
+ ADJUST_ALIGNMENT (RVVM2x##NF##SI, 4); \
+ ADJUST_ALIGNMENT (RVVM2x##NF##SF, 4); \
+ ADJUST_ALIGNMENT (RVVM2x##NF##DI, 8); \
+ ADJUST_ALIGNMENT (RVVM2x##NF##DF, 8);
+
+RVV_NF4_MODES (2)
+RVV_NF4_MODES (3)
+RVV_NF4_MODES (4)
+
+#define RVV_NF2_MODES(NF) \
+ VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, QI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, HI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, HF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, SI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, SF, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, DI, NF, 1); \
+ VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, DF, NF, 1); \
\
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VB##QI, 1); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HI, 2); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SI, 4); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DI, 8); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HF, 2); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SF, 4); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DF, 8);
-
-RVV_TUPLE_MODES (8, 2, 8, 4, 2, 1)
-RVV_TUPLE_MODES (8, 3, 8, 4, 2, 1)
-RVV_TUPLE_MODES (8, 4, 8, 4, 2, 1)
-RVV_TUPLE_MODES (8, 5, 8, 4, 2, 1)
-RVV_TUPLE_MODES (8, 6, 8, 4, 2, 1)
-RVV_TUPLE_MODES (8, 7, 8, 4, 2, 1)
-RVV_TUPLE_MODES (8, 8, 8, 4, 2, 1)
-
-RVV_TUPLE_MODES (16, 2, 16, 8, 4, 2)
-RVV_TUPLE_MODES (16, 3, 16, 8, 4, 2)
-RVV_TUPLE_MODES (16, 4, 16, 8, 4, 2)
-RVV_TUPLE_MODES (16, 5, 16, 8, 4, 2)
-RVV_TUPLE_MODES (16, 6, 16, 8, 4, 2)
-RVV_TUPLE_MODES (16, 7, 16, 8, 4, 2)
-RVV_TUPLE_MODES (16, 8, 16, 8, 4, 2)
-
-RVV_TUPLE_MODES (32, 2, 32, 16, 8, 4)
-RVV_TUPLE_MODES (32, 3, 32, 16, 8, 4)
-RVV_TUPLE_MODES (32, 4, 32, 16, 8, 4)
-
-RVV_TUPLE_MODES (64, 2, 64, 32, 16, 8)
-
-#define RVV_TUPLE_PARTIAL_MODES(NSUBPARTS) \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 1, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 1, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, 1, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, 1, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, 1, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 2, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 2, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, 2, 1); \
- VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 4, 1); \
+ ADJUST_NUNITS (RVVM4x##NF##QI, \
+ riscv_v_adjust_nunits (RVVM4x##NF##QImode, false, 4, NF)); \
+ ADJUST_NUNITS (RVVM4x##NF##HI, \
+ riscv_v_adjust_nunits (RVVM4x##NF##HImode, false, 4, NF)); \
+ ADJUST_NUNITS (RVVM4x##NF##HF, \
+ riscv_v_adjust_nunits (RVVM4x##NF##HFmode, false, 4, NF)); \
+ ADJUST_NUNITS (RVVM4x##NF##SI, \
+ riscv_v_adjust_nunits (RVVM4x##NF##SImode, false, 4, NF)); \
+ ADJUST_NUNITS (RVVM4x##NF##SF, \
+ riscv_v_adjust_nunits (RVVM4x##NF##SFmode, false, 4, NF)); \
+ ADJUST_NUNITS (RVVM4x##NF##DI, \
+ riscv_v_adjust_nunits (RVVM4x##NF##DImode, false, 4, NF)); \
+ ADJUST_NUNITS (RVVM4x##NF##DF, \
+ riscv_v_adjust_nunits (RVVM4x##NF##DFmode, false, 4, NF)); \
\
- ADJUST_NUNITS (VNx##NSUBPARTS##x1QI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x1QI##mode, \
- NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x1HI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HI##mode, \
- NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x1HF, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HF##mode, \
- NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x1SI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SI##mode, \
- NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x1SF, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SF##mode, \
- NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x2QI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x2QI##mode, \
- 2 * NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x2HI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HI##mode, \
- 2 * NSUBPARTS)); \
-ADJUST_NUNITS (VNx##NSUBPARTS##x2HF, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HF##mode, \
- 2 * NSUBPARTS)); \
- ADJUST_NUNITS (VNx##NSUBPARTS##x4QI, \
- riscv_v_adjust_nunits (VNx##NSUBPARTS##x4QI##mode, \
- 4 * NSUBPARTS)); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1QI, 1); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HI, 2); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HF, 2); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SI, 4); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SF, 4); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2QI, 1); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HI, 2); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HF, 2); \
- ADJUST_ALIGNMENT (VNx##NSUBPARTS##x4QI, 1);
-
-RVV_TUPLE_PARTIAL_MODES (2)
-RVV_TUPLE_PARTIAL_MODES (3)
-RVV_TUPLE_PARTIAL_MODES (4)
-RVV_TUPLE_PARTIAL_MODES (5)
-RVV_TUPLE_PARTIAL_MODES (6)
-RVV_TUPLE_PARTIAL_MODES (7)
-RVV_TUPLE_PARTIAL_MODES (8)
+ ADJUST_ALIGNMENT (RVVM4x##NF##QI, 1); \
+ ADJUST_ALIGNMENT (RVVM4x##NF##HI, 2); \
+ ADJUST_ALIGNMENT (RVVM4x##NF##HF, 2); \
+ ADJUST_ALIGNMENT (RVVM4x##NF##SI, 4); \
+ ADJUST_ALIGNMENT (RVVM4x##NF##SF, 4); \
+ ADJUST_ALIGNMENT (RVVM4x##NF##DI, 8); \
+ ADJUST_ALIGNMENT (RVVM4x##NF##DF, 8);
+
+RVV_NF2_MODES (2)
/* TODO: According to the RISC-V 'V' ISA spec, the maximum vector length can
be 65536 for a single vector register, which means the vector mode in
@@ -1550,37 +1550,20 @@ legitimize_move (rtx dest, rtx src)
/* VTYPE information for machine_mode. */
struct mode_vtype_group
{
- enum vlmul_type vlmul_for_min_vlen32[NUM_MACHINE_MODES];
- uint8_t ratio_for_min_vlen32[NUM_MACHINE_MODES];
- enum vlmul_type vlmul_for_min_vlen64[NUM_MACHINE_MODES];
- uint8_t ratio_for_min_vlen64[NUM_MACHINE_MODES];
- enum vlmul_type vlmul_for_for_vlen128[NUM_MACHINE_MODES];
- uint8_t ratio_for_for_vlen128[NUM_MACHINE_MODES];
+ enum vlmul_type vlmul[NUM_MACHINE_MODES];
+ uint8_t ratio[NUM_MACHINE_MODES];
machine_mode subpart_mode[NUM_MACHINE_MODES];
uint8_t nf[NUM_MACHINE_MODES];
mode_vtype_group ()
{
-#define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32, RATIO_FOR_MIN_VLEN32, \
- VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64, \
- VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128) \
- vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32; \
- ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32; \
- vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64; \
- ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64; \
- vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128; \
- ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
-#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
- RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64, \
- RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128, \
- RATIO_FOR_MIN_VLEN128) \
+#define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO) \
+ vlmul[MODE##mode] = VLMUL; \
+ ratio[MODE##mode] = RATIO;
+#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL, RATIO) \
subpart_mode[MODE##mode] = SUBPART_MODE##mode; \
nf[MODE##mode] = NF; \
- vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32; \
- ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32; \
- vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64; \
- ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64; \
- vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128; \
- ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
+ vlmul[MODE##mode] = VLMUL; \
+ ratio[MODE##mode] = RATIO;
#include "riscv-vector-switch.def"
#undef ENTRY
#undef TUPLE_ENTRY
@@ -1593,12 +1576,7 @@ static mode_vtype_group mode_vtype_infos;
enum vlmul_type
get_vlmul (machine_mode mode)
{
- if (TARGET_MIN_VLEN >= 128)
- return mode_vtype_infos.vlmul_for_for_vlen128[mode];
- else if (TARGET_MIN_VLEN == 32)
- return mode_vtype_infos.vlmul_for_min_vlen32[mode];
- else
- return mode_vtype_infos.vlmul_for_min_vlen64[mode];
+ return mode_vtype_infos.vlmul[mode];
}
/* Return the NF value of the corresponding mode. */
@@ -1610,8 +1588,8 @@ get_nf (machine_mode mode)
return mode_vtype_infos.nf[mode];
}
-/* Return the subpart mode of the tuple mode. For VNx2x1SImode,
- the subpart mode is VNx1SImode. This will help to build
+/* Return the subpart mode of the tuple mode. For RVVM2x2SImode,
+   the subpart mode is RVVM2SImode. This helps to build the
array/struct types in the builtins. */
machine_mode
get_subpart_mode (machine_mode mode)
@@ -1625,12 +1603,7 @@ get_subpart_mode (machine_mode mode)
unsigned int
get_ratio (machine_mode mode)
{
- if (TARGET_MIN_VLEN >= 128)
- return mode_vtype_infos.ratio_for_for_vlen128[mode];
- else if (TARGET_MIN_VLEN == 32)
- return mode_vtype_infos.ratio_for_min_vlen32[mode];
- else
- return mode_vtype_infos.ratio_for_min_vlen64[mode];
+ return mode_vtype_infos.ratio[mode];
}
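
As a quick sanity check of the flattened tables (the LMUL_F4 enumerator name
is taken from the existing vlmul_type enum; treat the snippet as an assumed
example rather than patch output), RVVMF4HImode has SEW = 16 and LMUL = 1/4,
so:

enum vlmul_type vl = get_vlmul (RVVMF4HImode); /* LMUL_F4                */
unsigned int r = get_ratio (RVVMF4HImode);     /* SEW/LMUL = 16 * 4 = 64 */

and the answers no longer depend on TARGET_MIN_VLEN, which is exactly what
dropping the per-VLEN arrays buys.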
/* Get ta according to operand[tail_op_idx]. */
@@ -2171,12 +2144,12 @@ preferred_simd_mode (scalar_mode mode)
/* We will disable auto-vectorization when TARGET_MIN_VLEN < 128 &&
riscv_autovec_lmul < RVV_M2, since the GCC loop vectorizer reports an ICE
when we enable -march=rv64gc_zve32* and -march=rv32gc_zve64* in the
- 'can_duplicate_and_interleave_p' of tree-vect-slp.cc. Since we have
- VNx1SImode in -march=*zve32* and VNx1DImode in -march=*zve64*, they are
- enabled in targetm. vector_mode_supported_p and SLP vectorizer will try to
- use them. Currently, we can support auto-vectorization in
- -march=rv32_zve32x_zvl128b. Wheras, -march=rv32_zve32x_zvl32b or
- -march=rv32_zve32x_zvl64b are disabled. */
+   'can_duplicate_and_interleave_p' of tree-vect-slp.cc. Since both
+   RVVM1SImode in -march=*zve32*_zvl32b and RVVM1DImode in
+   -march=*zve64*_zvl64b are NUNITS = poly (1, 1), they will cause an ICE
+   in the loop vectorizer when we enable them in this target hook.
+   Currently, we can support auto-vectorization in
+   -march=rv32_zve32x_zvl128b, whereas -march=rv32_zve32x_zvl32b and
+   -march=rv32_zve32x_zvl64b are disabled. */
if (autovec_use_vlmax_p ())
{
if (TARGET_MIN_VLEN < 128 && riscv_autovec_lmul < RVV_M2)
@@ -2371,9 +2344,9 @@ autovectorize_vector_modes (vector_modes *modes, bool)
poly_uint64 full_size
= BYTES_PER_RISCV_VECTOR * ((int) riscv_autovec_lmul);
- /* Start with a VNxYYQImode where YY is the number of units that
+ /* Start with a RVV<LMUL>QImode where LMUL is the number of units that
fit a whole vector.
- Then try YY = nunits / 2, nunits / 4 and nunits / 8 which
+ Then try LMUL = nunits / 2, nunits / 4 and nunits / 8 which
is guided by the extensions we have available (vf2, vf4 and vf8).
- full_size: Try using full vectors for all element types.
@@ -109,10 +109,8 @@ const char *const operand_suffixes[NUM_OP_TYPES] = {
/* Static information about type suffix for each RVV type. */
const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
-#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
- VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
- VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX, \
- VSETVL_SUFFIX) \
+#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
+ VECTOR_SUFFIX, SCALAR_SUFFIX, VSETVL_SUFFIX) \
{#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE, \
NF, VECTOR_SUFFIX) \
@@ -2802,12 +2800,9 @@ register_builtin_types ()
tree int64_type_node = get_typenode_from_name (INT64_TYPE);
machine_mode mode;
-#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
- VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
- VECTOR_MODE_MIN_VLEN_32, ARGS...) \
- mode = TARGET_MIN_VLEN >= 128 ? VECTOR_MODE_MIN_VLEN_128##mode \
- : TARGET_MIN_VLEN >= 64 ? VECTOR_MODE_MIN_VLEN_64##mode \
- : VECTOR_MODE_MIN_VLEN_32##mode; \
+#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
+ ARGS...) \
+ mode = VECTOR_MODE##mode; \
register_builtin_type (VECTOR_TYPE_##NAME, SCALAR_TYPE##_type_node, mode);
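
With a single VECTOR_MODE argument, an entry such as the vint32m1_t
definition further below now expands to just:

mode = RVVM1SImode;
register_builtin_type (VECTOR_TYPE_vint32m1_t, int32_type_node, mode);

instead of the three-way TARGET_MIN_VLEN selection the removed lines show.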
#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE, \
NF, VECTOR_SUFFIX) \
@@ -28,24 +28,19 @@ along with GCC; see the file COPYING3. If not see
"build_vector_type_for_mode". For "vint32m1_t", we use "intSI_type_node" in
RV64. Otherwise, we use "long_integer_type_node".
5.The 'VECTOR_MODE' is the machine mode of the corresponding RVV type used
- in "build_vector_type_for_mode" when TARGET_MIN_VLEN > 32.
- For example: VECTOR_MODE = VNx2SI for "vint32m1_t".
- 6.The 'VECTOR_MODE_MIN_VLEN_32' is the machine modes of corresponding RVV
- type used in "build_vector_type_for_mode" when TARGET_MIN_VLEN = 32. For
- example: VECTOR_MODE_MIN_VLEN_32 = VNx1SI for "vint32m1_t".
- 7.The 'VECTOR_SUFFIX' define mode suffix for vector type.
+ in "build_vector_type_for_mode".
+   For example: VECTOR_MODE = RVVM1SI for "vint32m1_t".
+   6.The 'VECTOR_SUFFIX' defines the mode suffix for the vector type.
For example: type_suffixes[VECTOR_TYPE_vint32m1_t].vector = i32m1.
- 8.The 'SCALAR_SUFFIX' define mode suffix for scalar type.
+   7.The 'SCALAR_SUFFIX' defines the mode suffix for the scalar type.
For example: type_suffixes[VECTOR_TYPE_vint32m1_t].scalar = i32.
- 9.The 'VSETVL_SUFFIX' define mode suffix for vsetvli instruction.
+   8.The 'VSETVL_SUFFIX' defines the mode suffix for the vsetvli instruction.
For example: type_suffixes[VECTOR_TYPE_vint32m1_t].vsetvl = e32m1.
*/
#ifndef DEF_RVV_TYPE
-#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
- VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
- VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX, \
- VSETVL_SUFFIX)
+#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
+ VECTOR_SUFFIX, SCALAR_SUFFIX, VSETVL_SUFFIX)
#endif
#ifndef DEF_RVV_TUPLE_TYPE
@@ -101,47 +96,34 @@ along with GCC; see the file COPYING3. If not see
/* SEW/LMUL = 64:
Only enable when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1BImode when TARGET_MIN_VLEN < 128.
- Machine mode = VNx2BImode when TARGET_MIN_VLEN >= 128. */
-DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx2BI, VNx1BI, VOID, _b64, , )
+ Machine mode = RVVMF64BImode. */
+DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, RVVMF64BI, _b64, , )
/* SEW/LMUL = 32:
- Machine mode = VNx2BImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1BImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx4BI, VNx2BI, VNx1BI, _b32, , )
+ Machine mode = RVVMF32BImode. */
+DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, RVVMF32BI, _b32, , )
/* SEW/LMUL = 16:
- Machine mode = VNx8BImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx2BImode when TARGET_MIN_VLEN = 32.
- Machine mode = VNx4BImode when TARGET_MIN_VLEN > 32. */
-DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx8BI, VNx4BI, VNx2BI, _b16, , )
+ Machine mode = RVVMF16BImode. */
+DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, RVVMF16BI, _b16, , )
/* SEW/LMUL = 8:
- Machine mode = VNx16BImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx8BImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx4BImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx16BI, VNx8BI, VNx4BI, _b8, , )
+ Machine mode = RVVMF8BImode. */
+DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, RVVMF8BI, _b8, , )
/* SEW/LMUL = 4:
- Machine mode = VNx32BImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx16BImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx8BImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx32BI, VNx16BI, VNx8BI, _b4, , )
+ Machine mode = RVVMF4BImode. */
+DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, RVVMF4BI, _b4, , )
/* SEW/LMUL = 2:
- Machine mode = VNx64BImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx32BImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx16BImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx64BI, VNx32BI, VNx16BI, _b2, , )
+ Machine mode = RVVMF2BImode. */
+DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, RVVMF2BI, _b2, , )
/* SEW/LMUL = 1:
- Machine mode = VNx128BImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx64BImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx32BImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx128BI, VNx64BI, VNx32BI, _b1, , )
+ Machine mode = RVVM1BImode. */
+DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, RVVM1BI, _b1, , )
/* LMUL = 1/8:
Only enable when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1QImode when TARGET_MIN_VLEN < 128.
- Machine mode = VNx2QImode when TARGET_MIN_VLEN >= 128. */
-DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, VNx2QI, VNx1QI, VOID, _i8mf8, _i8,
+ Machine mode = RVVMF8QImode. */
+DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, RVVMF8QI, _i8mf8, _i8,
+ _e8mf8)
+DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, RVVMF8QI, _u8mf8, _u8,
_e8mf8)
-DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, VNx2QI, VNx1QI, VOID, _u8mf8,
- _u8, _e8mf8)
/* Define tuple types for SEW = 8, LMUL = MF8. */
DEF_RVV_TUPLE_TYPE (vint8mf8x2_t, 17, __rvv_int8mf8x2_t, vint8mf8_t, int8, 2, _i8mf8x2)
DEF_RVV_TUPLE_TYPE (vuint8mf8x2_t, 18, __rvv_uint8mf8x2_t, vuint8mf8_t, uint8, 2, _u8mf8x2)
@@ -158,13 +140,11 @@ DEF_RVV_TUPLE_TYPE (vuint8mf8x7_t, 18, __rvv_uint8mf8x7_t, vuint8mf8_t, uint8, 7
DEF_RVV_TUPLE_TYPE (vint8mf8x8_t, 17, __rvv_int8mf8x8_t, vint8mf8_t, int8, 8, _i8mf8x8)
DEF_RVV_TUPLE_TYPE (vuint8mf8x8_t, 18, __rvv_uint8mf8x8_t, vuint8mf8_t, uint8, 8, _u8mf8x8)
/* LMUL = 1/4:
- Machine mode = VNx4QImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, VNx4QI, VNx2QI, VNx1QI, _i8mf4,
- _i8, _e8mf4)
-DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, VNx4QI, VNx2QI, VNx1QI, _u8mf4,
- _u8, _e8mf4)
+ Machine mode = RVVMF4QImode. */
+DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, RVVMF4QI, _i8mf4, _i8,
+ _e8mf4)
+DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, RVVMF4QI, _u8mf4, _u8,
+ _e8mf4)
/* Define tuple types for SEW = 8, LMUL = MF4. */
DEF_RVV_TUPLE_TYPE (vint8mf4x2_t, 17, __rvv_int8mf4x2_t, vint8mf4_t, int8, 2, _i8mf4x2)
DEF_RVV_TUPLE_TYPE (vuint8mf4x2_t, 18, __rvv_uint8mf4x2_t, vuint8mf4_t, uint8, 2, _u8mf4x2)
@@ -181,13 +161,11 @@ DEF_RVV_TUPLE_TYPE (vuint8mf4x7_t, 18, __rvv_uint8mf4x7_t, vuint8mf4_t, uint8, 7
DEF_RVV_TUPLE_TYPE (vint8mf4x8_t, 17, __rvv_int8mf4x8_t, vint8mf4_t, int8, 8, _i8mf4x8)
DEF_RVV_TUPLE_TYPE (vuint8mf4x8_t, 18, __rvv_uint8mf4x8_t, vuint8mf4_t, uint8, 8, _u8mf4x8)
/* LMUL = 1/2:
- Machine mode = VNx8QImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, VNx8QI, VNx4QI, VNx2QI, _i8mf2,
- _i8, _e8mf2)
-DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, VNx8QI, VNx4QI, VNx2QI, _u8mf2,
- _u8, _e8mf2)
+ Machine mode = RVVMF2QImode. */
+DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, RVVMF2QI, _i8mf2, _i8,
+ _e8mf2)
+DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, RVVMF2QI, _u8mf2, _u8,
+ _e8mf2)
/* Define tuple types for SEW = 8, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vint8mf2x2_t, 17, __rvv_int8mf2x2_t, vint8mf2_t, int8, 2, _i8mf2x2)
DEF_RVV_TUPLE_TYPE (vuint8mf2x2_t, 18, __rvv_uint8mf2x2_t, vuint8mf2_t, uint8, 2, _u8mf2x2)
@@ -204,13 +182,10 @@ DEF_RVV_TUPLE_TYPE (vuint8mf2x7_t, 18, __rvv_uint8mf2x7_t, vuint8mf2_t, uint8, 7
DEF_RVV_TUPLE_TYPE (vint8mf2x8_t, 17, __rvv_int8mf2x8_t, vint8mf2_t, int8, 8, _i8mf2x8)
DEF_RVV_TUPLE_TYPE (vuint8mf2x8_t, 18, __rvv_uint8mf2x8_t, vuint8mf2_t, uint8, 8, _u8mf2x8)
/* LMUL = 1:
- Machine mode = VNx16QImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, VNx16QI, VNx8QI, VNx4QI, _i8m1, _i8,
+ Machine mode = RVVM1QImode. */
+DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, RVVM1QI, _i8m1, _i8, _e8m1)
+DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, RVVM1QI, _u8m1, _u8,
_e8m1)
-DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, VNx16QI, VNx8QI, VNx4QI, _u8m1,
- _u8, _e8m1)
/* Define tuple types for SEW = 8, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vint8m1x2_t, 16, __rvv_int8m1x2_t, vint8m1_t, int8, 2, _i8m1x2)
DEF_RVV_TUPLE_TYPE (vuint8m1x2_t, 17, __rvv_uint8m1x2_t, vuint8m1_t, uint8, 2, _u8m1x2)
@@ -227,13 +202,10 @@ DEF_RVV_TUPLE_TYPE (vuint8m1x7_t, 17, __rvv_uint8m1x7_t, vuint8m1_t, uint8, 7, _
DEF_RVV_TUPLE_TYPE (vint8m1x8_t, 16, __rvv_int8m1x8_t, vint8m1_t, int8, 8, _i8m1x8)
DEF_RVV_TUPLE_TYPE (vuint8m1x8_t, 17, __rvv_uint8m1x8_t, vuint8m1_t, uint8, 8, _u8m1x8)
/* LMUL = 2:
- Machine mode = VNx32QImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, VNx32QI, VNx16QI, VNx8QI, _i8m2, _i8,
+ Machine mode = RVVM2QImode. */
+DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, RVVM2QI, _i8m2, _i8, _e8m2)
+DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, RVVM2QI, _u8m2, _u8,
_e8m2)
-DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, VNx32QI, VNx16QI, VNx8QI, _u8m2,
- _u8, _e8m2)
/* Define tuple types for SEW = 8, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vint8m2x2_t, 16, __rvv_int8m2x2_t, vint8m2_t, int8, 2, _i8m2x2)
DEF_RVV_TUPLE_TYPE (vuint8m2x2_t, 17, __rvv_uint8m2x2_t, vuint8m2_t, uint8, 2, _u8m2x2)
@@ -242,33 +214,26 @@ DEF_RVV_TUPLE_TYPE (vuint8m2x3_t, 17, __rvv_uint8m2x3_t, vuint8m2_t, uint8, 3, _
DEF_RVV_TUPLE_TYPE (vint8m2x4_t, 16, __rvv_int8m2x4_t, vint8m2_t, int8, 4, _i8m2x4)
DEF_RVV_TUPLE_TYPE (vuint8m2x4_t, 17, __rvv_uint8m2x4_t, vuint8m2_t, uint8, 4, _u8m2x4)
/* LMUL = 4:
- Machine mode = VNx64QImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, VNx64QI, VNx32QI, VNx16QI, _i8m4, _i8,
+ Machine mode = RVVM4QImode. */
+DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, RVVM4QI, _i8m4, _i8, _e8m4)
+DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, RVVM4QI, _u8m4, _u8,
_e8m4)
-DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, VNx64QI, VNx32QI, VNx16QI, _u8m4,
- _u8, _e8m4)
/* Define tuple types for SEW = 8, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vint8m4x2_t, 16, __rvv_int8m4x2_t, vint8m4_t, int8, 2, _i8m4x2)
DEF_RVV_TUPLE_TYPE (vuint8m4x2_t, 17, __rvv_uint8m4x2_t, vuint8m4_t, uint8, 2, _u8m4x2)
/* LMUL = 8:
- Machine mode = VNx128QImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, VNx128QI, VNx64QI, VNx32QI, _i8m8, _i8,
+ Machine mode = RVVM8QImode. */
+DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, RVVM8QI, _i8m8, _i8, _e8m8)
+DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, RVVM8QI, _u8m8, _u8,
_e8m8)
-DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, VNx128QI, VNx64QI, VNx32QI, _u8m8,
- _u8, _e8m8)
/* LMUL = 1/4:
Only enable when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1HImode when TARGET_MIN_VLEN < 128.
- Machine mode = VNx2HImode when TARGET_MIN_VLEN >= 128. */
-DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, VNx2HI, VNx1HI, VOID, _i16mf4,
- _i16, _e16mf4)
-DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, VNx2HI, VNx1HI, VOID,
- _u16mf4, _u16, _e16mf4)
+ Machine mode = RVVMF4HImode. */
+DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, RVVMF4HI, _i16mf4, _i16,
+ _e16mf4)
+DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, RVVMF4HI, _u16mf4,
+ _u16, _e16mf4)
/* Define tuple types for SEW = 16, LMUL = MF4. */
DEF_RVV_TUPLE_TYPE (vint16mf4x2_t, 18, __rvv_int16mf4x2_t, vint16mf4_t, int16, 2, _i16mf4x2)
DEF_RVV_TUPLE_TYPE (vuint16mf4x2_t, 19, __rvv_uint16mf4x2_t, vuint16mf4_t, uint16, 2, _u16mf4x2)
@@ -285,13 +250,11 @@ DEF_RVV_TUPLE_TYPE (vuint16mf4x7_t, 19, __rvv_uint16mf4x7_t, vuint16mf4_t, uint1
DEF_RVV_TUPLE_TYPE (vint16mf4x8_t, 18, __rvv_int16mf4x8_t, vint16mf4_t, int16, 8, _i16mf4x8)
DEF_RVV_TUPLE_TYPE (vuint16mf4x8_t, 19, __rvv_uint16mf4x8_t, vuint16mf4_t, uint16, 8, _u16mf4x8)
/* LMUL = 1/2:
- Machine mode = VNx4HImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, VNx4HI, VNx2HI, VNx1HI, _i16mf2,
- _i16, _e16mf2)
-DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, VNx4HI, VNx2HI, VNx1HI,
- _u16mf2, _u16, _e16mf2)
+ Machine mode = RVVMF2HImode. */
+DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, RVVMF2HI, _i16mf2, _i16,
+ _e16mf2)
+DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, RVVMF2HI, _u16mf2,
+ _u16, _e16mf2)
/* Define tuple types for SEW = 16, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vint16mf2x2_t, 18, __rvv_int16mf2x2_t, vint16mf2_t, int16, 2, _i16mf2x2)
DEF_RVV_TUPLE_TYPE (vuint16mf2x2_t, 19, __rvv_uint16mf2x2_t, vuint16mf2_t, uint16, 2, _u16mf2x2)
@@ -308,13 +271,11 @@ DEF_RVV_TUPLE_TYPE (vuint16mf2x7_t, 19, __rvv_uint16mf2x7_t, vuint16mf2_t, uint1
DEF_RVV_TUPLE_TYPE (vint16mf2x8_t, 18, __rvv_int16mf2x8_t, vint16mf2_t, int16, 8, _i16mf2x8)
DEF_RVV_TUPLE_TYPE (vuint16mf2x8_t, 19, __rvv_uint16mf2x8_t, vuint16mf2_t, uint16, 8, _u16mf2x8)
/* LMUL = 1:
- Machine mode = VNx8HImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, VNx8HI, VNx4HI, VNx2HI, _i16m1,
- _i16, _e16m1)
-DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, VNx8HI, VNx4HI, VNx2HI, _u16m1,
- _u16, _e16m1)
+ Machine mode = RVVM1HImode. */
+DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, RVVM1HI, _i16m1, _i16,
+ _e16m1)
+DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, RVVM1HI, _u16m1, _u16,
+ _e16m1)
/* Define tuple types for SEW = 16, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vint16m1x2_t, 17, __rvv_int16m1x2_t, vint16m1_t, int16, 2, _i16m1x2)
DEF_RVV_TUPLE_TYPE (vuint16m1x2_t, 18, __rvv_uint16m1x2_t, vuint16m1_t, uint16, 2, _u16m1x2)
@@ -331,13 +292,11 @@ DEF_RVV_TUPLE_TYPE (vuint16m1x7_t, 18, __rvv_uint16m1x7_t, vuint16m1_t, uint16,
DEF_RVV_TUPLE_TYPE (vint16m1x8_t, 17, __rvv_int16m1x8_t, vint16m1_t, int16, 8, _i16m1x8)
DEF_RVV_TUPLE_TYPE (vuint16m1x8_t, 18, __rvv_uint16m1x8_t, vuint16m1_t, uint16, 8, _u16m1x8)
/* LMUL = 2:
- Machine mode = VNx16HImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, VNx16HI, VNx8HI, VNx4HI, _i16m2,
- _i16, _e16m2)
-DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, VNx16HI, VNx8HI, VNx4HI, _u16m2,
- _u16, _e16m2)
+   Machine mode = RVVM2HImode. */
+DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, RVVM2HI, _i16m2, _i16,
+ _e16m2)
+DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, RVVM2HI, _u16m2, _u16,
+ _e16m2)
/* Define tuple types for SEW = 16, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vint16m2x2_t, 17, __rvv_int16m2x2_t, vint16m2_t, int16, 2, _i16m2x2)
DEF_RVV_TUPLE_TYPE (vuint16m2x2_t, 18, __rvv_uint16m2x2_t, vuint16m2_t, uint16, 2, _u16m2x2)
@@ -346,33 +305,28 @@ DEF_RVV_TUPLE_TYPE (vuint16m2x3_t, 18, __rvv_uint16m2x3_t, vuint16m2_t, uint16,
DEF_RVV_TUPLE_TYPE (vint16m2x4_t, 17, __rvv_int16m2x4_t, vint16m2_t, int16, 4, _i16m2x4)
DEF_RVV_TUPLE_TYPE (vuint16m2x4_t, 18, __rvv_uint16m2x4_t, vuint16m2_t, uint16, 4, _u16m2x4)
/* LMUL = 4:
- Machine mode = VNx32HImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, VNx32HI, VNx16HI, VNx8HI, _i16m4,
- _i16, _e16m4)
-DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, VNx32HI, VNx16HI, VNx8HI,
- _u16m4, _u16, _e16m4)
+ Machine mode = RVVM4HImode. */
+DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, RVVM4HI, _i16m4, _i16,
+ _e16m4)
+DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, RVVM4HI, _u16m4, _u16,
+ _e16m4)
/* Define tuple types for SEW = 16, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vint16m4x2_t, 17, __rvv_int16m4x2_t, vint16m4_t, int16, 2, _i16m4x2)
DEF_RVV_TUPLE_TYPE (vuint16m4x2_t, 18, __rvv_uint16m4x2_t, vuint16m4_t, uint16, 2, _u16m4x2)
/* LMUL = 8:
- Machine mode = VNx64HImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, VNx64HI, VNx32HI, VNx16HI, _i16m8,
- _i16, _e16m8)
-DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, VNx64HI, VNx32HI, VNx16HI,
- _u16m8, _u16, _e16m8)
+ Machine mode = RVVM8HImode. */
+DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, RVVM8HI, _i16m8, _i16,
+ _e16m8)
+DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, RVVM8HI, _u16m8, _u16,
+ _e16m8)
/* LMUL = 1/2:
Only enable when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1SImode when TARGET_MIN_VLEN < 128.
- Machine mode = VNx2SImode when TARGET_MIN_VLEN >= 128. */
-DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx2SI, VNx1SI, VOID, _i32mf2,
- _i32, _e32mf2)
-DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, VNx2SI, VNx1SI, VOID,
- _u32mf2, _u32, _e32mf2)
+ Machine mode = RVVMF2SImode. */
+DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, RVVMF2SI, _i32mf2, _i32,
+ _e32mf2)
+DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, RVVMF2SI, _u32mf2,
+ _u32, _e32mf2)
/* Define tuple types for SEW = 32, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vint32mf2x2_t, 18, __rvv_int32mf2x2_t, vint32mf2_t, int32, 2, _i32mf2x2)
DEF_RVV_TUPLE_TYPE (vuint32mf2x2_t, 19, __rvv_uint32mf2x2_t, vuint32mf2_t, uint32, 2, _u32mf2x2)
@@ -389,13 +343,11 @@ DEF_RVV_TUPLE_TYPE (vuint32mf2x7_t, 19, __rvv_uint32mf2x7_t, vuint32mf2_t, uint3
DEF_RVV_TUPLE_TYPE (vint32mf2x8_t, 18, __rvv_int32mf2x8_t, vint32mf2_t, int32, 8, _i32mf2x8)
DEF_RVV_TUPLE_TYPE (vuint32mf2x8_t, 19, __rvv_uint32mf2x8_t, vuint32mf2_t, uint32, 8, _u32mf2x8)
/* LMUL = 1:
- Machine mode = VNx4SImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1SImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, VNx4SI, VNx2SI, VNx1SI, _i32m1,
- _i32, _e32m1)
-DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, uint32, VNx4SI, VNx2SI, VNx1SI, _u32m1,
- _u32, _e32m1)
+ Machine mode = RVVM1SImode. */
+DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, RVVM1SI, _i32m1, _i32,
+ _e32m1)
+DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, uint32, RVVM1SI, _u32m1, _u32,
+ _e32m1)
/* Define tuple types for SEW = 32, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vint32m1x2_t, 17, __rvv_int32m1x2_t, vint32m1_t, int32, 2, _i32m1x2)
DEF_RVV_TUPLE_TYPE (vuint32m1x2_t, 18, __rvv_uint32m1x2_t, vuint32m1_t, uint32, 2, _u32m1x2)
@@ -412,13 +364,11 @@ DEF_RVV_TUPLE_TYPE (vuint32m1x7_t, 18, __rvv_uint32m1x7_t, vuint32m1_t, uint32,
DEF_RVV_TUPLE_TYPE (vint32m1x8_t, 17, __rvv_int32m1x8_t, vint32m1_t, int32, 8, _i32m1x8)
DEF_RVV_TUPLE_TYPE (vuint32m1x8_t, 18, __rvv_uint32m1x8_t, vuint32m1_t, uint32, 8, _u32m1x8)
/* LMUL = 2:
- Machine mode = VNx8SImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx4SImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx2SImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, VNx8SI, VNx4SI, VNx2SI, _i32m2,
- _i32, _e32m2)
-DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, uint32, VNx8SI, VNx4SI, VNx2SI, _u32m2,
- _u32, _e32m2)
+ Machine mode = RVVM2SImode. */
+DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, RVVM2SI, _i32m2, _i32,
+ _e32m2)
+DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, uint32, RVVM2SI, _u32m2, _u32,
+ _e32m2)
/* Define tuple types for SEW = 32, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vint32m2x2_t, 17, __rvv_int32m2x2_t, vint32m2_t, int32, 2, _i32m2x2)
DEF_RVV_TUPLE_TYPE (vuint32m2x2_t, 18, __rvv_uint32m2x2_t, vuint32m2_t, uint32, 2, _u32m2x2)
@@ -427,31 +377,27 @@ DEF_RVV_TUPLE_TYPE (vuint32m2x3_t, 18, __rvv_uint32m2x3_t, vuint32m2_t, uint32,
DEF_RVV_TUPLE_TYPE (vint32m2x4_t, 17, __rvv_int32m2x4_t, vint32m2_t, int32, 4, _i32m2x4)
DEF_RVV_TUPLE_TYPE (vuint32m2x4_t, 18, __rvv_uint32m2x4_t, vuint32m2_t, uint32, 4, _u32m2x4)
/* LMUL = 4:
- Machine mode = VNx16SImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx8SImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx4SImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, VNx16SI, VNx8SI, VNx4SI, _i32m4,
- _i32, _e32m4)
-DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, uint32, VNx16SI, VNx8SI, VNx4SI, _u32m4,
- _u32, _e32m4)
+ Machine mode = RVVM4SImode. */
+DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, RVVM4SI, _i32m4, _i32,
+ _e32m4)
+DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, uint32, RVVM4SI, _u32m4, _u32,
+ _e32m4)
/* Define tuple types for SEW = 32, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vint32m4x2_t, 17, __rvv_int32m4x2_t, vint32m4_t, int32, 2, _i32m4x2)
DEF_RVV_TUPLE_TYPE (vuint32m4x2_t, 18, __rvv_uint32m4x2_t, vuint32m4_t, uint32, 2, _u32m4x2)
/* LMUL = 8:
- Machine mode = VNx32SImode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx16SImode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx8SImode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, VNx32SI, VNx16SI, VNx8SI, _i32m8,
- _i32, _e32m8)
-DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, uint32, VNx32SI, VNx16SI, VNx8SI,
- _u32m8, _u32, _e32m8)
+ Machine mode = RVVM8SImode. */
+DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, RVVM8SI, _i32m8, _i32,
+ _e32m8)
+DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, uint32, RVVM8SI, _u32m8, _u32,
+ _e32m8)
/* SEW = 64:
Disable when !TARGET_VECTOR_ELEN_64. */
-DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, int64, VNx2DI, VNx1DI, VOID, _i64m1,
- _i64, _e64m1)
-DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, uint64, VNx2DI, VNx1DI, VOID, _u64m1,
- _u64, _e64m1)
+DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, int64, RVVM1DI, _i64m1, _i64,
+ _e64m1)
+DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, uint64, RVVM1DI, _u64m1, _u64,
+ _e64m1)
/* Define tuple types for SEW = 64, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vint64m1x2_t, 17, __rvv_int64m1x2_t, vint64m1_t, int64, 2, _i64m1x2)
DEF_RVV_TUPLE_TYPE (vuint64m1x2_t, 18, __rvv_uint64m1x2_t, vuint64m1_t, uint64, 2, _u64m1x2)
@@ -467,10 +413,10 @@ DEF_RVV_TUPLE_TYPE (vint64m1x7_t, 17, __rvv_int64m1x7_t, vint64m1_t, int64, 7, _
DEF_RVV_TUPLE_TYPE (vuint64m1x7_t, 18, __rvv_uint64m1x7_t, vuint64m1_t, uint64, 7, _u64m1x7)
DEF_RVV_TUPLE_TYPE (vint64m1x8_t, 17, __rvv_int64m1x8_t, vint64m1_t, int64, 8, _i64m1x8)
DEF_RVV_TUPLE_TYPE (vuint64m1x8_t, 18, __rvv_uint64m1x8_t, vuint64m1_t, uint64, 8, _u64m1x8)
-DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, int64, VNx4DI, VNx2DI, VOID, _i64m2,
- _i64, _e64m2)
-DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, uint64, VNx4DI, VNx2DI, VOID, _u64m2,
- _u64, _e64m2)
+DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, int64, RVVM2DI, _i64m2, _i64,
+ _e64m2)
+DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, uint64, RVVM2DI, _u64m2, _u64,
+ _e64m2)
/* Define tuple types for SEW = 64, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vint64m2x2_t, 17, __rvv_int64m2x2_t, vint64m2_t, int64, 2, _i64m2x2)
DEF_RVV_TUPLE_TYPE (vuint64m2x2_t, 18, __rvv_uint64m2x2_t, vuint64m2_t, uint64, 2, _u64m2x2)
@@ -478,22 +424,22 @@ DEF_RVV_TUPLE_TYPE (vint64m2x3_t, 17, __rvv_int64m2x3_t, vint64m2_t, int64, 3, _
DEF_RVV_TUPLE_TYPE (vuint64m2x3_t, 18, __rvv_uint64m2x3_t, vuint64m2_t, uint64, 3, _u64m2x3)
DEF_RVV_TUPLE_TYPE (vint64m2x4_t, 17, __rvv_int64m2x4_t, vint64m2_t, int64, 4, _i64m2x4)
DEF_RVV_TUPLE_TYPE (vuint64m2x4_t, 18, __rvv_uint64m2x4_t, vuint64m2_t, uint64, 4, _u64m2x4)
-DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, int64, VNx8DI, VNx4DI, VOID, _i64m4,
- _i64, _e64m4)
-DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, uint64, VNx8DI, VNx4DI, VOID, _u64m4,
- _u64, _e64m4)
+DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, int64, RVVM4DI, _i64m4, _i64,
+ _e64m4)
+DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, uint64, RVVM4DI, _u64m4, _u64,
+ _e64m4)
/* Define tuple types for SEW = 64, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vint64m4x2_t, 17, __rvv_int64m4x2_t, vint64m4_t, int64, 2, _i64m4x2)
DEF_RVV_TUPLE_TYPE (vuint64m4x2_t, 18, __rvv_uint64m4x2_t, vuint64m4_t, uint64, 2, _u64m4x2)
-DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, int64, VNx16DI, VNx8DI, VOID, _i64m8,
- _i64, _e64m8)
-DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, uint64, VNx16DI, VNx8DI, VOID, _u64m8,
- _u64, _e64m8)
+DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, int64, RVVM8DI, _i64m8, _i64,
+ _e64m8)
+DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, uint64, RVVM8DI, _u64m8, _u64,
+ _e64m8)
/* Enabled if TARGET_VECTOR_ELEN_FP_16 && (TARGET_ZVFH or TARGET_ZVFHMIN). */
/* LMUL = 1/4. */
-DEF_RVV_TYPE (vfloat16mf4_t, 18, __rvv_float16mf4_t, float16, VNx2HF, VNx1HF, VOID,
- _f16mf4, _f16, _e16mf4)
+DEF_RVV_TYPE (vfloat16mf4_t, 18, __rvv_float16mf4_t, float16, RVVMF4HF, _f16mf4,
+ _f16, _e16mf4)
/* Define tuple types for SEW = 16, LMUL = MF4. */
DEF_RVV_TUPLE_TYPE (vfloat16mf4x2_t, 20, __rvv_float16mf4x2_t, vfloat16mf4_t, float, 2, _f16mf4x2)
DEF_RVV_TUPLE_TYPE (vfloat16mf4x3_t, 20, __rvv_float16mf4x3_t, vfloat16mf4_t, float, 3, _f16mf4x3)
@@ -503,8 +449,8 @@ DEF_RVV_TUPLE_TYPE (vfloat16mf4x6_t, 20, __rvv_float16mf4x6_t, vfloat16mf4_t, fl
DEF_RVV_TUPLE_TYPE (vfloat16mf4x7_t, 20, __rvv_float16mf4x7_t, vfloat16mf4_t, float, 7, _f16mf4x7)
DEF_RVV_TUPLE_TYPE (vfloat16mf4x8_t, 20, __rvv_float16mf4x8_t, vfloat16mf4_t, float, 8, _f16mf4x8)
/* LMUL = 1/2. */
-DEF_RVV_TYPE (vfloat16mf2_t, 18, __rvv_float16mf2_t, float16, VNx4HF, VNx2HF, VNx1HF,
- _f16mf2, _f16, _e16mf2)
+DEF_RVV_TYPE (vfloat16mf2_t, 18, __rvv_float16mf2_t, float16, RVVMF2HF, _f16mf2,
+ _f16, _e16mf2)
/* Define tuple types for SEW = 16, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vfloat16mf2x2_t, 20, __rvv_float16mf2x2_t, vfloat16mf2_t, float, 2, _f16mf2x2)
DEF_RVV_TUPLE_TYPE (vfloat16mf2x3_t, 20, __rvv_float16mf2x3_t, vfloat16mf2_t, float, 3, _f16mf2x3)
@@ -514,8 +460,8 @@ DEF_RVV_TUPLE_TYPE (vfloat16mf2x6_t, 20, __rvv_float16mf2x6_t, vfloat16mf2_t, fl
DEF_RVV_TUPLE_TYPE (vfloat16mf2x7_t, 20, __rvv_float16mf2x7_t, vfloat16mf2_t, float, 7, _f16mf2x7)
DEF_RVV_TUPLE_TYPE (vfloat16mf2x8_t, 20, __rvv_float16mf2x8_t, vfloat16mf2_t, float, 8, _f16mf2x8)
/* LMUL = 1. */
-DEF_RVV_TYPE (vfloat16m1_t, 17, __rvv_float16m1_t, float16, VNx8HF, VNx4HF, VNx2HF,
- _f16m1, _f16, _e16m1)
+DEF_RVV_TYPE (vfloat16m1_t, 17, __rvv_float16m1_t, float16, RVVM1HF, _f16m1,
+ _f16, _e16m1)
/* Define tuple types for SEW = 16, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vfloat16m1x2_t, 19, __rvv_float16m1x2_t, vfloat16m1_t, float, 2, _f16m1x2)
DEF_RVV_TUPLE_TYPE (vfloat16m1x3_t, 19, __rvv_float16m1x3_t, vfloat16m1_t, float, 3, _f16m1x3)
@@ -525,28 +471,27 @@ DEF_RVV_TUPLE_TYPE (vfloat16m1x6_t, 19, __rvv_float16m1x6_t, vfloat16m1_t, float
DEF_RVV_TUPLE_TYPE (vfloat16m1x7_t, 19, __rvv_float16m1x7_t, vfloat16m1_t, float, 7, _f16m1x7)
DEF_RVV_TUPLE_TYPE (vfloat16m1x8_t, 19, __rvv_float16m1x8_t, vfloat16m1_t, float, 8, _f16m1x8)
/* LMUL = 2. */
-DEF_RVV_TYPE (vfloat16m2_t, 17, __rvv_float16m2_t, float16, VNx16HF, VNx8HF, VNx4HF,
- _f16m2, _f16, _e16m2)
+DEF_RVV_TYPE (vfloat16m2_t, 17, __rvv_float16m2_t, float16, RVVM2HF, _f16m2,
+ _f16, _e16m2)
/* Define tuple types for SEW = 16, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vfloat16m2x2_t, 19, __rvv_float16m2x2_t, vfloat16m2_t, float, 2, _f16m2x2)
DEF_RVV_TUPLE_TYPE (vfloat16m2x3_t, 19, __rvv_float16m2x3_t, vfloat16m2_t, float, 3, _f16m2x3)
DEF_RVV_TUPLE_TYPE (vfloat16m2x4_t, 19, __rvv_float16m2x4_t, vfloat16m2_t, float, 4, _f16m2x4)
/* LMUL = 4. */
-DEF_RVV_TYPE (vfloat16m4_t, 17, __rvv_float16m4_t, float16, VNx32HF, VNx16HF, VNx8HF,
- _f16m4, _f16, _e16m4)
+DEF_RVV_TYPE (vfloat16m4_t, 17, __rvv_float16m4_t, float16, RVVM4HF, _f16m4,
+ _f16, _e16m4)
/* Define tuple types for SEW = 16, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vfloat16m4x2_t, 19, __rvv_float16m4x2_t, vfloat16m4_t, float, 2, _f16m4x2)
/* LMUL = 8. */
-DEF_RVV_TYPE (vfloat16m8_t, 16, __rvv_float16m8_t, float16, VNx64HF, VNx32HF, VNx16HF,
- _f16m8, _f16, _e16m8)
+DEF_RVV_TYPE (vfloat16m8_t, 16, __rvv_float16m8_t, float16, RVVM8HF, _f16m8,
+ _f16, _e16m8)
/* Disable all when !TARGET_VECTOR_ELEN_FP_32. */
/* LMUL = 1/2:
Only enable when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1SFmode when TARGET_MIN_VLEN < 128.
- Machine mode = VNx2SFmode when TARGET_MIN_VLEN >= 128. */
-DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, VNx2SF, VNx1SF, VOID,
- _f32mf2, _f32, _e32mf2)
+ Machine mode = RVVMF2SFmode. */
+DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, RVVMF2SF, _f32mf2,
+ _f32, _e32mf2)
/* Define tuple types for SEW = 32, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vfloat32mf2x2_t, 20, __rvv_float32mf2x2_t, vfloat32mf2_t, float, 2, _f32mf2x2)
DEF_RVV_TUPLE_TYPE (vfloat32mf2x3_t, 20, __rvv_float32mf2x3_t, vfloat32mf2_t, float, 3, _f32mf2x3)
@@ -556,11 +501,9 @@ DEF_RVV_TUPLE_TYPE (vfloat32mf2x6_t, 20, __rvv_float32mf2x6_t, vfloat32mf2_t, fl
DEF_RVV_TUPLE_TYPE (vfloat32mf2x7_t, 20, __rvv_float32mf2x7_t, vfloat32mf2_t, float, 7, _f32mf2x7)
DEF_RVV_TUPLE_TYPE (vfloat32mf2x8_t, 20, __rvv_float32mf2x8_t, vfloat32mf2_t, float, 8, _f32mf2x8)
/* LMUL = 1:
- Machine mode = VNx4SFmode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx2SFmode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx1SFmode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, VNx4SF, VNx2SF, VNx1SF,
- _f32m1, _f32, _e32m1)
+ Machine mode = RVVM1SFmode. */
+DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, RVVM1SF, _f32m1, _f32,
+ _e32m1)
/* Define tuple types for SEW = 32, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vfloat32m1x2_t, 19, __rvv_float32m1x2_t, vfloat32m1_t, float, 2, _f32m1x2)
DEF_RVV_TUPLE_TYPE (vfloat32m1x3_t, 19, __rvv_float32m1x3_t, vfloat32m1_t, float, 3, _f32m1x3)
@@ -570,33 +513,27 @@ DEF_RVV_TUPLE_TYPE (vfloat32m1x6_t, 19, __rvv_float32m1x6_t, vfloat32m1_t, float
DEF_RVV_TUPLE_TYPE (vfloat32m1x7_t, 19, __rvv_float32m1x7_t, vfloat32m1_t, float, 7, _f32m1x7)
DEF_RVV_TUPLE_TYPE (vfloat32m1x8_t, 19, __rvv_float32m1x8_t, vfloat32m1_t, float, 8, _f32m1x8)
/* LMUL = 2:
- Machine mode = VNx8SFmode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx4SFmode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx2SFmode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, VNx8SF, VNx4SF, VNx2SF,
- _f32m2, _f32, _e32m2)
+ Machine mode = RVVM2SFmode. */
+DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, RVVM2SF, _f32m2, _f32,
+ _e32m2)
/* Define tuple types for SEW = 32, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vfloat32m2x2_t, 19, __rvv_float32m2x2_t, vfloat32m2_t, float, 2, _f32m2x2)
DEF_RVV_TUPLE_TYPE (vfloat32m2x3_t, 19, __rvv_float32m2x3_t, vfloat32m2_t, float, 3, _f32m2x3)
DEF_RVV_TUPLE_TYPE (vfloat32m2x4_t, 19, __rvv_float32m2x4_t, vfloat32m2_t, float, 4, _f32m2x4)
/* LMUL = 4:
- Machine mode = VNx16SFmode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx8SFmode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx4SFmode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, VNx16SF, VNx8SF, VNx4SF,
- _f32m4, _f32, _e32m4)
+ Machine mode = RVVM4SFmode. */
+DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, RVVM4SF, _f32m4, _f32,
+ _e32m4)
/* Define tuple types for SEW = 32, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vfloat32m4x2_t, 19, __rvv_float32m4x2_t, vfloat32m4_t, float, 2, _f32m4x2)
/* LMUL = 8:
- Machine mode = VNx32SFmode when TARGET_MIN_VLEN >= 128.
- Machine mode = VNx16SFmode when TARGET_MIN_VLEN > 32.
- Machine mode = VNx8SFmode when TARGET_MIN_VLEN = 32. */
-DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, VNx32SF, VNx16SF, VNx8SF,
- _f32m8, _f32, _e32m8)
+ Machine mode = RVVM8SFmode. */
+DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, RVVM8SF, _f32m8, _f32,
+ _e32m8)
/* SEW = 64:
Disable when !TARGET_VECTOR_ELEN_FP_64. */
-DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, VNx2DF, VNx1DF, VOID, _f64m1,
+DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, RVVM1DF, _f64m1,
_f64, _e64m1)
/* Define tuple types for SEW = 64, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vfloat64m1x2_t, 19, __rvv_float64m1x2_t, vfloat64m1_t, double, 2, _f64m1x2)
@@ -606,17 +543,17 @@ DEF_RVV_TUPLE_TYPE (vfloat64m1x5_t, 19, __rvv_float64m1x5_t, vfloat64m1_t, doubl
DEF_RVV_TUPLE_TYPE (vfloat64m1x6_t, 19, __rvv_float64m1x6_t, vfloat64m1_t, double, 6, _f64m1x6)
DEF_RVV_TUPLE_TYPE (vfloat64m1x7_t, 19, __rvv_float64m1x7_t, vfloat64m1_t, double, 7, _f64m1x7)
DEF_RVV_TUPLE_TYPE (vfloat64m1x8_t, 19, __rvv_float64m1x8_t, vfloat64m1_t, double, 8, _f64m1x8)
-DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, VNx4DF, VNx2DF, VOID, _f64m2,
+DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, RVVM2DF, _f64m2,
_f64, _e64m2)
/* Define tuple types for SEW = 64, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vfloat64m2x2_t, 19, __rvv_float64m2x2_t, vfloat64m2_t, double, 2, _f64m2x2)
DEF_RVV_TUPLE_TYPE (vfloat64m2x3_t, 19, __rvv_float64m2x3_t, vfloat64m2_t, double, 3, _f64m2x3)
DEF_RVV_TUPLE_TYPE (vfloat64m2x4_t, 19, __rvv_float64m2x4_t, vfloat64m2_t, double, 4, _f64m2x4)
-DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx8DF, VNx4DF, VOID, _f64m4,
+DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, RVVM4DF, _f64m4,
_f64, _e64m4)
/* Define tuple types for SEW = 64, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vfloat64m4x2_t, 19, __rvv_float64m4x2_t, vfloat64m4_t, double, 2, _f64m4x2)
-DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx16DF, VNx8DF, VOID, _f64m8,
+DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, RVVM8DF, _f64m8,
_f64, _e64m8)
DEF_RVV_OP_TYPE (vv)
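A note on the new DEF_RVV_TYPE shape: each row now carries a single machine mode (e.g. RVVM8DF) instead of three per-MIN_VLEN modes. A minimal sketch of the X-macro pattern a consumer could use (a hypothetical consumer; field names are my illustration, not the real riscv-vector-builtins.h layout):

  /* Hypothetical consumer: re-define DEF_RVV_TYPE, then expand the rows
     into a table.  In-tree consumers #include the .def file instead.  */
  struct rvv_type_info
  {
    const char *abi_name; /* e.g. "__rvv_float64m8_t" */
    const char *mode;     /* e.g. "RVVM8DF" */
    const char *suffix;   /* e.g. "_f64m8" */
  };

  #define DEF_RVV_TYPE(NAME, NCHARS, ABI, SCALAR, MODE, SUF, SSUF, VSUF) \
    { #ABI, #MODE, #SUF },
  static const rvv_type_info types[] = {
    DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, RVVM8DF,
                  _f64m8, _f64, _e64m8)
  };
  #undef DEF_RVV_TYPE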
@@ -31,345 +31,260 @@ along with GCC; see the file COPYING3. If not see
Note: N/A means the corresponding vector type is disabled.
-|Types |LMUL=1|LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
-|int64_t |VNx1DI|VNx2DI |VNx4DI |VNx8DI |N/A |N/A |N/A |
-|uint64_t|VNx1DI|VNx2DI |VNx4DI |VNx8DI |N/A |N/A |N/A |
-|int32_t |VNx2SI|VNx4SI |VNx8SI |VNx16SI|VNx1SI |N/A |N/A |
-|uint32_t|VNx2SI|VNx4SI |VNx8SI |VNx16SI|VNx1SI |N/A |N/A |
-|int16_t |VNx4HI|VNx8HI |VNx16HI|VNx32HI|VNx2HI |VNx1HI |N/A |
-|uint16_t|VNx4HI|VNx8HI |VNx16HI|VNx32HI|VNx2HI |VNx1HI |N/A |
-|int8_t |VNx8QI|VNx16QI|VNx32QI|VNx64QI|VNx4QI |VNx2QI |VNx1QI |
-|uint8_t |VNx8QI|VNx16QI|VNx32QI|VNx64QI|VNx4QI |VNx2QI |VNx1QI |
-|float64 |VNx1DF|VNx2DF |VNx4DF |VNx8DF |N/A |N/A |N/A |
-|float32 |VNx2SF|VNx4SF |VNx8SF |VNx16SF|VNx1SF |N/A |N/A |
-|float16 |VNx4HF|VNx8HF |VNx16HF|VNx32HF|VNx2HF |VNx1HF |N/A |
-
-Mask Types Encode the ratio of SEW/LMUL into the
-mask types. There are the following mask types.
-
-n = SEW/LMUL
-
-|Types|n=1 |n=2 |n=4 |n=8 |n=16 |n=32 |n=64 |
-|bool |VNx64BI|VNx32BI|VNx16BI|VNx8BI|VNx4BI|VNx2BI|VNx1BI|
-
-There are the following data types for MIN_VLEN = 32.
-
-|Types |LMUL=1|LMUL=2|LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
-|int64_t |N/A |N/A |N/A |N/A |N/A |N/A |N/A |
-|uint64_t|N/A |N/A |N/A |N/A |N/A |N/A |N/A |
-|int32_t |VNx1SI|VNx2SI|VNx4SI |VNx8SI |N/A |N/A |N/A |
-|uint32_t|VNx1SI|VNx2SI|VNx4SI |VNx8SI |N/A |N/A |N/A |
-|int16_t |VNx2HI|VNx4HI|VNx8HI |VNx16HI|VNx1HI |N/A |N/A |
-|uint16_t|VNx2HI|VNx4HI|VNx8HI |VNx16HI|VNx1HI |N/A |N/A |
-|int8_t |VNx4QI|VNx8QI|VNx16QI|VNx32QI|VNx2QI |VNx1QI |N/A |
-|uint8_t |VNx4QI|VNx8QI|VNx16QI|VNx32QI|VNx2QI |VNx1QI |N/A |
-|float64 |N/A |N/A |N/A |N/A |N/A |N/A |N/A |
-|float32 |VNx1SF|VNx2SF|VNx4SF |VNx8SF |N/A |N/A |N/A |
-|float16 |VNx2HF|VNx4HF|VNx8HF |VNx16HF|VNx1HF |N/A |N/A |
-
-Mask Types Encode the ratio of SEW/LMUL into the
-mask types. There are the following mask types.
-
-n = SEW/LMUL
-
-|Types|n=1 |n=2 |n=4 |n=8 |n=16 |n=32 |n=64|
-|bool |VNx32BI|VNx16BI|VNx8BI|VNx4BI|VNx2BI|VNx1BI|N/A |
-
-TODO: FP16 vector needs support of 'zvfh', we don't support it yet. */
+Encode SEW and LMUL into data types.
+ We enforce the constraint LMUL >= SEW/ELEN in the implementation.
+ There are the following data types for ELEN = 64.
+
+ |Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
+ |DI |RVVM1DI|RVVM2DI|RVVM4DI|RVVM8DI|N/A |N/A |N/A |
+ |SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|RVVMF2SI|N/A |N/A |
+ |HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|RVVMF4HI|N/A |
+ |QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|RVVMF8QI|
+ |DF |RVVM1DF|RVVM2DF|RVVM4DF|RVVM8DF|N/A |N/A |N/A |
+ |SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|RVVMF2SF|N/A |N/A |
+ |HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|RVVMF4HF|N/A |
+
+There are the following data types for ELEN = 32.
+
+ |Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
+ |SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|N/A |N/A |N/A |
+ |HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|N/A |N/A |
+ |QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|N/A |
+ |SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|N/A |N/A |N/A |
+ |HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|N/A |N/A |
+
+Encode the ratio of SEW/LMUL into the mask types.
+ There are the following mask types.
+
+ n = SEW/LMUL
+
+ |Modes| n = 1 | n = 2 | n = 4 | n = 8 | n = 16 | n = 32 | n = 64 |
+ |BI |RVVM1BI|RVVMF2BI|RVVMF4BI|RVVMF8BI|RVVMF16BI|RVVMF32BI|RVVMF64BI| */
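The data-mode table and the mask-mode table above are tied together by the ratio n = SEW/LMUL. A small self-checking sketch of that arithmetic (illustrative only; a fractional LMUL of 1/f is passed as f):

  #include <cassert>

  /* n = SEW / LMUL; a fractional LMUL multiplies instead of divides.  */
  static int ratio (int sew, bool fractional_p, int f_or_lmul)
  {
    return fractional_p ? sew * f_or_lmul : sew / f_or_lmul;
  }

  int main ()
  {
    assert (ratio (16, true, 4) == 64); /* RVVMF4HI -> mask RVVMF64BI */
    assert (ratio (64, false, 8) == 8); /* RVVM8DI  -> mask RVVMF8BI  */
    assert (ratio (8, false, 1) == 8);  /* RVVM1QI  -> mask RVVMF8BI  */
    return 0;
  }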
/* Return 'REQUIREMENT' for machine_mode 'MODE'.
- For example: 'MODE' = VNx64BImode needs TARGET_MIN_VLEN > 32. */
+ For example: 'MODE' = RVVMF64BImode needs TARGET_MIN_VLEN > 32. */
#ifndef ENTRY
-#define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32, RATIO_FOR_MIN_VLEN32, \
- VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64, \
- VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128)
+#define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO)
#endif
+
+/* Disable modes if TARGET_MIN_VLEN == 32. */
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF8BI, true, LMUL_1, 8)
+ENTRY (RVVMF4BI, true, LMUL_2, 4)
+ENTRY (RVVMF2BI, true, LMUL_4, 2)
+ENTRY (RVVM1BI, true, LMUL_8, 1)
+
+/* Disable modes if TARGET_MIN_VLEN == 32. */
+ENTRY (RVVM8QI, true, LMUL_8, 1)
+ENTRY (RVVM4QI, true, LMUL_4, 2)
+ENTRY (RVVM2QI, true, LMUL_2, 4)
+ENTRY (RVVM1QI, true, LMUL_1, 8)
+ENTRY (RVVMF2QI, true, LMUL_F2, 16)
+ENTRY (RVVMF4QI, true, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+
+/* Disable modes if TARGET_MIN_VLEN == 32. */
+ENTRY (RVVM8HI, true, LMUL_8, 2)
+ENTRY (RVVM4HI, true, LMUL_4, 4)
+ENTRY (RVVM2HI, true, LMUL_2, 8)
+ENTRY (RVVM1HI, true, LMUL_1, 16)
+ENTRY (RVVMF2HI, true, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+
+/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16. */
+ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
+ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
+ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
+ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+
+/* Disable modes if TARGET_MIN_VLEN == 32. */
+ENTRY (RVVM8SI, true, LMUL_8, 4)
+ENTRY (RVVM4SI, true, LMUL_4, 8)
+ENTRY (RVVM2SI, true, LMUL_2, 16)
+ENTRY (RVVM1SI, true, LMUL_1, 32)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+
+/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32. */
+ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
+ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
+ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
+ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+
+/* Disable modes if !TARGET_VECTOR_ELEN_64. */
+ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
+ENTRY (RVVM4DI, TARGET_VECTOR_ELEN_64, LMUL_4, 16)
+ENTRY (RVVM2DI, TARGET_VECTOR_ELEN_64, LMUL_2, 32)
+ENTRY (RVVM1DI, TARGET_VECTOR_ELEN_64, LMUL_1, 64)
+
+/* Disable modes if !TARGET_VECTOR_ELEN_FP_64. */
+ENTRY (RVVM8DF, TARGET_VECTOR_ELEN_FP_64, LMUL_8, 8)
+ENTRY (RVVM4DF, TARGET_VECTOR_ELEN_FP_64, LMUL_4, 16)
+ENTRY (RVVM2DF, TARGET_VECTOR_ELEN_FP_64, LMUL_2, 32)
+ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
+
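Each ENTRY row above boils the old six-column table down to one (REQUIREMENT, VLMUL, RATIO) triple. Consumers re-define ENTRY before including the file; a self-contained sketch of that X-macro pattern (hypothetical consumer, not the actual riscv-v.cc code):

  enum vlmul_type { LMUL_F8, LMUL_F4, LMUL_F2, LMUL_1,
                    LMUL_2, LMUL_4, LMUL_8 };

  struct mode_info
  {
    const char *name;
    vlmul_type vlmul;
    int ratio; /* SEW / LMUL */
  };

  /* REQUIREMENT would normally test target flags; this sketch records
     only the static fields.  */
  #define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO) { #MODE, VLMUL, RATIO },
  static const mode_info rvv_mode_table[] = {
    ENTRY (RVVM1QI, true, LMUL_1, 8)    /* two sample rows; the real  */
    ENTRY (RVVMF2QI, true, LMUL_F2, 16) /* list is #include'd instead */
  };
  #undef ENTRY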
+/* Tuple modes for segment loads/stores according to NF.
+
+ Tuple modes format: RVV<LMUL>x<NF><BASEMODE>
+
+ When LMUL is MF8/MF4/MF2/M1, NF can be 2 ~ 8.
+ When LMUL is M2, NF can be 2 ~ 4.
+ When LMUL is M4, NF can be 2. */
+
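The NF limits above all follow from NF * LMUL <= 8 (a segment may not occupy more than eight vector registers, matching the old comment's "NF * BASE_MODE LMUL <= 8"). Expressed as compile-time checks, illustrative only:

  static_assert (4 * 2 <= 8, "RVVM2x4* fits in 8 registers"); /* LMUL=2, NF=4 */
  static_assert (2 * 4 <= 8, "RVVM4x2* fits in 8 registers"); /* LMUL=4, NF=2 */
  /* An RVVM4x4 mode would need 16 registers, hence no such entry.  */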
#ifndef TUPLE_ENTRY
-#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
- RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64, \
- RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128, \
- RATIO_FOR_MIN_VLEN128)
+#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL, RATIO)
#endif
-/* Mask modes. Disable VNx64BImode when TARGET_MIN_VLEN == 32. */
-ENTRY (VNx128BI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 1)
-ENTRY (VNx64BI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 1, LMUL_4, 2)
-ENTRY (VNx32BI, true, LMUL_8, 1, LMUL_4, 2, LMUL_2, 4)
-ENTRY (VNx16BI, true, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8)
-ENTRY (VNx8BI, true, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
-ENTRY (VNx4BI, true, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-ENTRY (VNx2BI, true, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-ENTRY (VNx1BI, TARGET_MIN_VLEN < 128, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-
-/* SEW = 8. Disable VNx64QImode when TARGET_MIN_VLEN == 32. */
-ENTRY (VNx128QI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 1)
-ENTRY (VNx64QI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 1, LMUL_4, 2)
-ENTRY (VNx32QI, true, LMUL_8, 1, LMUL_4, 2, LMUL_2, 4)
-ENTRY (VNx16QI, true, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8)
-ENTRY (VNx8QI, true, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
-ENTRY (VNx4QI, true, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-ENTRY (VNx2QI, true, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-ENTRY (VNx1QI, TARGET_MIN_VLEN < 128, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-
-/* SEW = 16. Disable VNx32HImode when TARGET_MIN_VLEN == 32. */
-ENTRY (VNx64HI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 2)
-ENTRY (VNx32HI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 2, LMUL_4, 4)
-ENTRY (VNx16HI, true, LMUL_8, 2, LMUL_4, 4, LMUL_2, 8)
-ENTRY (VNx8HI, true, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
-ENTRY (VNx4HI, true, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
-ENTRY (VNx2HI, true, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-ENTRY (VNx1HI, TARGET_MIN_VLEN < 128, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-
-/* SEW = 16 for float point. Enabled when 'zvfh' or 'zvfhmin' is given. */
-ENTRY (VNx64HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, \
- LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 2)
-ENTRY (VNx32HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, \
- LMUL_RESERVED, 0, LMUL_8, 2, LMUL_4, 4)
-ENTRY (VNx16HF, TARGET_VECTOR_ELEN_FP_16, \
- LMUL_8, 2, LMUL_4, 4, LMUL_2, 8)
-ENTRY (VNx8HF, TARGET_VECTOR_ELEN_FP_16, \
- LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
-ENTRY (VNx4HF, TARGET_VECTOR_ELEN_FP_16, \
- LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
-ENTRY (VNx2HF, TARGET_VECTOR_ELEN_FP_16, \
- LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-ENTRY (VNx1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, \
- LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-
-/* SEW = 32. Disable VNx16SImode when TARGET_MIN_VLEN == 32.
- For single-precision floating-point, we need TARGET_VECTOR_ELEN_FP_32 to be
- true. */
-ENTRY (VNx32SI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 4)
-ENTRY (VNx16SI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 4, LMUL_4, 8)
-ENTRY (VNx8SI, true, LMUL_8, 4, LMUL_4, 8, LMUL_2, 16)
-ENTRY (VNx4SI, true, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
-ENTRY (VNx2SI, true, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
-ENTRY (VNx1SI, TARGET_MIN_VLEN < 128, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-
-ENTRY (VNx32SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 4)
-ENTRY (VNx16SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0,
- LMUL_8, 4, LMUL_4, 8)
-ENTRY (VNx8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4, LMUL_4, 8, LMUL_2, 16)
-ENTRY (VNx4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
-ENTRY (VNx2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
-ENTRY (VNx1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-
-/* SEW = 64. Enable when TARGET_VECTOR_ELEN_64 is true.
- For double-precision floating-point, we need TARGET_VECTOR_ELEN_FP_64 to be
- true. */
-ENTRY (VNx16DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
-ENTRY (VNx8DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_8, 8, LMUL_4, 16)
-ENTRY (VNx4DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
-ENTRY (VNx2DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-ENTRY (VNx1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-
-ENTRY (VNx16DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
-ENTRY (VNx8DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0,
- LMUL_8, 8, LMUL_4, 16)
-ENTRY (VNx4DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
-ENTRY (VNx2DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-ENTRY (VNx1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-
-/* Enable or disable the tuple type. BASE_MODE is the base vector mode of the
- tuple mode. For example, the BASE_MODE of VNx2x1SImode is VNx1SImode. ALL
- tuple modes should always satisfy NF * BASE_MODE LMUL <= 8. */
-
-/* Tuple modes for EEW = 8. */
-TUPLE_ENTRY (VNx2x64QI, TARGET_MIN_VLEN >= 128, VNx64QI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 2)
-TUPLE_ENTRY (VNx2x32QI, TARGET_MIN_VLEN >= 64, VNx32QI, 2, LMUL_RESERVED, 0, LMUL_4, 2, LMUL_2, 4)
-TUPLE_ENTRY (VNx3x32QI, TARGET_MIN_VLEN >= 128, VNx32QI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 4)
-TUPLE_ENTRY (VNx4x32QI, TARGET_MIN_VLEN >= 128, VNx32QI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 4)
-TUPLE_ENTRY (VNx2x16QI, true, VNx16QI, 2, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8)
-TUPLE_ENTRY (VNx3x16QI, TARGET_MIN_VLEN >= 64, VNx16QI, 3, LMUL_RESERVED, 0, LMUL_2, 4, LMUL_1, 8)
-TUPLE_ENTRY (VNx4x16QI, TARGET_MIN_VLEN >= 64, VNx16QI, 4, LMUL_RESERVED, 0, LMUL_2, 4, LMUL_1, 8)
-TUPLE_ENTRY (VNx5x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
-TUPLE_ENTRY (VNx6x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
-TUPLE_ENTRY (VNx7x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
-TUPLE_ENTRY (VNx8x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
-TUPLE_ENTRY (VNx2x8QI, true, VNx8QI, 2, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
-TUPLE_ENTRY (VNx3x8QI, true, VNx8QI, 3, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
-TUPLE_ENTRY (VNx4x8QI, true, VNx8QI, 4, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
-TUPLE_ENTRY (VNx5x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 5, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
-TUPLE_ENTRY (VNx6x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 6, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
-TUPLE_ENTRY (VNx7x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 7, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
-TUPLE_ENTRY (VNx8x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 8, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
-TUPLE_ENTRY (VNx2x4QI, true, VNx4QI, 2, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-TUPLE_ENTRY (VNx3x4QI, true, VNx4QI, 3, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-TUPLE_ENTRY (VNx4x4QI, true, VNx4QI, 4, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-TUPLE_ENTRY (VNx5x4QI, true, VNx4QI, 5, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-TUPLE_ENTRY (VNx6x4QI, true, VNx4QI, 6, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-TUPLE_ENTRY (VNx7x4QI, true, VNx4QI, 7, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-TUPLE_ENTRY (VNx8x4QI, true, VNx4QI, 8, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
-TUPLE_ENTRY (VNx2x2QI, true, VNx2QI, 2, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-TUPLE_ENTRY (VNx3x2QI, true, VNx2QI, 3, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-TUPLE_ENTRY (VNx4x2QI, true, VNx2QI, 4, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-TUPLE_ENTRY (VNx5x2QI, true, VNx2QI, 5, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-TUPLE_ENTRY (VNx6x2QI, true, VNx2QI, 6, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-TUPLE_ENTRY (VNx7x2QI, true, VNx2QI, 7, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-TUPLE_ENTRY (VNx8x2QI, true, VNx2QI, 8, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
-TUPLE_ENTRY (VNx2x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 2, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx3x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 3, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx4x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 4, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx5x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 5, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx6x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 6, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx7x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 7, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx8x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 8, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
-
-/* Tuple modes for EEW = 16. */
-TUPLE_ENTRY (VNx2x32HI, TARGET_MIN_VLEN >= 128, VNx32HI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 4)
-TUPLE_ENTRY (VNx2x16HI, TARGET_MIN_VLEN >= 64, VNx16HI, 2, LMUL_RESERVED, 0, LMUL_4, 4, LMUL_2, 8)
-TUPLE_ENTRY (VNx3x16HI, TARGET_MIN_VLEN >= 128, VNx16HI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
-TUPLE_ENTRY (VNx4x16HI, TARGET_MIN_VLEN >= 128, VNx16HI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
-TUPLE_ENTRY (VNx2x8HI, true, VNx8HI, 2, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
-TUPLE_ENTRY (VNx3x8HI, TARGET_MIN_VLEN >= 64, VNx8HI, 3, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
-TUPLE_ENTRY (VNx4x8HI, TARGET_MIN_VLEN >= 64, VNx8HI, 4, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
-TUPLE_ENTRY (VNx5x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
-TUPLE_ENTRY (VNx6x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
-TUPLE_ENTRY (VNx7x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
-TUPLE_ENTRY (VNx8x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
-TUPLE_ENTRY (VNx2x4HI, true, VNx4HI, 2, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx3x4HI, true, VNx4HI, 3, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx4x4HI, true, VNx4HI, 4, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx5x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 5, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx6x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 6, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx7x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 7, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx8x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 8, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx2x2HI, true, VNx2HI, 2, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx3x2HI, true, VNx2HI, 3, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx4x2HI, true, VNx2HI, 4, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx5x2HI, true, VNx2HI, 5, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx6x2HI, true, VNx2HI, 6, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx7x2HI, true, VNx2HI, 7, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx8x2HI, true, VNx2HI, 8, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx2x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 2, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx3x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 3, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx4x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 4, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx5x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 5, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx6x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 6, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx7x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 7, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx8x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 8, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx2x32HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx32HF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 4)
-TUPLE_ENTRY (VNx2x16HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx16HF, 2, LMUL_RESERVED, 0, LMUL_4, 4, LMUL_2, 8)
-TUPLE_ENTRY (VNx3x16HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx16HF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
-TUPLE_ENTRY (VNx4x16HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx16HF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
-TUPLE_ENTRY (VNx2x8HF, TARGET_VECTOR_ELEN_FP_16, VNx8HF, 2, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
-TUPLE_ENTRY (VNx3x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx8HF, 3, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
-TUPLE_ENTRY (VNx4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx8HF, 4, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
-TUPLE_ENTRY (VNx5x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx8HF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
-TUPLE_ENTRY (VNx6x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx8HF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
-TUPLE_ENTRY (VNx7x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx8HF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
-TUPLE_ENTRY (VNx8x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx8HF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
-TUPLE_ENTRY (VNx2x4HF, TARGET_VECTOR_ELEN_FP_16, VNx4HF, 2, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx3x4HF, TARGET_VECTOR_ELEN_FP_16, VNx4HF, 3, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx4x4HF, TARGET_VECTOR_ELEN_FP_16, VNx4HF, 4, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx5x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx4HF, 5, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx6x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx4HF, 6, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx7x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx4HF, 7, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx8x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx4HF, 8, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
-TUPLE_ENTRY (VNx2x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 2, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx3x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 3, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx4x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 4, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx5x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 5, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx6x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 6, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx7x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 7, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx8x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 8, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
-TUPLE_ENTRY (VNx2x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 2, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx3x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 3, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx4x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 4, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx5x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 5, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx6x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 6, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx7x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 7, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx8x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 8, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
-
-/* Tuple modes for EEW = 32. */
-TUPLE_ENTRY (VNx2x16SI, TARGET_MIN_VLEN >= 128, VNx16SI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 8)
-TUPLE_ENTRY (VNx2x8SI, TARGET_MIN_VLEN >= 64, VNx8SI, 2, LMUL_RESERVED, 0, LMUL_4, 8, LMUL_2, 16)
-TUPLE_ENTRY (VNx3x8SI, TARGET_MIN_VLEN >= 128, VNx8SI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
-TUPLE_ENTRY (VNx4x8SI, TARGET_MIN_VLEN >= 128, VNx8SI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
-TUPLE_ENTRY (VNx2x4SI, true, VNx4SI, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
-TUPLE_ENTRY (VNx3x4SI, TARGET_MIN_VLEN >= 64, VNx4SI, 3, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
-TUPLE_ENTRY (VNx4x4SI, TARGET_MIN_VLEN >= 64, VNx4SI, 4, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
-TUPLE_ENTRY (VNx5x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
-TUPLE_ENTRY (VNx6x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
-TUPLE_ENTRY (VNx7x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
-TUPLE_ENTRY (VNx8x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
-TUPLE_ENTRY (VNx2x2SI, true, VNx2SI, 2, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx3x2SI, true, VNx2SI, 3, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx4x2SI, true, VNx2SI, 4, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx5x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 5, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx6x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 6, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx7x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 7, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx8x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 8, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx2x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 2, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx3x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 3, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx4x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 4, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx5x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 5, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx6x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 6, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx7x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 7, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx8x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 8, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx2x16SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, VNx16SF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 8)
-TUPLE_ENTRY (VNx2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx8SF, 2, LMUL_RESERVED, 0, LMUL_4, 8, LMUL_2, 16)
-TUPLE_ENTRY (VNx3x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx8SF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
-TUPLE_ENTRY (VNx4x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx8SF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
-TUPLE_ENTRY (VNx2x4SF, TARGET_VECTOR_ELEN_FP_32, VNx4SF, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
-TUPLE_ENTRY (VNx3x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx4SF, 3, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
-TUPLE_ENTRY (VNx4x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx4SF, 4, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
-TUPLE_ENTRY (VNx5x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
-TUPLE_ENTRY (VNx6x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
-TUPLE_ENTRY (VNx7x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
-TUPLE_ENTRY (VNx8x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
-TUPLE_ENTRY (VNx2x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 2, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx3x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 3, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx4x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 4, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx5x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 5, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx6x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 6, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx7x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 7, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx8x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 8, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
-TUPLE_ENTRY (VNx2x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 2, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx3x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 3, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx4x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 4, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx5x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 5, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx6x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 6, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx7x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 7, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx8x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 8, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
-
-/* Tuple modes for EEW = 64. */
-TUPLE_ENTRY (VNx2x8DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx8DI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 16)
-TUPLE_ENTRY (VNx2x4DI, TARGET_VECTOR_ELEN_64, VNx4DI, 2, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
-TUPLE_ENTRY (VNx3x4DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx4DI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
-TUPLE_ENTRY (VNx4x4DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx4DI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
-TUPLE_ENTRY (VNx2x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 2, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-TUPLE_ENTRY (VNx3x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 3, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-TUPLE_ENTRY (VNx4x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 4, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-TUPLE_ENTRY (VNx5x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
-TUPLE_ENTRY (VNx6x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
-TUPLE_ENTRY (VNx7x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
-TUPLE_ENTRY (VNx8x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
-TUPLE_ENTRY (VNx2x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 2, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx3x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 3, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx4x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 4, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx5x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 5, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx6x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 6, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx7x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 7, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx8x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 8, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx2x8DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx8DF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 16)
-TUPLE_ENTRY (VNx2x4DF, TARGET_VECTOR_ELEN_FP_64, VNx4DF, 2, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
-TUPLE_ENTRY (VNx3x4DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx4DF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
-TUPLE_ENTRY (VNx4x4DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx4DF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
-TUPLE_ENTRY (VNx2x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 2, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-TUPLE_ENTRY (VNx3x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 3, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-TUPLE_ENTRY (VNx4x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 4, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-TUPLE_ENTRY (VNx5x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
-TUPLE_ENTRY (VNx6x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
-TUPLE_ENTRY (VNx7x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
-TUPLE_ENTRY (VNx8x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
-TUPLE_ENTRY (VNx2x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 2, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx3x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 3, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx4x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 4, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx5x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 5, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx6x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 6, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx7x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 7, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-TUPLE_ENTRY (VNx8x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 8, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
+TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
+TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
+TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
+TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
+TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
+TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
+TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
+TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
+TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
+TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
+TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+
+TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
+TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+
+TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
+TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+
+TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
+TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+
+TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
+TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+
+TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
+TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
+TUPLE_ENTRY (RVVM1x6DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 6, LMUL_1, 16)
+TUPLE_ENTRY (RVVM1x5DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 5, LMUL_1, 16)
+TUPLE_ENTRY (RVVM2x4DI, TARGET_VECTOR_ELEN_64, RVVM2DI, 4, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x4DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 4, LMUL_1, 16)
+TUPLE_ENTRY (RVVM2x3DI, TARGET_VECTOR_ELEN_64, RVVM2DI, 3, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x3DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 3, LMUL_1, 16)
+TUPLE_ENTRY (RVVM4x2DI, TARGET_VECTOR_ELEN_64, RVVM4DI, 2, LMUL_4, 4)
+TUPLE_ENTRY (RVVM2x2DI, TARGET_VECTOR_ELEN_64, RVVM2DI, 2, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x2DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 2, LMUL_1, 16)
+
+TUPLE_ENTRY (RVVM1x8DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 8, LMUL_1, 16)
+TUPLE_ENTRY (RVVM1x7DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 7, LMUL_1, 16)
+TUPLE_ENTRY (RVVM1x6DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 6, LMUL_1, 16)
+TUPLE_ENTRY (RVVM1x5DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 5, LMUL_1, 16)
+TUPLE_ENTRY (RVVM2x4DF, TARGET_VECTOR_ELEN_FP_64, RVVM2DF, 4, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x4DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 4, LMUL_1, 16)
+TUPLE_ENTRY (RVVM2x3DF, TARGET_VECTOR_ELEN_FP_64, RVVM2DF, 3, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x3DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 3, LMUL_1, 16)
+TUPLE_ENTRY (RVVM4x2DF, TARGET_VECTOR_ELEN_FP_64, RVVM4DF, 2, LMUL_4, 4)
+TUPLE_ENTRY (RVVM2x2DF, TARGET_VECTOR_ELEN_FP_64, RVVM2DF, 2, LMUL_2, 8)
+TUPLE_ENTRY (RVVM1x2DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 2, LMUL_1, 16)
-#undef ENTRY
#undef TUPLE_ENTRY
+#undef ENTRY
@@ -892,9 +892,9 @@ change_insn (function_info *ssa, insn_change change, insn_info *insn,
return false;
/* Fix bug:
- (insn 12 34 13 2 (set (reg:VNx8DI 120 v24 [orig:134 _1 ] [134])
- (if_then_else:VNx8DI (unspec:VNx8BI [
- (const_vector:VNx8BI repeat [
+ (insn 12 34 13 2 (set (reg:RVVM4DI 120 v24 [orig:134 _1 ] [134])
+ (if_then_else:RVVM4DI (unspec:RVVMF8BI [
+ (const_vector:RVVMF8BI repeat [
(const_int 1 [0x1])
])
(const_int 0 [0])
@@ -903,13 +903,13 @@ change_insn (function_info *ssa, insn_change change, insn_info *insn,
(reg:SI 66 vl)
(reg:SI 67 vtype)
] UNSPEC_VPREDICATE)
- (plus:VNx8DI (reg/v:VNx8DI 104 v8 [orig:137 op1 ] [137])
- (sign_extend:VNx8DI (vec_duplicate:VNx8SI (reg:SI 15 a5
- [140])))) (unspec:VNx8DI [ (const_int 0 [0]) ] UNSPEC_VUNDEF))) "rvv.c":8:12
+ (plus:RVVM4DI (reg/v:RVVM4DI 104 v8 [orig:137 op1 ] [137])
+ (sign_extend:RVVM4DI (vec_duplicate:RVVM4SI (reg:SI 15 a5
+ [140])))) (unspec:RVVM4DI [ (const_int 0 [0]) ] UNSPEC_VUNDEF))) "rvv.c":8:12
2784 {pred_single_widen_addsvnx8di_scalar} (expr_list:REG_EQUIV
- (mem/c:VNx8DI (reg:DI 10 a0 [142]) [1 <retval>+0 S[64, 64] A128])
- (expr_list:REG_EQUAL (if_then_else:VNx8DI (unspec:VNx8BI [
- (const_vector:VNx8BI repeat [
+ (mem/c:RVVM4DI (reg:DI 10 a0 [142]) [1 <retval>+0 S[64, 64] A128])
+ (expr_list:REG_EQUAL (if_then_else:RVVM4DI (unspec:RVVMF8BI [
+ (const_vector:RVVMF8BI repeat [
(const_int 1 [0x1])
])
(reg/v:DI 13 a3 [orig:139 vl ] [139])
@@ -918,11 +918,11 @@ change_insn (function_info *ssa, insn_change change, insn_info *insn,
(reg:SI 66 vl)
(reg:SI 67 vtype)
] UNSPEC_VPREDICATE)
- (plus:VNx8DI (reg/v:VNx8DI 104 v8 [orig:137 op1 ] [137])
- (const_vector:VNx8DI repeat [
+ (plus:RVVM4DI (reg/v:RVVM4DI 104 v8 [orig:137 op1 ] [137])
+ (const_vector:RVVM4DI repeat [
(const_int 2730 [0xaaa])
]))
- (unspec:VNx8DI [
+ (unspec:RVVM4DI [
(const_int 0 [0])
] UNSPEC_VUNDEF))
(nil))))
@@ -972,8 +972,8 @@ riscv_valid_lo_sum_p (enum riscv_symbol_type sym_type, machine_mode mode,
}
/* Return true if mode is the RVV enabled mode.
- For example: 'VNx1DI' mode is disabled if MIN_VLEN == 32.
- 'VNx1SI' mode is enabled if MIN_VLEN == 32. */
+ For example: when MIN_VLEN == 32, 'RVVMF2SI' mode is disabled,
+ whereas 'RVVM1SI' mode is enabled. */
bool
riscv_v_ext_vector_mode_p (machine_mode mode)
@@ -1023,11 +1023,36 @@ riscv_v_ext_mode_p (machine_mode mode)
poly_int64
riscv_v_adjust_nunits (machine_mode mode, int scale)
{
+ gcc_assert (GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL);
if (riscv_v_ext_mode_p (mode))
- return riscv_vector_chunks * scale;
+ {
+ if (TARGET_MIN_VLEN == 32)
+ scale = scale / 2;
+ return riscv_vector_chunks * scale;
+ }
return scale;
}
+/* Called from ADJUST_NUNITS in riscv-modes.def. Return the correct
+ NUNITS size for the corresponding machine_mode. */
+
+poly_int64
+riscv_v_adjust_nunits (machine_mode mode, bool fractional_p, int lmul, int nf)
+{
+ if (riscv_v_ext_mode_p (mode))
+ {
+ scalar_mode smode = GET_MODE_INNER (mode);
+ int size = GET_MODE_SIZE (smode);
+ int nunits_per_chunk = riscv_bytes_per_vector_chunk / size;
+ if (fractional_p)
+ return nunits_per_chunk / lmul * riscv_vector_chunks * nf;
+ else
+ return nunits_per_chunk * lmul * riscv_vector_chunks * nf;
+ }
+ /* Set the size of disabled RVV modes to 1 by default. */
+ return 1;
+}
+
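For intuition, the same arithmetic on plain integers (a sketch assuming TARGET_MIN_VLEN = 64, so riscv_bytes_per_vector_chunk = 8 and the invariant coefficient of riscv_vector_chunks is 1; a fractional LMUL 1/f is passed as f):

  #include <cstdio>

  static int adjust_nunits (int elt_size, bool fractional_p, int f_or_lmul,
                            int nf, int bytes_per_chunk, int chunks)
  {
    int nunits_per_chunk = bytes_per_chunk / elt_size;
    return fractional_p ? nunits_per_chunk / f_or_lmul * chunks * nf
                        : nunits_per_chunk * f_or_lmul * chunks * nf;
  }

  int main ()
  {
    /* RVVMF4x2HI: HImode (2 bytes), LMUL = 1/4, NF = 2 -> 2 units.  */
    printf ("%d\n", adjust_nunits (2, true, 4, 2, 8, 1));
    /* RVVM4DI: DImode (8 bytes), LMUL = 4, NF = 1 -> 4 units.  */
    printf ("%d\n", adjust_nunits (8, false, 4, 1, 8, 1));
  }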
/* Called from ADJUST_BYTESIZE in riscv-modes.def. Return the correct
BYTE size for the corresponding machine_mode. */
@@ -1035,17 +1060,20 @@ poly_int64
riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
if (riscv_v_ext_vector_mode_p (mode))
- {
- poly_uint16 mode_size = GET_MODE_SIZE (mode);
-
- if (maybe_eq (mode_size, (uint16_t)-1))
- mode_size = riscv_vector_chunks * scale;
+ {
+ poly_int64 nunits = GET_MODE_NUNITS (mode);
+ poly_int64 mode_size = GET_MODE_SIZE (mode);
- if (known_gt (mode_size, BYTES_PER_RISCV_VECTOR))
- mode_size = BYTES_PER_RISCV_VECTOR;
+ if (maybe_eq (mode_size, (uint16_t) -1))
+ mode_size = riscv_vector_chunks * scale;
- return mode_size;
- }
+ if (nunits.coeffs[0] > 8)
+ return exact_div (nunits, 8);
+ else if (nunits.is_constant ())
+ return 1;
+ else
+ return poly_int64 (1, 1);
+ }
return scale;
}
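A worked example of the mask byte-size rule (the NUNITS values are my assumption for TARGET_MIN_VLEN = 64: coefficient 64 for RVVM1BImode, 8 for RVVMF8BImode; only the runtime-invariant coefficient is shown):

  #include <cstdio>

  static int mask_bytesize (int nunits_coeff0)
  {
    if (nunits_coeff0 > 8)
      return nunits_coeff0 / 8; /* exact_div (nunits, 8) */
    return 1;                   /* 1 or poly_int64 (1, 1) in the real code */
  }

  int main ()
  {
    printf ("%d\n", mask_bytesize (64)); /* RVVM1BI  -> 8 bytes */
    printf ("%d\n", mask_bytesize (8));  /* RVVMF8BI -> 1 byte  */
  }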
@@ -1056,10 +1084,7 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
poly_int64
riscv_v_adjust_precision (machine_mode mode, int scale)
{
- if (riscv_v_ext_vector_mode_p (mode))
- return riscv_vector_chunks * scale;
-
- return scale;
+ return riscv_v_adjust_nunits (mode, scale);
}
/* Return true if X is a valid address for machine mode MODE. If it is,
@@ -6482,15 +6507,18 @@ riscv_init_machine_status (void)
static poly_uint16
riscv_convert_vector_bits (void)
{
- int chunk_num = 1;
- if (TARGET_MIN_VLEN >= 128)
+ int chunk_num;
+ if (TARGET_MIN_VLEN > 32)
{
- /* We have Full 'V' extension for application processors. It's specified
- by -march=rv64gcv/rv32gcv, The 'V' extension depends upon the Zvl128b
- and Zve64d extensions. Thus the number of bytes in a vector is 16 + 16
- * x1 which is riscv_vector_chunks * 16 = poly_int (16, 16). */
- riscv_bytes_per_vector_chunk = 16;
+ /* When targeting a minimum VLEN > 32, we should use a 64-bit chunk size.
+ Otherwise we cannot include SEW = 64 bits.
+ Runtime invariant: the single indeterminate represents the
+ number of 64-bit chunks in a vector beyond the minimum length of 64 bits.
+ Thus the number of bytes in a vector is 8 + 8 * x1 which is
+ riscv_vector_chunks * 8 = poly_int (8, 8). */
+ riscv_bytes_per_vector_chunk = 8;
/* Adjust BYTES_PER_RISCV_VECTOR according to TARGET_MIN_VLEN:
+ - TARGET_MIN_VLEN = 64bit: [8,8]
- TARGET_MIN_VLEN = 128bit: [16,16]
- TARGET_MIN_VLEN = 256bit: [32,32]
- TARGET_MIN_VLEN = 512bit: [64,64]
@@ -6498,17 +6526,7 @@ riscv_convert_vector_bits (void)
- TARGET_MIN_VLEN = 2048bit: [256,256]
- TARGET_MIN_VLEN = 4096bit: [512,512]
FIXME: We currently DON'T support TARGET_MIN_VLEN > 4096bit. */
- chunk_num = TARGET_MIN_VLEN / 128;
- }
- else if (TARGET_MIN_VLEN > 32)
- {
- /* When targetting minimum VLEN > 32, we should use 64-bit chunk size.
- Otherwise we can not include SEW = 64bits.
- Runtime invariant: The single indeterminate represent the
- number of 64-bit chunks in a vector beyond minimum length of 64 bits.
- Thus the number of bytes in a vector is 8 + 8 * x1 which is
- riscv_vector_chunks * 8 = poly_int (8, 8). */
- riscv_bytes_per_vector_chunk = 8;
+ chunk_num = TARGET_MIN_VLEN / 64;
}
else
{
@@ -6518,6 +6536,7 @@ riscv_convert_vector_bits (void)
Thus the number of bytes in a vector is 4 + 4 * x1 which is
riscv_vector_chunks * 4 = poly_int (4, 4). */
riscv_bytes_per_vector_chunk = 4;
+ chunk_num = 1;
}
/* Set riscv_vector_chunks as poly (1, 1) run-time constant if TARGET_VECTOR
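The refactored chunk logic collapses the old three-way split into two cases. A standalone re-computation of the resulting BYTES_PER_RISCV_VECTOR values (my sketch of the logic above, not GCC code):

  #include <cstdio>

  int main ()
  {
    const int vlens[] = {32, 64, 128, 256};
    for (int min_vlen : vlens)
      {
        int bytes_per_chunk = min_vlen > 32 ? 8 : 4;
        int chunk_num = min_vlen > 32 ? min_vlen / 64 : 1;
        int n = bytes_per_chunk * chunk_num;
        printf ("MIN_VLEN=%4d -> BYTES_PER_RISCV_VECTOR = poly (%d, %d)\n",
                min_vlen, n, n);
      }
  }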
@@ -1040,6 +1040,7 @@ extern unsigned riscv_stack_boundary;
extern unsigned riscv_bytes_per_vector_chunk;
extern poly_uint16 riscv_vector_chunks;
extern poly_int64 riscv_v_adjust_nunits (enum machine_mode, int);
+extern poly_int64 riscv_v_adjust_nunits (machine_mode, bool, int, int);
extern poly_int64 riscv_v_adjust_precision (enum machine_mode, int);
extern poly_int64 riscv_v_adjust_bytesize (enum machine_mode, int);
/* The number of bits and bytes in a RVV vector. */
@@ -172,44 +172,51 @@
;; Main data type used by the insn
(define_attr "mode" "unknown,none,QI,HI,SI,DI,TI,HF,SF,DF,TF,
- VNx1BI,VNx2BI,VNx4BI,VNx8BI,VNx16BI,VNx32BI,VNx64BI,VNx128BI,
- VNx1QI,VNx2QI,VNx4QI,VNx8QI,VNx16QI,VNx32QI,VNx64QI,VNx128QI,
- VNx1HI,VNx2HI,VNx4HI,VNx8HI,VNx16HI,VNx32HI,VNx64HI,
- VNx1SI,VNx2SI,VNx4SI,VNx8SI,VNx16SI,VNx32SI,
- VNx1DI,VNx2DI,VNx4DI,VNx8DI,VNx16DI,
- VNx1HF,VNx2HF,VNx4HF,VNx8HF,VNx16HF,VNx32HF,VNx64HF,
- VNx1SF,VNx2SF,VNx4SF,VNx8SF,VNx16SF,VNx32SF,
- VNx1DF,VNx2DF,VNx4DF,VNx8DF,VNx16DF,
- VNx2x64QI,VNx2x32QI,VNx3x32QI,VNx4x32QI,
- VNx2x16QI,VNx3x16QI,VNx4x16QI,VNx5x16QI,VNx6x16QI,VNx7x16QI,VNx8x16QI,
- VNx2x8QI,VNx3x8QI,VNx4x8QI,VNx5x8QI,VNx6x8QI,VNx7x8QI,VNx8x8QI,
- VNx2x4QI,VNx3x4QI,VNx4x4QI,VNx5x4QI,VNx6x4QI,VNx7x4QI,VNx8x4QI,
- VNx2x2QI,VNx3x2QI,VNx4x2QI,VNx5x2QI,VNx6x2QI,VNx7x2QI,VNx8x2QI,
- VNx2x1QI,VNx3x1QI,VNx4x1QI,VNx5x1QI,VNx6x1QI,VNx7x1QI,VNx8x1QI,
- VNx2x32HI,VNx2x16HI,VNx3x16HI,VNx4x16HI,
- VNx2x8HI,VNx3x8HI,VNx4x8HI,VNx5x8HI,VNx6x8HI,VNx7x8HI,VNx8x8HI,
- VNx2x4HI,VNx3x4HI,VNx4x4HI,VNx5x4HI,VNx6x4HI,VNx7x4HI,VNx8x4HI,
- VNx2x2HI,VNx3x2HI,VNx4x2HI,VNx5x2HI,VNx6x2HI,VNx7x2HI,VNx8x2HI,
- VNx2x1HI,VNx3x1HI,VNx4x1HI,VNx5x1HI,VNx6x1HI,VNx7x1HI,VNx8x1HI,
- VNx2x32HF,VNx2x16HF,VNx3x16HF,VNx4x16HF,
- VNx2x8HF,VNx3x8HF,VNx4x8HF,VNx5x8HF,VNx6x8HF,VNx7x8HF,VNx8x8HF,
- VNx2x4HF,VNx3x4HF,VNx4x4HF,VNx5x4HF,VNx6x4HF,VNx7x4HF,VNx8x4HF,
- VNx2x2HF,VNx3x2HF,VNx4x2HF,VNx5x2HF,VNx6x2HF,VNx7x2HF,VNx8x2HF,
- VNx2x1HF,VNx3x1HF,VNx4x1HF,VNx5x1HF,VNx6x1HF,VNx7x1HF,VNx8x1HF,
- VNx2x16SI,VNx2x8SI,VNx3x8SI,VNx4x8SI,
- VNx2x4SI,VNx3x4SI,VNx4x4SI,VNx5x4SI,VNx6x4SI,VNx7x4SI,VNx8x4SI,
- VNx2x2SI,VNx3x2SI,VNx4x2SI,VNx5x2SI,VNx6x2SI,VNx7x2SI,VNx8x2SI,
- VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,
- VNx2x16SF,VNx2x8SF,VNx3x8SF,VNx4x8SF,
- VNx2x4SF,VNx3x4SF,VNx4x4SF,VNx5x4SF,VNx6x4SF,VNx7x4SF,VNx8x4SF,
- VNx2x2SF,VNx3x2SF,VNx4x2SF,VNx5x2SF,VNx6x2SF,VNx7x2SF,VNx8x2SF,
- VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF,
- VNx2x8DI,VNx2x4DI,VNx3x4DI,VNx4x4DI,
- VNx2x2DI,VNx3x2DI,VNx4x2DI,VNx5x2DI,VNx6x2DI,VNx7x2DI,VNx8x2DI,
- VNx2x1DI,VNx3x1DI,VNx4x1DI,VNx5x1DI,VNx6x1DI,VNx7x1DI,VNx8x1DI,
- VNx2x8DF,VNx2x4DF,VNx3x4DF,VNx4x4DF,
- VNx2x2DF,VNx3x2DF,VNx4x2DF,VNx5x2DF,VNx6x2DF,VNx7x2DF,VNx8x2DF,
- VNx2x1DF,VNx3x1DF,VNx4x1DF,VNx5x1DF,VNx6x1DF,VNx7x1DF,VNx8x1DF"
+ RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,
+ RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,
+ RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,
+ RVVM8HF,RVVM4HF,RVVM2HF,RVVM1HF,RVVMF2HF,RVVMF4HF,
+ RVVM8SI,RVVM4SI,RVVM2SI,RVVM1SI,RVVMF2SI,
+ RVVM8SF,RVVM4SF,RVVM2SF,RVVM1SF,RVVMF2SF,
+ RVVM8DI,RVVM4DI,RVVM2DI,RVVM1DI,
+ RVVM8DF,RVVM4DF,RVVM2DF,RVVM1DF,
+ RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,
+ RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,
+ RVVM1x6QI,RVVMF2x6QI,RVVMF4x6QI,RVVMF8x6QI,
+ RVVM1x5QI,RVVMF2x5QI,RVVMF4x5QI,RVVMF8x5QI,
+ RVVM2x4QI,RVVM1x4QI,RVVMF2x4QI,RVVMF4x4QI,RVVMF8x4QI,
+ RVVM2x3QI,RVVM1x3QI,RVVMF2x3QI,RVVMF4x3QI,RVVMF8x3QI,
+ RVVM4x2QI,RVVM2x2QI,RVVM1x2QI,RVVMF2x2QI,RVVMF4x2QI,RVVMF8x2QI,
+ RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,
+ RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,
+ RVVM1x6HI,RVVMF2x6HI,RVVMF4x6HI,
+ RVVM1x5HI,RVVMF2x5HI,RVVMF4x5HI,
+ RVVM2x4HI,RVVM1x4HI,RVVMF2x4HI,RVVMF4x4HI,
+ RVVM2x3HI,RVVM1x3HI,RVVMF2x3HI,RVVMF4x3HI,
+ RVVM4x2HI,RVVM2x2HI,RVVM1x2HI,RVVMF2x2HI,RVVMF4x2HI,
+ RVVM1x8HF,RVVMF2x8HF,RVVMF4x8HF,RVVM1x7HF,RVVMF2x7HF,
+ RVVMF4x7HF,RVVM1x6HF,RVVMF2x6HF,RVVMF4x6HF,RVVM1x5HF,
+ RVVMF2x5HF,RVVMF4x5HF,RVVM2x4HF,RVVM1x4HF,RVVMF2x4HF,
+ RVVMF4x4HF,RVVM2x3HF,RVVM1x3HF,RVVMF2x3HF,RVVMF4x3HF,
+ RVVM4x2HF,RVVM2x2HF,RVVM1x2HF,RVVMF2x2HF,RVVMF4x2HF,
+ RVVM1x8SI,RVVMF2x8SI,
+ RVVM1x7SI,RVVMF2x7SI,
+ RVVM1x6SI,RVVMF2x6SI,
+ RVVM1x5SI,RVVMF2x5SI,
+ RVVM2x4SI,RVVM1x4SI,RVVMF2x4SI,
+ RVVM2x3SI,RVVM1x3SI,RVVMF2x3SI,
+ RVVM4x2SI,RVVM2x2SI,RVVM1x2SI,RVVMF2x2SI,
+ RVVM1x8SF,RVVMF2x8SF,RVVM1x7SF,RVVMF2x7SF,
+ RVVM1x6SF,RVVMF2x6SF,RVVM1x5SF,RVVMF2x5SF,
+ RVVM2x4SF,RVVM1x4SF,RVVMF2x4SF,RVVM2x3SF,
+ RVVM1x3SF,RVVMF2x3SF,RVVM4x2SF,RVVM2x2SF,
+ RVVM1x2SF,RVVMF2x2SF,
+ RVVM1x8DI,RVVM1x7DI,RVVM1x6DI,RVVM1x5DI,
+ RVVM2x4DI,RVVM1x4DI,RVVM2x3DI,RVVM1x3DI,
+ RVVM4x2DI,RVVM2x2DI,RVVM1x2DI,RVVM1x8DF,
+ RVVM1x7DF,RVVM1x6DF,RVVM1x5DF,RVVM2x4DF,
+ RVVM1x4DF,RVVM2x3DF,RVVM1x3DF,RVVM4x2DF,
+ RVVM2x2DF,RVVM1x2DF"
(const_string "unknown"))
;; True if the main data type is twice the size of a word.
@@ -447,13 +454,13 @@
vfncvtitof,vfwcvtftoi,vfcvtftoi,vfcvtitof,
vfredo,vfredu,vfwredo,vfwredu,
vfslide1up,vfslide1down")
- (and (eq_attr "mode" "VNx1HF,VNx2HF,VNx4HF,VNx8HF,VNx16HF,VNx32HF,VNx64HF")
+ (and (eq_attr "mode" "RVVM8HF,RVVM4HF,RVVM2HF,RVVM1HF,RVVMF2HF,RVVMF4HF")
(match_test "!TARGET_ZVFH")))
(const_string "yes")
;; The mode records as QI for the FP16 <=> INT8 instruction.
(and (eq_attr "type" "vfncvtftoi,vfwcvtitof")
- (and (eq_attr "mode" "VNx1QI,VNx2QI,VNx4QI,VNx8QI,VNx16QI,VNx32QI,VNx64QI")
+ (and (eq_attr "mode" "RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI")
(match_test "!TARGET_ZVFH")))
(const_string "yes")
]
@@ -88,219 +88,213 @@
])
(define_mode_iterator V [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN > 32") (VNx128QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
-
- (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx16HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (VNx64HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
-
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VEEWEXT2 [
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128") (VNx2HF "TARGET_VECTOR_ELEN_FP_16") (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8HF "TARGET_VECTOR_ELEN_FP_16") (VNx16HF "TARGET_VECTOR_ELEN_FP_16") (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (VNx64HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VEEWEXT4 [
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VEEWEXT8 [
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VEEWTRUNC2 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN >= 128")
- (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128") (VNx2HF "TARGET_VECTOR_ELEN_FP_16") (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8HF "TARGET_VECTOR_ELEN_FP_16") (VNx16HF "TARGET_VECTOR_ELEN_FP_16") (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+ RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VEEWTRUNC4 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI (VNx32QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI (VNx16HI "TARGET_MIN_VLEN >= 128")
- (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128") (VNx2HF "TARGET_VECTOR_ELEN_FP_16") (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8HF "TARGET_VECTOR_ELEN_FP_16") (VNx16HF "TARGET_VECTOR_ELEN_FP_16")
+ RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM2HF "TARGET_VECTOR_ELEN_FP_16") (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VEEWTRUNC8 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI (VNx16QI "TARGET_MIN_VLEN >= 128")
+ RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VLMULEXT2 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+ RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM4DI "TARGET_VECTOR_ELEN_64") (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM4DF "TARGET_VECTOR_ELEN_FP_64") (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VLMULEXT4 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI (VNx32QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI (VNx16HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI (VNx8SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM2HF "TARGET_VECTOR_ELEN_FP_16") (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VLMULEXT8 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI (VNx16QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI (VNx8HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI (VNx4SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+ RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VLMULEXT16 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI (VNx8QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI (VNx4HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") (VNx2SI "TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+ RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VLMULEXT32 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI (VNx4QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") (VNx2HI "TARGET_MIN_VLEN >= 128")
+ RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VLMULEXT64 [
- (VNx1QI "TARGET_MIN_VLEN < 128") (VNx2QI "TARGET_MIN_VLEN >= 128")
+ (RVVMF8QI "TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VEI16 [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VI [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN > 32") (VNx128QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
-])
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
-(define_mode_iterator VWI [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN > 32") (VNx128QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
])
(define_mode_iterator VF_ZVFHMIN [
- (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx16HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (VNx64HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
-
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
;; This iterator is the same as above but with TARGET_VECTOR_ELEN_FP_16
@@ -311,1246 +305,1186 @@
;; since this will only disable insn alternatives in reload but still
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
- (VNx1HF "TARGET_ZVFH && TARGET_MIN_VLEN < 128")
- (VNx2HF "TARGET_ZVFH")
- (VNx4HF "TARGET_ZVFH")
- (VNx8HF "TARGET_ZVFH")
- (VNx16HF "TARGET_ZVFH")
- (VNx32HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
- (VNx64HF "TARGET_ZVFH && TARGET_MIN_VLEN >= 128")
-
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
-])
-
-(define_mode_iterator VWF [
- (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx16HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (VNx64HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_MIN_VLEN < 128") VNx2SF VNx4SF VNx8SF (VNx16SF "TARGET_MIN_VLEN > 32") (VNx32SF "TARGET_MIN_VLEN >= 128")
+ (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
+ (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VFULLI [
- (VNx1QI "!TARGET_FULL_V") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN > 32") (VNx128QI "TARGET_FULL_V")
- (VNx1HI "!TARGET_FULL_V") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_FULL_V")
- (VNx1SI "!TARGET_FULL_V") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_FULL_V")
- (VNx2DI "TARGET_FULL_V") (VNx4DI "TARGET_FULL_V") (VNx8DI "TARGET_FULL_V") (VNx16DI "TARGET_FULL_V")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
])
(define_mode_iterator VI_QH [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN > 32") (VNx128QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN > 32") (VNx128QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_D [
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
])
(define_mode_iterator VFULLI_D [
- (VNx2DI "TARGET_FULL_V") (VNx4DI "TARGET_FULL_V") (VNx8DI "TARGET_FULL_V") (VNx16DI "TARGET_FULL_V")
+ (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V")
+ (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
])
-(define_mode_iterator VNX1_QHSD [
- (VNx1QI "TARGET_MIN_VLEN < 128")
- (VNx1HI "TARGET_MIN_VLEN < 128")
- (VNx1SI "TARGET_MIN_VLEN < 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+(define_mode_iterator RATIO64 [
+ (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVM1DI "TARGET_VECTOR_ELEN_64")
+ (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
-(define_mode_iterator VNX2_QHSD [
- VNx2QI
- VNx2HI
- VNx2SI
- (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
+(define_mode_iterator RATIO32 [
+ RVVMF4QI
+ RVVMF2HI
+ RVVM1SI
+ (RVVM2DI "TARGET_VECTOR_ELEN_64")
+ (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
-(define_mode_iterator VNX4_QHSD [
- VNx4QI
- VNx4HI
- VNx4SI
- (VNx4DI "TARGET_VECTOR_ELEN_64")
- (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
+(define_mode_iterator RATIO16 [
+ RVVMF2QI
+ RVVM1HI
+ RVVM2SI
+ (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
])
-(define_mode_iterator VNX8_QHSD [
- VNx8QI
- VNx8HI
- VNx8SI
- (VNx8DI "TARGET_VECTOR_ELEN_64")
- (VNx8HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
+(define_mode_iterator RATIO8 [
+ RVVM1QI
+ RVVM2HI
+ RVVM4SI
+ (RVVM8DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
])
-(define_mode_iterator VNX16_QHSD [
- VNx16QI
- VNx16HI
- (VNx16SI "TARGET_MIN_VLEN > 32")
- (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx16HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+(define_mode_iterator RATIO4 [
+ RVVM2QI
+ RVVM4HI
+ RVVM8SI
+ (RVVM4HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32")
])
-(define_mode_iterator VNX32_QHS [
- VNx32QI
- (VNx32HI "TARGET_MIN_VLEN > 32")
- (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+(define_mode_iterator RATIO2 [
+ RVVM4QI
+ RVVM8HI
+ (RVVM8HF "TARGET_VECTOR_ELEN_FP_16")
])
-(define_mode_iterator VNX64_QH [
- (VNx64QI "TARGET_MIN_VLEN > 32")
- (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx64HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
+(define_mode_iterator RATIO1 [
+ RVVM8QI
])
-(define_mode_iterator VNX128_Q [
- (VNx128QI "TARGET_MIN_VLEN >= 128")
+(define_mode_iterator RATIO64I [
+ (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVM1DI "TARGET_VECTOR_ELEN_64")
])
-(define_mode_iterator VNX1_QHSDI [
- (VNx1QI "TARGET_MIN_VLEN < 128")
- (VNx1HI "TARGET_MIN_VLEN < 128")
- (VNx1SI "TARGET_MIN_VLEN < 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128 && TARGET_64BIT")
+(define_mode_iterator RATIO32I [
+ RVVMF4QI
+ RVVMF2HI
+ RVVM1SI
+ (RVVM2DI "TARGET_VECTOR_ELEN_64")
])
-(define_mode_iterator VNX2_QHSDI [
- VNx2QI
- VNx2HI
- VNx2SI
- (VNx2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
+(define_mode_iterator RATIO16I [
+ RVVMF2QI
+ RVVM1HI
+ RVVM2SI
+ (RVVM4DI "TARGET_VECTOR_ELEN_64")
])
-(define_mode_iterator VNX4_QHSDI [
- VNx4QI
- VNx4HI
- VNx4SI
- (VNx4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
+(define_mode_iterator RATIO8I [
+ RVVM1QI
+ RVVM2HI
+ RVVM4SI
+ (RVVM8DI "TARGET_VECTOR_ELEN_64")
])
-(define_mode_iterator VNX8_QHSDI [
- VNx8QI
- VNx8HI
- VNx8SI
- (VNx8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
+(define_mode_iterator RATIO4I [
+ RVVM2QI
+ RVVM4HI
+ RVVM8SI
])
-(define_mode_iterator VNX16_QHSDI [
- VNx16QI
- VNx16HI
- (VNx16SI "TARGET_MIN_VLEN > 32")
- (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128 && TARGET_64BIT")
+(define_mode_iterator RATIO2I [
+ RVVM4QI
+ RVVM8HI
])
-(define_mode_iterator VNX32_QHSI [
- VNx32QI
- (VNx32HI "TARGET_MIN_VLEN > 32")
- (VNx32SI "TARGET_MIN_VLEN >= 128")
-])
-
-(define_mode_iterator VNX64_QHI [
- (VNx64QI "TARGET_MIN_VLEN > 32")
- (VNx64HI "TARGET_MIN_VLEN >= 128")
-])
-
-(define_mode_iterator V_WHOLE [
- (VNx4QI "TARGET_MIN_VLEN == 32") (VNx8QI "TARGET_MIN_VLEN < 128") VNx16QI VNx32QI
- (VNx64QI "TARGET_MIN_VLEN > 32") (VNx128QI "TARGET_MIN_VLEN >= 128")
- (VNx2HI "TARGET_MIN_VLEN == 32") (VNx4HI "TARGET_MIN_VLEN < 128") VNx8HI VNx16HI
- (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN == 32") (VNx2SI "TARGET_MIN_VLEN < 128") VNx4SI VNx8SI
- (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
-
- (VNx2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN == 32")
- (VNx4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx8HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx16HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (VNx64HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
-
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN == 32")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+(define_mode_iterator V_WHOLE [
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI
+
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI
+
+ (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM2HF "TARGET_VECTOR_ELEN_FP_16") (RVVM1HF "TARGET_VECTOR_ELEN_FP_16")
+
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI
+
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
+
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator V_FRACT [
- (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI (VNx4QI "TARGET_MIN_VLEN > 32") (VNx8QI "TARGET_MIN_VLEN >= 128")
- (VNx1HI "TARGET_MIN_VLEN < 128") (VNx2HI "TARGET_MIN_VLEN > 32") (VNx4HI "TARGET_MIN_VLEN >= 128")
+ RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
- (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (VNx4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
+ RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
- (VNx1SI "TARGET_MIN_VLEN == 64") (VNx2SI "TARGET_MIN_VLEN >= 128")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN == 64")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+ (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VB [
- (VNx1BI "TARGET_MIN_VLEN < 128") VNx2BI VNx4BI VNx8BI VNx16BI VNx32BI
- (VNx64BI "TARGET_MIN_VLEN > 32") (VNx128BI "TARGET_MIN_VLEN >= 128")
+ (RVVMF64BI "TARGET_MIN_VLEN > 32") RVVMF32BI RVVMF16BI RVVMF8BI RVVMF4BI RVVMF2BI RVVM1BI
])
(define_mode_iterator VWEXTI [
- (VNx1HI "TARGET_MIN_VLEN < 128") VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32") (VNx64HI "TARGET_MIN_VLEN >= 128")
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64")
- (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
])
;; Same iterator split reason as VF_ZVFHMIN and VF.
(define_mode_iterator VWEXTF_ZVFHMIN [
- (VNx1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_16")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_16")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VWEXTF [
- (VNx1SF "TARGET_ZVFH && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_ZVFH")
- (VNx4SF "TARGET_ZVFH")
- (VNx8SF "TARGET_ZVFH")
- (VNx16SF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_ZVFH && TARGET_MIN_VLEN >= 128")
+ (RVVM8SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VWCONVERTI [
- (VNx1SI "TARGET_ZVFH && TARGET_MIN_VLEN < 128")
- (VNx2SI "TARGET_ZVFH")
- (VNx4SI "TARGET_ZVFH")
- (VNx8SI "TARGET_ZVFH")
- (VNx16SI "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
- (VNx32SI "TARGET_ZVFH && TARGET_MIN_VLEN >= 128")
+ (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
+ (RVVMF2SI "TARGET_ZVFH")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
- (VNx4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
- (VNx8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
- (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
])
(define_mode_iterator VQEXTI [
- (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64")
- (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
])
(define_mode_iterator VQEXTF [
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator VOEXTI [
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64")
- (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
])
(define_mode_iterator VT [
- (VNx2x64QI "TARGET_MIN_VLEN >= 128")
- (VNx2x32QI "TARGET_MIN_VLEN >= 64")
- (VNx3x32QI "TARGET_MIN_VLEN >= 128")
- (VNx4x32QI "TARGET_MIN_VLEN >= 128")
- VNx2x16QI
- (VNx3x16QI "TARGET_MIN_VLEN >= 64")
- (VNx4x16QI "TARGET_MIN_VLEN >= 64")
- (VNx5x16QI "TARGET_MIN_VLEN >= 128")
- (VNx6x16QI "TARGET_MIN_VLEN >= 128")
- (VNx7x16QI "TARGET_MIN_VLEN >= 128")
- (VNx8x16QI "TARGET_MIN_VLEN >= 128")
- VNx2x8QI
- VNx3x8QI
- VNx4x8QI
- (VNx5x8QI "TARGET_MIN_VLEN >= 64")
- (VNx6x8QI "TARGET_MIN_VLEN >= 64")
- (VNx7x8QI "TARGET_MIN_VLEN >= 64")
- (VNx8x8QI "TARGET_MIN_VLEN >= 64")
- VNx2x4QI
- VNx3x4QI
- VNx4x4QI
- VNx5x4QI
- VNx6x4QI
- VNx7x4QI
- VNx8x4QI
- VNx2x2QI
- VNx3x2QI
- VNx4x2QI
- VNx5x2QI
- VNx6x2QI
- VNx7x2QI
- VNx8x2QI
- (VNx2x1QI "TARGET_MIN_VLEN < 128")
- (VNx3x1QI "TARGET_MIN_VLEN < 128")
- (VNx4x1QI "TARGET_MIN_VLEN < 128")
- (VNx5x1QI "TARGET_MIN_VLEN < 128")
- (VNx6x1QI "TARGET_MIN_VLEN < 128")
- (VNx7x1QI "TARGET_MIN_VLEN < 128")
- (VNx8x1QI "TARGET_MIN_VLEN < 128")
- (VNx2x32HI "TARGET_MIN_VLEN >= 128")
- (VNx2x16HI "TARGET_MIN_VLEN >= 64")
- (VNx3x16HI "TARGET_MIN_VLEN >= 128")
- (VNx4x16HI "TARGET_MIN_VLEN >= 128")
- VNx2x8HI
- (VNx3x8HI "TARGET_MIN_VLEN >= 64")
- (VNx4x8HI "TARGET_MIN_VLEN >= 64")
- (VNx5x8HI "TARGET_MIN_VLEN >= 128")
- (VNx6x8HI "TARGET_MIN_VLEN >= 128")
- (VNx7x8HI "TARGET_MIN_VLEN >= 128")
- (VNx8x8HI "TARGET_MIN_VLEN >= 128")
- VNx2x4HI
- VNx3x4HI
- VNx4x4HI
- (VNx5x4HI "TARGET_MIN_VLEN >= 64")
- (VNx6x4HI "TARGET_MIN_VLEN >= 64")
- (VNx7x4HI "TARGET_MIN_VLEN >= 64")
- (VNx8x4HI "TARGET_MIN_VLEN >= 64")
- VNx2x2HI
- VNx3x2HI
- VNx4x2HI
- VNx5x2HI
- VNx6x2HI
- VNx7x2HI
- VNx8x2HI
- (VNx2x1HI "TARGET_MIN_VLEN < 128")
- (VNx3x1HI "TARGET_MIN_VLEN < 128")
- (VNx4x1HI "TARGET_MIN_VLEN < 128")
- (VNx5x1HI "TARGET_MIN_VLEN < 128")
- (VNx6x1HI "TARGET_MIN_VLEN < 128")
- (VNx7x1HI "TARGET_MIN_VLEN < 128")
- (VNx8x1HI "TARGET_MIN_VLEN < 128")
- (VNx2x16SI "TARGET_MIN_VLEN >= 128")
- (VNx2x8SI "TARGET_MIN_VLEN >= 64")
- (VNx3x8SI "TARGET_MIN_VLEN >= 128")
- (VNx4x8SI "TARGET_MIN_VLEN >= 128")
- VNx2x4SI
- (VNx3x4SI "TARGET_MIN_VLEN >= 64")
- (VNx4x4SI "TARGET_MIN_VLEN >= 64")
- (VNx5x4SI "TARGET_MIN_VLEN >= 128")
- (VNx6x4SI "TARGET_MIN_VLEN >= 128")
- (VNx7x4SI "TARGET_MIN_VLEN >= 128")
- (VNx8x4SI "TARGET_MIN_VLEN >= 128")
- VNx2x2SI
- VNx3x2SI
- VNx4x2SI
- (VNx5x2SI "TARGET_MIN_VLEN >= 64")
- (VNx6x2SI "TARGET_MIN_VLEN >= 64")
- (VNx7x2SI "TARGET_MIN_VLEN >= 64")
- (VNx8x2SI "TARGET_MIN_VLEN >= 64")
- (VNx2x1SI "TARGET_MIN_VLEN < 128")
- (VNx3x1SI "TARGET_MIN_VLEN < 128")
- (VNx4x1SI "TARGET_MIN_VLEN < 128")
- (VNx5x1SI "TARGET_MIN_VLEN < 128")
- (VNx6x1SI "TARGET_MIN_VLEN < 128")
- (VNx7x1SI "TARGET_MIN_VLEN < 128")
- (VNx8x1SI "TARGET_MIN_VLEN < 128")
- (VNx2x8DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x4DI "TARGET_VECTOR_ELEN_64")
- (VNx3x4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx4x4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x2DI "TARGET_VECTOR_ELEN_64")
- (VNx3x2DI "TARGET_VECTOR_ELEN_64")
- (VNx4x2DI "TARGET_VECTOR_ELEN_64")
- (VNx5x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx6x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx7x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx8x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx3x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx4x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx5x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx6x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx7x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx8x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx2x32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx2x16HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64")
- (VNx3x16HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx4x16HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx2x8HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx3x8HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64")
- (VNx4x8HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64")
- (VNx5x8HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx6x8HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx7x8HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx8x8HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
- (VNx2x4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx3x4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx4x4HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx5x4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64")
- (VNx6x4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64")
- (VNx7x4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64")
- (VNx8x4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64")
- (VNx2x2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx3x2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx4x2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx5x2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx6x2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx7x2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx8x2HF "TARGET_VECTOR_ELEN_FP_16")
- (VNx2x1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx3x1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx4x1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx5x1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx6x1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx7x1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx8x1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
- (VNx2x16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx2x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx3x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx4x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx2x4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx3x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx4x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx5x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx6x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx7x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx8x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx2x2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx3x2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4x2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx5x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx6x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx7x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx8x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx2x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx3x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx4x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx5x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx6x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx7x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx8x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2x8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx3x4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx4x4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx3x2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4x2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx5x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx6x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx7x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx8x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx3x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx4x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx5x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx6x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx7x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx8x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
-])
-
-(define_mode_iterator V1I [
- (VNx1QI "TARGET_MIN_VLEN < 128")
- (VNx1HI "TARGET_MIN_VLEN < 128")
- (VNx1SI "TARGET_MIN_VLEN < 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
-])
-
-(define_mode_iterator V2I [
- VNx2QI
- VNx2HI
- VNx2SI
- (VNx2DI "TARGET_VECTOR_ELEN_64")
-])
-
-(define_mode_iterator V4I [
- VNx4QI
- VNx4HI
- VNx4SI
- (VNx4DI "TARGET_VECTOR_ELEN_64")
-])
-
-(define_mode_iterator V8I [
- VNx8QI
- VNx8HI
- VNx8SI
- (VNx8DI "TARGET_VECTOR_ELEN_64")
-])
-
-(define_mode_iterator V16I [
- VNx16QI
- VNx16HI
- (VNx16SI "TARGET_MIN_VLEN > 32")
- (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
-])
-
-(define_mode_iterator V32I [
- VNx32QI
- (VNx32HI "TARGET_MIN_VLEN > 32")
- (VNx32SI "TARGET_MIN_VLEN >= 128")
-])
-
-(define_mode_iterator V64I [
- (VNx64QI "TARGET_MIN_VLEN > 32")
- (VNx64HI "TARGET_MIN_VLEN >= 128")
+ RVVM1x8QI RVVMF2x8QI RVVMF4x8QI (RVVMF8x8QI "TARGET_MIN_VLEN > 32")
+ RVVM1x7QI RVVMF2x7QI RVVMF4x7QI (RVVMF8x7QI "TARGET_MIN_VLEN > 32")
+ RVVM1x6QI RVVMF2x6QI RVVMF4x6QI (RVVMF8x6QI "TARGET_MIN_VLEN > 32")
+ RVVM1x5QI RVVMF2x5QI RVVMF4x5QI (RVVMF8x5QI "TARGET_MIN_VLEN > 32")
+ RVVM2x4QI RVVM1x4QI RVVMF2x4QI RVVMF4x4QI (RVVMF8x4QI "TARGET_MIN_VLEN > 32")
+ RVVM2x3QI RVVM1x3QI RVVMF2x3QI RVVMF4x3QI (RVVMF8x3QI "TARGET_MIN_VLEN > 32")
+ RVVM4x2QI RVVM2x2QI RVVM1x2QI RVVMF2x2QI RVVMF4x2QI (RVVMF8x2QI "TARGET_MIN_VLEN > 32")
+
+ RVVM1x8HI RVVMF2x8HI (RVVMF4x8HI "TARGET_MIN_VLEN > 32")
+ RVVM1x7HI RVVMF2x7HI (RVVMF4x7HI "TARGET_MIN_VLEN > 32")
+ RVVM1x6HI RVVMF2x6HI (RVVMF4x6HI "TARGET_MIN_VLEN > 32")
+ RVVM1x5HI RVVMF2x5HI (RVVMF4x5HI "TARGET_MIN_VLEN > 32")
+ RVVM2x4HI RVVM1x4HI RVVMF2x4HI (RVVMF4x4HI "TARGET_MIN_VLEN > 32")
+ RVVM2x3HI RVVM1x3HI RVVMF2x3HI (RVVMF4x3HI "TARGET_MIN_VLEN > 32")
+ RVVM4x2HI RVVM2x2HI RVVM1x2HI RVVMF2x2HI (RVVMF4x2HI "TARGET_MIN_VLEN > 32")
+
+ (RVVM1x8HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x8HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x8HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1x7HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x7HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x7HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1x6HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x6HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x6HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1x5HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x5HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x5HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM2x4HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x4HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x4HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM2x3HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x3HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x3HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x3HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM4x2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM2x2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+
+ RVVM1x8SI (RVVMF2x8SI "TARGET_MIN_VLEN > 32")
+ RVVM1x7SI (RVVMF2x7SI "TARGET_MIN_VLEN > 32")
+ RVVM1x6SI (RVVMF2x6SI "TARGET_MIN_VLEN > 32")
+ RVVM1x5SI (RVVMF2x5SI "TARGET_MIN_VLEN > 32")
+ RVVM2x4SI RVVM1x4SI (RVVMF2x4SI "TARGET_MIN_VLEN > 32")
+ RVVM2x3SI RVVM1x3SI (RVVMF2x3SI "TARGET_MIN_VLEN > 32")
+ RVVM4x2SI RVVM2x2SI RVVM1x2SI (RVVMF2x2SI "TARGET_MIN_VLEN > 32")
+
+ (RVVM1x8SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1x7SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x7SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1x6SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x6SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1x5SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x5SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM2x4SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x4SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM2x3SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x3SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x3SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM4x2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2x2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+
+ (RVVM1x8DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x7DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x6DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x5DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2x4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2x3DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x3DI "TARGET_VECTOR_ELEN_64")
+ (RVVM4x2DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2x2DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x2DI "TARGET_VECTOR_ELEN_64")
+
+ (RVVM1x8DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x7DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x6DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x5DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2x4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2x3DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x3DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM4x2DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2x2DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator V1T [
- (VNx2x1QI "TARGET_MIN_VLEN < 128")
- (VNx3x1QI "TARGET_MIN_VLEN < 128")
- (VNx4x1QI "TARGET_MIN_VLEN < 128")
- (VNx5x1QI "TARGET_MIN_VLEN < 128")
- (VNx6x1QI "TARGET_MIN_VLEN < 128")
- (VNx7x1QI "TARGET_MIN_VLEN < 128")
- (VNx8x1QI "TARGET_MIN_VLEN < 128")
- (VNx2x1HI "TARGET_MIN_VLEN < 128")
- (VNx3x1HI "TARGET_MIN_VLEN < 128")
- (VNx4x1HI "TARGET_MIN_VLEN < 128")
- (VNx5x1HI "TARGET_MIN_VLEN < 128")
- (VNx6x1HI "TARGET_MIN_VLEN < 128")
- (VNx7x1HI "TARGET_MIN_VLEN < 128")
- (VNx8x1HI "TARGET_MIN_VLEN < 128")
- (VNx2x1SI "TARGET_MIN_VLEN < 128")
- (VNx3x1SI "TARGET_MIN_VLEN < 128")
- (VNx4x1SI "TARGET_MIN_VLEN < 128")
- (VNx5x1SI "TARGET_MIN_VLEN < 128")
- (VNx6x1SI "TARGET_MIN_VLEN < 128")
- (VNx7x1SI "TARGET_MIN_VLEN < 128")
- (VNx8x1SI "TARGET_MIN_VLEN < 128")
- (VNx2x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx3x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx4x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx5x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx6x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx7x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx8x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx2x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx3x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx4x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx5x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx6x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx7x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx8x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx3x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx4x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx5x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx6x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx7x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx8x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+ (RVVMF8x2QI "TARGET_MIN_VLEN > 32")
+ (RVVMF8x3QI "TARGET_MIN_VLEN > 32")
+ (RVVMF8x4QI "TARGET_MIN_VLEN > 32")
+ (RVVMF8x5QI "TARGET_MIN_VLEN > 32")
+ (RVVMF8x6QI "TARGET_MIN_VLEN > 32")
+ (RVVMF8x7QI "TARGET_MIN_VLEN > 32")
+ (RVVMF8x8QI "TARGET_MIN_VLEN > 32")
+ (RVVMF4x2HI "TARGET_MIN_VLEN > 32")
+ (RVVMF4x3HI "TARGET_MIN_VLEN > 32")
+ (RVVMF4x4HI "TARGET_MIN_VLEN > 32")
+ (RVVMF4x5HI "TARGET_MIN_VLEN > 32")
+ (RVVMF4x6HI "TARGET_MIN_VLEN > 32")
+ (RVVMF4x7HI "TARGET_MIN_VLEN > 32")
+ (RVVMF4x8HI "TARGET_MIN_VLEN > 32")
+ (RVVMF2x2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2x3SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2x4SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2x5SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2x6SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2x7SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2x8SI "TARGET_MIN_VLEN > 32")
+ (RVVM1x2DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x3DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x5DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x6DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x7DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x8DI "TARGET_VECTOR_ELEN_64")
+ (RVVMF4x2HF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x3HF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x4HF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x5HF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x6HF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x7HF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4x8HF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x2SF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x3SF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x4SF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x5SF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x6SF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x7SF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVMF2x8SF "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x2DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x3DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x5DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x6DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x7DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM1x8DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator V2T [
- VNx2x2QI
- VNx3x2QI
- VNx4x2QI
- VNx5x2QI
- VNx6x2QI
- VNx7x2QI
- VNx8x2QI
- VNx2x2HI
- VNx3x2HI
- VNx4x2HI
- VNx5x2HI
- VNx6x2HI
- VNx7x2HI
- VNx8x2HI
- VNx2x2SI
- VNx3x2SI
- VNx4x2SI
- (VNx5x2SI "TARGET_MIN_VLEN >= 64")
- (VNx6x2SI "TARGET_MIN_VLEN >= 64")
- (VNx7x2SI "TARGET_MIN_VLEN >= 64")
- (VNx8x2SI "TARGET_MIN_VLEN >= 64")
- (VNx2x2DI "TARGET_VECTOR_ELEN_64")
- (VNx3x2DI "TARGET_VECTOR_ELEN_64")
- (VNx4x2DI "TARGET_VECTOR_ELEN_64")
- (VNx5x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx6x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx7x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx8x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx3x2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4x2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx5x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx6x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx7x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx8x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx2x2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx3x2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4x2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx5x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx6x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx7x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx8x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ RVVMF4x2QI
+ RVVMF4x3QI
+ RVVMF4x4QI
+ RVVMF4x5QI
+ RVVMF4x6QI
+ RVVMF4x7QI
+ RVVMF4x8QI
+ RVVMF2x2HI
+ RVVMF2x3HI
+ RVVMF2x4HI
+ RVVMF2x5HI
+ RVVMF2x6HI
+ RVVMF2x7HI
+ RVVMF2x8HI
+ RVVM1x2SI
+ RVVM1x3SI
+ RVVM1x4SI
+ RVVM1x5SI
+ RVVM1x6SI
+ RVVM1x7SI
+ RVVM1x8SI
+ (RVVM2x2DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2x3DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2x4DI "TARGET_VECTOR_ELEN_64")
+ (RVVMF2x2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x3HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x4HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x5HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x6HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x7HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2x8HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x3SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x4SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x5SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x6SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x7SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1x8SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2x2DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2x3DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2x4DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator V4T [
- VNx2x4QI
- VNx3x4QI
- VNx4x4QI
- VNx5x4QI
- VNx6x4QI
- VNx7x4QI
- VNx8x4QI
- VNx2x4HI
- VNx3x4HI
- VNx4x4HI
- (VNx5x4HI "TARGET_MIN_VLEN >= 64")
- (VNx6x4HI "TARGET_MIN_VLEN >= 64")
- (VNx7x4HI "TARGET_MIN_VLEN >= 64")
- (VNx8x4HI "TARGET_MIN_VLEN >= 64")
- VNx2x4SI
- (VNx3x4SI "TARGET_MIN_VLEN >= 64")
- (VNx4x4SI "TARGET_MIN_VLEN >= 64")
- (VNx5x4SI "TARGET_MIN_VLEN >= 128")
- (VNx6x4SI "TARGET_MIN_VLEN >= 128")
- (VNx7x4SI "TARGET_MIN_VLEN >= 128")
- (VNx8x4SI "TARGET_MIN_VLEN >= 128")
- (VNx2x4DI "TARGET_VECTOR_ELEN_64")
- (VNx3x4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx4x4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx3x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx4x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx5x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx6x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx7x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx8x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx2x4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx3x4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx4x4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ RVVMF2x2QI
+ RVVMF2x3QI
+ RVVMF2x4QI
+ RVVMF2x5QI
+ RVVMF2x6QI
+ RVVMF2x7QI
+ RVVMF2x8QI
+ RVVM1x2HI
+ RVVM1x3HI
+ RVVM1x4HI
+ RVVM1x5HI
+ RVVM1x6HI
+ RVVM1x7HI
+ RVVM1x8HI
+ RVVM2x2SI
+ RVVM2x3SI
+ RVVM2x4SI
+ (RVVM4x2DI "TARGET_VECTOR_ELEN_64")
+ (RVVM1x2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x3HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x4HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x5HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x6HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x7HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM1x8HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM2x2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2x3SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM2x4SF "TARGET_VECTOR_ELEN_FP_32")
])
(define_mode_iterator V8T [
- VNx2x8QI
- VNx3x8QI
- VNx4x8QI
- (VNx5x8QI "TARGET_MIN_VLEN >= 64")
- (VNx6x8QI "TARGET_MIN_VLEN >= 64")
- (VNx7x8QI "TARGET_MIN_VLEN >= 64")
- (VNx8x8QI "TARGET_MIN_VLEN >= 64")
- VNx2x8HI
- (VNx3x8HI "TARGET_MIN_VLEN >= 64")
- (VNx4x8HI "TARGET_MIN_VLEN >= 64")
- (VNx5x8HI "TARGET_MIN_VLEN >= 128")
- (VNx6x8HI "TARGET_MIN_VLEN >= 128")
- (VNx7x8HI "TARGET_MIN_VLEN >= 128")
- (VNx8x8HI "TARGET_MIN_VLEN >= 128")
- (VNx2x8SI "TARGET_MIN_VLEN >= 64")
- (VNx3x8SI "TARGET_MIN_VLEN >= 128")
- (VNx4x8SI "TARGET_MIN_VLEN >= 128")
- (VNx2x8DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx2x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
- (VNx3x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx4x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx2x8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ RVVM1x2QI
+ RVVM1x3QI
+ RVVM1x4QI
+ RVVM1x5QI
+ RVVM1x6QI
+ RVVM1x7QI
+ RVVM1x8QI
+ RVVM2x2HI
+ RVVM2x3HI
+ RVVM2x4HI
+ RVVM4x2SI
+ (RVVM2x2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM2x3HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM2x4HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVM4x2SF "TARGET_VECTOR_ELEN_FP_32")
])
(define_mode_iterator V16T [
- VNx2x16QI
- (VNx3x16QI "TARGET_MIN_VLEN >= 64")
- (VNx4x16QI "TARGET_MIN_VLEN >= 64")
- (VNx5x16QI "TARGET_MIN_VLEN >= 128")
- (VNx6x16QI "TARGET_MIN_VLEN >= 128")
- (VNx7x16QI "TARGET_MIN_VLEN >= 128")
- (VNx8x16QI "TARGET_MIN_VLEN >= 128")
- (VNx2x16HI "TARGET_MIN_VLEN >= 64")
- (VNx3x16HI "TARGET_MIN_VLEN >= 128")
- (VNx4x16HI "TARGET_MIN_VLEN >= 128")
- (VNx2x16SI "TARGET_MIN_VLEN >= 128")
- (VNx2x16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ RVVM2x2QI
+ RVVM2x3QI
+ RVVM2x4QI
+ RVVM4x2HI
+ (RVVM4x2HF "TARGET_VECTOR_ELEN_FP_16")
])
(define_mode_iterator V32T [
- (VNx2x32QI "TARGET_MIN_VLEN >= 64")
- (VNx3x32QI "TARGET_MIN_VLEN >= 128")
- (VNx4x32QI "TARGET_MIN_VLEN >= 128")
- (VNx2x32HI "TARGET_MIN_VLEN >= 128")
-])
-
-(define_mode_iterator V64T [
- (VNx2x64QI "TARGET_MIN_VLEN >= 128")
+ RVVM4x2QI
])
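
If I read the regrouping right, each VnT iterator above now collects the tuple modes that share one SEW/LMUL ratio (64 for V1T down to 2 for V32T) instead of one NUNITS count, which is also why the old V64T can disappear entirely. A standalone sanity sketch in plain C++, with LMUL counted in eighths of a register; the helper name is mine, not GCC's:

   /* ratio = SEW * 8 / LMUL-in-eighths; every member of a given VnT
      iterator shares one value, e.g. 16 throughout V4T.  */
   constexpr int ratio (int sew, int lmul_in_eighths)
   { return sew * 8 / lmul_in_eighths; }
   static_assert (ratio (8, 4) == 16, "RVVMF2QI");    /* SEW 8,  LMUL 1/2 */
   static_assert (ratio (16, 8) == 16, "RVVM1HI");    /* SEW 16, LMUL 1   */
   static_assert (ratio (32, 16) == 16, "RVVM2SI");   /* SEW 32, LMUL 2   */
   static_assert (ratio (64, 32) == 16, "RVVM4x2DI"); /* SEW 64, LMUL 4   */
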
(define_mode_iterator VQI [
- (VNx1QI "TARGET_MIN_VLEN < 128")
- VNx2QI
- VNx4QI
- VNx8QI
- VNx16QI
- VNx32QI
- (VNx64QI "TARGET_MIN_VLEN > 32")
- (VNx128QI "TARGET_MIN_VLEN >= 128")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VHI [
- (VNx1HI "TARGET_MIN_VLEN < 128")
- VNx2HI
- VNx4HI
- VNx8HI
- VNx16HI
- (VNx32HI "TARGET_MIN_VLEN > 32")
- (VNx64HI "TARGET_MIN_VLEN >= 128")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VSI [
- (VNx1SI "TARGET_MIN_VLEN < 128")
- VNx2SI
- VNx4SI
- VNx8SI
- (VNx16SI "TARGET_MIN_VLEN > 32")
- (VNx32SI "TARGET_MIN_VLEN >= 128")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VDI [
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
- (VNx2DI "TARGET_VECTOR_ELEN_64")
- (VNx4DI "TARGET_VECTOR_ELEN_64")
- (VNx8DI "TARGET_VECTOR_ELEN_64")
- (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
])
(define_mode_iterator VHF [
- (VNx1HF "TARGET_ZVFH && TARGET_MIN_VLEN < 128")
- (VNx2HF "TARGET_ZVFH")
- (VNx4HF "TARGET_ZVFH")
- (VNx8HF "TARGET_ZVFH")
- (VNx16HF "TARGET_ZVFH")
- (VNx32HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
- (VNx64HF "TARGET_ZVFH && TARGET_MIN_VLEN >= 128")
+ (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
+ (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VSF [
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
- (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
- (VNx32SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+ (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VDF [
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
- (VNx16DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+ (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
+ (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
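
The scalar-element iterators VQI through VDF now just walk LMUL from M8 down to the smallest fraction that still holds a whole element under ELEN 64, i.e. LMUL >= SEW/64; the bottom fraction is additionally gated on TARGET_MIN_VLEN > 32, since a 32-bit register at that fraction would hold less than one element. A minimal sketch of the bound (assuming ELEN 64, LMUL in eighths):

   /* Smallest legal fractional LMUL for a given SEW under ELEN 64:
      QI bottoms out at MF8, HI at MF4, SI at MF2, DI at M1.  */
   constexpr int min_lmul_in_eighths (int sew) { return sew / 8; }
   static_assert (min_lmul_in_eighths (8) == 1, "RVVMF8QI");
   static_assert (min_lmul_in_eighths (64) == 8, "RVVM1DI");
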
(define_mode_iterator VQI_LMUL1 [
- (VNx16QI "TARGET_MIN_VLEN >= 128")
- (VNx8QI "TARGET_MIN_VLEN == 64")
- (VNx4QI "TARGET_MIN_VLEN == 32")
+ RVVM1QI
])
(define_mode_iterator VHI_LMUL1 [
- (VNx8HI "TARGET_MIN_VLEN >= 128")
- (VNx4HI "TARGET_MIN_VLEN == 64")
- (VNx2HI "TARGET_MIN_VLEN == 32")
+ RVVM1HI
])
(define_mode_iterator VSI_LMUL1 [
- (VNx4SI "TARGET_MIN_VLEN >= 128")
- (VNx2SI "TARGET_MIN_VLEN == 64")
- (VNx1SI "TARGET_MIN_VLEN == 32")
+ RVVM1SI
])
(define_mode_iterator VDI_LMUL1 [
- (VNx2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
- (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN == 64")
+ (RVVM1DI "TARGET_VECTOR_ELEN_64")
])
(define_mode_iterator VHF_LMUL1 [
- (VNx8HF "TARGET_ZVFH && TARGET_MIN_VLEN >= 128")
- (VNx4HF "TARGET_ZVFH && TARGET_MIN_VLEN == 64")
- (VNx2HF "TARGET_ZVFH && TARGET_MIN_VLEN == 32")
+ (RVVM1HF "TARGET_ZVFH")
])
(define_mode_iterator VSF_LMUL1 [
- (VNx4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
- (VNx2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN == 64")
- (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN == 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
])
(define_mode_iterator VDF_LMUL1 [
- (VNx2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
- (VNx1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN == 64")
+ (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
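
Note how each *_LMUL1 iterator collapses from several TARGET_MIN_VLEN-conditional modes down to the single RVVM1 mode: "one register" is now a fixed mode name whose byte size is what varies with the actual VLEN, rather than a different mode per configuration. Roughly:

   /* An M1 register is always one mode; only its size tracks VLEN.  */
   constexpr int m1_bytes (int min_vlen) { return min_vlen / 8; }
   static_assert (m1_bytes (32) == 4 && m1_bytes (128) == 16, "RVVM1 size");
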
-(define_mode_attr VLMULX2 [
- (VNx1QI "VNx2QI") (VNx2QI "VNx4QI") (VNx4QI "VNx8QI") (VNx8QI "VNx16QI") (VNx16QI "VNx32QI") (VNx32QI "VNx64QI") (VNx64QI "VNx128QI")
- (VNx1HI "VNx2HI") (VNx2HI "VNx4HI") (VNx4HI "VNx8HI") (VNx8HI "VNx16HI") (VNx16HI "VNx32HI") (VNx32HI "VNx64HI")
- (VNx1SI "VNx2SI") (VNx2SI "VNx4SI") (VNx4SI "VNx8SI") (VNx8SI "VNx16SI") (VNx16SI "VNx32SI")
- (VNx1DI "VNx2DI") (VNx2DI "VNx4DI") (VNx4DI "VNx8DI") (VNx8DI "VNx16DI")
- (VNx1SF "VNx2SF") (VNx2SF "VNx4SF") (VNx4SF "VNx8SF") (VNx8SF "VNx16SF") (VNx16SF "VNx32SF")
- (VNx1DF "VNx2DF") (VNx2DF "VNx4DF") (VNx4DF "VNx8DF") (VNx8DF "VNx16DF")
-])
+(define_mode_attr VINDEX [
+ (RVVM8QI "RVVM8QI") (RVVM4QI "RVVM4QI") (RVVM2QI "RVVM2QI") (RVVM1QI "RVVM1QI")
+ (RVVMF2QI "RVVMF2QI") (RVVMF4QI "RVVMF4QI") (RVVMF8QI "RVVMF8QI")

-(define_mode_attr VLMULX4 [
- (VNx1QI "VNx4QI") (VNx2QI "VNx8QI") (VNx4QI "VNx16QI") (VNx8QI "VNx32QI") (VNx16QI "VNx64QI") (VNx32QI "VNx128QI")
- (VNx1HI "VNx4HI") (VNx2HI "VNx8HI") (VNx4HI "VNx16HI") (VNx8HI "VNx32HI") (VNx16HI "VNx64HI")
- (VNx1SI "VNx4SI") (VNx2SI "VNx8SI") (VNx4SI "VNx16SI") (VNx8SI "VNx32SI")
- (VNx1DI "VNx4DI") (VNx2DI "VNx8DI") (VNx4DI "VNx16DI")
- (VNx1SF "VNx4SF") (VNx2SF "VNx8SF") (VNx4SF "VNx16SF") (VNx8SF "VNx32SF")
- (VNx1DF "VNx4DF") (VNx2DF "VNx8DF") (VNx4DF "VNx16DF")
-])
+ (RVVM8HI "RVVM8HI") (RVVM4HI "RVVM4HI") (RVVM2HI "RVVM2HI") (RVVM1HI "RVVM1HI") (RVVMF2HI "RVVMF2HI") (RVVMF4HI "RVVMF4HI")

-(define_mode_attr VLMULX8 [
- (VNx1QI "VNx8QI") (VNx2QI "VNx16QI") (VNx4QI "VNx32QI") (VNx8QI "VNx64QI") (VNx16QI "VNx128QI")
- (VNx1HI "VNx8HI") (VNx2HI "VNx16HI") (VNx4HI "VNx32HI") (VNx8HI "VNx64HI")
- (VNx1SI "VNx8SI") (VNx2SI "VNx16SI") (VNx4SI "VNx32SI")
- (VNx1DI "VNx8DI") (VNx2DI "VNx16DI")
- (VNx1SF "VNx8SF") (VNx2SF "VNx16SF") (VNx4SF "VNx32SF")
- (VNx1DF "VNx8DF") (VNx2DF "VNx16DF")
-])
+ (RVVM8HF "RVVM8HI") (RVVM4HF "RVVM4HI") (RVVM2HF "RVVM2HI") (RVVM1HF "RVVM1HI") (RVVMF2HF "RVVMF2HI") (RVVMF4HF "RVVMF4HI")

-(define_mode_attr VLMULX16 [
- (VNx1QI "VNx16QI") (VNx2QI "VNx32QI") (VNx4QI "VNx64QI") (VNx8QI "VNx128QI")
- (VNx1HI "VNx16HI") (VNx2HI "VNx32HI") (VNx4HI "VNx64HI")
- (VNx1SI "VNx16SI") (VNx2SI "VNx32SI")
- (VNx1SF "VNx16SF") (VNx2SF "VNx32SF")
-])
+ (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM4SI") (RVVM2SI "RVVM2SI") (RVVM1SI "RVVM1SI") (RVVMF2SI "RVVMF2SI")

-(define_mode_attr VLMULX32 [
- (VNx1QI "VNx32QI") (VNx2QI "VNx64QI") (VNx4QI "VNx128QI")
- (VNx1HI "VNx32HI") (VNx2HI "VNx64HI")
-])
+ (RVVM8SF "RVVM8SI") (RVVM4SF "RVVM4SI") (RVVM2SF "RVVM2SI") (RVVM1SF "RVVM1SI") (RVVMF2SF "RVVMF2SI")

-(define_mode_attr VLMULX64 [
- (VNx1QI "VNx64QI") (VNx2QI "VNx128QI")
-])
+ (RVVM8DI "RVVM8DI") (RVVM4DI "RVVM4DI") (RVVM2DI "RVVM2DI") (RVVM1DI "RVVM1DI")

-(define_mode_attr VINDEX [
- (VNx1QI "VNx1QI") (VNx2QI "VNx2QI") (VNx4QI "VNx4QI") (VNx8QI "VNx8QI")
- (VNx16QI "VNx16QI") (VNx32QI "VNx32QI") (VNx64QI "VNx64QI") (VNx128QI "VNx128QI")
- (VNx1HI "VNx1HI") (VNx2HI "VNx2HI") (VNx4HI "VNx4HI") (VNx8HI "VNx8HI")
- (VNx16HI "VNx16HI") (VNx32HI "VNx32HI") (VNx64HI "VNx64HI")
- (VNx1SI "VNx1SI") (VNx2SI "VNx2SI") (VNx4SI "VNx4SI") (VNx8SI "VNx8SI")
- (VNx16SI "VNx16SI") (VNx32SI "VNx32SI")
- (VNx1DI "VNx1DI") (VNx2DI "VNx2DI") (VNx4DI "VNx4DI") (VNx8DI "VNx8DI") (VNx16DI "VNx16DI")
- (VNx1HF "VNx1HI") (VNx2HF "VNx2HI") (VNx4HF "VNx4HI") (VNx8HF "VNx8HI") (VNx16HF "VNx16HI") (VNx32HF "VNx32HI") (VNx64HF "VNx64HI")
- (VNx1SF "VNx1SI") (VNx2SF "VNx2SI") (VNx4SF "VNx4SI") (VNx8SF "VNx8SI")
- (VNx16SF "VNx16SI") (VNx32SF "VNx32SI")
- (VNx1DF "VNx1DI") (VNx2DF "VNx2DI") (VNx4DF "VNx4DI") (VNx8DF "VNx8DI") (VNx16DF "VNx16DI")
+ (RVVM8DF "RVVM8DI") (RVVM4DF "RVVM4DI") (RVVM2DF "RVVM2DI") (RVVM1DF "RVVM1DI")
])
(define_mode_attr VINDEXEI16 [
- (VNx1QI "VNx1HI") (VNx2QI "VNx2HI") (VNx4QI "VNx4HI") (VNx8QI "VNx8HI")
- (VNx16QI "VNx16HI") (VNx32QI "VNx32HI") (VNx64QI "VNx64HI")
- (VNx1HI "VNx1HI") (VNx2HI "VNx2HI") (VNx4HI "VNx4HI") (VNx8HI "VNx8HI")
- (VNx16HI "VNx16HI") (VNx32HI "VNx32HI") (VNx64HI "VNx64HI")
- (VNx1SI "VNx1HI") (VNx2SI "VNx2HI") (VNx4SI "VNx4HI") (VNx8SI "VNx8HI")
- (VNx16SI "VNx16HI") (VNx32SI "VNx32HI")
- (VNx1DI "VNx1HI") (VNx2DI "VNx2HI") (VNx4DI "VNx4HI") (VNx8DI "VNx8HI") (VNx16DI "VNx16HI")
- (VNx1SF "VNx1HI") (VNx2SF "VNx2HI") (VNx4SF "VNx4HI") (VNx8SF "VNx8HI")
- (VNx16SF "VNx16HI") (VNx32SF "VNx32HI")
- (VNx1DF "VNx1HI") (VNx2DF "VNx2HI") (VNx4DF "VNx4HI") (VNx8DF "VNx8HI") (VNx16DF "VNx16HI")
+ (RVVM4QI "RVVM8HI") (RVVM2QI "RVVM4HI") (RVVM1QI "RVVM2HI") (RVVMF2QI "RVVM1HI") (RVVMF4QI "RVVMF2HI") (RVVMF8QI "RVVMF4HI")
+
+ (RVVM8HI "RVVM8HI") (RVVM4HI "RVVM4HI") (RVVM2HI "RVVM2HI") (RVVM1HI "RVVM1HI") (RVVMF2HI "RVVMF2HI") (RVVMF4HI "RVVMF4HI")
+
+ (RVVM8SI "RVVM4HI") (RVVM4SI "RVVM2HI") (RVVM2SI "RVVM1HI") (RVVM1SI "RVVMF2HI") (RVVMF2SI "RVVMF4HI")
+
+ (RVVM8DI "RVVM2HI") (RVVM4DI "RVVM1HI") (RVVM2DI "RVVMF2HI") (RVVM1DI "RVVMF4HI")
+
+ (RVVM8HF "RVVM8HI") (RVVM4HF "RVVM4HI") (RVVM2HF "RVVM2HI") (RVVM1HF "RVVM1HI") (RVVMF2HF "RVVMF2HI") (RVVMF4HF "RVVMF4HI")
+
+ (RVVM8SF "RVVM4HI") (RVVM4SF "RVVM2HI") (RVVM2SF "RVVM1HI") (RVVM1SF "RVVMF2HI") (RVVMF2SF "RVVMF4HI")
+
+ (RVVM8DF "RVVM2HI") (RVVM4DF "RVVM1HI") (RVVM2DF "RVVMF2HI") (RVVM1DF "RVVMF4HI")
])
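
The EI16 index attribute follows from the ISA rule that an indexed access with 16-bit offsets needs an HI index vector with the same element count as the data, so the index LMUL is the data LMUL scaled by 16/SEW; RVVM8QI has no entry because it would need LMUL 16. Sketch (LMUL in eighths):

   /* Index vector for EI16 gathers/scatters: HI elements, same element
      count, so index-LMUL = data-LMUL * 16 / SEW.  */
   constexpr int ei16_index_lmul (int sew, int lmul_in_eighths)
   { return lmul_in_eighths * 16 / sew; }
   static_assert (ei16_index_lmul (8, 32) == 64, "RVVM4QI -> RVVM8HI");
   static_assert (ei16_index_lmul (64, 8) == 2, "RVVM1DI -> RVVMF4HI");
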
(define_mode_attr VM [
- (VNx1QI "VNx1BI") (VNx2QI "VNx2BI") (VNx4QI "VNx4BI") (VNx8QI "VNx8BI") (VNx16QI "VNx16BI") (VNx32QI "VNx32BI") (VNx64QI "VNx64BI") (VNx128QI "VNx128BI")
- (VNx1HI "VNx1BI") (VNx2HI "VNx2BI") (VNx4HI "VNx4BI") (VNx8HI "VNx8BI") (VNx16HI "VNx16BI") (VNx32HI "VNx32BI") (VNx64HI "VNx64BI")
- (VNx1SI "VNx1BI") (VNx2SI "VNx2BI") (VNx4SI "VNx4BI") (VNx8SI "VNx8BI") (VNx16SI "VNx16BI") (VNx32SI "VNx32BI")
- (VNx1DI "VNx1BI") (VNx2DI "VNx2BI") (VNx4DI "VNx4BI") (VNx8DI "VNx8BI") (VNx16DI "VNx16BI")
- (VNx1HF "VNx1BI") (VNx2HF "VNx2BI") (VNx4HF "VNx4BI") (VNx8HF "VNx8BI") (VNx16HF "VNx16BI") (VNx32HF "VNx32BI") (VNx64HF "VNx64BI")
- (VNx1SF "VNx1BI") (VNx2SF "VNx2BI") (VNx4SF "VNx4BI") (VNx8SF "VNx8BI") (VNx16SF "VNx16BI") (VNx32SF "VNx32BI")
- (VNx1DF "VNx1BI") (VNx2DF "VNx2BI") (VNx4DF "VNx4BI") (VNx8DF "VNx8BI") (VNx16DF "VNx16BI")
- (VNx2x64QI "VNx64BI") (VNx2x32QI "VNx32BI") (VNx3x32QI "VNx32BI") (VNx4x32QI "VNx32BI")
- (VNx2x16QI "VNx16BI") (VNx3x16QI "VNx16BI") (VNx4x16QI "VNx16BI") (VNx5x16QI "VNx16BI") (VNx6x16QI "VNx16BI") (VNx7x16QI "VNx16BI") (VNx8x16QI "VNx16BI")
- (VNx2x8QI "VNx8BI") (VNx3x8QI "VNx8BI") (VNx4x8QI "VNx8BI") (VNx5x8QI "VNx8BI") (VNx6x8QI "VNx8BI") (VNx7x8QI "VNx8BI") (VNx8x8QI "VNx8BI")
- (VNx2x4QI "VNx4BI") (VNx3x4QI "VNx4BI") (VNx4x4QI "VNx4BI") (VNx5x4QI "VNx4BI") (VNx6x4QI "VNx4BI") (VNx7x4QI "VNx4BI") (VNx8x4QI "VNx4BI")
- (VNx2x2QI "VNx2BI") (VNx3x2QI "VNx2BI") (VNx4x2QI "VNx2BI") (VNx5x2QI "VNx2BI") (VNx6x2QI "VNx2BI") (VNx7x2QI "VNx2BI") (VNx8x2QI "VNx2BI")
- (VNx2x1QI "VNx1BI") (VNx3x1QI "VNx1BI") (VNx4x1QI "VNx1BI") (VNx5x1QI "VNx1BI") (VNx6x1QI "VNx1BI") (VNx7x1QI "VNx1BI") (VNx8x1QI "VNx1BI")
- (VNx2x32HI "VNx32BI") (VNx2x16HI "VNx16BI") (VNx3x16HI "VNx16BI") (VNx4x16HI "VNx16BI")
- (VNx2x8HI "VNx8BI") (VNx3x8HI "VNx8BI") (VNx4x8HI "VNx8BI") (VNx5x8HI "VNx8BI") (VNx6x8HI "VNx8BI") (VNx7x8HI "VNx8BI") (VNx8x8HI "VNx8BI")
- (VNx2x4HI "VNx4BI") (VNx3x4HI "VNx4BI") (VNx4x4HI "VNx4BI") (VNx5x4HI "VNx4BI") (VNx6x4HI "VNx4BI") (VNx7x4HI "VNx4BI") (VNx8x4HI "VNx4BI")
- (VNx2x2HI "VNx2BI") (VNx3x2HI "VNx2BI") (VNx4x2HI "VNx2BI") (VNx5x2HI "VNx2BI") (VNx6x2HI "VNx2BI") (VNx7x2HI "VNx2BI") (VNx8x2HI "VNx2BI")
- (VNx2x1HI "VNx1BI") (VNx3x1HI "VNx1BI") (VNx4x1HI "VNx1BI") (VNx5x1HI "VNx1BI") (VNx6x1HI "VNx1BI") (VNx7x1HI "VNx1BI") (VNx8x1HI "VNx1BI")
- (VNx2x16SI "VNx16BI") (VNx2x8SI "VNx8BI") (VNx3x8SI "VNx8BI") (VNx4x8SI "VNx8BI")
- (VNx2x4SI "VNx4BI") (VNx3x4SI "VNx4BI") (VNx4x4SI "VNx4BI") (VNx5x4SI "VNx4BI") (VNx6x4SI "VNx4BI") (VNx7x4SI "VNx4BI") (VNx8x4SI "VNx4BI")
- (VNx2x2SI "VNx2BI") (VNx3x2SI "VNx2BI") (VNx4x2SI "VNx2BI") (VNx5x2SI "VNx2BI") (VNx6x2SI "VNx2BI") (VNx7x2SI "VNx2BI") (VNx8x2SI "VNx2BI")
- (VNx2x1SI "VNx1BI") (VNx3x1SI "VNx1BI") (VNx4x1SI "VNx1BI") (VNx5x1SI "VNx1BI") (VNx6x1SI "VNx1BI") (VNx7x1SI "VNx1BI") (VNx8x1SI "VNx1BI")
- (VNx2x8DI "VNx8BI") (VNx2x4DI "VNx4BI") (VNx3x4DI "VNx4BI") (VNx4x4DI "VNx4BI")
- (VNx2x2DI "VNx2BI") (VNx3x2DI "VNx2BI") (VNx4x2DI "VNx2BI") (VNx5x2DI "VNx2BI") (VNx6x2DI "VNx2BI") (VNx7x2DI "VNx2BI") (VNx8x2DI "VNx2BI")
- (VNx2x1DI "VNx1BI") (VNx3x1DI "VNx1BI") (VNx4x1DI "VNx1BI") (VNx5x1DI "VNx1BI") (VNx6x1DI "VNx1BI") (VNx7x1DI "VNx1BI") (VNx8x1DI "VNx1BI")
- (VNx2x32HF "VNx32BI") (VNx2x16HF "VNx16BI") (VNx3x16HF "VNx16BI") (VNx4x16HF "VNx16BI")
- (VNx2x8HF "VNx8BI") (VNx3x8HF "VNx8BI") (VNx4x8HF "VNx8BI") (VNx5x8HF "VNx8BI") (VNx6x8HF "VNx8BI") (VNx7x8HF "VNx8BI") (VNx8x8HF "VNx8BI")
- (VNx2x4HF "VNx4BI") (VNx3x4HF "VNx4BI") (VNx4x4HF "VNx4BI") (VNx5x4HF "VNx4BI") (VNx6x4HF "VNx4BI") (VNx7x4HF "VNx4BI") (VNx8x4HF "VNx4BI")
- (VNx2x2HF "VNx2BI") (VNx3x2HF "VNx2BI") (VNx4x2HF "VNx2BI") (VNx5x2HF "VNx2BI") (VNx6x2HF "VNx2BI") (VNx7x2HF "VNx2BI") (VNx8x2HF "VNx2BI")
- (VNx2x1HF "VNx1BI") (VNx3x1HF "VNx1BI") (VNx4x1HF "VNx1BI") (VNx5x1HF "VNx1BI") (VNx6x1HF "VNx1BI") (VNx7x1HF "VNx1BI") (VNx8x1HF "VNx1BI")
- (VNx2x16SF "VNx16BI") (VNx2x8SF "VNx8BI") (VNx3x8SF "VNx8BI") (VNx4x8SF "VNx8BI")
- (VNx2x4SF "VNx4BI") (VNx3x4SF "VNx4BI") (VNx4x4SF "VNx4BI") (VNx5x4SF "VNx4BI") (VNx6x4SF "VNx4BI") (VNx7x4SF "VNx4BI") (VNx8x4SF "VNx4BI")
- (VNx2x2SF "VNx2BI") (VNx3x2SF "VNx2BI") (VNx4x2SF "VNx2BI") (VNx5x2SF "VNx2BI") (VNx6x2SF "VNx2BI") (VNx7x2SF "VNx2BI") (VNx8x2SF "VNx2BI")
- (VNx2x1SF "VNx1BI") (VNx3x1SF "VNx1BI") (VNx4x1SF "VNx1BI") (VNx5x1SF "VNx1BI") (VNx6x1SF "VNx1BI") (VNx7x1SF "VNx1BI") (VNx8x1SF "VNx1BI")
- (VNx2x8DF "VNx8BI")
- (VNx2x4DF "VNx4BI") (VNx3x4DF "VNx4BI") (VNx4x4DF "VNx4BI")
- (VNx2x2DF "VNx2BI") (VNx3x2DF "VNx2BI") (VNx4x2DF "VNx2BI") (VNx5x2DF "VNx2BI") (VNx6x2DF "VNx2BI") (VNx7x2DF "VNx2BI") (VNx8x2DF "VNx2BI")
- (VNx2x1DF "VNx1BI") (VNx3x1DF "VNx1BI") (VNx4x1DF "VNx1BI") (VNx5x1DF "VNx1BI") (VNx6x1DF "VNx1BI") (VNx7x1DF "VNx1BI") (VNx8x1DF "VNx1BI")
+ (RVVM8QI "RVVM1BI") (RVVM4QI "RVVMF2BI") (RVVM2QI "RVVMF4BI") (RVVM1QI "RVVMF8BI") (RVVMF2QI "RVVMF16BI") (RVVMF4QI "RVVMF32BI") (RVVMF8QI "RVVMF64BI")
+
+ (RVVM8HI "RVVMF2BI") (RVVM4HI "RVVMF4BI") (RVVM2HI "RVVMF8BI") (RVVM1HI "RVVMF16BI") (RVVMF2HI "RVVMF32BI") (RVVMF4HI "RVVMF64BI")
+
+ (RVVM8HF "RVVMF2BI") (RVVM4HF "RVVMF4BI") (RVVM2HF "RVVMF8BI") (RVVM1HF "RVVMF16BI") (RVVMF2HF "RVVMF32BI") (RVVMF4HF "RVVMF64BI")
+
+ (RVVM8SI "RVVMF4BI") (RVVM4SI "RVVMF8BI") (RVVM2SI "RVVMF16BI") (RVVM1SI "RVVMF32BI") (RVVMF2SI "RVVMF64BI")
+
+ (RVVM8SF "RVVMF4BI") (RVVM4SF "RVVMF8BI") (RVVM2SF "RVVMF16BI") (RVVM1SF "RVVMF32BI") (RVVMF2SF "RVVMF64BI")
+
+ (RVVM8DI "RVVMF8BI") (RVVM4DI "RVVMF16BI") (RVVM2DI "RVVMF32BI") (RVVM1DI "RVVMF64BI")
+
+ (RVVM8DF "RVVMF8BI") (RVVM4DF "RVVMF16BI") (RVVM2DF "RVVMF32BI") (RVVM1DF "RVVMF64BI")
+
+ (RVVM1x8QI "RVVMF8BI") (RVVMF2x8QI "RVVMF16BI") (RVVMF4x8QI "RVVMF32BI") (RVVMF8x8QI "RVVMF64BI")
+ (RVVM1x7QI "RVVMF8BI") (RVVMF2x7QI "RVVMF16BI") (RVVMF4x7QI "RVVMF32BI") (RVVMF8x7QI "RVVMF64BI")
+ (RVVM1x6QI "RVVMF8BI") (RVVMF2x6QI "RVVMF16BI") (RVVMF4x6QI "RVVMF32BI") (RVVMF8x6QI "RVVMF64BI")
+ (RVVM1x5QI "RVVMF8BI") (RVVMF2x5QI "RVVMF16BI") (RVVMF4x5QI "RVVMF32BI") (RVVMF8x5QI "RVVMF64BI")
+ (RVVM2x4QI "RVVMF4BI") (RVVM1x4QI "RVVMF8BI") (RVVMF2x4QI "RVVMF16BI") (RVVMF4x4QI "RVVMF32BI") (RVVMF8x4QI "RVVMF64BI")
+ (RVVM2x3QI "RVVMF4BI") (RVVM1x3QI "RVVMF8BI") (RVVMF2x3QI "RVVMF16BI") (RVVMF4x3QI "RVVMF32BI") (RVVMF8x3QI "RVVMF64BI")
+ (RVVM4x2QI "RVVMF2BI") (RVVM2x2QI "RVVMF4BI") (RVVM1x2QI "RVVMF8BI") (RVVMF2x2QI "RVVMF16BI") (RVVMF4x2QI "RVVMF32BI") (RVVMF8x2QI "RVVMF64BI")
+
+ (RVVM1x8HI "RVVMF16BI") (RVVMF2x8HI "RVVMF32BI") (RVVMF4x8HI "RVVMF64BI")
+ (RVVM1x7HI "RVVMF16BI") (RVVMF2x7HI "RVVMF32BI") (RVVMF4x7HI "RVVMF64BI")
+ (RVVM1x6HI "RVVMF16BI") (RVVMF2x6HI "RVVMF32BI") (RVVMF4x6HI "RVVMF64BI")
+ (RVVM1x5HI "RVVMF16BI") (RVVMF2x5HI "RVVMF32BI") (RVVMF4x5HI "RVVMF64BI")
+ (RVVM2x4HI "RVVMF8BI") (RVVM1x4HI "RVVMF16BI") (RVVMF2x4HI "RVVMF32BI") (RVVMF4x4HI "RVVMF64BI")
+ (RVVM2x3HI "RVVMF8BI") (RVVM1x3HI "RVVMF16BI") (RVVMF2x3HI "RVVMF32BI") (RVVMF4x3HI "RVVMF64BI")
+ (RVVM4x2HI "RVVMF4BI") (RVVM2x2HI "RVVMF8BI") (RVVM1x2HI "RVVMF16BI") (RVVMF2x2HI "RVVMF32BI") (RVVMF4x2HI "RVVMF64BI")
+
+ (RVVM1x8HF "RVVMF16BI") (RVVMF2x8HF "RVVMF32BI") (RVVMF4x8HF "RVVMF64BI")
+ (RVVM1x7HF "RVVMF16BI") (RVVMF2x7HF "RVVMF32BI") (RVVMF4x7HF "RVVMF64BI")
+ (RVVM1x6HF "RVVMF16BI") (RVVMF2x6HF "RVVMF32BI") (RVVMF4x6HF "RVVMF64BI")
+ (RVVM1x5HF "RVVMF16BI") (RVVMF2x5HF "RVVMF32BI") (RVVMF4x5HF "RVVMF64BI")
+ (RVVM2x4HF "RVVMF8BI") (RVVM1x4HF "RVVMF16BI") (RVVMF2x4HF "RVVMF32BI") (RVVMF4x4HF "RVVMF64BI")
+ (RVVM2x3HF "RVVMF8BI") (RVVM1x3HF "RVVMF16BI") (RVVMF2x3HF "RVVMF32BI") (RVVMF4x3HF "RVVMF64BI")
+ (RVVM4x2HF "RVVMF4BI") (RVVM2x2HF "RVVMF8BI") (RVVM1x2HF "RVVMF16BI") (RVVMF2x2HF "RVVMF32BI") (RVVMF4x2HF "RVVMF64BI")
+
+ (RVVM1x8SI "RVVMF32BI") (RVVMF2x8SI "RVVMF64BI")
+ (RVVM1x7SI "RVVMF32BI") (RVVMF2x7SI "RVVMF64BI")
+ (RVVM1x6SI "RVVMF32BI") (RVVMF2x6SI "RVVMF64BI")
+ (RVVM1x5SI "RVVMF32BI") (RVVMF2x5SI "RVVMF64BI")
+ (RVVM2x4SI "RVVMF16BI") (RVVM1x4SI "RVVMF32BI") (RVVMF2x4SI "RVVMF64BI")
+ (RVVM2x3SI "RVVMF16BI") (RVVM1x3SI "RVVMF32BI") (RVVMF2x3SI "RVVMF64BI")
+ (RVVM4x2SI "RVVMF8BI") (RVVM2x2SI "RVVMF16BI") (RVVM1x2SI "RVVMF32BI") (RVVMF2x2SI "RVVMF64BI")
+
+ (RVVM1x8SF "RVVMF32BI") (RVVMF2x8SF "RVVMF64BI")
+ (RVVM1x7SF "RVVMF32BI") (RVVMF2x7SF "RVVMF64BI")
+ (RVVM1x6SF "RVVMF32BI") (RVVMF2x6SF "RVVMF64BI")
+ (RVVM1x5SF "RVVMF32BI") (RVVMF2x5SF "RVVMF64BI")
+ (RVVM2x4SF "RVVMF16BI") (RVVM1x4SF "RVVMF32BI") (RVVMF2x4SF "RVVMF64BI")
+ (RVVM2x3SF "RVVMF16BI") (RVVM1x3SF "RVVMF32BI") (RVVMF2x3SF "RVVMF64BI")
+ (RVVM4x2SF "RVVMF8BI") (RVVM2x2SF "RVVMF16BI") (RVVM1x2SF "RVVMF32BI") (RVVMF2x2SF "RVVMF64BI")
+
+ (RVVM1x8DI "RVVMF64BI")
+ (RVVM1x7DI "RVVMF64BI")
+ (RVVM1x6DI "RVVMF64BI")
+ (RVVM1x5DI "RVVMF64BI")
+ (RVVM2x4DI "RVVMF32BI")
+ (RVVM1x4DI "RVVMF64BI")
+ (RVVM2x3DI "RVVMF32BI")
+ (RVVM1x3DI "RVVMF64BI")
+ (RVVM4x2DI "RVVMF16BI")
+ (RVVM2x2DI "RVVMF32BI")
+ (RVVM1x2DI "RVVMF64BI")
+
+ (RVVM1x8DF "RVVMF64BI")
+ (RVVM1x7DF "RVVMF64BI")
+ (RVVM1x6DF "RVVMF64BI")
+ (RVVM1x5DF "RVVMF64BI")
+ (RVVM2x4DF "RVVMF32BI")
+ (RVVM1x4DF "RVVMF64BI")
+ (RVVM2x3DF "RVVMF32BI")
+ (RVVM1x3DF "RVVMF64BI")
+ (RVVM4x2DF "RVVMF16BI")
+ (RVVM2x2DF "RVVMF32BI")
+ (RVVM1x2DF "RVVMF64BI")
])
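
Under the new naming the whole <VM> table reduces to one rule: a mask holds one bit per element, so the mask mode is RVVMF<n>BI with n = SEW/LMUL (RVVM1BI when n is 1), and a tuple mode takes the mask of a single field. A minimal sketch (LMUL in eighths; the helper is mine, not GCC's):

   #include <string>

   /* One mask bit per element: the mask occupies LMUL/SEW of a register,
      so n = SEW * 8 / LMUL-in-eighths.  */
   std::string mask_mode (int sew, int lmul_in_eighths)
   {
     int n = sew * 8 / lmul_in_eighths;
     return n == 1 ? "RVVM1BI" : "RVVMF" + std::to_string (n) + "BI";
   }
   /* mask_mode (8, 64)  == "RVVM1BI"    (RVVM8QI)
      mask_mode (32, 4)  == "RVVMF64BI"  (RVVMF2SI)
      mask_mode (32, 16) == "RVVMF16BI"  (RVVM2x4SI, per field)  */
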
(define_mode_attr vm [
- (VNx1QI "vnx1bi") (VNx2QI "vnx2bi") (VNx4QI "vnx4bi") (VNx8QI "vnx8bi") (VNx16QI "vnx16bi") (VNx32QI "vnx32bi") (VNx64QI "vnx64bi") (VNx128QI "vnx128bi")
- (VNx1HI "vnx1bi") (VNx2HI "vnx2bi") (VNx4HI "vnx4bi") (VNx8HI "vnx8bi") (VNx16HI "vnx16bi") (VNx32HI "vnx32bi") (VNx64HI "vnx64bi")
- (VNx1SI "vnx1bi") (VNx2SI "vnx2bi") (VNx4SI "vnx4bi") (VNx8SI "vnx8bi") (VNx16SI "vnx16bi") (VNx32SI "vnx32bi")
- (VNx1DI "vnx1bi") (VNx2DI "vnx2bi") (VNx4DI "vnx4bi") (VNx8DI "vnx8bi") (VNx16DI "vnx16bi")
- (VNx1HF "vnx1bi") (VNx2HF "vnx2bi") (VNx4HF "vnx4bi") (VNx8HF "vnx8bi") (VNx16HF "vnx16bi") (VNx32HF "vnx32bi") (VNx64HF "vnx64bi")
- (VNx1SF "vnx1bi") (VNx2SF "vnx2bi") (VNx4SF "vnx4bi") (VNx8SF "vnx8bi") (VNx16SF "vnx16bi") (VNx32SF "vnx32bi")
- (VNx1DF "vnx1bi") (VNx2DF "vnx2bi") (VNx4DF "vnx4bi") (VNx8DF "vnx8bi") (VNx16DF "vnx16bi")
+ (RVVM8QI "rvvm1bi") (RVVM4QI "rvvmf2bi") (RVVM2QI "rvvmf4bi") (RVVM1QI "rvvmf8bi") (RVVMF2QI "rvvmf16bi") (RVVMF4QI "rvvmf32bi") (RVVMF8QI "rvvmf64bi")
+
+ (RVVM8HI "rvvmf2bi") (RVVM4HI "rvvmf4bi") (RVVM2HI "rvvmf8bi") (RVVM1HI "rvvmf16bi") (RVVMF2HI "rvvmf32bi") (RVVMF4HI "rvvmf64bi")
+
+ (RVVM8HF "rvvmf2bi") (RVVM4HF "rvvmf4bi") (RVVM2HF "rvvmf8bi") (RVVM1HF "rvvmf16bi") (RVVMF2HF "rvvmf32bi") (RVVMF4HF "rvvmf64bi")
+
+ (RVVM8SI "rvvmf4bi") (RVVM4SI "rvvmf8bi") (RVVM2SI "rvvmf16bi") (RVVM1SI "rvvmf32bi") (RVVMF2SI "rvvmf64bi")
+
+ (RVVM8SF "rvvmf4bi") (RVVM4SF "rvvmf8bi") (RVVM2SF "rvvmf16bi") (RVVM1SF "rvvmf32bi") (RVVMF2SF "rvvmf64bi")
+
+ (RVVM8DI "rvvmf8bi") (RVVM4DI "rvvmf16bi") (RVVM2DI "rvvmf32bi") (RVVM1DI "rvvmf64bi")
+
+ (RVVM8DF "rvvmf8bi") (RVVM4DF "rvvmf16bi") (RVVM2DF "rvvmf32bi") (RVVM1DF "rvvmf64bi")
+
+ (RVVM1x8QI "rvvmf8bi") (RVVMF2x8QI "rvvmf16bi") (RVVMF4x8QI "rvvmf32bi") (RVVMF8x8QI "rvvmf64bi")
+ (RVVM1x7QI "rvvmf8bi") (RVVMF2x7QI "rvvmf16bi") (RVVMF4x7QI "rvvmf32bi") (RVVMF8x7QI "rvvmf64bi")
+ (RVVM1x6QI "rvvmf8bi") (RVVMF2x6QI "rvvmf16bi") (RVVMF4x6QI "rvvmf32bi") (RVVMF8x6QI "rvvmf64bi")
+ (RVVM1x5QI "rvvmf8bi") (RVVMF2x5QI "rvvmf16bi") (RVVMF4x5QI "rvvmf32bi") (RVVMF8x5QI "rvvmf64bi")
+ (RVVM2x4QI "rvvmf4bi") (RVVM1x4QI "rvvmf8bi") (RVVMF2x4QI "rvvmf16bi") (RVVMF4x4QI "rvvmf32bi") (RVVMF8x4QI "rvvmf64bi")
+ (RVVM2x3QI "rvvmf4bi") (RVVM1x3QI "rvvmf8bi") (RVVMF2x3QI "rvvmf16bi") (RVVMF4x3QI "rvvmf32bi") (RVVMF8x3QI "rvvmf64bi")
+ (RVVM4x2QI "rvvmf2bi") (RVVM2x2QI "rvvmf4bi") (RVVM1x2QI "rvvmf8bi") (RVVMF2x2QI "rvvmf16bi") (RVVMF4x2QI "rvvmf32bi") (RVVMF8x2QI "rvvmf64bi")
+
+ (RVVM1x8HI "rvvmf16bi") (RVVMF2x8HI "rvvmf32bi") (RVVMF4x8HI "rvvmf64bi")
+ (RVVM1x7HI "rvvmf16bi") (RVVMF2x7HI "rvvmf32bi") (RVVMF4x7HI "rvvmf64bi")
+ (RVVM1x6HI "rvvmf16bi") (RVVMF2x6HI "rvvmf32bi") (RVVMF4x6HI "rvvmf64bi")
+ (RVVM1x5HI "rvvmf16bi") (RVVMF2x5HI "rvvmf32bi") (RVVMF4x5HI "rvvmf64bi")
+ (RVVM2x4HI "rvvmf8bi") (RVVM1x4HI "rvvmf16bi") (RVVMF2x4HI "rvvmf32bi") (RVVMF4x4HI "rvvmf64bi")
+ (RVVM2x3HI "rvvmf8bi") (RVVM1x3HI "rvvmf16bi") (RVVMF2x3HI "rvvmf32bi") (RVVMF4x3HI "rvvmf64bi")
+ (RVVM4x2HI "rvvmf4bi") (RVVM2x2HI "rvvmf8bi") (RVVM1x2HI "rvvmf16bi") (RVVMF2x2HI "rvvmf32bi") (RVVMF4x2HI "rvvmf64bi")
+
+ (RVVM1x8HF "rvvmf16bi") (RVVMF2x8HF "rvvmf32bi") (RVVMF4x8HF "rvvmf64bi")
+ (RVVM1x7HF "rvvmf16bi") (RVVMF2x7HF "rvvmf32bi") (RVVMF4x7HF "rvvmf64bi")
+ (RVVM1x6HF "rvvmf16bi") (RVVMF2x6HF "rvvmf32bi") (RVVMF4x6HF "rvvmf64bi")
+ (RVVM1x5HF "rvvmf16bi") (RVVMF2x5HF "rvvmf32bi") (RVVMF4x5HF "rvvmf64bi")
+ (RVVM2x4HF "rvvmf8bi") (RVVM1x4HF "rvvmf16bi") (RVVMF2x4HF "rvvmf32bi") (RVVMF4x4HF "rvvmf64bi")
+ (RVVM2x3HF "rvvmf8bi") (RVVM1x3HF "rvvmf16bi") (RVVMF2x3HF "rvvmf32bi") (RVVMF4x3HF "rvvmf64bi")
+ (RVVM4x2HF "rvvmf4bi") (RVVM2x2HF "rvvmf8bi") (RVVM1x2HF "rvvmf16bi") (RVVMF2x2HF "rvvmf32bi") (RVVMF4x2HF "rvvmf64bi")
+
+ (RVVM1x8SI "rvvmf32bi") (RVVMF2x8SI "rvvmf64bi")
+ (RVVM1x7SI "rvvmf32bi") (RVVMF2x7SI "rvvmf64bi")
+ (RVVM1x6SI "rvvmf32bi") (RVVMF2x6SI "rvvmf64bi")
+ (RVVM1x5SI "rvvmf32bi") (RVVMF2x5SI "rvvmf64bi")
+ (RVVM2x4SI "rvvmf16bi") (RVVM1x4SI "rvvmf32bi") (RVVMF2x4SI "rvvmf64bi")
+ (RVVM2x3SI "rvvmf16bi") (RVVM1x3SI "rvvmf32bi") (RVVMF2x3SI "rvvmf64bi")
+ (RVVM4x2SI "rvvmf8bi") (RVVM2x2SI "rvvmf16bi") (RVVM1x2SI "rvvmf32bi") (RVVMF2x2SI "rvvmf64bi")
+
+ (RVVM1x8SF "rvvmf32bi") (RVVMF2x8SF "rvvmf64bi")
+ (RVVM1x7SF "rvvmf32bi") (RVVMF2x7SF "rvvmf64bi")
+ (RVVM1x6SF "rvvmf32bi") (RVVMF2x6SF "rvvmf64bi")
+ (RVVM1x5SF "rvvmf32bi") (RVVMF2x5SF "rvvmf64bi")
+ (RVVM2x4SF "rvvmf16bi") (RVVM1x4SF "rvvmf32bi") (RVVMF2x4SF "rvvmf64bi")
+ (RVVM2x3SF "rvvmf16bi") (RVVM1x3SF "rvvmf32bi") (RVVMF2x3SF "rvvmf64bi")
+ (RVVM4x2SF "rvvmf8bi") (RVVM2x2SF "rvvmf16bi") (RVVM1x2SF "rvvmf32bi") (RVVMF2x2SF "rvvmf64bi")
+
+ (RVVM1x8DI "rvvmf64bi")
+ (RVVM1x7DI "rvvmf64bi")
+ (RVVM1x6DI "rvvmf64bi")
+ (RVVM1x5DI "rvvmf64bi")
+ (RVVM2x4DI "rvvmf32bi")
+ (RVVM1x4DI "rvvmf64bi")
+ (RVVM2x3DI "rvvmf32bi")
+ (RVVM1x3DI "rvvmf64bi")
+ (RVVM4x2DI "rvvmf16bi")
+ (RVVM2x2DI "rvvmf32bi")
+ (RVVM1x2DI "rvvmf64bi")
+
+ (RVVM1x8DF "rvvmf64bi")
+ (RVVM1x7DF "rvvmf64bi")
+ (RVVM1x6DF "rvvmf64bi")
+ (RVVM1x5DF "rvvmf64bi")
+ (RVVM2x4DF "rvvmf32bi")
+ (RVVM1x4DF "rvvmf64bi")
+ (RVVM2x3DF "rvvmf32bi")
+ (RVVM1x3DF "rvvmf64bi")
+ (RVVM4x2DF "rvvmf16bi")
+ (RVVM2x2DF "rvvmf32bi")
+ (RVVM1x2DF "rvvmf64bi")
])
(define_mode_attr VEL [
- (VNx1QI "QI") (VNx2QI "QI") (VNx4QI "QI") (VNx8QI "QI") (VNx16QI "QI") (VNx32QI "QI") (VNx64QI "QI") (VNx128QI "QI")
- (VNx1HI "HI") (VNx2HI "HI") (VNx4HI "HI") (VNx8HI "HI") (VNx16HI "HI") (VNx32HI "HI") (VNx64HI "HI")
- (VNx1SI "SI") (VNx2SI "SI") (VNx4SI "SI") (VNx8SI "SI") (VNx16SI "SI") (VNx32SI "SI")
- (VNx1DI "DI") (VNx2DI "DI") (VNx4DI "DI") (VNx8DI "DI") (VNx16DI "DI")
- (VNx1HF "HF") (VNx2HF "HF") (VNx4HF "HF") (VNx8HF "HF") (VNx16HF "HF") (VNx32HF "HF") (VNx64HF "HF")
- (VNx1SF "SF") (VNx2SF "SF") (VNx4SF "SF") (VNx8SF "SF") (VNx16SF "SF") (VNx32SF "SF")
- (VNx1DF "DF") (VNx2DF "DF") (VNx4DF "DF") (VNx8DF "DF") (VNx16DF "DF")
+ (RVVM8QI "QI") (RVVM4QI "QI") (RVVM2QI "QI") (RVVM1QI "QI") (RVVMF2QI "QI") (RVVMF4QI "QI") (RVVMF8QI "QI")
+
+ (RVVM8HI "HI") (RVVM4HI "HI") (RVVM2HI "HI") (RVVM1HI "HI") (RVVMF2HI "HI") (RVVMF4HI "HI")
+
+ (RVVM8HF "HF") (RVVM4HF "HF") (RVVM2HF "HF") (RVVM1HF "HF") (RVVMF2HF "HF") (RVVMF4HF "HF")
+
+ (RVVM8SI "SI") (RVVM4SI "SI") (RVVM2SI "SI") (RVVM1SI "SI") (RVVMF2SI "SI")
+
+ (RVVM8SF "SF") (RVVM4SF "SF") (RVVM2SF "SF") (RVVM1SF "SF") (RVVMF2SF "SF")
+
+ (RVVM8DI "DI") (RVVM4DI "DI") (RVVM2DI "DI") (RVVM1DI "DI")
+
+ (RVVM8DF "DF") (RVVM4DF "DF") (RVVM2DF "DF") (RVVM1DF "DF")
])
(define_mode_attr vel [
- (VNx1QI "qi") (VNx2QI "qi") (VNx4QI "qi") (VNx8QI "qi") (VNx16QI "qi") (VNx32QI "qi") (VNx64QI "qi") (VNx128QI "qi")
- (VNx1HI "hi") (VNx2HI "hi") (VNx4HI "hi") (VNx8HI "hi") (VNx16HI "hi") (VNx32HI "hi") (VNx64HI "hi")
- (VNx1SI "si") (VNx2SI "si") (VNx4SI "si") (VNx8SI "si") (VNx16SI "si") (VNx32SI "si")
- (VNx1DI "di") (VNx2DI "di") (VNx4DI "di") (VNx8DI "di") (VNx16DI "di")
- (VNx1HF "hf") (VNx2HF "hf") (VNx4HF "hf") (VNx8HF "hf") (VNx16HF "hf") (VNx32HF "hf") (VNx64HF "hf")
- (VNx1SF "sf") (VNx2SF "sf") (VNx4SF "sf") (VNx8SF "sf") (VNx16SF "sf") (VNx32SF "sf")
- (VNx1DF "df") (VNx2DF "df") (VNx4DF "df") (VNx8DF "df") (VNx16DF "df")
+ (RVVM8QI "qi") (RVVM4QI "qi") (RVVM2QI "qi") (RVVM1QI "qi") (RVVMF2QI "qi") (RVVMF4QI "qi") (RVVMF8QI "qi")
+
+ (RVVM8HI "hi") (RVVM4HI "hi") (RVVM2HI "hi") (RVVM1HI "hi") (RVVMF2HI "hi") (RVVMF4HI "hi")
+
+ (RVVM8HF "hf") (RVVM4HF "hf") (RVVM2HF "hf") (RVVM1HF "hf") (RVVMF2HF "hf") (RVVMF4HF "hf")
+
+ (RVVM8SI "si") (RVVM4SI "si") (RVVM2SI "si") (RVVM1SI "si") (RVVMF2SI "si")
+
+ (RVVM8SF "sf") (RVVM4SF "sf") (RVVM2SF "sf") (RVVM1SF "sf") (RVVMF2SF "sf")
+
+ (RVVM8DI "di") (RVVM4DI "di") (RVVM2DI "di") (RVVM1DI "di")
+
+ (RVVM8DF "df") (RVVM4DF "df") (RVVM2DF "df") (RVVM1DF "df")
])
(define_mode_attr VSUBEL [
- (VNx1HI "QI") (VNx2HI "QI") (VNx4HI "QI") (VNx8HI "QI") (VNx16HI "QI") (VNx32HI "QI") (VNx64HI "QI")
- (VNx1SI "HI") (VNx2SI "HI") (VNx4SI "HI") (VNx8SI "HI") (VNx16SI "HI") (VNx32SI "HI")
- (VNx1DI "SI") (VNx2DI "SI") (VNx4DI "SI") (VNx8DI "SI") (VNx16DI "SI")
- (VNx1SF "HF") (VNx2SF "HF") (VNx4SF "HF") (VNx8SF "HF") (VNx16SF "HF") (VNx32SF "HF")
- (VNx1DF "SF") (VNx2DF "SF") (VNx4DF "SF") (VNx8DF "SF") (VNx16DF "SF")
+ (RVVM8HI "QI") (RVVM4HI "QI") (RVVM2HI "QI") (RVVM1HI "QI") (RVVMF2HI "QI") (RVVMF4HI "QI")
+
+ (RVVM8SI "HI") (RVVM4SI "HI") (RVVM2SI "HI") (RVVM1SI "HI") (RVVMF2SI "HI")
+
+ (RVVM8SF "HF") (RVVM4SF "HF") (RVVM2SF "HF") (RVVM1SF "HF") (RVVMF2SF "HF")
+
+ (RVVM8DI "SI") (RVVM4DI "SI") (RVVM2DI "SI") (RVVM1DI "SI")
+
+ (RVVM8DF "SF") (RVVM4DF "SF") (RVVM2DF "SF") (RVVM1DF "SF")
])
(define_mode_attr nf [
- (VNx2x64QI "2") (VNx2x32QI "2") (VNx3x32QI "3") (VNx4x32QI "4")
- (VNx2x16QI "2") (VNx3x16QI "3") (VNx4x16QI "4") (VNx5x16QI "5") (VNx6x16QI "6") (VNx7x16QI "7") (VNx8x16QI "8")
- (VNx2x8QI "2") (VNx3x8QI "3") (VNx4x8QI "4") (VNx5x8QI "5") (VNx6x8QI "6") (VNx7x8QI "7") (VNx8x8QI "8")
- (VNx2x4QI "2") (VNx3x4QI "3") (VNx4x4QI "4") (VNx5x4QI "5") (VNx6x4QI "6") (VNx7x4QI "7") (VNx8x4QI "8")
- (VNx2x2QI "2") (VNx3x2QI "3") (VNx4x2QI "4") (VNx5x2QI "5") (VNx6x2QI "6") (VNx7x2QI "7") (VNx8x2QI "8")
- (VNx2x1QI "2") (VNx3x1QI "3") (VNx4x1QI "4") (VNx5x1QI "5") (VNx6x1QI "6") (VNx7x1QI "7") (VNx8x1QI "8")
- (VNx2x32HI "2") (VNx2x16HI "2") (VNx3x16HI "3") (VNx4x16HI "4")
- (VNx2x8HI "2") (VNx3x8HI "3") (VNx4x8HI "4") (VNx5x8HI "5") (VNx6x8HI "6") (VNx7x8HI "7") (VNx8x8HI "8")
- (VNx2x4HI "2") (VNx3x4HI "3") (VNx4x4HI "4") (VNx5x4HI "5") (VNx6x4HI "6") (VNx7x4HI "7") (VNx8x4HI "8")
- (VNx2x2HI "2") (VNx3x2HI "3") (VNx4x2HI "4") (VNx5x2HI "5") (VNx6x2HI "6") (VNx7x2HI "7") (VNx8x2HI "8")
- (VNx2x1HI "2") (VNx3x1HI "3") (VNx4x1HI "4") (VNx5x1HI "5") (VNx6x1HI "6") (VNx7x1HI "7") (VNx8x1HI "8")
- (VNx2x16SI "2") (VNx2x8SI "2") (VNx3x8SI "3") (VNx4x8SI "4")
- (VNx2x4SI "2") (VNx3x4SI "3") (VNx4x4SI "4") (VNx5x4SI "5") (VNx6x4SI "6") (VNx7x4SI "7") (VNx8x4SI "8")
- (VNx2x2SI "2") (VNx3x2SI "3") (VNx4x2SI "4") (VNx5x2SI "5") (VNx6x2SI "6") (VNx7x2SI "7") (VNx8x2SI "8")
- (VNx2x1SI "2") (VNx3x1SI "3") (VNx4x1SI "4") (VNx5x1SI "5") (VNx6x1SI "6") (VNx7x1SI "7") (VNx8x1SI "8")
- (VNx2x8DI "2") (VNx2x4DI "2") (VNx3x4DI "3") (VNx4x4DI "4")
- (VNx2x2DI "2") (VNx3x2DI "3") (VNx4x2DI "4") (VNx5x2DI "5") (VNx6x2DI "6") (VNx7x2DI "7") (VNx8x2DI "8")
- (VNx2x1DI "2") (VNx3x1DI "3") (VNx4x1DI "4") (VNx5x1DI "5") (VNx6x1DI "6") (VNx7x1DI "7") (VNx8x1DI "8")
- (VNx2x16SF "2") (VNx2x8SF "2") (VNx3x8SF "3") (VNx4x8SF "4")
- (VNx2x4SF "2") (VNx3x4SF "3") (VNx4x4SF "4") (VNx5x4SF "5") (VNx6x4SF "6") (VNx7x4SF "7") (VNx8x4SF "8")
- (VNx2x2SF "2") (VNx3x2SF "3") (VNx4x2SF "4") (VNx5x2SF "5") (VNx6x2SF "6") (VNx7x2SF "7") (VNx8x2SF "8")
- (VNx2x1SF "2") (VNx3x1SF "3") (VNx4x1SF "4") (VNx5x1SF "5") (VNx6x1SF "6") (VNx7x1SF "7") (VNx8x1SF "8")
- (VNx2x8DF "2")
- (VNx2x4DF "2") (VNx3x4DF "3") (VNx4x4DF "4")
- (VNx2x2DF "2") (VNx3x2DF "3") (VNx4x2DF "4") (VNx5x2DF "5") (VNx6x2DF "6") (VNx7x2DF "7") (VNx8x2DF "8")
- (VNx2x1DF "2") (VNx3x1DF "3") (VNx4x1DF "4") (VNx5x1DF "5") (VNx6x1DF "6") (VNx7x1DF "7") (VNx8x1DF "8")
+ (RVVM1x8QI "8") (RVVMF2x8QI "8") (RVVMF4x8QI "8") (RVVMF8x8QI "8")
+ (RVVM1x7QI "7") (RVVMF2x7QI "7") (RVVMF4x7QI "7") (RVVMF8x7QI "7")
+ (RVVM1x6QI "6") (RVVMF2x6QI "6") (RVVMF4x6QI "6") (RVVMF8x6QI "6")
+ (RVVM1x5QI "5") (RVVMF2x5QI "5") (RVVMF4x5QI "5") (RVVMF8x5QI "5")
+ (RVVM2x4QI "4") (RVVM1x4QI "4") (RVVMF2x4QI "4") (RVVMF4x4QI "4") (RVVMF8x4QI "4")
+ (RVVM2x3QI "3") (RVVM1x3QI "3") (RVVMF2x3QI "3") (RVVMF4x3QI "3") (RVVMF8x3QI "3")
+ (RVVM4x2QI "2") (RVVM2x2QI "2") (RVVM1x2QI "2") (RVVMF2x2QI "2") (RVVMF4x2QI "2") (RVVMF8x2QI "2")
+
+ (RVVM1x8HI "8") (RVVMF2x8HI "8") (RVVMF4x8HI "8")
+ (RVVM1x7HI "7") (RVVMF2x7HI "7") (RVVMF4x7HI "7")
+ (RVVM1x6HI "6") (RVVMF2x6HI "6") (RVVMF4x6HI "6")
+ (RVVM1x5HI "5") (RVVMF2x5HI "5") (RVVMF4x5HI "5")
+ (RVVM2x4HI "4") (RVVM1x4HI "4") (RVVMF2x4HI "4") (RVVMF4x4HI "4")
+ (RVVM2x3HI "3") (RVVM1x3HI "3") (RVVMF2x3HI "3") (RVVMF4x3HI "3")
+ (RVVM4x2HI "2") (RVVM2x2HI "2") (RVVM1x2HI "2") (RVVMF2x2HI "2") (RVVMF4x2HI "2")
+
+ (RVVM1x8HF "8") (RVVMF2x8HF "8") (RVVMF4x8HF "8")
+ (RVVM1x7HF "7") (RVVMF2x7HF "7") (RVVMF4x7HF "7")
+ (RVVM1x6HF "6") (RVVMF2x6HF "6") (RVVMF4x6HF "6")
+ (RVVM1x5HF "5") (RVVMF2x5HF "5") (RVVMF4x5HF "5")
+ (RVVM2x4HF "4") (RVVM1x4HF "4") (RVVMF2x4HF "4") (RVVMF4x4HF "4")
+ (RVVM2x3HF "3") (RVVM1x3HF "3") (RVVMF2x3HF "3") (RVVMF4x3HF "3")
+ (RVVM4x2HF "2") (RVVM2x2HF "2") (RVVM1x2HF "2") (RVVMF2x2HF "2") (RVVMF4x2HF "2")
+
+ (RVVM1x8SI "8") (RVVMF2x8SI "8")
+ (RVVM1x7SI "7") (RVVMF2x7SI "7")
+ (RVVM1x6SI "6") (RVVMF2x6SI "6")
+ (RVVM1x5SI "5") (RVVMF2x5SI "5")
+ (RVVM2x4SI "4") (RVVM1x4SI "4") (RVVMF2x4SI "4")
+ (RVVM2x3SI "3") (RVVM1x3SI "3") (RVVMF2x3SI "3")
+ (RVVM4x2SI "2") (RVVM2x2SI "2") (RVVM1x2SI "2") (RVVMF2x2SI "2")
+
+ (RVVM1x8SF "8") (RVVMF2x8SF "8")
+ (RVVM1x7SF "7") (RVVMF2x7SF "7")
+ (RVVM1x6SF "6") (RVVMF2x6SF "6")
+ (RVVM1x5SF "5") (RVVMF2x5SF "5")
+ (RVVM2x4SF "4") (RVVM1x4SF "4") (RVVMF2x4SF "4")
+ (RVVM2x3SF "3") (RVVM1x3SF "3") (RVVMF2x3SF "3")
+ (RVVM4x2SF "2") (RVVM2x2SF "2") (RVVM1x2SF "2") (RVVMF2x2SF "2")
+
+ (RVVM1x8DI "8")
+ (RVVM1x7DI "7")
+ (RVVM1x6DI "6")
+ (RVVM1x5DI "5")
+ (RVVM2x4DI "4")
+ (RVVM1x4DI "4")
+ (RVVM2x3DI "3")
+ (RVVM1x3DI "3")
+ (RVVM4x2DI "2")
+ (RVVM2x2DI "2")
+ (RVVM1x2DI "2")
+
+ (RVVM1x8DF "8")
+ (RVVM1x7DF "7")
+ (RVVM1x6DF "6")
+ (RVVM1x5DF "5")
+ (RVVM2x4DF "4")
+ (RVVM1x4DF "4")
+ (RVVM2x3DF "3")
+ (RVVM1x3DF "3")
+ (RVVM4x2DF "2")
+ (RVVM2x2DF "2")
+ (RVVM1x2DF "2")
])
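
<nf> simply recovers the NF component of the RVV<LMUL>x<NF><MODE> naming scheme, i.e. how many fields a segment load/store moves. A throwaway parser just to make the convention concrete (the function is mine):

   #include <string>

   /* "RVVM2x4SI" -> 4: NF sits between the 'x' and the element mode.  */
   int nf_of (const std::string &mode)
   { return std::stoi (mode.substr (mode.find ('x') + 1)); }
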
(define_mode_attr sew [
- (VNx1QI "8") (VNx2QI "8") (VNx4QI "8") (VNx8QI "8") (VNx16QI "8") (VNx32QI "8") (VNx64QI "8") (VNx128QI "8")
- (VNx1HI "16") (VNx2HI "16") (VNx4HI "16") (VNx8HI "16") (VNx16HI "16") (VNx32HI "16") (VNx64HI "16")
- (VNx1SI "32") (VNx2SI "32") (VNx4SI "32") (VNx8SI "32") (VNx16SI "32") (VNx32SI "32")
- (VNx1DI "64") (VNx2DI "64") (VNx4DI "64") (VNx8DI "64") (VNx16DI "64")
- (VNx1HF "16") (VNx2HF "16") (VNx4HF "16") (VNx8HF "16") (VNx16HF "16") (VNx32HF "16") (VNx64HF "16")
- (VNx1SF "32") (VNx2SF "32") (VNx4SF "32") (VNx8SF "32") (VNx16SF "32") (VNx32SF "32")
- (VNx1DF "64") (VNx2DF "64") (VNx4DF "64") (VNx8DF "64") (VNx16DF "64")
- (VNx2x64QI "8") (VNx2x32QI "8") (VNx3x32QI "8") (VNx4x32QI "8")
- (VNx2x16QI "8") (VNx3x16QI "8") (VNx4x16QI "8") (VNx5x16QI "8") (VNx6x16QI "8") (VNx7x16QI "8") (VNx8x16QI "8")
- (VNx2x8QI "8") (VNx3x8QI "8") (VNx4x8QI "8") (VNx5x8QI "8") (VNx6x8QI "8") (VNx7x8QI "8") (VNx8x8QI "8")
- (VNx2x4QI "8") (VNx3x4QI "8") (VNx4x4QI "8") (VNx5x4QI "8") (VNx6x4QI "8") (VNx7x4QI "8") (VNx8x4QI "8")
- (VNx2x2QI "8") (VNx3x2QI "8") (VNx4x2QI "8") (VNx5x2QI "8") (VNx6x2QI "8") (VNx7x2QI "8") (VNx8x2QI "8")
- (VNx2x1QI "8") (VNx3x1QI "8") (VNx4x1QI "8") (VNx5x1QI "8") (VNx6x1QI "8") (VNx7x1QI "8") (VNx8x1QI "8")
- (VNx2x32HI "16") (VNx2x16HI "16") (VNx3x16HI "16") (VNx4x16HI "16")
- (VNx2x8HI "16") (VNx3x8HI "16") (VNx4x8HI "16") (VNx5x8HI "16") (VNx6x8HI "16") (VNx7x8HI "16") (VNx8x8HI "16")
- (VNx2x4HI "16") (VNx3x4HI "16") (VNx4x4HI "16") (VNx5x4HI "16") (VNx6x4HI "16") (VNx7x4HI "16") (VNx8x4HI "16")
- (VNx2x2HI "16") (VNx3x2HI "16") (VNx4x2HI "16") (VNx5x2HI "16") (VNx6x2HI "16") (VNx7x2HI "16") (VNx8x2HI "16")
- (VNx2x1HI "16") (VNx3x1HI "16") (VNx4x1HI "16") (VNx5x1HI "16") (VNx6x1HI "16") (VNx7x1HI "16") (VNx8x1HI "16")
- (VNx2x16SI "32") (VNx2x8SI "32") (VNx3x8SI "32") (VNx4x8SI "32")
- (VNx2x4SI "32") (VNx3x4SI "32") (VNx4x4SI "32") (VNx5x4SI "32") (VNx6x4SI "32") (VNx7x4SI "32") (VNx8x4SI "32")
- (VNx2x2SI "32") (VNx3x2SI "32") (VNx4x2SI "32") (VNx5x2SI "32") (VNx6x2SI "32") (VNx7x2SI "32") (VNx8x2SI "32")
- (VNx2x1SI "32") (VNx3x1SI "32") (VNx4x1SI "32") (VNx5x1SI "32") (VNx6x1SI "32") (VNx7x1SI "32") (VNx8x1SI "32")
- (VNx2x8DI "64") (VNx2x4DI "64") (VNx3x4DI "64") (VNx4x4DI "64")
- (VNx2x2DI "64") (VNx3x2DI "64") (VNx4x2DI "64") (VNx5x2DI "64") (VNx6x2DI "64") (VNx7x2DI "64") (VNx8x2DI "64")
- (VNx2x1DI "64") (VNx3x1DI "64") (VNx4x1DI "64") (VNx5x1DI "64") (VNx6x1DI "64") (VNx7x1DI "64") (VNx8x1DI "64")
- (VNx2x16SF "32") (VNx2x8SF "32") (VNx3x8SF "32") (VNx4x8SF "32")
- (VNx2x4SF "32") (VNx3x4SF "32") (VNx4x4SF "32") (VNx5x4SF "32") (VNx6x4SF "32") (VNx7x4SF "32") (VNx8x4SF "32")
- (VNx2x2SF "32") (VNx3x2SF "32") (VNx4x2SF "32") (VNx5x2SF "32") (VNx6x2SF "32") (VNx7x2SF "32") (VNx8x2SF "32")
- (VNx2x1SF "32") (VNx3x1SF "32") (VNx4x1SF "32") (VNx5x1SF "32") (VNx6x1SF "32") (VNx7x1SF "32") (VNx8x1SF "32")
- (VNx2x8DF "64")
- (VNx2x4DF "64") (VNx3x4DF "64") (VNx4x4DF "64")
- (VNx2x2DF "64") (VNx3x2DF "64") (VNx4x2DF "64") (VNx5x2DF "64") (VNx6x2DF "64") (VNx7x2DF "64") (VNx8x2DF "64")
- (VNx2x1DF "64") (VNx3x1DF "64") (VNx4x1DF "64") (VNx5x1DF "64") (VNx6x1DF "64") (VNx7x1DF "64") (VNx8x1DF "64")
+ (RVVM8QI "8") (RVVM4QI "8") (RVVM2QI "8") (RVVM1QI "8") (RVVMF2QI "8") (RVVMF4QI "8") (RVVMF8QI "8")
+
+ (RVVM8HI "16") (RVVM4HI "16") (RVVM2HI "16") (RVVM1HI "16") (RVVMF2HI "16") (RVVMF4HI "16")
+
+ (RVVM8HF "16") (RVVM4HF "16") (RVVM2HF "16") (RVVM1HF "16") (RVVMF2HF "16") (RVVMF4HF "16")
+
+ (RVVM8SI "32") (RVVM4SI "32") (RVVM2SI "32") (RVVM1SI "32") (RVVMF2SI "32")
+
+ (RVVM8SF "32") (RVVM4SF "32") (RVVM2SF "32") (RVVM1SF "32") (RVVMF2SF "32")
+
+ (RVVM8DI "64") (RVVM4DI "64") (RVVM2DI "64") (RVVM1DI "64")
+
+ (RVVM8DF "64") (RVVM4DF "64") (RVVM2DF "64") (RVVM1DF "64")
+
+ (RVVM1x8QI "8") (RVVMF2x8QI "8") (RVVMF4x8QI "8") (RVVMF8x8QI "8")
+ (RVVM1x7QI "8") (RVVMF2x7QI "8") (RVVMF4x7QI "8") (RVVMF8x7QI "8")
+ (RVVM1x6QI "8") (RVVMF2x6QI "8") (RVVMF4x6QI "8") (RVVMF8x6QI "8")
+ (RVVM1x5QI "8") (RVVMF2x5QI "8") (RVVMF4x5QI "8") (RVVMF8x5QI "8")
+ (RVVM2x4QI "8") (RVVM1x4QI "8") (RVVMF2x4QI "8") (RVVMF4x4QI "8") (RVVMF8x4QI "8")
+ (RVVM2x3QI "8") (RVVM1x3QI "8") (RVVMF2x3QI "8") (RVVMF4x3QI "8") (RVVMF8x3QI "8")
+ (RVVM4x2QI "8") (RVVM2x2QI "8") (RVVM1x2QI "8") (RVVMF2x2QI "8") (RVVMF4x2QI "8") (RVVMF8x2QI "8")
+
+ (RVVM1x8HI "16") (RVVMF2x8HI "16") (RVVMF4x8HI "16")
+ (RVVM1x7HI "16") (RVVMF2x7HI "16") (RVVMF4x7HI "16")
+ (RVVM1x6HI "16") (RVVMF2x6HI "16") (RVVMF4x6HI "16")
+ (RVVM1x5HI "16") (RVVMF2x5HI "16") (RVVMF4x5HI "16")
+ (RVVM2x4HI "16") (RVVM1x4HI "16") (RVVMF2x4HI "16") (RVVMF4x4HI "16")
+ (RVVM2x3HI "16") (RVVM1x3HI "16") (RVVMF2x3HI "16") (RVVMF4x3HI "16")
+ (RVVM4x2HI "16") (RVVM2x2HI "16") (RVVM1x2HI "16") (RVVMF2x2HI "16") (RVVMF4x2HI "16")
+
+ (RVVM1x8HF "16") (RVVMF2x8HF "16") (RVVMF4x8HF "16")
+ (RVVM1x7HF "16") (RVVMF2x7HF "16") (RVVMF4x7HF "16")
+ (RVVM1x6HF "16") (RVVMF2x6HF "16") (RVVMF4x6HF "16")
+ (RVVM1x5HF "16") (RVVMF2x5HF "16") (RVVMF4x5HF "16")
+ (RVVM2x4HF "16") (RVVM1x4HF "16") (RVVMF2x4HF "16") (RVVMF4x4HF "16")
+ (RVVM2x3HF "16") (RVVM1x3HF "16") (RVVMF2x3HF "16") (RVVMF4x3HF "16")
+ (RVVM4x2HF "16") (RVVM2x2HF "16") (RVVM1x2HF "16") (RVVMF2x2HF "16") (RVVMF4x2HF "16")
+
+ (RVVM1x8SI "32") (RVVMF2x8SI "32")
+ (RVVM1x7SI "32") (RVVMF2x7SI "32")
+ (RVVM1x6SI "32") (RVVMF2x6SI "32")
+ (RVVM1x5SI "32") (RVVMF2x5SI "32")
+ (RVVM2x4SI "32") (RVVM1x4SI "32") (RVVMF2x4SI "32")
+ (RVVM2x3SI "32") (RVVM1x3SI "32") (RVVMF2x3SI "32")
+ (RVVM4x2SI "32") (RVVM2x2SI "32") (RVVM1x2SI "32") (RVVMF2x2SI "32")
+
+ (RVVM1x8SF "32") (RVVMF2x8SF "32")
+ (RVVM1x7SF "32") (RVVMF2x7SF "32")
+ (RVVM1x6SF "32") (RVVMF2x6SF "32")
+ (RVVM1x5SF "32") (RVVMF2x5SF "32")
+ (RVVM2x4SF "32") (RVVM1x4SF "32") (RVVMF2x4SF "32")
+ (RVVM2x3SF "32") (RVVM1x3SF "32") (RVVMF2x3SF "32")
+ (RVVM4x2SF "32") (RVVM2x2SF "32") (RVVM1x2SF "32") (RVVMF2x2SF "32")
+
+ (RVVM1x8DI "64")
+ (RVVM1x7DI "64")
+ (RVVM1x6DI "64")
+ (RVVM1x5DI "64")
+ (RVVM2x4DI "64")
+ (RVVM1x4DI "64")
+ (RVVM2x3DI "64")
+ (RVVM1x3DI "64")
+ (RVVM4x2DI "64")
+ (RVVM2x2DI "64")
+ (RVVM1x2DI "64")
+
+ (RVVM1x8DF "64")
+ (RVVM1x7DF "64")
+ (RVVM1x6DF "64")
+ (RVVM1x5DF "64")
+ (RVVM2x4DF "64")
+ (RVVM1x4DF "64")
+ (RVVM2x3DF "64")
+ (RVVM1x3DF "64")
+ (RVVM4x2DF "64")
+ (RVVM2x2DF "64")
+ (RVVM1x2DF "64")
])
(define_mode_attr double_trunc_sew [
- (VNx1HI "8") (VNx2HI "8") (VNx4HI "8") (VNx8HI "8") (VNx16HI "8") (VNx32HI "8") (VNx64HI "8")
- (VNx1SI "16") (VNx2SI "16") (VNx4SI "16") (VNx8SI "16") (VNx16SI "16") (VNx32SI "16")
- (VNx1DI "32") (VNx2DI "32") (VNx4DI "32") (VNx8DI "32") (VNx16DI "32")
- (VNx1SF "16") (VNx2SF "16") (VNx4SF "16") (VNx8SF "16") (VNx16SF "16") (VNx32SF "16")
- (VNx1DF "32") (VNx2DF "32") (VNx4DF "32") (VNx8DF "32") (VNx16DF "32")
+ (RVVM8HI "8") (RVVM4HI "8") (RVVM2HI "8") (RVVM1HI "8") (RVVMF2HI "8") (RVVMF4HI "8")
+
+ (RVVM8HF "8") (RVVM4HF "8") (RVVM2HF "8") (RVVM1HF "8") (RVVMF2HF "8") (RVVMF4HF "8")
+
+ (RVVM8SI "16") (RVVM4SI "16") (RVVM2SI "16") (RVVM1SI "16") (RVVMF2SI "16")
+
+ (RVVM8SF "16") (RVVM4SF "16") (RVVM2SF "16") (RVVM1SF "16") (RVVMF2SF "16")
+
+ (RVVM8DI "32") (RVVM4DI "32") (RVVM2DI "32") (RVVM1DI "32")
+
+ (RVVM8DF "32") (RVVM4DF "32") (RVVM2DF "32") (RVVM1DF "32")
])
(define_mode_attr quad_trunc_sew [
- (VNx1SI "8") (VNx2SI "8") (VNx4SI "8") (VNx8SI "8") (VNx16SI "8") (VNx32SI "8")
- (VNx1DI "16") (VNx2DI "16") (VNx4DI "16") (VNx8DI "16") (VNx16DI "16")
- (VNx1SF "8") (VNx2SF "8") (VNx4SF "8") (VNx8SF "8") (VNx16SF "8") (VNx32SF "8")
- (VNx1DF "16") (VNx2DF "16") (VNx4DF "16") (VNx8DF "16") (VNx16DF "16")
+ (RVVM8SI "8") (RVVM4SI "8") (RVVM2SI "8") (RVVM1SI "8") (RVVMF2SI "8")
+
+ (RVVM8SF "8") (RVVM4SF "8") (RVVM2SF "8") (RVVM1SF "8") (RVVMF2SF "8")
+
+ (RVVM8DI "16") (RVVM4DI "16") (RVVM2DI "16") (RVVM1DI "16")
+
+ (RVVM8DF "16") (RVVM4DF "16") (RVVM2DF "16") (RVVM1DF "16")
])
(define_mode_attr oct_trunc_sew [
- (VNx1DI "8") (VNx2DI "8") (VNx4DI "8") (VNx8DI "8") (VNx16DI "8")
- (VNx1DF "8") (VNx2DF "8") (VNx4DF "8") (VNx8DF "8") (VNx16DF "8")
+ (RVVM8DI "8") (RVVM4DI "8") (RVVM2DI "8") (RVVM1DI "8")
+
+ (RVVM8DF "8") (RVVM4DF "8") (RVVM2DF "8") (RVVM1DF "8")
])
(define_mode_attr double_ext_sew [
- (VNx1QI "16") (VNx2QI "16") (VNx4QI "16") (VNx8QI "16") (VNx16QI "16") (VNx32QI "16") (VNx64QI "16")
- (VNx1HI "32") (VNx2HI "32") (VNx4HI "32") (VNx8HI "32") (VNx16HI "32") (VNx32HI "32")
- (VNx1SI "64") (VNx2SI "64") (VNx4SI "64") (VNx8SI "64") (VNx16SI "64")
- (VNx1SF "64") (VNx2SF "64") (VNx4SF "64") (VNx8SF "64") (VNx16SF "64")
+ (RVVM4QI "16") (RVVM2QI "16") (RVVM1QI "16") (RVVMF2QI "16") (RVVMF4QI "16") (RVVMF8QI "16")
+
+ (RVVM4HI "32") (RVVM2HI "32") (RVVM1HI "32") (RVVMF2HI "32") (RVVMF4HI "32")
+
+ (RVVM4HF "32") (RVVM2HF "32") (RVVM1HF "32") (RVVMF2HF "32") (RVVMF4HF "32")
+
+ (RVVM4SI "64") (RVVM2SI "64") (RVVM1SI "64") (RVVMF2SI "64")
+
+ (RVVM4SF "64") (RVVM2SF "64") (RVVM1SF "64") (RVVMF2SF "64")
])
(define_mode_attr quad_ext_sew [
- (VNx1QI "32") (VNx2QI "32") (VNx4QI "32") (VNx8QI "32") (VNx16QI "32") (VNx32QI "32")
- (VNx1HI "64") (VNx2HI "64") (VNx4HI "64") (VNx8HI "64") (VNx16HI "64")
+ (RVVM2QI "32") (RVVM1QI "32") (RVVMF2QI "32") (RVVMF4QI "32") (RVVMF8QI "32")
+
+ (RVVM2HI "64") (RVVM1HI "64") (RVVMF2HI "64") (RVVMF4HI "64")
+
+ (RVVM2HF "64") (RVVM1HF "64") (RVVMF2HF "64") (RVVMF4HF "64")
])
(define_mode_attr oct_ext_sew [
- (VNx1QI "64") (VNx2QI "64") (VNx4QI "64") (VNx8QI "64") (VNx16QI "64")
+ (RVVM1QI "64") (RVVMF2QI "64") (RVVMF4QI "64") (RVVMF8QI "64")
])
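
The *_ext_sew attributes start one, two, or three LMUL steps below M8 because an N-fold widening multiplies LMUL together with SEW, and the result must still fit in a register group. Sketch of the bound (LMUL in eighths):

   /* An N-fold extension scales LMUL by N; the source must satisfy
      LMUL * N <= M8: double_ext stops at M4, quad at M2, oct at M1.  */
   constexpr bool ext_fits (int lmul_in_eighths, int n)
   { return lmul_in_eighths * n <= 64; }
   static_assert (ext_fits (32, 2) && !ext_fits (64, 2), "double_ext");
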
(define_mode_attr V_DOUBLE_TRUNC [
- (VNx1HI "VNx1QI") (VNx2HI "VNx2QI") (VNx4HI "VNx4QI") (VNx8HI "VNx8QI")
- (VNx16HI "VNx16QI") (VNx32HI "VNx32QI") (VNx64HI "VNx64QI")
- (VNx1SI "VNx1HI") (VNx2SI "VNx2HI") (VNx4SI "VNx4HI") (VNx8SI "VNx8HI")
- (VNx16SI "VNx16HI") (VNx32SI "VNx32HI")
- (VNx1DI "VNx1SI") (VNx2DI "VNx2SI") (VNx4DI "VNx4SI") (VNx8DI "VNx8SI")
- (VNx16DI "VNx16SI")
+ (RVVM8HI "RVVM4QI") (RVVM4HI "RVVM2QI") (RVVM2HI "RVVM1QI") (RVVM1HI "RVVMF2QI") (RVVMF2HI "RVVMF4QI") (RVVMF4HI "RVVMF8QI")
+
+ (RVVM8SI "RVVM4HI") (RVVM4SI "RVVM2HI") (RVVM2SI "RVVM1HI") (RVVM1SI "RVVMF2HI") (RVVMF2SI "RVVMF4HI")
+
+ (RVVM8SF "RVVM4HF") (RVVM4SF "RVVM2HF") (RVVM2SF "RVVM1HF") (RVVM1SF "RVVMF2HF") (RVVMF2SF "RVVMF4HF")

- (VNx1SF "VNx1HF") (VNx2SF "VNx2HF") (VNx4SF "VNx4HF") (VNx8SF "VNx8HF") (VNx16SF "VNx16HF") (VNx32SF "VNx32HF")
- (VNx1DF "VNx1SF") (VNx2DF "VNx2SF") (VNx4DF "VNx4SF") (VNx8DF "VNx8SF")
- (VNx16DF "VNx16SF")
+ (RVVM8DI "RVVM4SI") (RVVM4DI "RVVM2SI") (RVVM2DI "RVVM1SI") (RVVM1DI "RVVMF2SI")
+
+ (RVVM8DF "RVVM4SF") (RVVM4DF "RVVM2SF") (RVVM2DF "RVVM1SF") (RVVM1DF "RVVMF2SF")
])
(define_mode_attr V_QUAD_TRUNC [
- (VNx1SI "VNx1QI") (VNx2SI "VNx2QI") (VNx4SI "VNx4QI") (VNx8SI "VNx8QI")
- (VNx16SI "VNx16QI") (VNx32SI "VNx32QI")
- (VNx1DI "VNx1HI") (VNx2DI "VNx2HI")
- (VNx4DI "VNx4HI") (VNx8DI "VNx8HI") (VNx16DI "VNx16HI")
+ (RVVM8SI "RVVM2QI") (RVVM4SI "RVVM1QI") (RVVM2SI "RVVMF2QI") (RVVM1SI "RVVMF4QI") (RVVMF2SI "RVVMF8QI")
+
+ (RVVM8DI "RVVM2HI") (RVVM4DI "RVVM1HI") (RVVM2DI "RVVMF2HI") (RVVM1DI "RVVMF4HI")

- (VNx1DF "VNx1HF") (VNx2DF "VNx2HF") (VNx4DF "VNx4HF") (VNx8DF "VNx8HF")
- (VNx16DF "VNx16HF")
+ (RVVM8DF "RVVM2HF") (RVVM4DF "RVVM1HF") (RVVM2DF "RVVMF2HF") (RVVM1DF "RVVMF4HF")
])
(define_mode_attr V_OCT_TRUNC [
- (VNx1DI "VNx1QI") (VNx2DI "VNx2QI") (VNx4DI "VNx4QI") (VNx8DI "VNx8QI")
- (VNx16DI "VNx16QI")
+ (RVVM8DI "RVVM1QI") (RVVM4DI "RVVMF2QI") (RVVM2DI "RVVMF4QI") (RVVM1DI "RVVMF8QI")
])
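
Conversely, V_DOUBLE_TRUNC/V_QUAD_TRUNC/V_OCT_TRUNC keep the element count while dividing SEW by 2, 4, or 8, so the LMUL prefix divides by the same factor. Sketch (LMUL in eighths):

   /* Narrowing by N keeps the element count, shrinking LMUL with SEW,
      e.g. RVVM8DI -> RVVM1QI for the oct truncation.  */
   constexpr int trunc_lmul (int lmul_in_eighths, int n)
   { return lmul_in_eighths / n; }
   static_assert (trunc_lmul (64, 8) == 8, "RVVM8DI -> RVVM1QI");
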
; Again in lower case.
(define_mode_attr v_double_trunc [
- (VNx1HI "vnx1qi") (VNx2HI "vnx2qi") (VNx4HI "vnx4qi") (VNx8HI "vnx8qi")
- (VNx16HI "vnx16qi") (VNx32HI "vnx32qi") (VNx64HI "vnx64qi")
- (VNx1SI "vnx1hi") (VNx2SI "vnx2hi") (VNx4SI "vnx4hi") (VNx8SI "vnx8hi")
- (VNx16SI "vnx16hi") (VNx32SI "vnx32hi")
- (VNx1DI "vnx1si") (VNx2DI "vnx2si") (VNx4DI "vnx4si") (VNx8DI "vnx8si")
- (VNx16DI "vnx16si")
- (VNx1SF "vnx1hf") (VNx2SF "vnx2hf") (VNx4SF "vnx4hf") (VNx8SF "vnx8hf") (VNx16SF "vnx16hf") (VNx32SF "vnx32hf")
- (VNx1DF "vnx1sf") (VNx2DF "vnx2sf") (VNx4DF "vnx4sf") (VNx8DF "vnx8sf")
- (VNx16DF "vnx16sf")
+ (RVVM8HI "rvvm4qi") (RVVM4HI "rvvm2qi") (RVVM2HI "rvvm1qi") (RVVM1HI "rvvmf2qi") (RVVMF2HI "rvvmf4qi") (RVVMF4HI "rvvmf8qi")
+
+ (RVVM8SI "rvvm4hi") (RVVM4SI "rvvm2hi") (RVVM2SI "rvvm1hi") (RVVM1SI "rvvmf2hi") (RVVMF2SI "rvvmf4hi")
+
+ (RVVM8SF "rvvm4hf") (RVVM4SF "rvvm2hf") (RVVM2SF "rvvm1hf") (RVVM1SF "rvvmf2hf") (RVVMF2SF "rvvmf4hf")
+
+ (RVVM8DI "rvvm4si") (RVVM4DI "rvvm2si") (RVVM2DI "rvvm1si") (RVVM1DI "rvvmf2si")
+
+ (RVVM8DF "rvvm4sf") (RVVM4DF "rvvm2sf") (RVVM2DF "rvvm1sf") (RVVM1DF "rvvmf2sf")
])
(define_mode_attr v_quad_trunc [
- (VNx1SI "vnx1qi") (VNx2SI "vnx2qi") (VNx4SI "vnx4qi") (VNx8SI "vnx8qi")
- (VNx16SI "vnx16qi") (VNx32SI "vnx32qi")
- (VNx1DI "vnx1hi") (VNx2DI "vnx2hi") (VNx4DI "vnx4hi") (VNx8DI "vnx8hi")
- (VNx16DI "vnx16hi")
+ (RVVM8SI "rvvm2qi") (RVVM4SI "rvvm1qi") (RVVM2SI "rvvmf2qi") (RVVM1SI "rvvmf4qi") (RVVMF2SI "rvvmf8qi")
+
+ (RVVM8DI "rvvm2hi") (RVVM4DI "rvvm1hi") (RVVM2DI "rvvmf2hi") (RVVM1DI "rvvmf4hi")

- (VNx1DF "vnx1hf") (VNx2DF "vnx2hf") (VNx4DF "vnx4hf") (VNx8DF "vnx8hf")
- (VNx16DF "vnx16hf")
+ (RVVM8DF "rvvm2hf") (RVVM4DF "rvvm1hf") (RVVM2DF "rvvmf2hf") (RVVM1DF "rvvmf4hf")
])
(define_mode_attr v_oct_trunc [
- (VNx1DI "vnx1qi") (VNx2DI "vnx2qi") (VNx4DI "vnx4qi") (VNx8DI "vnx8qi")
- (VNx16DI "vnx16qi")
+ (RVVM8DI "rvvm1qi") (RVVM4DI "rvvmf2qi") (RVVM2DI "rvvmf4qi") (RVVM1DI "rvvmf8qi")
])
(define_mode_attr VINDEX_DOUBLE_TRUNC [
- (VNx1HI "VNx1QI") (VNx2HI "VNx2QI") (VNx4HI "VNx4QI") (VNx8HI "VNx8QI")
- (VNx16HI "VNx16QI") (VNx32HI "VNx32QI") (VNx64HI "VNx64QI")
- (VNx1HF "VNx1QI") (VNx2HF "VNx2QI") (VNx4HF "VNx4QI") (VNx8HF "VNx8QI")
- (VNx16HF "VNx16QI") (VNx32HF "VNx32QI") (VNx64HF "VNx64QI")
- (VNx1SI "VNx1HI") (VNx2SI "VNx2HI") (VNx4SI "VNx4HI") (VNx8SI "VNx8HI")
- (VNx16SI "VNx16HI") (VNx32SI "VNx32HI")
- (VNx1SF "VNx1HI") (VNx2SF "VNx2HI") (VNx4SF "VNx4HI") (VNx8SF "VNx8HI")
- (VNx16SF "VNx16HI") (VNx32SF "VNx32HI")
- (VNx1DI "VNx1SI") (VNx2DI "VNx2SI") (VNx4DI "VNx4SI") (VNx8DI "VNx8SI") (VNx16DI "VNx16SI")
- (VNx1DF "VNx1SI") (VNx2DF "VNx2SI") (VNx4DF "VNx4SI") (VNx8DF "VNx8SI") (VNx16DF "VNx16SI")
+ (RVVM8HI "RVVM4QI") (RVVM4HI "RVVM2QI") (RVVM2HI "RVVM1QI") (RVVM1HI "RVVMF2QI") (RVVMF2HI "RVVMF4QI") (RVVMF4HI "RVVMF8QI")
+
+ (RVVM8HF "RVVM4QI") (RVVM4HF "RVVM2QI") (RVVM2HF "RVVM1QI") (RVVM1HF "RVVMF2QI") (RVVMF2HF "RVVMF4QI") (RVVMF4HF "RVVMF8QI")
+
+ (RVVM8SI "RVVM4HI") (RVVM4SI "RVVM2HI") (RVVM2SI "RVVM1HI") (RVVM1SI "RVVMF2HI") (RVVMF2SI "RVVMF4HI")
+
+ (RVVM8SF "RVVM4HI") (RVVM4SF "RVVM2HI") (RVVM2SF "RVVM1HI") (RVVM1SF "RVVMF2HI") (RVVMF2SF "RVVMF4HI")
+
+ (RVVM8DI "RVVM4SI") (RVVM4DI "RVVM2SI") (RVVM2DI "RVVM1SI") (RVVM1DI "RVVMF2SI")
+
+ (RVVM8DF "RVVM4SI") (RVVM4DF "RVVM2SI") (RVVM2DF "RVVM1SI") (RVVM1DF "RVVMF2SI")
])
(define_mode_attr VINDEX_QUAD_TRUNC [
- (VNx1SI "VNx1QI") (VNx2SI "VNx2QI") (VNx4SI "VNx4QI") (VNx8SI "VNx8QI")
- (VNx16SI "VNx16QI") (VNx32SI "VNx32QI")
- (VNx1DI "VNx1HI") (VNx2DI "VNx2HI")
- (VNx4DI "VNx4HI") (VNx8DI "VNx8HI") (VNx16DI "VNx16HI")
- (VNx1SF "VNx1QI") (VNx2SF "VNx2QI") (VNx4SF "VNx4QI") (VNx8SF "VNx8QI")
- (VNx16SF "VNx16QI") (VNx32SF "VNx32QI")
- (VNx1DF "VNx1HI") (VNx2DF "VNx2HI")
- (VNx4DF "VNx4HI") (VNx8DF "VNx8HI") (VNx16DF "VNx16HI")
+ (RVVM8SI "RVVM2QI") (RVVM4SI "RVVM1QI") (RVVM2SI "RVVMF2QI") (RVVM1SI "RVVMF4QI") (RVVMF2SI "RVVMF8QI")
+
+ (RVVM8SF "RVVM2QI") (RVVM4SF "RVVM1QI") (RVVM2SF "RVVMF2QI") (RVVM1SF "RVVMF4QI") (RVVMF2SF "RVVMF8QI")
+
+ (RVVM8DI "RVVM2HI") (RVVM4DI "RVVM1HI") (RVVM2DI "RVVMF2HI") (RVVM1DI "RVVMF4HI")
+
+ (RVVM8DF "RVVM2HI") (RVVM4DF "RVVM1HI") (RVVM2DF "RVVMF2HI") (RVVM1DF "RVVMF4HI")
])
(define_mode_attr VINDEX_OCT_TRUNC [
- (VNx1DI "VNx1QI") (VNx2DI "VNx2QI") (VNx4DI "VNx4QI") (VNx8DI "VNx8QI") (VNx16DI "VNx16QI")
- (VNx1DF "VNx1QI") (VNx2DF "VNx2QI") (VNx4DF "VNx4QI") (VNx8DF "VNx8QI") (VNx16DF "VNx16QI")
+ (RVVM8DI "RVVM1QI") (RVVM4DI "RVVMF2QI") (RVVM2DI "RVVMF4QI") (RVVM1DI "RVVMF8QI")
+
+ (RVVM8DF "RVVM1QI") (RVVM4DF "RVVMF2QI") (RVVM2DF "RVVMF4QI") (RVVM1DF "RVVMF8QI")
])
(define_mode_attr VINDEX_DOUBLE_EXT [
- (VNx1QI "VNx1HI") (VNx2QI "VNx2HI") (VNx4QI "VNx4HI") (VNx8QI "VNx8HI") (VNx16QI "VNx16HI") (VNx32QI "VNx32HI") (VNx64QI "VNx64HI")
- (VNx1HI "VNx1SI") (VNx2HI "VNx2SI") (VNx4HI "VNx4SI") (VNx8HI "VNx8SI") (VNx16HI "VNx16SI") (VNx32HI "VNx32SI")
- (VNx1HF "VNx1SI") (VNx2HF "VNx2SI") (VNx4HF "VNx4SI") (VNx8HF "VNx8SI") (VNx16HF "VNx16SI") (VNx32HF "VNx32SI")
- (VNx1SI "VNx1DI") (VNx2SI "VNx2DI") (VNx4SI "VNx4DI") (VNx8SI "VNx8DI") (VNx16SI "VNx16DI")
- (VNx1SF "VNx1DI") (VNx2SF "VNx2DI") (VNx4SF "VNx4DI") (VNx8SF "VNx8DI") (VNx16SF "VNx16DI")
+ (RVVM4QI "RVVM8HI") (RVVM2QI "RVVM4HI") (RVVM1QI "RVVM2HI") (RVVMF2QI "RVVM1HI") (RVVMF4QI "RVVMF2HI") (RVVMF8QI "RVVMF4HI")
+
+ (RVVM4HI "RVVM8SI") (RVVM2HI "RVVM4SI") (RVVM1HI "RVVM2SI") (RVVMF2HI "RVVM1SI") (RVVMF4HI "RVVMF2SI")
+
+ (RVVM4HF "RVVM8SI") (RVVM2HF "RVVM4SI") (RVVM1HF "RVVM2SI") (RVVMF2HF "RVVM1SI") (RVVMF4HF "RVVMF2SI")
+
+ (RVVM4SI "RVVM8DI") (RVVM2SI "RVVM4DI") (RVVM1SI "RVVM2DI") (RVVMF2SI "RVVM1DI")
+
+ (RVVM4SF "RVVM8DI") (RVVM2SF "RVVM4DI") (RVVM1SF "RVVM2DI") (RVVMF2SF "RVVM1DI")
])
(define_mode_attr VINDEX_QUAD_EXT [
- (VNx1QI "VNx1SI") (VNx2QI "VNx2SI") (VNx4QI "VNx4SI") (VNx8QI "VNx8SI") (VNx16QI "VNx16SI") (VNx32QI "VNx32SI")
- (VNx1HI "VNx1DI") (VNx2HI "VNx2DI") (VNx4HI "VNx4DI") (VNx8HI "VNx8DI") (VNx16HI "VNx16DI")
- (VNx1HF "VNx1DI") (VNx2HF "VNx2DI") (VNx4HF "VNx4DI") (VNx8HF "VNx8DI") (VNx16HF "VNx16DI")
+ (RVVM2QI "RVVM8SI") (RVVM1QI "RVVM4SI") (RVVMF2QI "RVVM2SI") (RVVMF4QI "RVVM1SI") (RVVMF8QI "RVVMF2SI")
+
+ (RVVM2HI "RVVM8DI") (RVVM1HI "RVVM4DI") (RVVMF2HI "RVVM2DI") (RVVMF4HI "RVVM1DI")
+
+ (RVVM2HF "RVVM8DI") (RVVM1HF "RVVM4DI") (RVVMF2HF "RVVM2DI") (RVVMF4HF "RVVM1DI")
])
(define_mode_attr VINDEX_OCT_EXT [
- (VNx1QI "VNx1DI") (VNx2QI "VNx2DI") (VNx4QI "VNx4DI") (VNx8QI "VNx8DI") (VNx16QI "VNx16DI")
+ (RVVM1QI "RVVM8DI") (RVVMF2QI "RVVM4DI") (RVVMF4QI "RVVM2DI") (RVVMF8QI "RVVM1DI")
])
(define_mode_attr VCONVERT [
- (VNx1HF "VNx1HI") (VNx2HF "VNx2HI") (VNx4HF "VNx4HI") (VNx8HF "VNx8HI") (VNx16HF "VNx16HI") (VNx32HF "VNx32HI") (VNx64HF "VNx64HI")
- (VNx1SF "VNx1SI") (VNx2SF "VNx2SI") (VNx4SF "VNx4SI") (VNx8SF "VNx8SI") (VNx16SF "VNx16SI") (VNx32SF "VNx32SI")
- (VNx1DF "VNx1DI") (VNx2DF "VNx2DI") (VNx4DF "VNx4DI") (VNx8DF "VNx8DI") (VNx16DF "VNx16DI")
+ (RVVM8HF "RVVM8HI") (RVVM4HF "RVVM4HI") (RVVM2HF "RVVM2HI") (RVVM1HF "RVVM1HI") (RVVMF2HF "RVVMF2HI") (RVVMF4HF "RVVMF4HI")
+ (RVVM8SF "RVVM8SI") (RVVM4SF "RVVM4SI") (RVVM2SF "RVVM2SI") (RVVM1SF "RVVM1SI") (RVVMF2SF "RVVMF2SI")
+ (RVVM8DF "RVVM8DI") (RVVM4DF "RVVM4DI") (RVVM2DF "RVVM2DI") (RVVM1DF "RVVM1DI")
])
(define_mode_attr vconvert [
- (VNx1HF "vnx1hi") (VNx2HF "vnx2hi") (VNx4HF "vnx4hi") (VNx8HF "vnx8hi") (VNx16HF "vnx16hi") (VNx32HF "vnx32hi") (VNx64HF "vnx64hi")
- (VNx1SF "vnx1si") (VNx2SF "vnx2si") (VNx4SF "vnx4si") (VNx8SF "vnx8si") (VNx16SF "vnx16si") (VNx32SF "vnx32si")
- (VNx1DF "vnx1di") (VNx2DF "vnx2di") (VNx4DF "vnx4di") (VNx8DF "vnx8di") (VNx16DF "vnx16di")
+ (RVVM8HF "rvvm8hi") (RVVM4HF "rvvm4hi") (RVVM2HF "rvvm2hi") (RVVM1HF "rvvm1hi") (RVVMF2HF "rvvmf2hi") (RVVMF4HF "rvvmf4hi")
+ (RVVM8SF "rvvm8si") (RVVM4SF "rvvm4si") (RVVM2SF "rvvm2si") (RVVM1SF "rvvm1si") (RVVMF2SF "rvvmf2si")
+ (RVVM8DF "rvvm8di") (RVVM4DF "rvvm4di") (RVVM2DF "rvvm2di") (RVVM1DF "rvvm1di")
])
(define_mode_attr VNCONVERT [
- (VNx1HF "VNx1QI") (VNx2HF "VNx2QI") (VNx4HF "VNx4QI") (VNx8HF "VNx8QI") (VNx16HF "VNx16QI") (VNx32HF "VNx32QI") (VNx64HF "VNx64QI")
- (VNx1SF "VNx1HI") (VNx2SF "VNx2HI") (VNx4SF "VNx4HI") (VNx8SF "VNx8HI") (VNx16SF "VNx16HI") (VNx32SF "VNx32HI")
- (VNx1SI "VNx1HF") (VNx2SI "VNx2HF") (VNx4SI "VNx4HF") (VNx8SI "VNx8HF") (VNx16SI "VNx16HF") (VNx32SI "VNx32HF")
- (VNx1DI "VNx1SF") (VNx2DI "VNx2SF") (VNx4DI "VNx4SF") (VNx8DI "VNx8SF") (VNx16DI "VNx16SF")
- (VNx1DF "VNx1SI") (VNx2DF "VNx2SI") (VNx4DF "VNx4SI") (VNx8DF "VNx8SI") (VNx16DF "VNx16SI")
+ (RVVM8HF "RVVM4QI") (RVVM4HF "RVVM2QI") (RVVM2HF "RVVM1QI") (RVVM1HF "RVVMF2QI") (RVVMF2HF "RVVMF4QI") (RVVMF4HF "RVVMF8QI")
+
+ (RVVM8SI "RVVM4HF") (RVVM4SI "RVVM2HF") (RVVM2SI "RVVM1HF") (RVVM1SI "RVVMF2HF") (RVVMF2SI "RVVMF4HF")
+ (RVVM8SF "RVVM4HI") (RVVM4SF "RVVM2HI") (RVVM2SF "RVVM1HI") (RVVM1SF "RVVMF2HI") (RVVMF2SF "RVVMF4HI")
+
+ (RVVM8DI "RVVM4SF") (RVVM4DI "RVVM2SF") (RVVM2DI "RVVM1SF") (RVVM1DI "RVVMF2SF")
+ (RVVM8DF "RVVM4SI") (RVVM4DF "RVVM2SI") (RVVM2DF "RVVM1SI") (RVVM1DF "RVVMF2SI")
])
(define_mode_attr vnconvert [
- (VNx1HF "vnx1qi") (VNx2HF "vnx2qi") (VNx4HF "vnx4qi") (VNx8HF "vnx8qi") (VNx16HF "vnx16qi") (VNx32HF "vnx32qi") (VNx64HF "vnx64qi")
- (VNx1SF "vnx1hi") (VNx2SF "vnx2hi") (VNx4SF "vnx4hi") (VNx8SF "vnx8hi") (VNx16SF "vnx16hi") (VNx32SF "vnx32hi")
- (VNx1SI "vnx1hf") (VNx2SI "vnx2hf") (VNx4SI "vnx4hf") (VNx8SI "vnx8hf") (VNx16SI "vnx16hf") (VNx32SI "vnx32hf")
- (VNx1DI "vnx1sf") (VNx2DI "vnx2sf") (VNx4DI "vnx4sf") (VNx8DI "vnx8sf") (VNx16DI "vnx16sf")
- (VNx1DF "vnx1si") (VNx2DF "vnx2si") (VNx4DF "vnx4si") (VNx8DF "vnx8si") (VNx16DF "vnx16si")
+ (RVVM8HF "rvvm4qi") (RVVM4HF "rvvm2qi") (RVVM2HF "rvvm1qi") (RVVM1HF "rvvmf2qi") (RVVMF2HF "rvvmf4qi") (RVVMF4HF "rvvmf8qi")
+
+ (RVVM8SI "rvvm4hf") (RVVM4SI "rvvm2hf") (RVVM2SI "rvvm1hf") (RVVM1SI "rvvmf2hf") (RVVMF2SI "rvvmf4hf")
+ (RVVM8SF "rvvm4hi") (RVVM4SF "rvvm2hi") (RVVM2SF "rvvm1hi") (RVVM1SF "rvvmf2hi") (RVVMF2SF "rvvmf4hi")
+
+ (RVVM8DI "rvvm4sf") (RVVM4DI "rvvm2sf") (RVVM2DI "rvvm1sf") (RVVM1DI "rvvmf2sf")
+ (RVVM8DF "rvvm4si") (RVVM4DF "rvvm2si") (RVVM2DF "rvvm1si") (RVVM1DF "rvvmf2si")
])
(define_mode_attr VDEMOTE [
- (VNx1DI "VNx2SI") (VNx2DI "VNx4SI")
- (VNx4DI "VNx8SI") (VNx8DI "VNx16SI") (VNx16DI "VNx32SI")
+ (RVVM8DI "RVVM8SI") (RVVM4DI "RVVM4SI") (RVVM2DI "RVVM2SI") (RVVM1DI "RVVM1SI")
])
(define_mode_attr VMDEMOTE [
- (VNx1DI "VNx2BI") (VNx2DI "VNx4BI")
- (VNx4DI "VNx8BI") (VNx8DI "VNx16BI") (VNx16DI "VNx32BI")
+ (RVVM8DI "RVVMF4BI") (RVVM4DI "RVVMF8BI") (RVVM2DI "RVVMF16BI") (RVVM1DI "RVVMF32BI")
])
(define_mode_attr gs_extension [
- (VNx1QI "immediate_operand") (VNx2QI "immediate_operand") (VNx4QI "immediate_operand") (VNx8QI "immediate_operand") (VNx16QI "immediate_operand")
- (VNx32QI "vector_gs_extension_operand") (VNx64QI "const_1_operand")
- (VNx1HI "immediate_operand") (VNx2HI "immediate_operand") (VNx4HI "immediate_operand") (VNx8HI "immediate_operand") (VNx16HI "immediate_operand")
- (VNx32HI "vector_gs_extension_operand") (VNx64HI "const_1_operand")
- (VNx1SI "immediate_operand") (VNx2SI "immediate_operand") (VNx4SI "immediate_operand") (VNx8SI "immediate_operand") (VNx16SI "immediate_operand")
- (VNx32SI "vector_gs_extension_operand")
- (VNx1DI "immediate_operand") (VNx2DI "immediate_operand") (VNx4DI "immediate_operand") (VNx8DI "immediate_operand") (VNx16DI "immediate_operand")
+ (RVVM8QI "const_1_operand") (RVVM4QI "vector_gs_extension_operand")
+ (RVVM2QI "immediate_operand") (RVVM1QI "immediate_operand") (RVVMF2QI "immediate_operand")
+ (RVVMF4QI "immediate_operand") (RVVMF8QI "immediate_operand")
+
+ (RVVM8HI "const_1_operand") (RVVM4HI "vector_gs_extension_operand")
+ (RVVM2HI "immediate_operand") (RVVM1HI "immediate_operand")
+ (RVVMF2HI "immediate_operand") (RVVMF4HI "immediate_operand")
- (VNx1HF "immediate_operand") (VNx2HF "immediate_operand") (VNx4HF "immediate_operand") (VNx8HF "immediate_operand") (VNx16HF "immediate_operand")
- (VNx32HF "vector_gs_extension_operand") (VNx64HF "const_1_operand")
- (VNx1SF "immediate_operand") (VNx2SF "immediate_operand") (VNx4SF "immediate_operand") (VNx8SF "immediate_operand") (VNx16SF "immediate_operand")
- (VNx32SF "vector_gs_extension_operand")
- (VNx1DF "immediate_operand") (VNx2DF "immediate_operand") (VNx4DF "immediate_operand") (VNx8DF "immediate_operand") (VNx16DF "immediate_operand")
+ (RVVM8HF "const_1_operand") (RVVM4HF "vector_gs_extension_operand")
+ (RVVM2HF "immediate_operand") (RVVM1HF "immediate_operand")
+ (RVVMF2HF "immediate_operand") (RVVMF4HF "immediate_operand")
+
+ (RVVM8SI "vector_gs_extension_operand") (RVVM4SI "immediate_operand") (RVVM2SI "immediate_operand")
+ (RVVM1SI "immediate_operand") (RVVMF2SI "immediate_operand")
+
+ (RVVM8SF "vector_gs_extension_operand") (RVVM4SF "immediate_operand") (RVVM2SF "immediate_operand")
+ (RVVM1SF "immediate_operand") (RVVMF2SF "immediate_operand")
+
+ (RVVM8DI "immediate_operand") (RVVM4DI "immediate_operand")
+ (RVVM2DI "immediate_operand") (RVVM1DI "immediate_operand")
+
+ (RVVM8DF "immediate_operand") (RVVM4DF "immediate_operand")
+ (RVVM2DF "immediate_operand") (RVVM1DF "immediate_operand")
])
(define_mode_attr gs_scale [
- (VNx1QI "const_1_operand") (VNx2QI "const_1_operand") (VNx4QI "const_1_operand") (VNx8QI "const_1_operand")
- (VNx16QI "const_1_operand") (VNx32QI "const_1_operand") (VNx64QI "const_1_operand")
- (VNx1HI "vector_gs_scale_operand_16") (VNx2HI "vector_gs_scale_operand_16") (VNx4HI "vector_gs_scale_operand_16") (VNx8HI "vector_gs_scale_operand_16")
- (VNx16HI "vector_gs_scale_operand_16") (VNx32HI "vector_gs_scale_operand_16_rv32") (VNx64HI "const_1_operand")
- (VNx1SI "vector_gs_scale_operand_32") (VNx2SI "vector_gs_scale_operand_32") (VNx4SI "vector_gs_scale_operand_32") (VNx8SI "vector_gs_scale_operand_32")
- (VNx16SI "vector_gs_scale_operand_32") (VNx32SI "vector_gs_scale_operand_32_rv32")
- (VNx1DI "vector_gs_scale_operand_64") (VNx2DI "vector_gs_scale_operand_64") (VNx4DI "vector_gs_scale_operand_64") (VNx8DI "vector_gs_scale_operand_64")
- (VNx16DI "vector_gs_scale_operand_64")
-
- (VNx1HF "vector_gs_scale_operand_16") (VNx2HF "vector_gs_scale_operand_16") (VNx4HF "vector_gs_scale_operand_16") (VNx8HF "vector_gs_scale_operand_16")
- (VNx16HF "vector_gs_scale_operand_16") (VNx32HF "vector_gs_scale_operand_16_rv32") (VNx64HF "const_1_operand")
- (VNx1SF "vector_gs_scale_operand_32") (VNx2SF "vector_gs_scale_operand_32") (VNx4SF "vector_gs_scale_operand_32") (VNx8SF "vector_gs_scale_operand_32")
- (VNx16SF "vector_gs_scale_operand_32") (VNx32SF "vector_gs_scale_operand_32_rv32")
- (VNx1DF "vector_gs_scale_operand_64") (VNx2DF "vector_gs_scale_operand_64") (VNx4DF "vector_gs_scale_operand_64") (VNx8DF "vector_gs_scale_operand_64")
- (VNx16DF "vector_gs_scale_operand_64")
+ (RVVM8QI "const_1_operand") (RVVM4QI "const_1_operand")
+ (RVVM2QI "const_1_operand") (RVVM1QI "const_1_operand") (RVVMF2QI "const_1_operand")
+ (RVVMF4QI "const_1_operand") (RVVMF8QI "const_1_operand")
+
+ (RVVM8HI "const_1_operand") (RVVM4HI "vector_gs_scale_operand_16_rv32")
+ (RVVM2HI "vector_gs_scale_operand_16") (RVVM1HI "vector_gs_scale_operand_16")
+ (RVVMF2HI "vector_gs_scale_operand_16") (RVVMF4HI "vector_gs_scale_operand_16")
+
+ (RVVM8HF "const_1_operand") (RVVM4HF "vector_gs_scale_operand_16_rv32")
+ (RVVM2HF "vector_gs_scale_operand_16") (RVVM1HF "vector_gs_scale_operand_16")
+ (RVVMF2HF "vector_gs_scale_operand_16") (RVVMF4HF "vector_gs_scale_operand_16")
+
+ (RVVM8SI "vector_gs_scale_operand_32_rv32") (RVVM4SI "vector_gs_scale_operand_32") (RVVM2SI "vector_gs_scale_operand_32")
+ (RVVM1SI "vector_gs_scale_operand_32") (RVVMF2SI "vector_gs_scale_operand_32")
+
+ (RVVM8SF "vector_gs_scale_operand_32_rv32") (RVVM4SF "vector_gs_scale_operand_32") (RVVM2SF "vector_gs_scale_operand_32")
+ (RVVM1SF "vector_gs_scale_operand_32") (RVVMF2SF "vector_gs_scale_operand_32")
+
+ (RVVM8DI "vector_gs_scale_operand_64") (RVVM4DI "vector_gs_scale_operand_64")
+ (RVVM2DI "vector_gs_scale_operand_64") (RVVM1DI "vector_gs_scale_operand_64")
+
+ (RVVM8DF "vector_gs_scale_operand_64") (RVVM4DF "vector_gs_scale_operand_64")
+ (RVVM2DF "vector_gs_scale_operand_64") (RVVM1DF "vector_gs_scale_operand_64")
])
(define_int_iterator WREDUC [UNSPEC_WREDUC_SUM UNSPEC_WREDUC_USUM])
@@ -83,126 +83,242 @@
;; check. However, we need a default value of SEW for the vsetvl instruction, since
;; there is no field for the ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
- (cond [(eq_attr "mode" "VNx1QI,VNx2QI,VNx4QI,VNx8QI,VNx16QI,VNx32QI,VNx64QI,\
- VNx1BI,VNx2BI,VNx4BI,VNx8BI,VNx16BI,VNx32BI,VNx64BI,\
- VNx128QI,VNx128BI,VNx2x64QI,VNx2x32QI,VNx3x32QI,VNx4x32QI,\
- VNx2x16QI,VNx3x16QI,VNx4x16QI,VNx5x16QI,VNx6x16QI,VNx7x16QI,VNx8x16QI,\
- VNx2x8QI,VNx3x8QI,VNx4x8QI,VNx5x8QI,VNx6x8QI,VNx7x8QI,VNx8x8QI,\
- VNx2x4QI,VNx3x4QI,VNx4x4QI,VNx5x4QI,VNx6x4QI,VNx7x4QI,VNx8x4QI,\
- VNx2x2QI,VNx3x2QI,VNx4x2QI,VNx5x2QI,VNx6x2QI,VNx7x2QI,VNx8x2QI,\
- VNx2x1QI,VNx3x1QI,VNx4x1QI,VNx5x1QI,VNx6x1QI,VNx7x1QI,VNx8x1QI")
+ (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+ RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
+ RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
+ RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
+ RVVM1x6QI,RVVMF2x6QI,RVVMF4x6QI,RVVMF8x6QI,\
+ RVVM1x5QI,RVVMF2x5QI,RVVMF4x5QI,RVVMF8x5QI,\
+ RVVM2x4QI,RVVM1x4QI,RVVMF2x4QI,RVVMF4x4QI,RVVMF8x4QI,\
+ RVVM2x3QI,RVVM1x3QI,RVVMF2x3QI,RVVMF4x3QI,RVVMF8x3QI,\
+ RVVM4x2QI,RVVM2x2QI,RVVM1x2QI,RVVMF2x2QI,RVVMF4x2QI,RVVMF8x2QI")
(const_int 8)
- (eq_attr "mode" "VNx1HI,VNx2HI,VNx4HI,VNx8HI,VNx16HI,VNx32HI,VNx64HI,\
- VNx1HF,VNx2HF,VNx4HF,VNx8HF,VNx16HF,VNx32HF,VNx64HF,\
- VNx2x32HI,VNx2x16HI,VNx3x16HI,VNx4x16HI,\
- VNx2x8HI,VNx3x8HI,VNx4x8HI,VNx5x8HI,VNx6x8HI,VNx7x8HI,VNx8x8HI,\
- VNx2x4HI,VNx3x4HI,VNx4x4HI,VNx5x4HI,VNx6x4HI,VNx7x4HI,VNx8x4HI,\
- VNx2x2HI,VNx3x2HI,VNx4x2HI,VNx5x2HI,VNx6x2HI,VNx7x2HI,VNx8x2HI,\
- VNx2x1HI,VNx3x1HI,VNx4x1HI,VNx5x1HI,VNx6x1HI,VNx7x1HI,VNx8x1HI,\
- VNx2x32HF,VNx2x16HF,VNx3x16HF,VNx4x16HF,\
- VNx2x8HF,VNx3x8HF,VNx4x8HF,VNx5x8HF,VNx6x8HF,VNx7x8HF,VNx8x8HF,\
- VNx2x4HF,VNx3x4HF,VNx4x4HF,VNx5x4HF,VNx6x4HF,VNx7x4HF,VNx8x4HF,\
- VNx2x2HF,VNx3x2HF,VNx4x2HF,VNx5x2HF,VNx6x2HF,VNx7x2HF,VNx8x2HF,\
- VNx2x1HF,VNx3x1HF,VNx4x1HF,VNx5x1HF,VNx6x1HF,VNx7x1HF,VNx8x1HF")
+ (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
+ RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
+ RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
+ RVVM1x6HI,RVVMF2x6HI,RVVMF4x6HI,\
+ RVVM1x5HI,RVVMF2x5HI,RVVMF4x5HI,\
+ RVVM2x4HI,RVVM1x4HI,RVVMF2x4HI,RVVMF4x4HI,\
+ RVVM2x3HI,RVVM1x3HI,RVVMF2x3HI,RVVMF4x3HI,\
+ RVVM4x2HI,RVVM2x2HI,RVVM1x2HI,RVVMF2x2HI,RVVMF4x2HI,\
+ RVVM8HF,RVVM4HF,RVVM2HF,RVVM1HF,RVVMF2HF,RVVMF4HF,\
+ RVVM1x8HF,RVVMF2x8HF,RVVMF4x8HF,\
+ RVVM1x7HF,RVVMF2x7HF,RVVMF4x7HF,\
+ RVVM1x6HF,RVVMF2x6HF,RVVMF4x6HF,\
+ RVVM1x5HF,RVVMF2x5HF,RVVMF4x5HF,\
+ RVVM2x4HF,RVVM1x4HF,RVVMF2x4HF,RVVMF4x4HF,\
+ RVVM2x3HF,RVVM1x3HF,RVVMF2x3HF,RVVMF4x3HF,\
+ RVVM4x2HF,RVVM2x2HF,RVVM1x2HF,RVVMF2x2HF,RVVMF4x2HF")
(const_int 16)
- (eq_attr "mode" "VNx1SI,VNx2SI,VNx4SI,VNx8SI,VNx16SI,VNx32SI,\
- VNx1SF,VNx2SF,VNx4SF,VNx8SF,VNx16SF,VNx32SF,\
- VNx2x16SI,VNx2x8SI,VNx3x8SI,VNx4x8SI,\
- VNx2x4SI,VNx3x4SI,VNx4x4SI,VNx5x4SI,VNx6x4SI,VNx7x4SI,VNx8x4SI,\
- VNx2x2SI,VNx3x2SI,VNx4x2SI,VNx5x2SI,VNx6x2SI,VNx7x2SI,VNx8x2SI,\
- VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,\
- VNx2x16SF,VNx2x8SF,VNx3x8SF,VNx4x8SF,\
- VNx2x4SF,VNx3x4SF,VNx4x4SF,VNx5x4SF,VNx6x4SF,VNx7x4SF,VNx8x4SF,\
- VNx2x2SF,VNx3x2SF,VNx4x2SF,VNx5x2SF,VNx6x2SF,VNx7x2SF,VNx8x2SF,\
- VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF")
+ (eq_attr "mode" "RVVM8SI,RVVM4SI,RVVM2SI,RVVM1SI,RVVMF2SI,\
+ RVVM8SF,RVVM4SF,RVVM2SF,RVVM1SF,RVVMF2SF,\
+ RVVM1x8SI,RVVMF2x8SI,\
+ RVVM1x7SI,RVVMF2x7SI,\
+ RVVM1x6SI,RVVMF2x6SI,\
+ RVVM1x5SI,RVVMF2x5SI,\
+ RVVM2x4SI,RVVM1x4SI,\
+ RVVM2x3SI,RVVM1x3SI,\
+ RVVM4x2SI,RVVM2x2SI,RVVM1x2SI,RVVMF2x2SI,\
+ RVVM1x8SF,RVVMF2x8SF,\
+ RVVM1x7SF,RVVMF2x7SF,\
+ RVVM1x6SF,RVVMF2x6SF,\
+ RVVM1x5SF,RVVMF2x5SF,\
+ RVVM2x4SF,RVVM1x4SF,RVVMF2x4SF,\
+ RVVM2x3SF,RVVM1x3SF,RVVMF2x3SF,\
+ RVVM4x2SF,RVVM2x2SF,RVVM1x2SF,RVVMF2x2SF")
(const_int 32)
- (eq_attr "mode" "VNx1DI,VNx2DI,VNx4DI,VNx8DI,VNx16DI,\
- VNx1DF,VNx2DF,VNx4DF,VNx8DF,VNx16DF,\
- VNx2x8DI,VNx2x4DI,VNx3x4DI,VNx4x4DI,\
- VNx2x2DI,VNx3x2DI,VNx4x2DI,VNx5x2DI,VNx6x2DI,VNx7x2DI,VNx8x2DI,\
- VNx2x1DI,VNx3x1DI,VNx4x1DI,VNx5x1DI,VNx6x1DI,VNx7x1DI,VNx8x1DI,\
- VNx2x8DF,VNx2x4DF,VNx3x4DF,VNx4x4DF,\
- VNx2x2DF,VNx3x2DF,VNx4x2DF,VNx5x2DF,VNx6x2DF,VNx7x2DF,VNx8x2DF,\
- VNx2x1DF,VNx3x1DF,VNx4x1DF,VNx5x1DF,VNx6x1DF,VNx7x1DF,VNx8x1DF")
+ (eq_attr "mode" "RVVM8DI,RVVM4DI,RVVM2DI,RVVM1DI,\
+ RVVM8DF,RVVM4DF,RVVM2DF,RVVM1DF,\
+ RVVM1x8DI,RVVM1x7DI,RVVM1x6DI,RVVM1x5DI,\
+ RVVM2x4DI,RVVM1x4DI,\
+ RVVM2x3DI,RVVM1x3DI,\
+ RVVM4x2DI,RVVM2x2DI,RVVM1x2DI,\
+ RVVM1x8DF,RVVM1x7DF,RVVM1x6DF,RVVM1x5DF,\
+ RVVM2x4DF,RVVM1x4DF,\
+ RVVM2x3DF,RVVM1x3DF,\
+ RVVM4x2DF,RVVM2x2DF,RVVM1x2DF")
(const_int 64)]
(const_int INVALID_ATTRIBUTE)))
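;; As a cross-check on the table above (an editorial worked example): under
;; the new naming, SEW can be read directly off the element-mode suffix,
;; independent of the LMUL prefix and of any tuple NF field:
;;   RVVM8QI   -> element QI -> SEW = 8
;;   RVVMF4HI  -> element HI -> SEW = 16
;;   RVVM2x4SI -> element SI -> SEW = 32 (the x4 NF does not change SEW)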
;; Ditto for LMUL.
(define_attr "vlmul" ""
- (cond [(eq_attr "mode" "VNx1QI,VNx1BI,VNx2x1QI,VNx3x1QI,VNx4x1QI,VNx5x1QI,VNx6x1QI,VNx7x1QI,VNx8x1QI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx1QImode)")
- (eq_attr "mode" "VNx2QI,VNx2BI,VNx2x2QI,VNx3x2QI,VNx4x2QI,VNx5x2QI,VNx6x2QI,VNx7x2QI,VNx8x2QI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx2QImode)")
- (eq_attr "mode" "VNx4QI,VNx4BI,VNx2x4QI,VNx3x4QI,VNx4x4QI,VNx5x4QI,VNx6x4QI,VNx7x4QI,VNx8x4QI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx4QImode)")
- (eq_attr "mode" "VNx8QI,VNx8BI,VNx2x8QI,VNx3x8QI,VNx4x8QI,VNx5x8QI,VNx6x8QI,VNx7x8QI,VNx8x8QI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx8QImode)")
- (eq_attr "mode" "VNx16QI,VNx16BI,VNx2x16QI,VNx3x16QI,VNx4x16QI,VNx5x16QI,VNx6x16QI,VNx7x16QI,VNx8x16QI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx16QImode)")
- (eq_attr "mode" "VNx32QI,VNx32BI,VNx2x32QI,VNx3x32QI,VNx4x32QI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx32QImode)")
- (eq_attr "mode" "VNx64QI,VNx64BI,VNx2x64QI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx64QImode)")
- (eq_attr "mode" "VNx128QI,VNx128BI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx128QImode)")
- (eq_attr "mode" "VNx1HI,VNx2x1HI,VNx3x1HI,VNx4x1HI,VNx5x1HI,VNx6x1HI,VNx7x1HI,VNx8x1HI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx1HImode)")
- (eq_attr "mode" "VNx2HI,VNx2x2HI,VNx3x2HI,VNx4x2HI,VNx5x2HI,VNx6x2HI,VNx7x2HI,VNx8x2HI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx2HImode)")
- (eq_attr "mode" "VNx4HI,VNx2x4HI,VNx3x4HI,VNx4x4HI,VNx5x4HI,VNx6x4HI,VNx7x4HI,VNx8x4HI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx4HImode)")
- (eq_attr "mode" "VNx8HI,VNx2x8HI,VNx3x8HI,VNx4x8HI,VNx5x8HI,VNx6x8HI,VNx7x8HI,VNx8x8HI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx8HImode)")
- (eq_attr "mode" "VNx16HI,VNx2x16HI,VNx3x16HI,VNx4x16HI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx16HImode)")
- (eq_attr "mode" "VNx32HI,VNx2x32HI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx32HImode)")
- (eq_attr "mode" "VNx64HI")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx64HImode)")
-
- ; Half float point
- (eq_attr "mode" "VNx1HF,VNx2x1HF,VNx3x1HF,VNx4x1HF,VNx5x1HF,VNx6x1HF,VNx7x1HF,VNx8x1HF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx1HFmode)")
- (eq_attr "mode" "VNx2HF,VNx2x2HF,VNx3x2HF,VNx4x2HF,VNx5x2HF,VNx6x2HF,VNx7x2HF,VNx8x2HF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx2HFmode)")
- (eq_attr "mode" "VNx4HF,VNx2x4HF,VNx3x4HF,VNx4x4HF,VNx5x4HF,VNx6x4HF,VNx7x4HF,VNx8x4HF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx4HFmode)")
- (eq_attr "mode" "VNx8HF,VNx2x8HF,VNx3x8HF,VNx4x8HF,VNx5x8HF,VNx6x8HF,VNx7x8HF,VNx8x8HF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx8HFmode)")
- (eq_attr "mode" "VNx16HF,VNx2x16HF,VNx3x16HF,VNx4x16HF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx16HFmode)")
- (eq_attr "mode" "VNx32HF,VNx2x32HF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx32HFmode)")
- (eq_attr "mode" "VNx64HF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx64HFmode)")
-
- (eq_attr "mode" "VNx1SI,VNx1SF,VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,\
- VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx1SImode)")
- (eq_attr "mode" "VNx2SI,VNx2SF,VNx2x2SI,VNx3x2SI,VNx4x2SI,VNx5x2SI,VNx6x2SI,VNx7x2SI,VNx8x2SI,\
- VNx2x2SF,VNx3x2SF,VNx4x2SF,VNx5x2SF,VNx6x2SF,VNx7x2SF,VNx8x2SF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx2SImode)")
- (eq_attr "mode" "VNx4SI,VNx4SF,VNx2x4SI,VNx3x4SI,VNx4x4SI,VNx5x4SI,VNx6x4SI,VNx7x4SI,VNx8x4SI,\
- VNx2x4SF,VNx3x4SF,VNx4x4SF,VNx5x4SF,VNx6x4SF,VNx7x4SF,VNx8x4SF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx4SImode)")
- (eq_attr "mode" "VNx8SI,VNx8SF,VNx2x8SI,VNx3x8SI,VNx4x8SI,VNx2x8SF,VNx3x8SF,VNx4x8SF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx8SImode)")
- (eq_attr "mode" "VNx16SI,VNx16SF,VNx2x16SI,VNx2x16SF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx16SImode)")
- (eq_attr "mode" "VNx32SI,VNx32SF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx32SImode)")
- (eq_attr "mode" "VNx1DI,VNx1DF,VNx2x1DI,VNx3x1DI,VNx4x1DI,VNx5x1DI,VNx6x1DI,VNx7x1DI,VNx8x1DI,\
- VNx2x1DF,VNx3x1DF,VNx4x1DF,VNx5x1DF,VNx6x1DF,VNx7x1DF,VNx8x1DF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx1DImode)")
- (eq_attr "mode" "VNx2DI,VNx2DF,VNx2x2DI,VNx3x2DI,VNx4x2DI,VNx5x2DI,VNx6x2DI,VNx7x2DI,VNx8x2DI,\
- VNx2x2DF,VNx3x2DF,VNx4x2DF,VNx5x2DF,VNx6x2DF,VNx7x2DF,VNx8x2DF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx2DImode)")
- (eq_attr "mode" "VNx4DI,VNx4DF,VNx2x4DI,VNx3x4DI,VNx4x4DI,VNx2x4DF,VNx3x4DF,VNx4x4DF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx4DImode)")
- (eq_attr "mode" "VNx8DI,VNx8DF,VNx2x8DI,VNx2x8DF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx8DImode)")
- (eq_attr "mode" "VNx16DI,VNx16DF")
- (symbol_ref "riscv_vector::get_vlmul(E_VNx16DImode)")]
+ (cond [(eq_attr "mode" "RVVM8QI,RVVM1BI") (symbol_ref "riscv_vector::LMUL_8")
+ (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
+ (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1HI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2HI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4HI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM8HF") (symbol_ref "riscv_vector::LMUL_8")
+ (eq_attr "mode" "RVVM4HF") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2HF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1HF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2HF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4HF") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM8SI") (symbol_ref "riscv_vector::LMUL_8")
+ (eq_attr "mode" "RVVM4SI") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2SI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1SI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2SI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM8SF") (symbol_ref "riscv_vector::LMUL_8")
+ (eq_attr "mode" "RVVM4SF") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2SF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1SF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2SF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM8DI") (symbol_ref "riscv_vector::LMUL_8")
+ (eq_attr "mode" "RVVM4DI") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2DI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1DI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM8DF") (symbol_ref "riscv_vector::LMUL_8")
+ (eq_attr "mode" "RVVM4DF") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2DF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1DF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM1x8QI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x8QI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x8QI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8x8QI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVM1x7QI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x7QI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x7QI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8x7QI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVM1x6QI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x6QI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x6QI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8x6QI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVM1x5QI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x5QI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x5QI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8x5QI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVM2x4QI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x4QI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x4QI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x4QI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8x4QI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVM2x3QI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x3QI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x3QI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x3QI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8x3QI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVM4x2QI") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2x2QI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x2QI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x2QI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x2QI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8x2QI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVM1x8HI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x8HI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x8HI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM1x7HI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x7HI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x7HI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM1x6HI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x6HI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x6HI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM1x5HI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x5HI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x5HI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM2x4HI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x4HI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x4HI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x4HI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM2x3HI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x3HI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x3HI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x3HI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM4x2HI") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2x2HI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x2HI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x2HI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x2HI") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM1x8HF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x8HF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x8HF") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM1x7HF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x7HF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x7HF") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM1x6HF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x6HF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x6HF") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM1x5HF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x5HF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x5HF") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM2x4HF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x4HF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x4HF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x4HF") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM2x3HF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x3HF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x3HF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x3HF") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM4x2HF") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2x2HF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x2HF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x2HF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4x2HF") (symbol_ref "riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVM1x8SI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x8SI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM1x7SI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x7SI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM1x6SI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x6SI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM1x5SI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x5SI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM2x4SI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x4SI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x4SI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM2x3SI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x3SI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x3SI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM4x2SI") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2x2SI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x2SI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x2SI") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM1x8SF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x8SF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM1x7SF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x7SF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM1x6SF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x6SF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM1x5SF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x5SF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM2x4SF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x4SF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x4SF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM2x3SF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x3SF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x3SF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM4x2SF") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2x2SF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x2SF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVMF2x2SF") (symbol_ref "riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVM1x8DI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM1x7DI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM1x6DI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM1x5DI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM2x4DI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x4DI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM2x3DI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x3DI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM4x2DI") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2x2DI") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x2DI") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM1x8DF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM1x7DF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM1x6DF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM1x5DF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM2x4DF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x4DF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM2x3DF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x3DF") (symbol_ref "riscv_vector::LMUL_1")
+ (eq_attr "mode" "RVVM4x2DF") (symbol_ref "riscv_vector::LMUL_4")
+ (eq_attr "mode" "RVVM2x2DF") (symbol_ref "riscv_vector::LMUL_2")
+ (eq_attr "mode" "RVVM1x2DF") (symbol_ref "riscv_vector::LMUL_1")]
(const_int INVALID_ATTRIBUTE)))
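;; Likewise, LMUL is encoded in the mode-name prefix (a worked example
;; against the table above):
;;   RVVM4HI    -> LMUL = 4   -> riscv_vector::LMUL_4
;;   RVVMF2SI   -> LMUL = 1/2 -> riscv_vector::LMUL_F2
;;   RVVMF4x7HI -> LMUL = 1/4 -> riscv_vector::LMUL_F4 (x7 only sets NF)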
;; It is valid for instructions that require the SEW/LMUL ratio.
@@ -222,80 +338,183 @@
vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
(const_int INVALID_ATTRIBUTE)
- (eq_attr "mode" "VNx1QI,VNx1BI,VNx2x1QI,VNx3x1QI,VNx4x1QI,VNx5x1QI,VNx6x1QI,VNx7x1QI,VNx8x1QI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx1QImode)")
- (eq_attr "mode" "VNx2QI,VNx2BI,VNx2x2QI,VNx3x2QI,VNx4x2QI,VNx5x2QI,VNx6x2QI,VNx7x2QI,VNx8x2QI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx2QImode)")
- (eq_attr "mode" "VNx4QI,VNx4BI,VNx2x4QI,VNx3x4QI,VNx4x4QI,VNx5x4QI,VNx6x4QI,VNx7x4QI,VNx8x4QI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx4QImode)")
- (eq_attr "mode" "VNx8QI,VNx8BI,VNx2x8QI,VNx3x8QI,VNx4x8QI,VNx5x8QI,VNx6x8QI,VNx7x8QI,VNx8x8QI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx8QImode)")
- (eq_attr "mode" "VNx16QI,VNx16BI,VNx2x16QI,VNx3x16QI,VNx4x16QI,VNx5x16QI,VNx6x16QI,VNx7x16QI,VNx8x16QI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx16QImode)")
- (eq_attr "mode" "VNx32QI,VNx32BI,VNx2x32QI,VNx3x32QI,VNx4x32QI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx32QImode)")
- (eq_attr "mode" "VNx64QI,VNx64BI,VNx2x64QI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx64QImode)")
- (eq_attr "mode" "VNx128QI,VNx128BI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx128QImode)")
- (eq_attr "mode" "VNx1HI,VNx2x1HI,VNx3x1HI,VNx4x1HI,VNx5x1HI,VNx6x1HI,VNx7x1HI,VNx8x1HI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx1HImode)")
- (eq_attr "mode" "VNx2HI,VNx2x2HI,VNx3x2HI,VNx4x2HI,VNx5x2HI,VNx6x2HI,VNx7x2HI,VNx8x2HI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx2HImode)")
- (eq_attr "mode" "VNx4HI,VNx2x4HI,VNx3x4HI,VNx4x4HI,VNx5x4HI,VNx6x4HI,VNx7x4HI,VNx8x4HI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx4HImode)")
- (eq_attr "mode" "VNx8HI,VNx2x8HI,VNx3x8HI,VNx4x8HI,VNx5x8HI,VNx6x8HI,VNx7x8HI,VNx8x8HI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx8HImode)")
- (eq_attr "mode" "VNx16HI,VNx2x16HI,VNx3x16HI,VNx4x16HI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx16HImode)")
- (eq_attr "mode" "VNx32HI,VNx2x32HI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx32HImode)")
- (eq_attr "mode" "VNx64HI")
- (symbol_ref "riscv_vector::get_ratio(E_VNx64HImode)")
-
- ; Half float point.
- (eq_attr "mode" "VNx1HF,VNx2x1HF,VNx3x1HF,VNx4x1HF,VNx5x1HF,VNx6x1HF,VNx7x1HF,VNx8x1HF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx1HFmode)")
- (eq_attr "mode" "VNx2HF,VNx2x2HF,VNx3x2HF,VNx4x2HF,VNx5x2HF,VNx6x2HF,VNx7x2HF,VNx8x2HF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx2HFmode)")
- (eq_attr "mode" "VNx4HF,VNx2x4HF,VNx3x4HF,VNx4x4HF,VNx5x4HF,VNx6x4HF,VNx7x4HF,VNx8x4HF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx4HFmode)")
- (eq_attr "mode" "VNx8HF,VNx2x8HF,VNx3x8HF,VNx4x8HF,VNx5x8HF,VNx6x8HF,VNx7x8HF,VNx8x8HF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx8HFmode)")
- (eq_attr "mode" "VNx16HF,VNx2x16HF,VNx3x16HF,VNx4x16HF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx16HFmode)")
- (eq_attr "mode" "VNx32HF,VNx2x32HF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx32HFmode)")
- (eq_attr "mode" "VNx64HF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx64HFmode)")
-
- (eq_attr "mode" "VNx1SI,VNx1SF,VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,\
- VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx1SImode)")
- (eq_attr "mode" "VNx2SI,VNx2SF,VNx2x2SI,VNx3x2SI,VNx4x2SI,VNx5x2SI,VNx6x2SI,VNx7x2SI,VNx8x2SI,\
- VNx2x2SF,VNx3x2SF,VNx4x2SF,VNx5x2SF,VNx6x2SF,VNx7x2SF,VNx8x2SF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx2SImode)")
- (eq_attr "mode" "VNx4SI,VNx4SF,VNx2x4SI,VNx3x4SI,VNx4x4SI,VNx5x4SI,VNx6x4SI,VNx7x4SI,VNx8x4SI,\
- VNx2x4SF,VNx3x4SF,VNx4x4SF,VNx5x4SF,VNx6x4SF,VNx7x4SF,VNx8x4SF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx4SImode)")
- (eq_attr "mode" "VNx8SI,VNx8SF,VNx2x8SI,VNx3x8SI,VNx4x8SI,VNx2x8SF,VNx3x8SF,VNx4x8SF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx8SImode)")
- (eq_attr "mode" "VNx16SI,VNx16SF,VNx2x16SI,VNx2x16SF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx16SImode)")
- (eq_attr "mode" "VNx32SI,VNx32SF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx32SImode)")
- (eq_attr "mode" "VNx1DI,VNx1DF,VNx2x1DI,VNx3x1DI,VNx4x1DI,VNx5x1DI,VNx6x1DI,VNx7x1DI,VNx8x1DI,\
- VNx2x1DF,VNx3x1DF,VNx4x1DF,VNx5x1DF,VNx6x1DF,VNx7x1DF,VNx8x1DF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx1DImode)")
- (eq_attr "mode" "VNx2DI,VNx2DF,VNx2x2DI,VNx3x2DI,VNx4x2DI,VNx5x2DI,VNx6x2DI,VNx7x2DI,VNx8x2DI,\
- VNx2x2DF,VNx3x2DF,VNx4x2DF,VNx5x2DF,VNx6x2DF,VNx7x2DF,VNx8x2DF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx2DImode)")
- (eq_attr "mode" "VNx4DI,VNx4DF,VNx2x4DI,VNx3x4DI,VNx4x4DI,VNx2x4DF,VNx3x4DF,VNx4x4DF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx4DImode)")
- (eq_attr "mode" "VNx8DI,VNx8DF,VNx2x8DI,VNx2x8DF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx8DImode)")
- (eq_attr "mode" "VNx16DI,VNx16DF")
- (symbol_ref "riscv_vector::get_ratio(E_VNx16DImode)")]
+ (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
+ (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
+ (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
+ (eq_attr "mode" "RVVM1QI,RVVMF8BI") (const_int 8)
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (const_int 16)
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (const_int 32)
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (const_int 64)
+ (eq_attr "mode" "RVVM8HI") (const_int 2)
+ (eq_attr "mode" "RVVM4HI") (const_int 4)
+ (eq_attr "mode" "RVVM2HI") (const_int 8)
+ (eq_attr "mode" "RVVM1HI") (const_int 16)
+ (eq_attr "mode" "RVVMF2HI") (const_int 32)
+ (eq_attr "mode" "RVVMF4HI") (const_int 64)
+ (eq_attr "mode" "RVVM8HF") (const_int 2)
+ (eq_attr "mode" "RVVM4HF") (const_int 4)
+ (eq_attr "mode" "RVVM2HF") (const_int 8)
+ (eq_attr "mode" "RVVM1HF") (const_int 16)
+ (eq_attr "mode" "RVVMF2HF") (const_int 32)
+ (eq_attr "mode" "RVVMF4HF") (const_int 64)
+ (eq_attr "mode" "RVVM8SI") (const_int 4)
+ (eq_attr "mode" "RVVM4SI") (const_int 8)
+ (eq_attr "mode" "RVVM2SI") (const_int 16)
+ (eq_attr "mode" "RVVM1SI") (const_int 32)
+ (eq_attr "mode" "RVVMF2SI") (const_int 64)
+ (eq_attr "mode" "RVVM8SF") (const_int 4)
+ (eq_attr "mode" "RVVM4SF") (const_int 8)
+ (eq_attr "mode" "RVVM2SF") (const_int 16)
+ (eq_attr "mode" "RVVM1SF") (const_int 32)
+ (eq_attr "mode" "RVVMF2SF") (const_int 64)
+ (eq_attr "mode" "RVVM8DI") (const_int 8)
+ (eq_attr "mode" "RVVM4DI") (const_int 16)
+ (eq_attr "mode" "RVVM2DI") (const_int 32)
+ (eq_attr "mode" "RVVM1DI") (const_int 64)
+ (eq_attr "mode" "RVVM8DF") (const_int 8)
+ (eq_attr "mode" "RVVM4DF") (const_int 16)
+ (eq_attr "mode" "RVVM2DF") (const_int 32)
+ (eq_attr "mode" "RVVM1DF") (const_int 64)
+ (eq_attr "mode" "RVVM1x8QI") (const_int 8)
+ (eq_attr "mode" "RVVMF2x8QI") (const_int 16)
+ (eq_attr "mode" "RVVMF4x8QI") (const_int 32)
+ (eq_attr "mode" "RVVMF8x8QI") (const_int 64)
+ (eq_attr "mode" "RVVM1x7QI") (const_int 8)
+ (eq_attr "mode" "RVVMF2x7QI") (const_int 16)
+ (eq_attr "mode" "RVVMF4x7QI") (const_int 32)
+ (eq_attr "mode" "RVVMF8x7QI") (const_int 64)
+ (eq_attr "mode" "RVVM1x6QI") (const_int 8)
+ (eq_attr "mode" "RVVMF2x6QI") (const_int 16)
+ (eq_attr "mode" "RVVMF4x6QI") (const_int 32)
+ (eq_attr "mode" "RVVMF8x6QI") (const_int 64)
+ (eq_attr "mode" "RVVM1x5QI") (const_int 8)
+ (eq_attr "mode" "RVVMF2x5QI") (const_int 16)
+ (eq_attr "mode" "RVVMF4x5QI") (const_int 32)
+ (eq_attr "mode" "RVVMF8x5QI") (const_int 64)
+ (eq_attr "mode" "RVVM2x4QI") (const_int 4)
+ (eq_attr "mode" "RVVM1x4QI") (const_int 8)
+ (eq_attr "mode" "RVVMF2x4QI") (const_int 16)
+ (eq_attr "mode" "RVVMF4x4QI") (const_int 32)
+ (eq_attr "mode" "RVVMF8x4QI") (const_int 64)
+ (eq_attr "mode" "RVVM2x3QI") (const_int 4)
+ (eq_attr "mode" "RVVM1x3QI") (const_int 8)
+ (eq_attr "mode" "RVVMF2x3QI") (const_int 16)
+ (eq_attr "mode" "RVVMF4x3QI") (const_int 32)
+ (eq_attr "mode" "RVVMF8x3QI") (const_int 64)
+ (eq_attr "mode" "RVVM4x2QI") (const_int 2)
+ (eq_attr "mode" "RVVM2x2QI") (const_int 4)
+ (eq_attr "mode" "RVVM1x2QI") (const_int 8)
+ (eq_attr "mode" "RVVMF2x2QI") (const_int 16)
+ (eq_attr "mode" "RVVMF4x2QI") (const_int 32)
+ (eq_attr "mode" "RVVMF8x2QI") (const_int 64)
+ (eq_attr "mode" "RVVM1x8HI") (const_int 16)
+ (eq_attr "mode" "RVVMF2x8HI") (const_int 32)
+ (eq_attr "mode" "RVVMF4x8HI") (const_int 64)
+ (eq_attr "mode" "RVVM1x7HI") (const_int 16)
+ (eq_attr "mode" "RVVMF2x7HI") (const_int 32)
+ (eq_attr "mode" "RVVMF4x7HI") (const_int 64)
+ (eq_attr "mode" "RVVM1x6HI") (const_int 16)
+ (eq_attr "mode" "RVVMF2x6HI") (const_int 32)
+ (eq_attr "mode" "RVVMF4x6HI") (const_int 64)
+ (eq_attr "mode" "RVVM1x5HI") (const_int 16)
+ (eq_attr "mode" "RVVMF2x5HI") (const_int 32)
+ (eq_attr "mode" "RVVMF4x5HI") (const_int 64)
+ (eq_attr "mode" "RVVM2x4HI") (const_int 8)
+ (eq_attr "mode" "RVVM1x4HI") (const_int 16)
+ (eq_attr "mode" "RVVMF2x4HI") (const_int 32)
+ (eq_attr "mode" "RVVMF4x4HI") (const_int 64)
+ (eq_attr "mode" "RVVM2x3HI") (const_int 8)
+ (eq_attr "mode" "RVVM1x3HI") (const_int 16)
+ (eq_attr "mode" "RVVMF2x3HI") (const_int 32)
+ (eq_attr "mode" "RVVMF4x3HI") (const_int 64)
+ (eq_attr "mode" "RVVM4x2HI") (const_int 4)
+ (eq_attr "mode" "RVVM2x2HI") (const_int 8)
+ (eq_attr "mode" "RVVM1x2HI") (const_int 16)
+ (eq_attr "mode" "RVVMF2x2HI") (const_int 32)
+ (eq_attr "mode" "RVVMF4x2HI") (const_int 64)
+ (eq_attr "mode" "RVVM1x8HF") (const_int 16)
+ (eq_attr "mode" "RVVMF2x8HF") (const_int 32)
+ (eq_attr "mode" "RVVMF4x8HF") (const_int 64)
+ (eq_attr "mode" "RVVM1x7HF") (const_int 16)
+ (eq_attr "mode" "RVVMF2x7HF") (const_int 32)
+ (eq_attr "mode" "RVVMF4x7HF") (const_int 64)
+ (eq_attr "mode" "RVVM1x6HF") (const_int 16)
+ (eq_attr "mode" "RVVMF2x6HF") (const_int 32)
+ (eq_attr "mode" "RVVMF4x6HF") (const_int 64)
+ (eq_attr "mode" "RVVM1x5HF") (const_int 16)
+ (eq_attr "mode" "RVVMF2x5HF") (const_int 32)
+ (eq_attr "mode" "RVVMF4x5HF") (const_int 64)
+ (eq_attr "mode" "RVVM2x4HF") (const_int 8)
+ (eq_attr "mode" "RVVM1x4HF") (const_int 16)
+ (eq_attr "mode" "RVVMF2x4HF") (const_int 32)
+ (eq_attr "mode" "RVVMF4x4HF") (const_int 64)
+ (eq_attr "mode" "RVVM2x3HF") (const_int 8)
+ (eq_attr "mode" "RVVM1x3HF") (const_int 16)
+ (eq_attr "mode" "RVVMF2x3HF") (const_int 32)
+ (eq_attr "mode" "RVVMF4x3HF") (const_int 64)
+ (eq_attr "mode" "RVVM4x2HF") (const_int 4)
+ (eq_attr "mode" "RVVM2x2HF") (const_int 8)
+ (eq_attr "mode" "RVVM1x2HF") (const_int 16)
+ (eq_attr "mode" "RVVMF2x2HF") (const_int 32)
+ (eq_attr "mode" "RVVMF4x2HF") (const_int 64)
+ (eq_attr "mode" "RVVM1x8SI") (const_int 32)
+ (eq_attr "mode" "RVVMF2x8SI") (const_int 64)
+ (eq_attr "mode" "RVVM1x7SI") (const_int 32)
+ (eq_attr "mode" "RVVMF2x7SI") (const_int 64)
+ (eq_attr "mode" "RVVM1x6SI") (const_int 32)
+ (eq_attr "mode" "RVVMF2x6SI") (const_int 64)
+ (eq_attr "mode" "RVVM1x5SI") (const_int 32)
+ (eq_attr "mode" "RVVMF2x5SI") (const_int 64)
+ (eq_attr "mode" "RVVM2x4SI") (const_int 16)
+ (eq_attr "mode" "RVVM1x4SI") (const_int 32)
+ (eq_attr "mode" "RVVMF2x4SI") (const_int 64)
+ (eq_attr "mode" "RVVM2x3SI") (const_int 16)
+ (eq_attr "mode" "RVVM1x3SI") (const_int 32)
+ (eq_attr "mode" "RVVMF2x3SI") (const_int 64)
+ (eq_attr "mode" "RVVM4x2SI") (const_int 8)
+ (eq_attr "mode" "RVVM2x2SI") (const_int 16)
+ (eq_attr "mode" "RVVM1x2SI") (const_int 32)
+ (eq_attr "mode" "RVVMF2x2SI") (const_int 64)
+ (eq_attr "mode" "RVVM1x8SF") (const_int 32)
+ (eq_attr "mode" "RVVMF2x8SF") (const_int 64)
+ (eq_attr "mode" "RVVM1x7SF") (const_int 32)
+ (eq_attr "mode" "RVVMF2x7SF") (const_int 64)
+ (eq_attr "mode" "RVVM1x6SF") (const_int 32)
+ (eq_attr "mode" "RVVMF2x6SF") (const_int 64)
+ (eq_attr "mode" "RVVM1x5SF") (const_int 32)
+ (eq_attr "mode" "RVVMF2x5SF") (const_int 64)
+ (eq_attr "mode" "RVVM2x4SF") (const_int 16)
+ (eq_attr "mode" "RVVM1x4SF") (const_int 32)
+ (eq_attr "mode" "RVVMF2x4SF") (const_int 64)
+ (eq_attr "mode" "RVVM2x3SF") (const_int 16)
+ (eq_attr "mode" "RVVM1x3SF") (const_int 32)
+ (eq_attr "mode" "RVVMF2x3SF") (const_int 64)
+ (eq_attr "mode" "RVVM4x2SF") (const_int 8)
+ (eq_attr "mode" "RVVM2x2SF") (const_int 16)
+ (eq_attr "mode" "RVVM1x2SF") (const_int 32)
+ (eq_attr "mode" "RVVMF2x2SF") (const_int 64)
+ (eq_attr "mode" "RVVM1x8DI") (const_int 64)
+ (eq_attr "mode" "RVVM1x7DI") (const_int 64)
+ (eq_attr "mode" "RVVM1x6DI") (const_int 64)
+ (eq_attr "mode" "RVVM1x5DI") (const_int 64)
+ (eq_attr "mode" "RVVM2x4DI") (const_int 32)
+ (eq_attr "mode" "RVVM1x4DI") (const_int 64)
+ (eq_attr "mode" "RVVM2x3DI") (const_int 32)
+ (eq_attr "mode" "RVVM1x3DI") (const_int 64)
+ (eq_attr "mode" "RVVM4x2DI") (const_int 16)
+ (eq_attr "mode" "RVVM2x2DI") (const_int 32)
+ (eq_attr "mode" "RVVM1x2DI") (const_int 64)
+ (eq_attr "mode" "RVVM1x8DF") (const_int 64)
+ (eq_attr "mode" "RVVM1x7DF") (const_int 64)
+ (eq_attr "mode" "RVVM1x6DF") (const_int 64)
+ (eq_attr "mode" "RVVM1x5DF") (const_int 64)
+ (eq_attr "mode" "RVVM2x4DF") (const_int 32)
+ (eq_attr "mode" "RVVM1x4DF") (const_int 64)
+ (eq_attr "mode" "RVVM2x3DF") (const_int 32)
+ (eq_attr "mode" "RVVM1x3DF") (const_int 64)
+ (eq_attr "mode" "RVVM4x2DF") (const_int 16)
+ (eq_attr "mode" "RVVM2x2DF") (const_int 32)
+ (eq_attr "mode" "RVVM1x2DF") (const_int 64)]
(const_int INVALID_ATTRIBUTE)))
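;; The ratio above is simply SEW / LMUL (a worked example against the
;; tables above):
;;   RVVM8QI:   SEW = 8,  LMUL = 8   -> ratio = 1
;;   RVVM2x4HI: SEW = 16, LMUL = 2   -> ratio = 8
;;   RVVMF2SI:  SEW = 32, LMUL = 1/2 -> ratio = 64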
;; The index into operand[] that gives the merge op.
@@ -648,8 +867,8 @@
vector load/store.
For example:
- [(set (match_operand:VNx1QI v24)
- (match_operand:VNx1QI (mem: a4)))
+ [(set (match_operand:RVVMF8QI v24)
+ (match_operand:RVVMF8QI (mem: a4)))
(clobber (scratch:SI a5))]
====>> vsetvl a5,zero,e8,mf8
====>> vle8.v v24,(a4)
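;; A source-level sketch that could produce a sequence like the one above
;; (illustrative only; the function name is made up, the intrinsics are the
;; standard ones from <riscv_vector.h>):
;;   #include <riscv_vector.h>
;;   vint8mf8_t load_mf8 (const int8_t *p)
;;   {
;;     size_t vl = __riscv_vsetvlmax_e8mf8 ();  /* vsetvli a5,zero,e8,mf8 */
;;     return __riscv_vle8_v_i8mf8 (p, vl);     /* vle8.v  v24,(a4)       */
;;   }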
@@ -682,22 +901,22 @@
;; create unexpected patterns in LRA.
;; For example:
;; ira rtl:
-;; (insn 20 19 9 2 (set (reg/v:VNx2QI 97 v1 [ v1 ])
-;; (reg:VNx2QI 134 [ _1 ])) "rvv.c":9:22 571 {*movvnx2qi_fract}
+;; (insn 20 19 9 2 (set (reg/v:RVVMF4QI 97 v1 [ v1 ])
+;; (reg:RVVMF4QI 134 [ _1 ])) "rvv.c":9:22 571 {*movvnx2qi_fract}
;; (nil))
;; When the value of pseudo register 134 in the insn above is found to be
;; already spilled to memory during LRA,
;; LRA will reload this pattern into a memory load instruction pattern.
-;; Because VNx2QI is a fractional vector, we want LRA reload this pattern into
+;; Because RVVMF4QI is a fractional vector, we want LRA to reload this pattern into
;; (insn 20 19 9 2 (parallel [
-;; (set (reg:VNx2QI 98 v2 [orig:134 _1 ] [134])
-;; (mem/c:VNx2QI (reg:SI 13 a3 [155]) [1 %sfp+[-2, -2] S[2, 2] A8]))
+;; (set (reg:RVVMF4QI 98 v2 [orig:134 _1 ] [134])
+;; (mem/c:RVVMF4QI (reg:SI 13 a3 [155]) [1 %sfp+[-2, -2] S[2, 2] A8]))
;; (clobber (reg:SI 14 a4 [149]))])
;; That way we can emit the vsetvl instruction using the clobbered scratch register a4.
;; To let LRA generate the expected pattern, we should exclude fractional vector
;; load/store in "*mov<mode>_whole". Otherwise, it will reload this pattern into:
-;; (insn 20 19 9 2 (set (reg:VNx2QI 98 v2 [orig:134 _1 ] [134])
-;; (mem/c:VNx2QI (reg:SI 13 a3 [155]) [1 %sfp+[-2, -2] S[2, 2] A8])))
+;; (insn 20 19 9 2 (set (reg:RVVMF4QI 98 v2 [orig:134 _1 ] [134])
+;; (mem/c:RVVMF4QI (reg:SI 13 a3 [155]) [1 %sfp+[-2, -2] S[2, 2] A8])))
;; which is not the pattern we want.
;; Given the facts above, "*mov<mode>_whole" includes loads/stores/moves for whole
;; vector modes according to '-march', while "*mov<mode>_fract" only includes fractional vector modes.
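;; (A hedged sketch of the practical contrast: a whole-register mode can be
;; spilled with vs1r.v/vl1r.v, which need no vl/vtype state, whereas a
;; fractional mode such as RVVMF4QI has to be reloaded as roughly
;;   vsetvli a4,zero,e8,mf4
;;   vle8.v  v2,(a3)
;; which is why the reloaded pattern clobbers a scratch GPR.)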
@@ -1090,28 +1309,28 @@
;; constraint alternative 3 matches vmv.v.v.
;; constraint alternative 4 matches vmv.v.i.
;; For vmv.v.i, we allow the following 2 cases:
-;; 1. (const_vector:VNx1QI repeat [
+;; 1. (const_vector:RVVMF8QI repeat [
;; (const_int:QI N)]), -15 <= N < 16.
-;; 2. (const_vector:VNx1SF repeat [
+;; 2. (const_vector:RVVMF2SF repeat [
;; (const_double:SF 0.0 [0x0.0p+0])]).
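;; At the source level the two cases correspond roughly to (illustrative
;; only; assumes the standard RVV intrinsics):
;;   __riscv_vmv_v_x_i8mf8 (5, vl);       /* case 1: vmv.v.i v, 5 */
;;   __riscv_vfmv_v_f_f32mf2 (0.0f, vl);  /* case 2: vmv.v.i v, 0 */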
;; We add "MEM_P (operands[0]) || MEM_P (operands[3]) || CONST_VECTOR_P (operands[1])" here to
;; make sure CSE does not generate the following pattern:
-;; (insn 17 8 19 2 (set (reg:VNx1HI 134 [ _1 ])
-;; (if_then_else:VNx1HI (unspec:VNx1BI [
-;; (reg/v:VNx1BI 137 [ mask ])
+;; (insn 17 8 19 2 (set (reg:RVVMF4HI 134 [ _1 ])
+;; (if_then_else:RVVMF4HI (unspec:RVVM1BI [
+;; (reg/v:RVVM1BI 137 [ mask ])
;; (reg:DI 151)
;; (const_int 0 [0]) repeated x3
;; (reg:SI 66 vl)
;; (reg:SI 67 vtype)
;; ] UNSPEC_VPREDICATE)
-;; (const_vector:VNx1HI repeat [
+;; (const_vector:RVVMF4HI repeat [
;; (const_int 0 [0])
;; ])
-;; (reg/v:VNx1HI 140 [ merge ]))) "rvv.c":8:12 608 {pred_movvnx1hi}
+;; (reg/v:RVVMF4HI 140 [ merge ]))) "rvv.c":8:12 608 {pred_movvnx1hi}
;; (expr_list:REG_DEAD (reg:DI 151)
-;; (expr_list:REG_DEAD (reg/v:VNx1HI 140 [ merge ])
-;; (expr_list:REG_DEAD (reg/v:VNx1BI 137 [ mask ])
+;; (expr_list:REG_DEAD (reg/v:RVVMF4HI 140 [ merge ])
+;; (expr_list:REG_DEAD (reg/v:RVVM1BI 137 [ mask ])
;; (nil)))))
;; This is because neither vmv.v.v nor vmv.v.i has a mask operand.
(define_insn_and_split "@pred_mov<mode>"
@@ -1713,7 +1932,7 @@
[(set_attr "type" "vld<order>x")
(set_attr "mode" "<MODE>")])
-(define_insn "@pred_indexed_<order>store<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
+(define_insn "@pred_indexed_<order>store<RATIO64:mode><RATIO64I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -1723,14 +1942,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:VNX1_QHSDI 2 "register_operand" " vr")
- (match_operand:VNX1_QHSD 3 "register_operand" " vr")] ORDER))]
+ (match_operand:RATIO64I 2 "register_operand" " vr")
+ (match_operand:RATIO64 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xei<VNX1_QHSDI:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xei<RATIO64I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vst<order>x")
- (set_attr "mode" "<VNX1_QHSD:MODE>")])
+ (set_attr "mode" "<RATIO64:MODE>")])
-(define_insn "@pred_indexed_<order>store<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
+(define_insn "@pred_indexed_<order>store<RATIO32:mode><RATIO32I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -1740,14 +1959,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:VNX2_QHSDI 2 "register_operand" " vr")
- (match_operand:VNX2_QHSD 3 "register_operand" " vr")] ORDER))]
+ (match_operand:RATIO32I 2 "register_operand" " vr")
+ (match_operand:RATIO32 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xei<VNX2_QHSDI:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xei<RATIO32I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vst<order>x")
- (set_attr "mode" "<VNX2_QHSD:MODE>")])
+ (set_attr "mode" "<RATIO32:MODE>")])
-(define_insn "@pred_indexed_<order>store<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
+(define_insn "@pred_indexed_<order>store<RATIO16:mode><RATIO16I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -1757,14 +1976,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:VNX4_QHSDI 2 "register_operand" " vr")
- (match_operand:VNX4_QHSD 3 "register_operand" " vr")] ORDER))]
+ (match_operand:RATIO16I 2 "register_operand" " vr")
+ (match_operand:RATIO16 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xei<VNX4_QHSDI:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xei<RATIO16I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vst<order>x")
- (set_attr "mode" "<VNX4_QHSD:MODE>")])
+ (set_attr "mode" "<RATIO16:MODE>")])
-(define_insn "@pred_indexed_<order>store<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
+(define_insn "@pred_indexed_<order>store<RATIO8:mode><RATIO8I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -1774,14 +1993,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:VNX8_QHSDI 2 "register_operand" " vr")
- (match_operand:VNX8_QHSD 3 "register_operand" " vr")] ORDER))]
+ (match_operand:RATIO8I 2 "register_operand" " vr")
+ (match_operand:RATIO8 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xei<VNX8_QHSDI:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xei<RATIO8I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vst<order>x")
- (set_attr "mode" "<VNX8_QHSD:MODE>")])
+ (set_attr "mode" "<RATIO8:MODE>")])
-(define_insn "@pred_indexed_<order>store<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
+(define_insn "@pred_indexed_<order>store<RATIO4:mode><RATIO4I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -1791,14 +2010,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:VNX16_QHSDI 2 "register_operand" " vr")
- (match_operand:VNX16_QHSD 3 "register_operand" " vr")] ORDER))]
+ (match_operand:RATIO4I 2 "register_operand" " vr")
+ (match_operand:RATIO4 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xei<VNX16_QHSDI:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xei<RATIO4I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vst<order>x")
- (set_attr "mode" "<VNX16_QHSD:MODE>")])
+ (set_attr "mode" "<RATIO4:MODE>")])
-(define_insn "@pred_indexed_<order>store<VNX32_QHS:mode><VNX32_QHSI:mode>"
+(define_insn "@pred_indexed_<order>store<RATIO2:mode><RATIO2I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -1808,14 +2027,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:VNX32_QHSI 2 "register_operand" " vr")
- (match_operand:VNX32_QHS 3 "register_operand" " vr")] ORDER))]
+ (match_operand:RATIO2I 2 "register_operand" " vr")
+ (match_operand:RATIO2 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xei<VNX32_QHSI:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xei<RATIO2I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vst<order>x")
- (set_attr "mode" "<VNX32_QHS:MODE>")])
+ (set_attr "mode" "<RATIO2:MODE>")])
-(define_insn "@pred_indexed_<order>store<VNX64_QH:mode><VNX64_QHI:mode>"
+(define_insn "@pred_indexed_<order>store<RATIO1:mode><RATIO1:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -1825,29 +2044,12 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:VNX64_QHI 2 "register_operand" " vr")
- (match_operand:VNX64_QH 3 "register_operand" " vr")] ORDER))]
+ (match_operand:RATIO1 2 "register_operand" " vr")
+ (match_operand:RATIO1 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xei<VNX64_QHI:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xei<RATIO1:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vst<order>x")
- (set_attr "mode" "<VNX64_QH:MODE>")])
-
-(define_insn "@pred_indexed_<order>store<VNX128_Q:mode><VNX128_Q:mode>"
- [(set (mem:BLK (scratch))
- (unspec:BLK
- [(unspec:<VM>
- [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
- (match_operand 4 "vector_length_operand" " rK")
- (match_operand 5 "const_int_operand" " i")
- (reg:SI VL_REGNUM)
- (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
- (match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:VNX128_Q 2 "register_operand" " vr")
- (match_operand:VNX128_Q 3 "register_operand" " vr")] ORDER))]
- "TARGET_VECTOR"
- "vs<order>xei<VNX128_Q:sew>.v\t%3,(%z1),%2%p0"
- [(set_attr "type" "vst<order>x")
- (set_attr "mode" "<VNX128_Q:MODE>")])
+ (set_attr "mode" "<RATIO1:MODE>")])
;; -------------------------------------------------------------------------------
;; ---- Predicated integer binary operations
@@ -7426,8 +7628,6 @@
;; and the MIN_VLEN >= 128 from the well defined iterators.
;; Since reduction need LMUL = 1 scalar operand as the input operand
;; and they are different.
-;; For example, The LMUL = 1 corresponding mode of VNx16QImode is VNx4QImode
-;; for -march=rv*zve32* wheras VNx8QImode for -march=rv*zve64*
;; Integer Reduction for QI
(define_insn "@pred_reduc_<reduc><VQI:mode><VQI_LMUL1:mode>"
@@ -8481,7 +8681,7 @@
[(set_attr "type" "vlsegdff")
(set_attr "mode" "<MODE>")])
-(define_insn "@pred_indexed_<order>load<V1T:mode><V1I:mode>"
+(define_insn "@pred_indexed_<order>load<V1T:mode><RATIO64I:mode>"
[(set (match_operand:V1T 0 "register_operand" "=&vr, &vr")
(if_then_else:V1T
(unspec:<VM>
@@ -8495,14 +8695,14 @@
(unspec:V1T
[(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
(mem:BLK (scratch))
- (match_operand:V1I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:RATIO64I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V1T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<V1I:sew>.v\t%0,(%z3),%4%p1"
+ "vl<order>xseg<nf>ei<RATIO64I:sew>.v\t%0,(%z3),%4%p1"
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V1T:MODE>")])
-(define_insn "@pred_indexed_<order>load<V2T:mode><V2I:mode>"
+(define_insn "@pred_indexed_<order>load<V2T:mode><RATIO32I:mode>"
[(set (match_operand:V2T 0 "register_operand" "=&vr, &vr")
(if_then_else:V2T
(unspec:<VM>
@@ -8516,14 +8716,14 @@
(unspec:V2T
[(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
(mem:BLK (scratch))
- (match_operand:V2I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:RATIO32I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V2T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<V2I:sew>.v\t%0,(%z3),%4%p1"
+ "vl<order>xseg<nf>ei<RATIO32I:sew>.v\t%0,(%z3),%4%p1"
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V2T:MODE>")])
-(define_insn "@pred_indexed_<order>load<V4T:mode><V4I:mode>"
+(define_insn "@pred_indexed_<order>load<V4T:mode><RATIO16I:mode>"
[(set (match_operand:V4T 0 "register_operand" "=&vr, &vr")
(if_then_else:V4T
(unspec:<VM>
@@ -8537,14 +8737,14 @@
(unspec:V4T
[(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
(mem:BLK (scratch))
- (match_operand:V4I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:RATIO16I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V4T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<V4I:sew>.v\t%0,(%z3),%4%p1"
+ "vl<order>xseg<nf>ei<RATIO16I:sew>.v\t%0,(%z3),%4%p1"
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V4T:MODE>")])
-(define_insn "@pred_indexed_<order>load<V8T:mode><V8I:mode>"
+(define_insn "@pred_indexed_<order>load<V8T:mode><RATIO8I:mode>"
[(set (match_operand:V8T 0 "register_operand" "=&vr, &vr")
(if_then_else:V8T
(unspec:<VM>
@@ -8558,14 +8758,14 @@
(unspec:V8T
[(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
(mem:BLK (scratch))
- (match_operand:V8I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:RATIO8I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V8T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<V8I:sew>.v\t%0,(%z3),%4%p1"
+ "vl<order>xseg<nf>ei<RATIO8I:sew>.v\t%0,(%z3),%4%p1"
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V8T:MODE>")])
-(define_insn "@pred_indexed_<order>load<V16T:mode><V16I:mode>"
+(define_insn "@pred_indexed_<order>load<V16T:mode><RATIO4I:mode>"
[(set (match_operand:V16T 0 "register_operand" "=&vr, &vr")
(if_then_else:V16T
(unspec:<VM>
@@ -8579,14 +8779,14 @@
(unspec:V16T
[(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
(mem:BLK (scratch))
- (match_operand:V16I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:RATIO4I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V16T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<V16I:sew>.v\t%0,(%z3),%4%p1"
+ "vl<order>xseg<nf>ei<RATIO4I:sew>.v\t%0,(%z3),%4%p1"
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V16T:MODE>")])
-(define_insn "@pred_indexed_<order>load<V32T:mode><V32I:mode>"
+(define_insn "@pred_indexed_<order>load<V32T:mode><RATIO2I:mode>"
[(set (match_operand:V32T 0 "register_operand" "=&vr, &vr")
(if_then_else:V32T
(unspec:<VM>
@@ -8600,35 +8800,14 @@
(unspec:V32T
[(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
(mem:BLK (scratch))
- (match_operand:V32I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:RATIO2I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V32T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<V32I:sew>.v\t%0,(%z3),%4%p1"
+ "vl<order>xseg<nf>ei<RATIO2I:sew>.v\t%0,(%z3),%4%p1"
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V32T:MODE>")])
-(define_insn "@pred_indexed_<order>load<V64T:mode><V64I:mode>"
- [(set (match_operand:V64T 0 "register_operand" "=&vr, &vr")
- (if_then_else:V64T
- (unspec:<VM>
- [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
- (match_operand 5 "vector_length_operand" " rK, rK")
- (match_operand 6 "const_int_operand" " i, i")
- (match_operand 7 "const_int_operand" " i, i")
- (match_operand 8 "const_int_operand" " i, i")
- (reg:SI VL_REGNUM)
- (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
- (unspec:V64T
- [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
- (mem:BLK (scratch))
- (match_operand:V64I 4 "register_operand" " vr, vr")] ORDER)
- (match_operand:V64T 2 "vector_merge_operand" " vu, 0")))]
- "TARGET_VECTOR"
- "vl<order>xseg<nf>ei<V64I:sew>.v\t%0,(%z3),%4%p1"
- [(set_attr "type" "vlsegd<order>x")
- (set_attr "mode" "<V64T:MODE>")])
-
-(define_insn "@pred_indexed_<order>store<V1T:mode><V1I:mode>"
+(define_insn "@pred_indexed_<order>store<V1T:mode><RATIO64I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -8638,14 +8817,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:V1I 2 "register_operand" " vr")
+ (match_operand:RATIO64I 2 "register_operand" " vr")
(match_operand:V1T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<V1I:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xseg<nf>ei<RATIO64I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V1T:MODE>")])
-(define_insn "@pred_indexed_<order>store<V2T:mode><V2I:mode>"
+(define_insn "@pred_indexed_<order>store<V2T:mode><RATIO32I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -8655,14 +8834,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:V2I 2 "register_operand" " vr")
+ (match_operand:RATIO32I 2 "register_operand" " vr")
(match_operand:V2T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<V2I:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xseg<nf>ei<RATIO32I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V2T:MODE>")])
-(define_insn "@pred_indexed_<order>store<V4T:mode><V4I:mode>"
+(define_insn "@pred_indexed_<order>store<V4T:mode><RATIO16I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -8672,14 +8851,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:V4I 2 "register_operand" " vr")
+ (match_operand:RATIO16I 2 "register_operand" " vr")
(match_operand:V4T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<V4I:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xseg<nf>ei<RATIO16I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V4T:MODE>")])
-(define_insn "@pred_indexed_<order>store<V8T:mode><V8I:mode>"
+(define_insn "@pred_indexed_<order>store<V8T:mode><RATIO8I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -8689,14 +8868,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:V8I 2 "register_operand" " vr")
+ (match_operand:RATIO8I 2 "register_operand" " vr")
(match_operand:V8T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<V8I:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xseg<nf>ei<RATIO8I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V8T:MODE>")])
-(define_insn "@pred_indexed_<order>store<V16T:mode><V16I:mode>"
+(define_insn "@pred_indexed_<order>store<V16T:mode><RATIO4I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -8706,14 +8885,14 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:V16I 2 "register_operand" " vr")
+ (match_operand:RATIO4I 2 "register_operand" " vr")
(match_operand:V16T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<V16I:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xseg<nf>ei<RATIO4I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V16T:MODE>")])
-(define_insn "@pred_indexed_<order>store<V32T:mode><V32I:mode>"
+(define_insn "@pred_indexed_<order>store<V32T:mode><RATIO2I:mode>"
[(set (mem:BLK (scratch))
(unspec:BLK
[(unspec:<VM>
@@ -8723,29 +8902,12 @@
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:V32I 2 "register_operand" " vr")
+ (match_operand:RATIO2I 2 "register_operand" " vr")
(match_operand:V32T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<V32I:sew>.v\t%3,(%z1),%2%p0"
+ "vs<order>xseg<nf>ei<RATIO2I:sew>.v\t%3,(%z1),%2%p0"
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V32T:MODE>")])
-(define_insn "@pred_indexed_<order>store<V64T:mode><V64I:mode>"
- [(set (mem:BLK (scratch))
- (unspec:BLK
- [(unspec:<VM>
- [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
- (match_operand 4 "vector_length_operand" " rK")
- (match_operand 5 "const_int_operand" " i")
- (reg:SI VL_REGNUM)
- (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
- (match_operand 1 "pmode_reg_or_0_operand" " rJ")
- (match_operand:V64I 2 "register_operand" " vr")
- (match_operand:V64T 3 "register_operand" " vr")] ORDER))]
- "TARGET_VECTOR"
- "vs<order>xseg<nf>ei<V64I:sew>.v\t%3,(%z1),%2%p0"
- [(set_attr "type" "vssegt<order>x")
- (set_attr "mode" "<V64T:MODE>")])
-
(include "autovec.md")
(include "autovec-opt.md")
@@ -1,5 +1,9 @@
/* { dg-do run { target { riscv_vector } } } */
-
+/* For some reason we exceed the default code model's
+   +-2 GiB limits.  We should investigate why and add a
+   proper description here.  For now just make sure the
+   test case compiles properly.  */
+/* { dg-additional-options "-mcmodel=medany" } */
#include "gather_load-7.c"
#include <assert.h>
@@ -1,5 +1,9 @@
/* { dg-do run { target { riscv_vector } } } */
-
+/* For some reason we exceed the default code model's
+   +-2 GiB limits.  We should investigate why and add a
+   proper description here.  For now just make sure the
+   test case compiles properly.  */
+/* { dg-additional-options "-mcmodel=medany" } */
#include "gather_load-8.c"
#include <assert.h>
@@ -1,5 +1,10 @@
/* { dg-do compile } */
/* { dg-additional-options "-march=rv32gcv_zvfh -mabi=ilp32d -fdump-tree-vect-details" } */
+/* For some reason we exceed the default code model's
+   +-2 GiB limits.  We should investigate why and add a
+   proper description here.  For now just make sure the
+   test case compiles properly.  */
+/* { dg-additional-options "-mcmodel=medany" } */
#include <stdint-gcc.h>
@@ -1,5 +1,9 @@
/* { dg-do run { target { riscv_vector } } } */
-
+/* For some reason we exceed the default code model's
+   +-2 GiB limits.  We should investigate why and add a
+   proper description here.  For now just make sure the
+   test case compiles properly.  */
+/* { dg-additional-options "-mcmodel=medany" } */
#include "mask_scatter_store-8.c"
#include <assert.h>
@@ -1,5 +1,9 @@
/* { dg-do run { target { riscv_vector } } } */
-
+/* For some reason we exceed the default code model's
+   +-2 GiB limits.  We should investigate why and add a
+   proper description here.  For now just make sure the
+   test case compiles properly.  */
+/* { dg-additional-options "-mcmodel=medany" } */
#include "scatter_store-8.c"
#include <assert.h>
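A note on the recurring -mcmodel=medany workaround: RISC-V GCC defaults to -mcmodel=medlow, which addresses static objects with absolute lui/addi pairs and therefore requires every symbol to live within roughly +-2 GiB of address zero; -mcmodel=medany uses auipc-based PC-relative addressing instead, so symbols only need to lie within +-2 GiB of the code referencing them. A hypothetical illustration (not taken from these tests) of data that would overflow the medlow relocations:

/* Assumed example: a 2 GiB static object cannot be fully addressed
   with medlow's 32-bit absolute relocations on rv64.  */
static long big[1L << 28];   /* 2^28 * 8 bytes = 2 GiB on rv64.  */

Why the gather/scatter runs trip this limit is still the open question flagged in the comments above.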