RISC-V: Add crypto machine descriptions
Commit Message
Patch v8: Remove unused iterator and add a newline at the end.
Patch v7: Remove the mode of const_int_operand and fix a typo. Add a
newline at the end and a comment at the beginning.
Patch v6: Swap the operand order of vandn.vv.
Patch v5: Add the vec_duplicate operator.
Patch v4: Add handling of SEW=64 on RV32 systems.
Patch v3: Modify constraints for the crypto vector patterns.
Patch v2: Add crypto vector insns to the RATIO attr and use vr as the
destination register.
This patch adds the crypto machine descriptions (vector-crypto.md) and
some new iterators used by the crypto vector extensions.
Co-Authored-By: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-Authored-By: Ciyan Pan <panciyan@eswincomputing.com>
gcc/ChangeLog:
* config/riscv/iterators.md: Add rotate insn name.
* config/riscv/riscv.md: Add new insns name for crypto vector.
* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
* config/riscv/vector.md: Add the corresponding attr for crypto vector.
* config/riscv/vector-crypto.md: New file. Machine descriptions for the crypto vector extensions.
---
gcc/config/riscv/iterators.md | 4 +-
gcc/config/riscv/riscv.md | 33 +-
gcc/config/riscv/vector-crypto.md | 654 +++++++++++++++++++++++++++
gcc/config/riscv/vector-iterators.md | 36 ++
gcc/config/riscv/vector.md | 55 ++-
5 files changed, 761 insertions(+), 21 deletions(-)
create mode 100755 gcc/config/riscv/vector-crypto.md
Comments
The machine description part is OK from my side.
But I don't know the plan for vector crypto.
I'd like to wait for Kito or Jeff to make sure we can allow vector-crypto intrinsics as part of the GCC 14 release.
Thanks.
juzhe.zhong@rivai.ai
From: Feng Wang
Date: 2023-12-22 09:59
To: gcc-patches
CC: kito.cheng; jeffreyalaw; juzhe.zhong; Feng Wang
Subject: [PATCH] RISC-V: Add crypto machine descriptions
diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
index ecf033f2fa7..f332fba7031 100644
--- a/gcc/config/riscv/iterators.md
+++ b/gcc/config/riscv/iterators.md
@@ -304,7 +304,9 @@
(umax "maxu")
(clz "clz")
(ctz "ctz")
- (popcount "cpop")])
+ (popcount "cpop")
+ (rotate "rol")
+ (rotatert "ror")])
;; -------------------------------------------------------------------
;; Int Iterators.
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index ee8b71c22aa..88019a46a53 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -427,6 +427,34 @@
;; vcompress vector compress instruction
;; vmov whole vector register move
;; vector unknown vector instruction
+;; 17. Crypto Vector instructions
+;; vandn crypto vector bitwise and-not instructions
+;; vbrev crypto vector reverse bits in elements instructions
+;; vbrev8 crypto vector reverse bits in bytes instructions
+;; vrev8 crypto vector reverse bytes instructions
+;; vclz crypto vector count leading zeros instructions
+;; vctz crypto vector count trailing zeros instructions
+;; vrol crypto vector rotate left instructions
+;; vror crypto vector rotate right instructions
+;; vwsll crypto vector widening shift left logical instructions
+;; vclmul crypto vector carry-less multiply - return low half instructions
+;; vclmulh crypto vector carry-less multiply - return high half instructions
+;; vghsh crypto vector add-multiply over GHASH Galois-Field instructions
+;; vgmul crypto vector multiply over GHASH Galois-Field instructions
+;; vaesef crypto vector AES final-round encryption instructions
+;; vaesem crypto vector AES middle-round encryption instructions
+;; vaesdf crypto vector AES final-round decryption instructions
+;; vaesdm crypto vector AES middle-round decryption instructions
+;; vaeskf1 crypto vector AES-128 Forward KeySchedule generation instructions
+;; vaeskf2 crypto vector AES-256 Forward KeySchedule generation instructions
+;; vaesz crypto vector AES round zero encryption/decryption instructions
+;; vsha2ms crypto vector SHA-2 message schedule instructions
+;; vsha2ch crypto vector SHA-2 two rounds of compression instructions
+;; vsha2cl crypto vector SHA-2 two rounds of compression instructions
+;; vsm4k crypto vector SM4 KeyExpansion instructions
+;; vsm4r crypto vector SM4 Rounds instructions
+;; vsm3me crypto vector SM3 Message Expansion instructions
+;; vsm3c crypto vector SM3 Compression instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -446,7 +474,9 @@
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
- vgather,vcompress,vmov,vector"
+ vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
+ vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
@@ -3777,6 +3807,7 @@
(include "thead.md")
(include "generic-ooo.md")
(include "vector.md")
+(include "vector-crypto.md")
(include "zicond.md")
(include "sfb.md")
(include "zc.md")
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100755
index 00000000000..9235bdac548
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,654 @@
+;; Machine description for the RISC-V Vector Crypto extensions.
+;; Copyright (C) 2023 Free Software Foundation, Inc.
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3. If not see
+;; <http://www.gnu.org/licenses/>.
+
+(define_c_enum "unspec" [
+ ;; Zvbb unspecs
+ UNSPEC_VBREV
+ UNSPEC_VBREV8
+ UNSPEC_VREV8
+ UNSPEC_VCLMUL
+ UNSPEC_VCLMULH
+ UNSPEC_VGHSH
+ UNSPEC_VGMUL
+ UNSPEC_VAESEF
+ UNSPEC_VAESEFVV
+ UNSPEC_VAESEFVS
+ UNSPEC_VAESEM
+ UNSPEC_VAESEMVV
+ UNSPEC_VAESEMVS
+ UNSPEC_VAESDF
+ UNSPEC_VAESDFVV
+ UNSPEC_VAESDFVS
+ UNSPEC_VAESDM
+ UNSPEC_VAESDMVV
+ UNSPEC_VAESDMVS
+ UNSPEC_VAESZ
+ UNSPEC_VAESZVVNULL
+ UNSPEC_VAESZVS
+ UNSPEC_VAESKF1
+ UNSPEC_VAESKF2
+ UNSPEC_VSHA2MS
+ UNSPEC_VSHA2CH
+ UNSPEC_VSHA2CL
+ UNSPEC_VSM4K
+ UNSPEC_VSM4R
+ UNSPEC_VSM4RVV
+ UNSPEC_VSM4RVS
+ UNSPEC_VSM3ME
+ UNSPEC_VSM3C
+])
+
+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+
+(define_int_attr vv_ins_name [(UNSPEC_VGMUL "gmul" ) (UNSPEC_VAESEFVV "aesef")
+ (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
+ (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
+ (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
+ (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS "aesz" )
+ (UNSPEC_VSM4RVV "sm4r" ) (UNSPEC_VSM4RVS "sm4r" )])
+
+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
+ (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
+
+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+
+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
+
+(define_int_attr ins_type [(UNSPEC_VGMUL "vv") (UNSPEC_VAESEFVV "vv")
+ (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
+ (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
+ (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
+ (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS "vs")
+ (UNSPEC_VSM4RVV "vv") (UNSPEC_VSM4RVS "vs")])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL UNSPEC_VAESEFVV UNSPEC_VAESEMVV
+ UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
+ UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
+ UNSPEC_VAESZVS UNSPEC_VSM4RVV UNSPEC_VSM4RVS])
+
+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
+
+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+
+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
+
+;; Zvbb instruction patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+(define_insn "@pred_vandn<mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vr, vd, vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1, vm,Wc1")
+ (match_operand 5 "vector_length_operand" "rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI
+ (not:VI (match_operand:VI 4 "register_operand" "vr, vr, vr, vr"))
+ (match_operand:VI 3 "register_operand" "vr, vr, vr, vr"))
+ (match_operand:VI 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+ [(set (match_operand:VI_QHS 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:VI_QHS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI_QHS
+ (not:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 4 "register_operand" " r, r, r, r")))
+ (match_operand:VI_QHS 3 "register_operand" "vr, vr,vr, vr"))
+ (match_operand:VI_QHS 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+;; Handle GET_MODE_INNER (mode) = DImode.  We need to split these patterns
+;; since we must handle SEW = 64 on RV32 systems.
+(define_expand "@pred_vandn<mode>_scalar"
+ [(set (match_operand:VI_D 0 "register_operand")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 5 "vector_length_operand")
+ (match_operand 6 "const_int_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI_D
+ (not:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 4 "reg_or_int_operand")))
+ (match_operand:VI_D 3 "register_operand"))
+ (match_operand:VI_D 2 "vector_merge_operand")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[4],
+ /* vl */operands[5],
+ <MODE>mode,
+ false,
+    [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_vandn<mode> (operands[0], operands[1],
+      operands[2], operands[3], broadcast_scalar, operands[5],
+ operands[6], operands[7], operands[8]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[8])))
+ DONE;
+})
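The expander routes the DImode-element case through riscv_vector::sew64_scalar_helper: when the 64-bit scalar cannot be held in a single X register (SEW = 64 on RV32), the helper broadcasts it into a vector register and falls back to the vector-vector pattern above. Either path must produce the same element-wise result, sketched here:

#include <stddef.h>
#include <stdint.h>

/* vandn.vx semantics for SEW = 64: vd[i] = vs2[i] & ~rs1.  On RV32 the
   scalar rs1 is broadcast first and the .vv form is used instead.  */
void
vandn_vx_u64 (uint64_t *vd, const uint64_t *vs2, uint64_t rs1, size_t vl)
{
  for (size_t i = 0; i < vl; i++)
    vd[i] = vs2[i] & ~rs1;
}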
+
+(define_insn "*pred_vandn<mode>_scalar"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI_D
+ (not:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ")))
+ (match_operand:VI_D 3 "register_operand" " vr, vr,vr, vr"))
+ (match_operand:VI_D 2 "vector_merge_operand" " vu, vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vx\t%0,%3,%z4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vandn<mode>_extended_scalar"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI_D
+ (not:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ"))))
+ (match_operand:VI_D 3 "register_operand" " vr, vr,vr, vr"))
+ (match_operand:VI_D 2 "vector_merge_operand" " vu, vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vx\t%0,%3,%z4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (bitmanip_rotate:VI
+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
+ (match_operand:VI 4 "register_operand" " vr,vr, vr, vr"))
+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (bitmanip_rotate:VI
+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
+ (match_operand 4 "pmode_register_operand" " r, r, r, r"))
+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr,vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (rotatert:VI
+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
+ (match_operand 4 "const_csr_operand" " K, K, K, K"))
+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vror.vi\t%0,%3,%4%p1"
+ [(set_attr "type" "vror")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (ashift:VWEXTI
+ (zero_extend:VWEXTI
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "vr"))
+ (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+ "TARGET_ZVBB"
+ "vwsll.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=vd, vr, vd, vr, vd, vr, vd, vr, vd, vr, vd, vr, ?&vr, ?&vr")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (ashift:VWEXTI
+ (zero_extend:VWEXTI
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84, vr, vr"))
+ (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK"))
+ (match_operand:VWEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
+ "TARGET_ZVBB"
+ "vwsll.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
+ (set_attr "group_overlap" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,none,none")])
+
+;; vbrev.v vbrev8.v vrev8.v
+(define_insn "@pred_v<rev><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vr,vd,vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" "i, i, i, i")
+ (match_operand 6 "const_int_operand" "i, i, i, i")
+ (match_operand 7 "const_int_operand" "i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr,vr, vr, vr")]UNSPEC_VRBB8)
+ (match_operand:VI 2 "vector_merge_operand" "vu,vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<rev>.v\t%0,%3%p1"
+ [(set_attr "type" "v<rev>")
+ (set_attr "mode" "<MODE>")])
+
+;; vclz.v vctz.v
+(define_insn "@pred_v<bitmanip_optab><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vr")
+ (clz_ctz_pcnt:VI
+ (parallel
+ [(match_operand:VI 2 "register_operand" "vr, vr")
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1")
+ (match_operand 3 "vector_length_operand" "rK, rK")
+ (match_operand 4 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))]
+ "TARGET_ZVBB"
+ "v<bitmanip_insn>.v\t%0,%2%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+;; Zvbc instruction patterns.
+;; vclmul.vv vclmul.vx
+;; vclmulh.vv vclmulh.vx
+(define_insn "@pred_vclmul<h><mode>"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI_D
+ [(match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")
+ (match_operand:VI_D 4 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBC"
+ "vclmul<h>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<MODE>")])
+
+;; Deal with SEW = 64 on RV32 systems.
+(define_expand "@pred_vclmul<h><mode>_scalar"
+ [(set (match_operand:VI_D 0 "register_operand")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 5 "vector_length_operand")
+ (match_operand 6 "const_int_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI_D
+ [(vec_duplicate:VI_D
+ (match_operand:<VEL> 4 "register_operand"))
+ (match_operand:VI_D 3 "register_operand")]UNSPEC_CLMUL)
+ (match_operand:VI_D 2 "vector_merge_operand")))]
+ "TARGET_ZVBC"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[4],
+ /* vl */operands[5],
+ <MODE>mode,
+ false,
+    [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_vclmul<h><mode> (operands[0], operands[1],
+      operands[2], operands[3], broadcast_scalar, operands[5],
+ operands[6], operands[7], operands[8]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[8])))
+ DONE;
+})
+
+(define_insn "*pred_vclmul<h><mode>_scalar"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI_D
+ [(vec_duplicate:VI_D
+ (match_operand:<VEL> 4 "reg_or_0_operand" "rJ, rJ,rJ, rJ"))
+ (match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBC"
+ "vclmul<h>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vclmul<h><mode>_extend_scalar"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI_D
+ [(vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ")))
+ (match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBC"
+ "vclmul<h>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<MODE>")])
+
+;; Zvknh[ab] and Zvkg instruction patterns.
+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
+(define_insn "@pred_v<vv_ins1_name><mode>"
+ [(set (match_operand:VQEXTI 0 "register_operand" "=vr")
+ (if_then_else:VQEXTI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VQEXTI
+ [(match_operand:VQEXTI 1 "register_operand" " 0")
+ (match_operand:VQEXTI 2 "register_operand" "vr")
+ (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB)
+ (match_dup 1)))]
+ "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
+ "v<vv_ins1_name>.vv\t%0,%2,%3"
+ [(set_attr "type" "v<vv_ins1_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; Zvkned, Zvksed and Zvkg instruction patterns.
+;; vgmul.vv vaesz.vs
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
+;; vsm4r.[vv,vs]
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED || TARGET_ZVKG"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=&vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+ [(set (match_operand:<VSIX2> 0 "register_operand" "=&vr")
+ (if_then_else:<VSIX2>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX2>
+ [(match_operand:<VSIX2> 1 "register_operand" " 0")
+ (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+ [(set (match_operand:<VSIX4> 0 "register_operand" "=&vr")
+ (if_then_else:<VSIX4>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX4>
+ [(match_operand:<VSIX4> 1 "register_operand" " 0")
+ (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+ [(set (match_operand:<VSIX8> 0 "register_operand" "=&vr")
+ (if_then_else:<VSIX8>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX8>
+ [(match_operand:<VSIX8> 1 "register_operand" " 0")
+ (match_operand:VLMULX8_SI 2 "register_operand" " vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+ [(set (match_operand:<VSIX16> 0 "register_operand" "=&vr")
+ (if_then_else:<VSIX16>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX16>
+ [(match_operand:<VSIX16> 1 "register_operand" " 0")
+ (match_operand:VLMULX16_SI 2 "register_operand" " vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; vaeskf1.vi vsm4k.vi
+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 2 "register_operand" "vr, vr")
+ (match_operand 3 "const_int_operand" " i, i")] UNSPEC_CRYPTO_VI)
+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vi_ins_name>.vi\t%0,%2,%3"
+ [(set_attr "type" "v<vi_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi vsm3c.vi
+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")
+ (match_operand 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSH"
+ "v<vi_ins1_name>.vi\t%0,%2,%3"
+ [(set_attr "type" "v<vi_ins1_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; Zvksh instruction patterns.
+;; vsm3me.vv
+(define_insn "@pred_vsm3me<mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 2 "register_operand" " vr, vr")
+ (match_operand:VSI 3 "register_operand" " vr, vr")] UNSPEC_VSM3ME)
+ (match_operand:VSI 1 "vector_merge_operand" " vu, 0")))]
+ "TARGET_ZVKSH"
+ "vsm3me.vv\t%0,%2,%3"
+ [(set_attr "type" "vsm3me")
+ (set_attr "mode" "<MODE>")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..317dc9de253 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3916,3 +3916,39 @@
(V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
(V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
(V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+
+(define_mode_iterator VSI [
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX2_SI [
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+ RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+ RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_attr VSIX2 [
+ (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+ (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+ (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+ (RVVMF2SI "RVVM8SI")
+])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index f607d768b26..caf1b88ba5e 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,9 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
- vssegtux,vssegtox,vlsegdff")
+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_string "true")]
(const_string "false")))
@@ -74,7 +76,9 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
- vssegtux,vssegtox,vlsegdff")
+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_string "true")]
(const_string "false")))
@@ -426,7 +430,11 @@
viwred,vfredu,vfredo,vfwredu,vfwredo,vimovvx,\
vimovxv,vfmovvf,vfmovfv,vslideup,vslidedown,\
vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
+ vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox,\
+ vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,\
+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,\
+ vsm3me,vsm3c")
(const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
@@ -698,10 +706,12 @@
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+ vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
+ vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
(const_int 2)
- (eq_attr "type" "vimerge,vfmerge,vcompress")
+ (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_int 1)
(eq_attr "type" "vimuladd,vfmuladd")
@@ -740,7 +750,8 @@
vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
- vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+ vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +766,15 @@
vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+ vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+ vror,vwsll,vclmul,vclmulh")
(const_int 5)
(eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaesz,vsm4r")
(const_int 3)]
(const_int INVALID_ATTRIBUTE)))
@@ -770,7 +783,8 @@
(cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
- vcompress,vldff,vlsegde,vlsegdff")
+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,13 +800,13 @@
vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
(symbol_ref "riscv_vector::get_ta(operands[6])")
(eq_attr "type" "vimuladd,vfmuladd")
(symbol_ref "riscv_vector::get_ta(operands[7])")
- (eq_attr "type" "vmidx")
+ (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
(symbol_ref "riscv_vector::get_ta(operands[4])")]
(const_int INVALID_ATTRIBUTE)))
@@ -800,7 +814,7 @@
(define_attr "ma" ""
(cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
- vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+ vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(symbol_ref "riscv_vector::get_ma(operands[6])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +829,8 @@
vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
- viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+ viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+ vror,vwsll,vclmul,vclmulh")
(symbol_ref "riscv_vector::get_ma(operands[7])")
(eq_attr "type" "vimuladd,vfmuladd")
@@ -831,9 +846,10 @@
vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
- vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota")
+ vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota,vbrev,vbrev8,vrev8")
(const_int 7)
- (eq_attr "type" "vldm,vstm,vmalu,vmalu")
+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
+ vsm4r")
(const_int 5)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -848,18 +864,19 @@
vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
(const_int 8)
- (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
+ (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
(const_int 5)
(eq_attr "type" "vimuladd,vfmuladd")
(const_int 9)
- (eq_attr "type" "vmsfs,vmidx,vcompress")
+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
+ vsm4k,vsm3me,vsm3c")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vssegte")
+ (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
(const_int 4)]
(const_int INVALID_ATTRIBUTE)))
--
2.17.1
2023-12-22 09:59 Feng Wang <wangfeng@eswincomputing.com> wrote:
Sorry for forgetting to add the patch version number. It should be [PATCH v8 2/3].
>Patch v8: Remove unused iterator and add newline at the end.
>Patch v7: Remove mode of const_int_operand and typo. Add
> newline at the end and comment at the beginning.
>Patch v6: Swap the operator order of vandn.vv
>Patch v5: Add vec_duplicate operator.
>Patch v4: Add process of SEW=64 in RV32 system.
>Patch v3: Moidfy constrains for crypto vector.
>Patch v2: Add crypto vector ins into RATIO attr and use vr as
>destination register.
>
>This patch add the crypto machine descriptions(vector-crypto.md) and
>some new iterators which are used by crypto vector ext.
>
>Co-Authored by: Songhe Zhu <zhusonghe@eswincomputing.com>
>Co-Authored by: Ciyan Pan <panciyan@eswincomputing.com>
>gcc/ChangeLog:
>
> * config/riscv/iterators.md: Add rotate insn name.
> * config/riscv/riscv.md: Add new insns name for crypto vector.
> * config/riscv/vector-iterators.md: Add new iterators for crypto vector.
> * config/riscv/vector.md: Add the corresponding attr for crypto vector.
> * config/riscv/vector-crypto.md: New file.The machine descriptions for crypto vector.
>---
> gcc/config/riscv/iterators.md | 4 +-
> gcc/config/riscv/riscv.md | 33 +-
> gcc/config/riscv/vector-crypto.md | 654 +++++++++++++++++++++++++++
> gcc/config/riscv/vector-iterators.md | 36 ++
> gcc/config/riscv/vector.md | 55 ++-
> 5 files changed, 761 insertions(+), 21 deletions(-)
> create mode 100755 gcc/config/riscv/vector-crypto.md
>
>diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
>index ecf033f2fa7..f332fba7031 100644
>--- a/gcc/config/riscv/iterators.md
>+++ b/gcc/config/riscv/iterators.md
>@@ -304,7 +304,9 @@
> (umax "maxu")
> (clz "clz")
> (ctz "ctz")
>- (popcount "cpop")])
>+ (popcount "cpop")
>+ (rotate "rol")
>+ (rotatert "ror")])
>
> ;; -------------------------------------------------------------------
> ;; Int Iterators.
>diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
>index ee8b71c22aa..88019a46a53 100644
>--- a/gcc/config/riscv/riscv.md
>+++ b/gcc/config/riscv/riscv.md
>@@ -427,6 +427,34 @@
> ;; vcompress vector compress instruction
> ;; vmov whole vector register move
> ;; vector unknown vector instruction
>+;; 17. Crypto Vector instructions
>+;; vandn crypto vector bitwise and-not instructions
>+;; vbrev crypto vector reverse bits in elements instructions
>+;; vbrev8 crypto vector reverse bits in bytes instructions
>+;; vrev8 crypto vector reverse bytes instructions
>+;; vclz crypto vector count leading Zeros instructions
>+;; vctz crypto vector count lrailing Zeros instructions
>+;; vrol crypto vector rotate left instructions
>+;; vror crypto vector rotate right instructions
>+;; vwsll crypto vector widening shift left logical instructions
>+;; vclmul crypto vector carry-less multiply - return low half instructions
>+;; vclmulh crypto vector carry-less multiply - return high half instructions
>+;; vghsh crypto vector add-multiply over GHASH Galois-Field instructions
>+;; vgmul crypto vector multiply over GHASH Galois-Field instrumctions
>+;; vaesef crypto vector AES final-round encryption instructions
>+;; vaesem crypto vector AES middle-round encryption instructions
>+;; vaesdf crypto vector AES final-round decryption instructions
>+;; vaesdm crypto vector AES middle-round decryption instructions
>+;; vaeskf1 crypto vector AES-128 Forward KeySchedule generation instructions
>+;; vaeskf2 crypto vector AES-256 Forward KeySchedule generation instructions
>+;; vaesz crypto vector AES round zero encryption/decryption instructions
>+;; vsha2ms crypto vector SHA-2 message schedule instructions
>+;; vsha2ch crypto vector SHA-2 two rounds of compression instructions
>+;; vsha2cl crypto vector SHA-2 two rounds of compression instructions
>+;; vsm4k crypto vector SM4 KeyExpansion instructions
>+;; vsm4r crypto vector SM4 Rounds instructions
>+;; vsm3me crypto vector SM3 Message Expansion instructions
>+;; vsm3c crypto vector SM3 Compression instructions
> (define_attr "type"
> "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
> mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
>@@ -446,7 +474,9 @@
> vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
> vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
> vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
>- vgather,vcompress,vmov,vector"
>+ vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
>+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
>+ vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
> (cond [(eq_attr "got" "load") (const_string "load")
>
> ;; If a doubleword move uses these expensive instructions,
>@@ -3777,6 +3807,7 @@
> (include "thead.md")
> (include "generic-ooo.md")
> (include "vector.md")
>+(include "vector-crypto.md")
> (include "zicond.md")
> (include "sfb.md")
> (include "zc.md")
>diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
>new file mode 100755
>index 00000000000..9235bdac548
>--- /dev/null
>+++ b/gcc/config/riscv/vector-crypto.md
>@@ -0,0 +1,654 @@
>+;; Machine description for the RISC-V Vector Crypto extensions.
>+;; Copyright (C) 2023 Free Software Foundation, Inc.
>+
>+;; This file is part of GCC.
>+
>+;; GCC is free software; you can redistribute it and/or modify
>+;; it under the terms of the GNU General Public License as published by
>+;; the Free Software Foundation; either version 3, or (at your option)
>+;; any later version.
>+
>+;; GCC is distributed in the hope that it will be useful,
>+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
>+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>+;; GNU General Public License for more details.
>+
>+;; You should have received a copy of the GNU General Public License
>+;; along with GCC; see the file COPYING3. If not see
>+;; <http://www.gnu.org/licenses/>.
>+
>+(define_c_enum "unspec" [
>+ ;; Zvbb unspecs
>+ UNSPEC_VBREV
>+ UNSPEC_VBREV8
>+ UNSPEC_VREV8
>+ UNSPEC_VCLMUL
>+ UNSPEC_VCLMULH
>+ UNSPEC_VGHSH
>+ UNSPEC_VGMUL
>+ UNSPEC_VAESEF
>+ UNSPEC_VAESEFVV
>+ UNSPEC_VAESEFVS
>+ UNSPEC_VAESEM
>+ UNSPEC_VAESEMVV
>+ UNSPEC_VAESEMVS
>+ UNSPEC_VAESDF
>+ UNSPEC_VAESDFVV
>+ UNSPEC_VAESDFVS
>+ UNSPEC_VAESDM
>+ UNSPEC_VAESDMVV
>+ UNSPEC_VAESDMVS
>+ UNSPEC_VAESZ
>+ UNSPEC_VAESZVVNULL
>+ UNSPEC_VAESZVS
>+ UNSPEC_VAESKF1
>+ UNSPEC_VAESKF2
>+ UNSPEC_VSHA2MS
>+ UNSPEC_VSHA2CH
>+ UNSPEC_VSHA2CL
>+ UNSPEC_VSM4K
>+ UNSPEC_VSM4R
>+ UNSPEC_VSM4RVV
>+ UNSPEC_VSM4RVS
>+ UNSPEC_VSM3ME
>+ UNSPEC_VSM3C
>+])
>+
>+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
>+
>+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
>+
>+(define_int_attr vv_ins_name [(UNSPEC_VGMUL "gmul" ) (UNSPEC_VAESEFVV "aesef")
>+ (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
>+ (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
>+ (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
>+ (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS "aesz" )
>+ (UNSPEC_VSM4RVV "sm4r" ) (UNSPEC_VSM4RVS "sm4r" )])
>+
>+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
>+ (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
>+
>+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
>+
>+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
>+
>+(define_int_attr ins_type [(UNSPEC_VGMUL "vv") (UNSPEC_VAESEFVV "vv")
>+ (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
>+ (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
>+ (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
>+ (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS "vs")
>+ (UNSPEC_VSM4RVV "vv") (UNSPEC_VSM4RVS "vs")])
>+
>+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
>+
>+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
>+
>+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL UNSPEC_VAESEFVV UNSPEC_VAESEMVV
>+ UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
>+ UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
>+ UNSPEC_VAESZVS UNSPEC_VSM4RVV UNSPEC_VSM4RVS])
>+
>+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
>+
>+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
>+
>+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
>+
>+;; zvbb instructions patterns.
>+;; vandn.vv vandn.vx vrol.vv vrol.vx
>+;; vror.vv vror.vx vror.vi
>+;; vwsll.vv vwsll.vx vwsll.vi
>+(define_insn "@pred_vandn<mode>"
>+ [(set (match_operand:VI 0 "register_operand" "=vd, vr, vd, vr")
>+ (if_then_else:VI
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1, vm,Wc1")
>+ (match_operand 5 "vector_length_operand" "rK, rK, rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (and:VI
>+ (not:VI (match_operand:VI 4 "register_operand" "vr, vr, vr, vr"))
>+ (match_operand:VI 3 "register_operand" "vr, vr, vr, vr"))
>+ (match_operand:VI 2 "vector_merge_operand" "vu, vu, 0, 0")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+ "vandn.vv\t%0,%3,%4%p1"
>+ [(set_attr "type" "vandn")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_vandn<mode>_scalar"
>+ [(set (match_operand:VI_QHS 0 "register_operand" "=vd, vr,vd, vr")
>+ (if_then_else:VI_QHS
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
>+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (and:VI_QHS
>+ (not:VI_QHS
>+ (vec_duplicate:VI_QHS
>+ (match_operand:<VEL> 4 "register_operand" " r, r, r, r")))
>+ (match_operand:VI_QHS 3 "register_operand" "vr, vr,vr, vr"))
>+ (match_operand:VI_QHS 2 "vector_merge_operand" "vu, vu, 0, 0")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+ "vandn.vx\t%0,%3,%4%p1"
>+ [(set_attr "type" "vandn")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; Handle GET_MODE_INNER (mode) = DImode. We need to split them since
>+;; we need to deal with SEW = 64 in RV32 system.
>+(define_expand "@pred_vandn<mode>_scalar"
>+ [(set (match_operand:VI_D 0 "register_operand")
>+ (if_then_else:VI_D
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand")
>+ (match_operand 5 "vector_length_operand")
>+ (match_operand 6 "const_int_operand")
>+ (match_operand 7 "const_int_operand")
>+ (match_operand 8 "const_int_operand")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (and:VI_D
>+ (not:VI_D
>+ (vec_duplicate:VI_D
>+ (match_operand:<VEL> 4 "reg_or_int_operand")))
>+ (match_operand:VI_D 3 "register_operand"))
>+ (match_operand:VI_D 2 "vector_merge_operand")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+{
>+ if (riscv_vector::sew64_scalar_helper (
>+ operands,
>+ /* scalar op */&operands[4],
>+ /* vl */operands[5],
>+ <MODE>mode,
>+ false,
>+ [] (rtx *operands, rtx boardcast_scalar) {
>+ emit_insn (gen_pred_vandn<mode> (operands[0], operands[1],
>+ operands[2], operands[3], boardcast_scalar, operands[5],
>+ operands[6], operands[7], operands[8]));
>+ },
>+ (riscv_vector::avl_type) INTVAL (operands[8])))
>+ DONE;
>+})
>+
>+(define_insn "*pred_vandn<mode>_scalar"
>+ [(set (match_operand:VI_D 0 "register_operand" "=vd, vr,vd, vr")
>+ (if_then_else:VI_D
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
>+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (and:VI_D
>+ (not:VI_D
>+ (vec_duplicate:VI_D
>+ (match_operand:<VEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ")))
>+ (match_operand:VI_D 3 "register_operand" " vr, vr,vr, vr"))
>+ (match_operand:VI_D 2 "vector_merge_operand" " vu, vu, 0, 0")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+ "vandn.vx\t%0,%3,%z4%p1"
>+ [(set_attr "type" "vandn")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "*pred_vandn<mode>_extended_scalar"
>+ [(set (match_operand:VI_D 0 "register_operand" "=vd, vr,vd, vr")
>+ (if_then_else:VI_D
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
>+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (and:VI_D
>+ (not:VI_D
>+ (vec_duplicate:VI_D
>+ (sign_extend:<VEL>
>+ (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ"))))
>+ (match_operand:VI_D 3 "register_operand" " vr, vr,vr, vr"))
>+ (match_operand:VI_D 2 "vector_merge_operand" " vu, vu, 0, 0")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+ "vandn.vx\t%0,%3,%z4%p1"
>+ [(set_attr "type" "vandn")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_v<bitmanip_optab><mode>"
>+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr, vr")
>+ (if_then_else:VI
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
>+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (bitmanip_rotate:VI
>+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
>+ (match_operand:VI 4 "register_operand" " vr,vr, vr, vr"))
>+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+ "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
>+ [(set_attr "type" "v<bitmanip_insn>")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_v<bitmanip_optab><mode>_scalar"
>+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr, vr")
>+ (if_then_else:VI
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
>+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (bitmanip_rotate:VI
>+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
>+ (match_operand 4 "pmode_register_operand" " r, r, r, r"))
>+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+ "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
>+ [(set_attr "type" "v<bitmanip_insn>")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "*pred_vror<mode>_scalar"
>+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr,vr")
>+ (if_then_else:VI
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
>+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (rotatert:VI
>+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
>+ (match_operand 4 "const_csr_operand" " K, K, K, K"))
>+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+ "vror.vi\t%0,%3,%4%p1"
>+ [(set_attr "type" "vror")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_vwsll<mode>"
>+ [(set (match_operand:VWEXTI 0 "register_operand" "=&vr")
>+ (if_then_else:VWEXTI
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
>+ (match_operand 5 "vector_length_operand" " rK")
>+ (match_operand 6 "const_int_operand" " i")
>+ (match_operand 7 "const_int_operand" " i")
>+ (match_operand 8 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (ashift:VWEXTI
>+ (zero_extend:VWEXTI
>+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
>+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "vr"))
>+ (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
>+ "TARGET_ZVBB"
>+ "vwsll.vv\t%0,%3,%4%p1"
>+ [(set_attr "type" "vwsll")
>+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
>+
>+(define_insn "@pred_vwsll<mode>_scalar"
>+ [(set (match_operand:VWEXTI 0 "register_operand" "=vd, vr, vd, vr, vd, vr, vd, vr, vd, vr, vd, vr, ?&vr, ?&vr")
>+ (if_then_else:VWEXTI
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1,vmWc1,vmWc1")
>+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (ashift:VWEXTI
>+ (zero_extend:VWEXTI
>+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84, vr, vr"))
>+ (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK"))
>+ (match_operand:VWEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
>+ "TARGET_ZVBB"
>+ "vwsll.v%o4\t%0,%3,%4%p1"
>+ [(set_attr "type" "vwsll")
>+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
>+ (set_attr "group_overlap" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,none,none")])
>+
>+;; vbrev.v vbrev8.v vrev8.v
>+(define_insn "@pred_v<rev><mode>"
>+ [(set (match_operand:VI 0 "register_operand" "=vd,vr,vd,vr")
>+ (if_then_else:VI
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
>+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
>+ (match_operand 5 "const_int_operand" "i, i, i, i")
>+ (match_operand 6 "const_int_operand" "i, i, i, i")
>+ (match_operand 7 "const_int_operand" "i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VI
>+ [(match_operand:VI 3 "register_operand" "vr,vr, vr, vr")]UNSPEC_VRBB8)
>+ (match_operand:VI 2 "vector_merge_operand" "vu,vu, 0, 0")))]
>+ "TARGET_ZVBB || TARGET_ZVKB"
>+ "v<rev>.v\t%0,%3%p1"
>+ [(set_attr "type" "v<rev>")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; vclz.v vctz.v
>+(define_insn "@pred_v<bitmanip_optab><mode>"
>+ [(set (match_operand:VI 0 "register_operand" "=vd, vr")
>+ (clz_ctz_pcnt:VI
>+ (parallel
>+ [(match_operand:VI 2 "register_operand" "vr, vr")
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1")
>+ (match_operand 3 "vector_length_operand" "rK, rK")
>+ (match_operand 4 "const_int_operand" " i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))]
>+ "TARGET_ZVBB"
>+ "v<bitmanip_insn>.v\t%0,%2%p1"
>+ [(set_attr "type" "v<bitmanip_insn>")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; zvbc instruction patterns.
>+;; vclmul.vv vclmul.vx
>+;; vclmulh.vv vclmulh.vx
>+(define_insn "@pred_vclmul<h><mode>"
>+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
>+ (if_then_else:VI_D
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
>+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VI_D
>+ [(match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")
>+ (match_operand:VI_D 4 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
>+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
>+ "TARGET_ZVBC"
>+ "vclmul<h>.vv\t%0,%3,%4%p1"
>+ [(set_attr "type" "vclmul<h>")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; Deal with SEW = 64 in RV32 system.
>+(define_expand "@pred_vclmul<h><mode>_scalar"
>+ [(set (match_operand:VI_D 0 "register_operand")
>+ (if_then_else:VI_D
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand")
>+ (match_operand 5 "vector_length_operand")
>+ (match_operand 6 "const_int_operand")
>+ (match_operand 7 "const_int_operand")
>+ (match_operand 8 "const_int_operand")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VI_D
>+ [(vec_duplicate:VI_D
>+ (match_operand:<VEL> 4 "register_operand"))
>+ (match_operand:VI_D 3 "register_operand")]UNSPEC_CLMUL)
>+ (match_operand:VI_D 2 "vector_merge_operand")))]
>+ "TARGET_ZVBC"
>+{
>+ if (riscv_vector::sew64_scalar_helper (
>+ operands,
>+ /* scalar op */&operands[4],
>+ /* vl */operands[5],
>+ <MODE>mode,
>+ false,
>+ [] (rtx *operands, rtx broadcast_scalar) {
>+ emit_insn (gen_pred_vclmul<h><mode> (operands[0], operands[1],
>+ operands[2], operands[3], broadcast_scalar, operands[5],
>+ operands[6], operands[7], operands[8]));
>+ },
>+ (riscv_vector::avl_type) INTVAL (operands[8])))
>+ DONE;
>+})
>+
>+(define_insn "*pred_vclmul<h><mode>_scalar"
>+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
>+ (if_then_else:VI_D
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
>+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VI_D
>+ [(vec_duplicate:VI_D
>+ (match_operand:<VEL> 4 "reg_or_0_operand" "rJ, rJ,rJ, rJ"))
>+ (match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
>+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
>+ "TARGET_ZVBC"
>+ "vclmul<h>.vx\t%0,%3,%4%p1"
>+ [(set_attr "type" "vclmul<h>")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "*pred_vclmul<h><mode>_extend_scalar"
>+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
>+ (if_then_else:VI_D
>+ (unspec:<VM>
>+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
>+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
>+ (match_operand 6 "const_int_operand" " i, i, i, i")
>+ (match_operand 7 "const_int_operand" " i, i, i, i")
>+ (match_operand 8 "const_int_operand" " i, i, i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VI_D
>+ [(vec_duplicate:VI_D
>+ (sign_extend:<VEL>
>+ (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ")))
>+ (match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
>+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
>+ "TARGET_ZVBC"
>+ "vclmul<h>.vx\t%0,%3,%4%p1"
>+ [(set_attr "type" "vclmul<h>")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; zvknh[ab] and zvkg instruction patterns.
>+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
>+(define_insn "@pred_v<vv_ins1_name><mode>"
>+ [(set (match_operand:VQEXTI 0 "register_operand" "=vr")
>+ (if_then_else:VQEXTI
>+ (unspec:<VM>
>+ [(match_operand 4 "vector_length_operand" "rK")
>+ (match_operand 5 "const_int_operand" " i")
>+ (match_operand 6 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VQEXTI
>+ [(match_operand:VQEXTI 1 "register_operand" " 0")
>+ (match_operand:VQEXTI 2 "register_operand" "vr")
>+ (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB)
>+ (match_dup 1)))]
>+ "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
>+ "v<vv_ins1_name>.vv\t%0,%2,%3"
>+ [(set_attr "type" "v<vv_ins1_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; zvkned, zvksed and zvkg instruction patterns.
>+;; vgmul.vv vaesz.vs
>+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
>+;; vsm4r.[vv,vs]
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
>+ [(set (match_operand:VSI 0 "register_operand" "=vr")
>+ (if_then_else:VSI
>+ (unspec:<VM>
>+ [(match_operand 3 "vector_length_operand" " rK")
>+ (match_operand 4 "const_int_operand" " i")
>+ (match_operand 5 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VSI
>+ [(match_operand:VSI 1 "register_operand" " 0")
>+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+ (match_dup 1)))]
>+ "TARGET_ZVKNED || TARGET_ZVKSED || TARGET_ZVKG"
>+ "v<vv_ins_name>.<ins_type>\t%0,%2"
>+ [(set_attr "type" "v<vv_ins_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
>+ [(set (match_operand:VSI 0 "register_operand" "=&vr")
>+ (if_then_else:VSI
>+ (unspec:<VM>
>+ [(match_operand 3 "vector_length_operand" " rK")
>+ (match_operand 4 "const_int_operand" " i")
>+ (match_operand 5 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VSI
>+ [(match_operand:VSI 1 "register_operand" " 0")
>+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+ (match_dup 1)))]
>+ "TARGET_ZVKNED || TARGET_ZVKSED"
>+ "v<vv_ins_name>.<ins_type>\t%0,%2"
>+ [(set_attr "type" "v<vv_ins_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
>+ [(set (match_operand:<VSIX2> 0 "register_operand" "=&vr")
>+ (if_then_else:<VSIX2>
>+ (unspec:<VM>
>+ [(match_operand 3 "vector_length_operand" "rK")
>+ (match_operand 4 "const_int_operand" " i")
>+ (match_operand 5 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:<VSIX2>
>+ [(match_operand:<VSIX2> 1 "register_operand" " 0")
>+ (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+ (match_dup 1)))]
>+ "TARGET_ZVKNED || TARGET_ZVKSED"
>+ "v<vv_ins_name>.<ins_type>\t%0,%2"
>+ [(set_attr "type" "v<vv_ins_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
>+ [(set (match_operand:<VSIX4> 0 "register_operand" "=&vr")
>+ (if_then_else:<VSIX4>
>+ (unspec:<VM>
>+ [(match_operand 3 "vector_length_operand" " rK")
>+ (match_operand 4 "const_int_operand" " i")
>+ (match_operand 5 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:<VSIX4>
>+ [(match_operand:<VSIX4> 1 "register_operand" " 0")
>+ (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+ (match_dup 1)))]
>+ "TARGET_ZVKNED || TARGET_ZVKSED"
>+ "v<vv_ins_name>.<ins_type>\t%0,%2"
>+ [(set_attr "type" "v<vv_ins_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
>+ [(set (match_operand:<VSIX8> 0 "register_operand" "=&vr")
>+ (if_then_else:<VSIX8>
>+ (unspec:<VM>
>+ [(match_operand 3 "vector_length_operand" " rK")
>+ (match_operand 4 "const_int_operand" " i")
>+ (match_operand 5 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:<VSIX8>
>+ [(match_operand:<VSIX8> 1 "register_operand" " 0")
>+ (match_operand:VLMULX8_SI 2 "register_operand" " vr")] UNSPEC_CRYPTO_VV)
>+ (match_dup 1)))]
>+ "TARGET_ZVKNED || TARGET_ZVKSED"
>+ "v<vv_ins_name>.<ins_type>\t%0,%2"
>+ [(set_attr "type" "v<vv_ins_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
>+ [(set (match_operand:<VSIX16> 0 "register_operand" "=&vr")
>+ (if_then_else:<VSIX16>
>+ (unspec:<VM>
>+ [(match_operand 3 "vector_length_operand" " rK")
>+ (match_operand 4 "const_int_operand" " i")
>+ (match_operand 5 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:<VSIX16>
>+ [(match_operand:<VSIX16> 1 "register_operand" " 0")
>+ (match_operand:VLMULX16_SI 2 "register_operand" " vr")] UNSPEC_CRYPTO_VV)
>+ (match_dup 1)))]
>+ "TARGET_ZVKNED || TARGET_ZVKSED"
>+ "v<vv_ins_name>.<ins_type>\t%0,%2"
>+ [(set_attr "type" "v<vv_ins_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; vaeskf1.vi vsm4k.vi
>+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
>+ [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
>+ (if_then_else:VSI
>+ (unspec:<VM>
>+ [(match_operand 4 "vector_length_operand" "rK, rK")
>+ (match_operand 5 "const_int_operand" " i, i")
>+ (match_operand 6 "const_int_operand" " i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VSI
>+ [(match_operand:VSI 2 "register_operand" "vr, vr")
>+ (match_operand 3 "const_int_operand" " i, i")] UNSPEC_CRYPTO_VI)
>+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
>+ "TARGET_ZVKNED || TARGET_ZVKSED"
>+ "v<vi_ins_name>.vi\t%0,%2,%3"
>+ [(set_attr "type" "v<vi_ins_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; vaeskf2.vi vsm3c.vi
>+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
>+ [(set (match_operand:VSI 0 "register_operand" "=vr")
>+ (if_then_else:VSI
>+ (unspec:<VM>
>+ [(match_operand 4 "vector_length_operand" "rK")
>+ (match_operand 5 "const_int_operand" " i")
>+ (match_operand 6 "const_int_operand" " i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VSI
>+ [(match_operand:VSI 1 "register_operand" " 0")
>+ (match_operand:VSI 2 "register_operand" "vr")
>+ (match_operand 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
>+ (match_dup 1)))]
>+ "TARGET_ZVKNED || TARGET_ZVKSH"
>+ "v<vi_ins1_name>.vi\t%0,%2,%3"
>+ [(set_attr "type" "v<vi_ins1_name>")
>+ (set_attr "mode" "<MODE>")])
>+
>+;; zvksh instruction patterns.
>+;; vsm3me.vv
>+(define_insn "@pred_vsm3me<mode>"
>+ [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
>+ (if_then_else:VSI
>+ (unspec:<VM>
>+ [(match_operand 4 "vector_length_operand" " rK, rK")
>+ (match_operand 5 "const_int_operand" " i, i")
>+ (match_operand 6 "const_int_operand" " i, i")
>+ (reg:SI VL_REGNUM)
>+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+ (unspec:VSI
>+ [(match_operand:VSI 2 "register_operand" " vr, vr")
>+ (match_operand:VSI 3 "register_operand" " vr, vr")] UNSPEC_VSM3ME)
>+ (match_operand:VSI 1 "vector_merge_operand" " vu, 0")))]
>+ "TARGET_ZVKSH"
>+ "vsm3me.vv\t%0,%2,%3"
>+ [(set_attr "type" "vsm3me")
>+ (set_attr "mode" "<MODE>")])
>diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
>index 5f5f7b5b986..317dc9de253 100644
>--- a/gcc/config/riscv/vector-iterators.md
>+++ b/gcc/config/riscv/vector-iterators.md
>@@ -3916,3 +3916,39 @@
> (V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
> (V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
> (V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
>+
>+(define_mode_iterator VSI [
>+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
>+])
>+
>+(define_mode_iterator VLMULX2_SI [
>+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
>+])
>+
>+(define_mode_iterator VLMULX4_SI [
>+ RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
>+])
>+
>+(define_mode_iterator VLMULX8_SI [
>+ RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
>+])
>+
>+(define_mode_iterator VLMULX16_SI [
>+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
>+])
>+
>+(define_mode_attr VSIX2 [
>+ (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
>+])
>+
>+(define_mode_attr VSIX4 [
>+ (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
>+])
>+
>+(define_mode_attr VSIX8 [
>+ (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
>+])
>+
>+(define_mode_attr VSIX16 [
>+ (RVVMF2SI "RVVM8SI")
>+])
>diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
>index f607d768b26..caf1b88ba5e 100644
>--- a/gcc/config/riscv/vector.md
>+++ b/gcc/config/riscv/vector.md
>@@ -52,7 +52,9 @@
> vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
> vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
> vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
>- vssegtux,vssegtox,vlsegdff")
>+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
>+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
> (const_string "true")]
> (const_string "false")))
>
>@@ -74,7 +76,9 @@
> vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
> vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
> vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
>- vssegtux,vssegtox,vlsegdff")
>+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
>+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
> (const_string "true")]
> (const_string "false")))
>
>@@ -426,7 +430,11 @@
> viwred,vfredu,vfredo,vfwredu,vfwredo,vimovvx,\
> vimovxv,vfmovvf,vfmovfv,vslideup,vslidedown,\
> vislide1up,vislide1down,vfslide1up,vfslide1down,\
>- vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
>+ vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox,\
>+ vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,\
>+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,\
>+ vsm3me,vsm3c")
> (const_int INVALID_ATTRIBUTE)
> (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
> (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
>@@ -698,10 +706,12 @@
> vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
> vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
> vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
>- vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
>+ vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
>+ vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
> (const_int 2)
>
>- (eq_attr "type" "vimerge,vfmerge,vcompress")
>+ (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
> (const_int 1)
>
> (eq_attr "type" "vimuladd,vfmuladd")
>@@ -740,7 +750,8 @@
> vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
> vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
> vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
>- vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
>+ vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
>+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
> (const_int 4)
>
> ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
>@@ -755,13 +766,15 @@
> vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
> vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
> vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
>- vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
>+ vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
>+ vror,vwsll,vclmul,vclmulh")
> (const_int 5)
>
> (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
> (const_int 6)
>
>- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
>+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>+ vaesz,vsm4r")
> (const_int 3)]
> (const_int INVALID_ATTRIBUTE)))
>
>@@ -770,7 +783,8 @@
> (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
> vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
> vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
>- vcompress,vldff,vlsegde,vlsegdff")
>+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
>+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
> (symbol_ref "riscv_vector::get_ta(operands[5])")
>
> ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
>@@ -786,13 +800,13 @@
> vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
> vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
> vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
>- vlsegds,vlsegdux,vlsegdox")
>+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
> (symbol_ref "riscv_vector::get_ta(operands[6])")
>
> (eq_attr "type" "vimuladd,vfmuladd")
> (symbol_ref "riscv_vector::get_ta(operands[7])")
>
>- (eq_attr "type" "vmidx")
>+ (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
> (symbol_ref "riscv_vector::get_ta(operands[4])")]
> (const_int INVALID_ATTRIBUTE)))
>
>@@ -800,7 +814,7 @@
> (define_attr "ma" ""
> (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
> vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
>- vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
>+ vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
> (symbol_ref "riscv_vector::get_ma(operands[6])")
>
> ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
>@@ -815,7 +829,8 @@
> vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
> vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
> vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
>- viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
>+ viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
>+ vror,vwsll,vclmul,vclmulh")
> (symbol_ref "riscv_vector::get_ma(operands[7])")
>
> (eq_attr "type" "vimuladd,vfmuladd")
>@@ -831,9 +846,10 @@
> vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
> vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
> vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
>- vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota")
>+ vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota,vbrev,vbrev8,vrev8")
> (const_int 7)
>- (eq_attr "type" "vldm,vstm,vmalu,vmalu")
>+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
>+ vsm4r")
> (const_int 5)
>
> ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
>@@ -848,18 +864,19 @@
> vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
> vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
> vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
>- vlsegds,vlsegdux,vlsegdox")
>+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
> (const_int 8)
>- (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
>+ (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
> (const_int 5)
>
> (eq_attr "type" "vimuladd,vfmuladd")
> (const_int 9)
>
>- (eq_attr "type" "vmsfs,vmidx,vcompress")
>+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
>+ vsm4k,vsm3me,vsm3c")
> (const_int 6)
>
>- (eq_attr "type" "vmpop,vmffs,vssegte")
>+ (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
> (const_int 4)]
> (const_int INVALID_ATTRIBUTE)))
>
>--
>2.17.1
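To make the "Deal with SEW = 64 in RV32 system" expanders above concrete: on RV32 a 64-bit scalar does not fit in one GPR, so riscv_vector::sew64_scalar_helper either keeps the vclmul<h>.vx form or broadcasts the scalar into a vector and emits vclmul<h>.vv instead. A minimal sketch of source that reaches this path, assuming the vector-crypto intrinsic naming from the ratified intrinsics spec (__riscv_vclmul_vx_u64m1) and a toolchain carrying this series with Zvbc enabled:

#include <riscv_vector.h>
#include <stdint.h>

/* Carry-less multiply of every element by a 64-bit key.  On RV32 the
   key spans two GPRs, so the expander above broadcasts it and falls
   back to vclmul.vv; on RV64 it stays as vclmul.vx.  */
vuint64m1_t
clmul_by_key (vuint64m1_t v, uint64_t key, size_t vl)
{
  return __riscv_vclmul_vx_u64m1 (v, key, vl);
}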
Thanks Feng, the patch LGTM from my side. I am happy to accept the
vector crypto work for GCC 14: it is mostly intrinsic stuff, and the
few non-intrinsic pieces (e.g. vrol, vctz) are low risk as well.
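The rotate patterns are one place that non-intrinsic path shows up: once the middle end canonicalizes the shift-or idiom into a rotate, the new bitmanip_rotate/rotatert patterns let the vectorizer use vrol/vror directly. A hedged sketch, assuming something like -march=rv64gcv_zvbb -O3, where a vror.vi in the vectorized loop body would be expected:

#include <stdint.h>

/* (x << 7) | (x >> 25) is recognized as a 32-bit rotate, so no
   intrinsics are needed to reach the vrol/vror patterns.  */
void
rotl7 (uint32_t *dst, const uint32_t *src, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = (src[i] << 7) | (src[i] >> 25);
}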
On Fri, Dec 22, 2023 at 10:04 AM Feng Wang <wangfeng@eswincomputing.com> wrote:
>
> 2023-12-22 09:59 Feng Wang <wangfeng@eswincomputing.com> wrote:
>
> Sorry for forgetting to add the patch version number. It should be [PATCH v8 2/3]
>
>
>
> > vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
>
>
>
> > vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
>
>
>
> >- vssegtux,vssegtox,vlsegdff")
>
>
>
> >+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
>
>
>
> >+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>
>
>
> >+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
>
>
>
> > (const_string "true")]
>
>
>
> > (const_string "false")))
>
>
>
> >
>
>
>
> >@@ -74,7 +76,9 @@
>
>
>
> > vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
>
>
>
> > vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
>
>
>
> > vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
>
>
>
> >- vssegtux,vssegtox,vlsegdff")
>
>
>
> >+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
>
>
>
> >+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>
>
>
> >+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
>
>
>
> > (const_string "true")]
>
>
>
> > (const_string "false")))
>
>
>
> >
>
>
>
> >@@ -426,7 +430,11 @@
>
>
>
> > viwred,vfredu,vfredo,vfwredu,vfwredo,vimovvx,\
>
>
>
> > vimovxv,vfmovvf,vfmovfv,vslideup,vslidedown,\
>
>
>
> > vislide1up,vislide1down,vfslide1up,vfslide1down,\
>
>
>
> >- vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
>
>
>
> >+ vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox,\
>
>
>
> >+ vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,\
>
>
>
> >+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>
>
>
> >+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,\
>
>
>
> >+ vsm3me,vsm3c")
>
>
>
> > (const_int INVALID_ATTRIBUTE)
>
>
>
> > (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
>
>
>
> > (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
>
>
>
> >@@ -698,10 +706,12 @@
>
>
>
> > vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
>
>
>
> > vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
>
>
>
> > vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
>
>
>
> >- vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
>
>
>
> >+ vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
>
>
>
> >+ vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
>
>
>
> > (const_int 2)
>
>
>
> >
>
>
>
> >- (eq_attr "type" "vimerge,vfmerge,vcompress")
>
>
>
> >+ (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>
>
>
> >+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
>
>
>
> > (const_int 1)
>
>
>
> >
>
>
>
> > (eq_attr "type" "vimuladd,vfmuladd")
>
>
>
> >@@ -740,7 +750,8 @@
>
>
>
> > vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
>
>
>
> > vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
>
>
>
> > vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
>
>
>
> >- vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
>
>
>
> >+ vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
>
>
>
> >+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
>
>
>
> > (const_int 4)
>
>
>
> >
>
>
>
> > ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
>
>
>
> >@@ -755,13 +766,15 @@
>
>
>
> > vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
>
>
>
> > vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
>
>
>
> > vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
>
>
>
> >- vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
>
>
>
> >+ vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
>
>
>
> >+ vror,vwsll,vclmul,vclmulh")
>
>
>
> > (const_int 5)
>
>
>
> >
>
>
>
> > (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
>
>
>
> > (const_int 6)
>
>
>
> >
>
>
>
> >- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
>
>
>
> >+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
>
>
>
> >+ vaesz,vsm4r")
>
>
>
> > (const_int 3)]
>
>
>
> > (const_int INVALID_ATTRIBUTE)))
>
>
>
> >
>
>
>
> >@@ -770,7 +783,8 @@
>
>
>
> > (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
>
>
>
> > vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
>
>
>
> > vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
>
>
>
> >- vcompress,vldff,vlsegde,vlsegdff")
>
>
>
> >+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
>
>
>
> >+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
>
>
>
> > (symbol_ref "riscv_vector::get_ta(operands[5])")
>
>
>
> >
>
>
>
> > ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
>
>
>
> >@@ -786,13 +800,13 @@
>
>
>
> > vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
>
>
>
> > vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
>
>
>
> > vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
>
>
>
> >- vlsegds,vlsegdux,vlsegdox")
>
>
>
> >+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
>
>
>
> > (symbol_ref "riscv_vector::get_ta(operands[6])")
>
>
>
> >
>
>
>
> > (eq_attr "type" "vimuladd,vfmuladd")
>
>
>
> > (symbol_ref "riscv_vector::get_ta(operands[7])")
>
>
>
> >
>
>
>
> >- (eq_attr "type" "vmidx")
>
>
>
> >+ (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
>
>
>
> > (symbol_ref "riscv_vector::get_ta(operands[4])")]
>
>
>
> > (const_int INVALID_ATTRIBUTE)))
>
>
>
> >
>
>
>
> >@@ -800,7 +814,7 @@
>
>
>
> > (define_attr "ma" ""
>
>
>
> > (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
>
>
>
> > vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
>
>
>
> >- vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
>
>
>
> >+ vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
>
>
>
> > (symbol_ref "riscv_vector::get_ma(operands[6])")
>
>
>
> >
>
>
>
> > ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
>
>
>
> >@@ -815,7 +829,8 @@
>
>
>
> > vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
>
>
>
> > vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
>
>
>
> > vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
>
>
>
> >- viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
>
>
>
> >+ viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
>
>
>
> >+ vror,vwsll,vclmul,vclmulh")
>
>
>
> > (symbol_ref "riscv_vector::get_ma(operands[7])")
>
>
>
> >
>
>
>
> > (eq_attr "type" "vimuladd,vfmuladd")
>
>
>
> >@@ -831,9 +846,10 @@
>
>
>
> > vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
>
>
>
> > vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
>
>
>
> > vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
>
>
>
> >- vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota")
>
>
>
> >+ vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota,vbrev,vbrev8,vrev8")
>
>
>
> > (const_int 7)
>
>
>
> >- (eq_attr "type" "vldm,vstm,vmalu,vmalu")
>
>
>
> >+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
>
>
>
> >+ vsm4r")
>
>
>
> > (const_int 5)
>
>
>
> >
>
>
>
> > ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
>
>
>
> >@@ -848,18 +864,19 @@
>
>
>
> > vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
>
>
>
> > vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
>
>
>
> > vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
>
>
>
> >- vlsegds,vlsegdux,vlsegdox")
>
>
>
> >+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
>
>
>
> > (const_int 8)
>
>
>
> >- (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
>
>
>
> >+ (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
>
>
>
> > (const_int 5)
>
>
>
> >
>
>
>
> > (eq_attr "type" "vimuladd,vfmuladd")
>
>
>
> > (const_int 9)
>
>
>
> >
>
>
>
> >- (eq_attr "type" "vmsfs,vmidx,vcompress")
>
>
>
> >+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
>
>
>
> >+ vsm4k,vsm3me,vsm3c")
>
>
>
> > (const_int 6)
>
>
>
> >
>
>
>
> >- (eq_attr "type" "vmpop,vmffs,vssegte")
>
>
>
> >+ (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
>
>
>
> > (const_int 4)]
>
>
>
> > (const_int INVALID_ATTRIBUTE)))
>
>
>
> >
>
>
>
> >--
>
>
>
> >2.17.1
>
>
On 12/26/23 19:47, Kito Cheng wrote:
> Thanks Feng, the patch is LGTM from my side. I am happy to accept the
> vector crypto stuff for GCC 14; it is mostly intrinsic work, and the
> few non-intrinsic pieces are also low-risk enough (e.g. vrol, vctz)
I won't object. I'm disappointed that we're in a similar situation as
last year, but at least the scope is smaller.
jeff
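
For context on the non-intrinsic cases Kito mentions: patterns such as vrol/vror (and vclz/vctz) can be reached from plain C via the middle end, not only through the planned intrinsics. A minimal sketch, assuming a toolchain configured with a Zvbb-enabling -march string such as rv64gcv_zvbb; whether the vectorizer actually emits vrol.vv depends on optimization flags and cost modelling:

#include <stddef.h>
#include <stdint.h>

/* A rotate-left idiom that GCC's rotate recognition understands; with
   the bitmanip_rotate patterns from this patch available, the
   vectorized loop body can become vrol.vv. */
void
rol32_array (uint32_t *dst, const uint32_t *src, const uint32_t *amt,
             size_t n)
{
  for (size_t i = 0; i < n; i++)
    dst[i] = (src[i] << (amt[i] & 31)) | (src[i] >> ((-amt[i]) & 31));
}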
@@ -304,7 +304,9 @@
(umax "maxu")
(clz "clz")
(ctz "ctz")
- (popcount "cpop")])
+ (popcount "cpop")
+ (rotate "rol")
+ (rotatert "ror")])
;; -------------------------------------------------------------------
;; Int Iterators.
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -427,6 +427,34 @@
;; vcompress vector compress instruction
;; vmov whole vector register move
;; vector unknown vector instruction
+;; 17. Crypto Vector instructions
+;; vandn crypto vector bitwise and-not instructions
+;; vbrev crypto vector reverse bits in elements instructions
+;; vbrev8 crypto vector reverse bits in bytes instructions
+;; vrev8 crypto vector reverse bytes instructions
+;; vclz crypto vector count leading zeros instructions
+;; vctz crypto vector count trailing zeros instructions
+;; vrol crypto vector rotate left instructions
+;; vror crypto vector rotate right instructions
+;; vwsll crypto vector widening shift left logical instructions
+;; vclmul crypto vector carry-less multiply - return low half instructions
+;; vclmulh crypto vector carry-less multiply - return high half instructions
+;; vghsh crypto vector add-multiply over GHASH Galois-Field instructions
+;; vgmul crypto vector multiply over GHASH Galois-Field instructions
+;; vaesef crypto vector AES final-round encryption instructions
+;; vaesem crypto vector AES middle-round encryption instructions
+;; vaesdf crypto vector AES final-round decryption instructions
+;; vaesdm crypto vector AES middle-round decryption instructions
+;; vaeskf1 crypto vector AES-128 Forward KeySchedule generation instructions
+;; vaeskf2 crypto vector AES-256 Forward KeySchedule generation instructions
+;; vaesz crypto vector AES round zero encryption/decryption instructions
+;; vsha2ms crypto vector SHA-2 message schedule instructions
+;; vsha2ch crypto vector SHA-2 two rounds of compression (high) instructions
+;; vsha2cl crypto vector SHA-2 two rounds of compression (low) instructions
+;; vsm4k crypto vector SM4 KeyExpansion instructions
+;; vsm4r crypto vector SM4 Rounds instructions
+;; vsm3me crypto vector SM3 Message Expansion instructions
+;; vsm3c crypto vector SM3 Compression instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -446,7 +474,9 @@
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
- vgather,vcompress,vmov,vector"
+ vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
+ vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
@@ -3777,6 +3807,7 @@
(include "thead.md")
(include "generic-ooo.md")
(include "vector.md")
+(include "vector-crypto.md")
(include "zicond.md")
(include "sfb.md")
(include "zc.md")
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100755
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,654 @@
+;; Machine description for the RISC-V Vector Crypto extensions.
+;; Copyright (C) 2023 Free Software Foundation, Inc.
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3. If not see
+;; <http://www.gnu.org/licenses/>.
+
+(define_c_enum "unspec" [
+ ;; Crypto vector unspecs (Zvbb, Zvbc, Zvkg, Zvkned, Zvknh[ab], Zvksed, Zvksh)
+ UNSPEC_VBREV
+ UNSPEC_VBREV8
+ UNSPEC_VREV8
+ UNSPEC_VCLMUL
+ UNSPEC_VCLMULH
+ UNSPEC_VGHSH
+ UNSPEC_VGMUL
+ UNSPEC_VAESEF
+ UNSPEC_VAESEFVV
+ UNSPEC_VAESEFVS
+ UNSPEC_VAESEM
+ UNSPEC_VAESEMVV
+ UNSPEC_VAESEMVS
+ UNSPEC_VAESDF
+ UNSPEC_VAESDFVV
+ UNSPEC_VAESDFVS
+ UNSPEC_VAESDM
+ UNSPEC_VAESDMVV
+ UNSPEC_VAESDMVS
+ UNSPEC_VAESZ
+ UNSPEC_VAESZVVNULL
+ UNSPEC_VAESZVS
+ UNSPEC_VAESKF1
+ UNSPEC_VAESKF2
+ UNSPEC_VSHA2MS
+ UNSPEC_VSHA2CH
+ UNSPEC_VSHA2CL
+ UNSPEC_VSM4K
+ UNSPEC_VSM4R
+ UNSPEC_VSM4RVV
+ UNSPEC_VSM4RVS
+ UNSPEC_VSM3ME
+ UNSPEC_VSM3C
+])
+
+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+
+(define_int_attr vv_ins_name [(UNSPEC_VGMUL "gmul" ) (UNSPEC_VAESEFVV "aesef")
+ (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
+ (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
+ (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
+ (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS "aesz" )
+ (UNSPEC_VSM4RVV "sm4r" ) (UNSPEC_VSM4RVS "sm4r" )])
+
+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
+ (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
+
+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+
+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
+
+(define_int_attr ins_type [(UNSPEC_VGMUL "vv") (UNSPEC_VAESEFVV "vv")
+ (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
+ (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
+ (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
+ (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS "vs")
+ (UNSPEC_VSM4RVV "vv") (UNSPEC_VSM4RVS "vs")])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL UNSPEC_VAESEFVV UNSPEC_VAESEMVV
+ UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
+ UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
+ UNSPEC_VAESZVS UNSPEC_VSM4RVV UNSPEC_VSM4RVS])
+
+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
+
+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+
+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
+
+;; zvbb instructions patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+(define_insn "@pred_vandn<mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vr, vd, vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1, vm,Wc1")
+ (match_operand 5 "vector_length_operand" "rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI
+ (not:VI (match_operand:VI 4 "register_operand" "vr, vr, vr, vr"))
+ (match_operand:VI 3 "register_operand" "vr, vr, vr, vr"))
+ (match_operand:VI 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+ [(set (match_operand:VI_QHS 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:VI_QHS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI_QHS
+ (not:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 4 "register_operand" " r, r, r, r")))
+ (match_operand:VI_QHS 3 "register_operand" "vr, vr,vr, vr"))
+ (match_operand:VI_QHS 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+;; Handle GET_MODE_INNER (mode) = DImode.  We need to split these cases
+;; since we must deal with SEW = 64 on RV32 systems.
+(define_expand "@pred_vandn<mode>_scalar"
+ [(set (match_operand:VI_D 0 "register_operand")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 5 "vector_length_operand")
+ (match_operand 6 "const_int_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI_D
+ (not:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 4 "reg_or_int_operand")))
+ (match_operand:VI_D 3 "register_operand"))
+ (match_operand:VI_D 2 "vector_merge_operand")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[4],
+ /* vl */operands[5],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_vandn<mode> (operands[0], operands[1],
+ operands[2], operands[3], broadcast_scalar, operands[5],
+ operands[6], operands[7], operands[8]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[8])))
+ DONE;
+})
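
As I read the expander: sew64_scalar_helper handles the cases where the 64-bit scalar cannot be kept in a single GPR on RV32, broadcasting it into a vector and reusing the .vv pattern via the lambda; when it declines, the *pred_vandn<mode>_scalar and _extended_scalar insns below match the .vx form directly. A scalar model showing the result is the same either way:

#include <stdint.h>

/* vandn.vx at SEW=64: vd[i] = vs2[i] & ~rs1.  On RV32 the helper
   materializes rs1 from two 32-bit halves (or proves it is a
   sign-extended 32-bit value); the computed result is unchanged. */
void
vandn_vx_sew64_model (uint64_t *vd, const uint64_t *vs2, uint64_t rs1,
                      int vl)
{
  for (int i = 0; i < vl; i++)
    vd[i] = vs2[i] & ~rs1;
}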
+
+(define_insn "*pred_vandn<mode>_scalar"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI_D
+ (not:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ")))
+ (match_operand:VI_D 3 "register_operand" " vr, vr,vr, vr"))
+ (match_operand:VI_D 2 "vector_merge_operand" " vu, vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vx\t%0,%3,%z4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vandn<mode>_extended_scalar"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI_D
+ (not:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ"))))
+ (match_operand:VI_D 3 "register_operand" " vr, vr,vr, vr"))
+ (match_operand:VI_D 2 "vector_merge_operand" " vu, vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vx\t%0,%3,%z4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (bitmanip_rotate:VI
+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
+ (match_operand:VI 4 "register_operand" " vr,vr, vr, vr"))
+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (bitmanip_rotate:VI
+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
+ (match_operand 4 "pmode_register_operand" " r, r, r, r"))
+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd, vr,vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (rotatert:VI
+ (match_operand:VI 3 "register_operand" " vr,vr, vr, vr")
+ (match_operand 4 "const_csr_operand" " K, K, K, K"))
+ (match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vror.vi\t%0,%3,%4%p1"
+ [(set_attr "type" "vror")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (ashift:VWEXTI
+ (zero_extend:VWEXTI
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "vr"))
+ (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+ "TARGET_ZVBB"
+ "vwsll.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=vd, vr, vd, vr, vd, vr, vd, vr, vd, vr, vd, vr, ?&vr, ?&vr")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (ashift:VWEXTI
+ (zero_extend:VWEXTI
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84, vr, vr"))
+ (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK"))
+ (match_operand:VWEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
+ "TARGET_ZVBB"
+ "vwsll.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
+ (set_attr "group_overlap" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,none,none")])
+
+;; vbrev.v vbrev8.v vrev8.v
+(define_insn "@pred_v<rev><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vr,vd,vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" "i, i, i, i")
+ (match_operand 6 "const_int_operand" "i, i, i, i")
+ (match_operand 7 "const_int_operand" "i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr,vr, vr, vr")]UNSPEC_VRBB8)
+ (match_operand:VI 2 "vector_merge_operand" "vu,vu, 0, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<rev>.v\t%0,%3%p1"
+ [(set_attr "type" "v<rev>")
+ (set_attr "mode" "<MODE>")])
+
+;; vclz.v vctz.v
+(define_insn "@pred_v<bitmanip_optab><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vr")
+ (clz_ctz_pcnt:VI
+ (parallel
+ [(match_operand:VI 2 "register_operand" "vr, vr")
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1")
+ (match_operand 3 "vector_length_operand" "rK, rK")
+ (match_operand 4 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))]
+ "TARGET_ZVBB"
+ "v<bitmanip_insn>.v\t%0,%2%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+;; zvbc instructions patterns.
+;; vclmul.vv vclmul.vx
+;; vclmulh.vv vclmulh.vx
+(define_insn "@pred_vclmul<h><mode>"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI_D
+ [(match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")
+ (match_operand:VI_D 4 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBC"
+ "vclmul<h>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<MODE>")])
+
+;; Deal with SEW = 64 on RV32 systems.
+(define_expand "@pred_vclmul<h><mode>_scalar"
+ [(set (match_operand:VI_D 0 "register_operand")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 5 "vector_length_operand")
+ (match_operand 6 "const_int_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI_D
+ [(vec_duplicate:VI_D
+ (match_operand:<VEL> 4 "register_operand"))
+ (match_operand:VI_D 3 "register_operand")]UNSPEC_CLMUL)
+ (match_operand:VI_D 2 "vector_merge_operand")))]
+ "TARGET_ZVBC"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[4],
+ /* vl */operands[5],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_vclmul<h><mode> (operands[0], operands[1],
+ operands[2], operands[3], broadcast_scalar, operands[5],
+ operands[6], operands[7], operands[8]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[8])))
+ DONE;
+})
+
+(define_insn "*pred_vclmul<h><mode>_scalar"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI_D
+ [(vec_duplicate:VI_D
+ (match_operand:<VEL> 4 "reg_or_0_operand" "rJ, rJ,rJ, rJ"))
+ (match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBC"
+ "vclmul<h>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vclmul<h><mode>_extend_scalar"
+ [(set (match_operand:VI_D 0 "register_operand" "=vd,vr,vd, vr")
+ (if_then_else:VI_D
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" "rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI_D
+ [(vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ")))
+ (match_operand:VI_D 3 "register_operand" "vr, vr,vr, vr")]UNSPEC_CLMUL)
+ (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0, 0")))]
+ "TARGET_ZVBC"
+ "vclmul<h>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<MODE>")])
+
+;; zvknh[ab] and zvkg instructions patterns.
+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
+(define_insn "@pred_v<vv_ins1_name><mode>"
+ [(set (match_operand:VQEXTI 0 "register_operand" "=vr")
+ (if_then_else:VQEXTI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VQEXTI
+ [(match_operand:VQEXTI 1 "register_operand" " 0")
+ (match_operand:VQEXTI 2 "register_operand" "vr")
+ (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB)
+ (match_dup 1)))]
+ "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
+ "v<vv_ins1_name>.vv\t%0,%2,%3"
+ [(set_attr "type" "v<vv_ins1_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; zvkned and zvksed and zvkg instructions patterns.
+;; vgmul.vv vaesz.vs
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
+;; vsm4r.[vv,vs]
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED || TARGET_ZVKG"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=&vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+ [(set (match_operand:<VSIX2> 0 "register_operand" "=&vr")
+ (if_then_else:<VSIX2>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX2>
+ [(match_operand:<VSIX2> 1 "register_operand" " 0")
+ (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+ [(set (match_operand:<VSIX4> 0 "register_operand" "=&vr")
+ (if_then_else:<VSIX4>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX4>
+ [(match_operand:<VSIX4> 1 "register_operand" " 0")
+ (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+ [(set (match_operand:<VSIX8> 0 "register_operand" "=&vr")
+ (if_then_else:<VSIX8>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX8>
+ [(match_operand:<VSIX8> 1 "register_operand" " 0")
+ (match_operand:VLMULX8_SI 2 "register_operand" " vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+ [(set (match_operand:<VSIX16> 0 "register_operand" "=&vr")
+ (if_then_else:<VSIX16>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX16>
+ [(match_operand:<VSIX16> 1 "register_operand" " 0")
+ (match_operand:VLMULX16_SI 2 "register_operand" " vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; vaeskf1.vi vsm4k.vi
+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 2 "register_operand" "vr, vr")
+ (match_operand 3 "const_int_operand" " i, i")] UNSPEC_CRYPTO_VI)
+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vi_ins_name>.vi\t%0,%2,%3"
+ [(set_attr "type" "v<vi_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi vsm3c.vi
+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")
+ (match_operand 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSH"
+ "v<vi_ins1_name>.vi\t%0,%2,%3"
+ [(set_attr "type" "v<vi_ins1_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; zvksh instructions patterns.
+;; vsm3me.vv
+(define_insn "@pred_vsm3me<mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 2 "register_operand" " vr, vr")
+ (match_operand:VSI 3 "register_operand" " vr, vr")] UNSPEC_VSM3ME)
+ (match_operand:VSI 1 "vector_merge_operand" " vu, 0")))]
+ "TARGET_ZVKSH"
+ "vsm3me.vv\t%0,%2,%3"
+ [(set_attr "type" "vsm3me")
+ (set_attr "mode" "<MODE>")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..317dc9de253 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3916,3 +3916,39 @@
(V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
(V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
(V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+
+(define_mode_iterator VSI [
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX2_SI [
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+ RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+ RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_attr VSIX2 [
+ (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+ (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+ (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+ (RVVMF2SI "RVVM8SI")
+])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index f607d768b26..caf1b88ba5e 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,9 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
- vssegtux,vssegtox,vlsegdff")
+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_string "true")]
(const_string "false")))
@@ -74,7 +76,9 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
- vssegtux,vssegtox,vlsegdff")
+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_string "true")]
(const_string "false")))
@@ -426,7 +430,11 @@
viwred,vfredu,vfredo,vfwredu,vfwredo,vimovvx,\
vimovxv,vfmovvf,vfmovfv,vslideup,vslidedown,\
vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
+ vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox,\
+ vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,\
+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,\
+ vsm3me,vsm3c")
(const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
@@ -698,10 +706,12 @@
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+ vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
+ vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
(const_int 2)
- (eq_attr "type" "vimerge,vfmerge,vcompress")
+ (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_int 1)
(eq_attr "type" "vimuladd,vfmuladd")
@@ -740,7 +750,8 @@
vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
- vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+ vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +766,15 @@
vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+ vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+ vror,vwsll,vclmul,vclmulh")
(const_int 5)
(eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaesz,vsm4r")
(const_int 3)]
(const_int INVALID_ATTRIBUTE)))
@@ -770,7 +783,8 @@
(cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
- vcompress,vldff,vlsegde,vlsegdff")
+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,13 +800,13 @@
vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
(symbol_ref "riscv_vector::get_ta(operands[6])")
(eq_attr "type" "vimuladd,vfmuladd")
(symbol_ref "riscv_vector::get_ta(operands[7])")
- (eq_attr "type" "vmidx")
+ (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
(symbol_ref "riscv_vector::get_ta(operands[4])")]
(const_int INVALID_ATTRIBUTE)))
@@ -800,7 +814,7 @@
(define_attr "ma" ""
(cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
- vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+ vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(symbol_ref "riscv_vector::get_ma(operands[6])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +829,8 @@
vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
- viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+ viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+ vror,vwsll,vclmul,vclmulh")
(symbol_ref "riscv_vector::get_ma(operands[7])")
(eq_attr "type" "vimuladd,vfmuladd")
@@ -831,9 +846,10 @@
vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
- vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota")
+ vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota,vbrev,vbrev8,vrev8")
(const_int 7)
- (eq_attr "type" "vldm,vstm,vmalu,vmalu")
+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
+ vsm4r")
(const_int 5)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -848,18 +864,19 @@
vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
(const_int 8)
- (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
+ (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
(const_int 5)
(eq_attr "type" "vimuladd,vfmuladd")
(const_int 9)
- (eq_attr "type" "vmsfs,vmidx,vcompress")
+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
+ vsm4k,vsm3me,vsm3c")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vssegte")
+ (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
(const_int 4)]
(const_int INVALID_ATTRIBUTE)))