[4/7] RISC-V: Add intrinsic functions for crypto vector Zvkned extension

Message ID 20231204025709.3783-4-wangfeng@eswincomputing.com

Commit Message

Feng Wang Dec. 4, 2023, 2:57 a.m. UTC
  This patch adds the intrinsic functions for the vector crypto Zvkned
extension, following
https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto/intrinsic_funcs.md.
All the corresponding test cases are added for API testing.
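
For reference, a minimal usage sketch of the new intrinsics.  The .vv and
.vs forms below are taken verbatim from the added vaesdf tests; the vaeskf1
form follows the intrinsic document linked above, with the round-number
immediate 1 chosen only as an example (wrapper names are illustrative):

  #include "riscv_vector.h"

  /* AES final-round decryption, element-group form (vaesdf.vv).  */
  vuint32m1_t
  dec_round_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl)
  {
    return __riscv_vaesdf_vv_u32m1 (vd, vs2, vl);
  }

  /* Scalar form (vaesdf.vs): the single element group in vs2 is applied
     to every group of vd, here crossing LMUL from mf2 to m1.  */
  vuint32m1_t
  dec_round_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl)
  {
    return __riscv_vaesdf_vs_u32mf2_u32m1 (vd, vs2, vl);
  }

  /* AES-128 forward key-schedule step (vaeskf1.vi).  */
  vuint32m1_t
  key_step (vuint32m1_t vs2, size_t vl)
  {
    return __riscv_vaeskf1_vi_u32m1 (vs2, 1, vl);
  }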

gcc/ChangeLog:

        * common/config/riscv/riscv-common.cc: Add zvkned to riscv_implied_info.
        * config/riscv/riscv-vector-builtins-bases.cc (class crypto_vv): Add new function_base for Zvkned.
        (class vaeskf1): Ditto.
        (class vgmul): Ditto.
        (class vaeskf2): Ditto.
        (BASE): Add Zvkned BASE declaration.
        * config/riscv/riscv-vector-builtins-bases.h: Ditto.
        * config/riscv/riscv-vector-builtins-shapes.cc (struct crypto_vv_def): Add Zvkned support to the name builder.
        (struct crypto_vi_def): New function_builder for vaeskf1/vaeskf2.
        (SHAPE): Add Zvkned SHAPE declaration.
        * config/riscv/riscv-vector-builtins-shapes.h: Ditto.
        * config/riscv/riscv-vector-builtins.cc (registered_function::overloaded_hash): Handle the size_t argument of the overloaded intrinsics.
        * config/riscv/riscv-vector-builtins.def (vi): Add new operator type.
        * config/riscv/riscv-vector-crypto-builtins-avail.h (AVAIL): Add enable condition.
        * config/riscv/riscv-vector-crypto-builtins-functions.def (vgmul): Optimize vgmul.
        (vaesef): Add intrinsic def.
        (vaesem): Ditto.
        (vaesdf): Ditto.
        (vaesdm): Ditto.
        (vaesz): Ditto.
        (vaeskf1): Ditto.
        (vaeskf2): Ditto.
        * config/riscv/riscv.md: Add Zvkned instruction type names.
        * config/riscv/vector-crypto.md (aesef): Add Zvkned md patterns.
        (vv): Ditto.
        (@pred_crypto_vv<vv_ins_name><ins_type><mode>): Ditto.
        (@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar): Ditto.
        (@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar): Ditto.
        (@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar): Ditto.
        (@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar): Ditto.
        (@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar): Ditto.
        (@pred_vaeskf1<mode>_scalar): Ditto.
        (@pred_vaeskf2<mode>_scalar): Ditto.
        * config/riscv/vector-iterators.md: Add new iterators for Zvkned.
        * config/riscv/vector.md: Add the corresponding attribute for Zvkned.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/zvk/zvk.exp: Run tests in the zvkned subdirectory.
        * gcc.target/riscv/zvk/zvkned/vaesdf.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesdm.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesef.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesem.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaeskf1.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaeskf2.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesz.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c: New test.
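
Naming note: per crypto_vv_def::get_name, the overloaded forms of
vghsh/vgmul/vaesz carry no op_type suffix, while the non-overloaded .vs
forms encode both the vs2 type and the destination type.  A sketch, with
types as in the added vaesz tests (wrapper names are illustrative):

  /* Non-overloaded: explicit _vs plus input and output type suffixes.  */
  vuint32m2_t
  zero_round (vuint32m2_t vd, vuint32m1_t vs2, size_t vl)
  {
    return __riscv_vaesz_vs_u32m1_u32m2 (vd, vs2, vl);
  }

  /* Overloaded: bare __riscv_vaesz, types deduced from the arguments.  */
  vuint32m2_t
  zero_round_o (vuint32m2_t vd, vuint32m1_t vs2, size_t vl)
  {
    return __riscv_vaesz (vd, vs2, vl);
  }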
---
 gcc/common/config/riscv/riscv-common.cc       |   1 +
 .../riscv/riscv-vector-builtins-bases.cc      |  80 +++++++-
 .../riscv/riscv-vector-builtins-bases.h       |   7 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  41 +++-
 .../riscv/riscv-vector-builtins-shapes.h      |   1 +
 gcc/config/riscv/riscv-vector-builtins.cc     |  62 +++++-
 gcc/config/riscv/riscv-vector-builtins.def    |   1 +
 .../riscv-vector-crypto-builtins-avail.h      |   1 +
 ...riscv-vector-crypto-builtins-functions.def |  34 +++-
 gcc/config/riscv/riscv.md                     |   8 +-
 gcc/config/riscv/vector-crypto.md             | 184 ++++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  32 +++
 gcc/config/riscv/vector.md                    |  23 ++-
 gcc/testsuite/gcc.target/riscv/zvk/zvk.exp    |   2 +
 .../gcc.target/riscv/zvk/zvkned/vaesdf.c      | 169 ++++++++++++++++
 .../riscv/zvk/zvkned/vaesdf_overloaded.c      | 169 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvkned/vaesdm.c      | 170 ++++++++++++++++
 .../riscv/zvk/zvkned/vaesdm_overloaded.c      | 170 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvkned/vaesef.c      | 170 ++++++++++++++++
 .../riscv/zvk/zvkned/vaesef_overloaded.c      | 170 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvkned/vaesem.c      | 170 ++++++++++++++++
 .../riscv/zvk/zvkned/vaesem_overloaded.c      | 170 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvkned/vaeskf1.c     |  50 +++++
 .../riscv/zvk/zvkned/vaeskf1_overloaded.c     |  50 +++++
 .../gcc.target/riscv/zvk/zvkned/vaeskf2.c     |  50 +++++
 .../riscv/zvk/zvkned/vaeskf2_overloaded.c     |  50 +++++
 .../gcc.target/riscv/zvk/zvkned/vaesz.c       | 130 +++++++++++++
 .../riscv/zvk/zvkned/vaesz_overloaded.c       | 130 +++++++++++++
 28 files changed, 2278 insertions(+), 17 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
  

Patch

diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 3eefd0263f9..60a174d4801 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -124,6 +124,7 @@  static const riscv_implied_info_t riscv_implied_info[] =
   {"zvbc",     "v"},
   {"zvkb",     "v"},
   {"zvkg",     "v"},
+  {"zvkned",   "v"},
 
   {"zfh", "zfhmin"},
   {"zfhmin", "f"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 0cb9b2925af..61167c8d4e4 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2209,6 +2209,7 @@  public:
   }
 };
 
+/* Implements clmul.  */
 template<int UNSPEC>
 class clmul : public function_base
 {
@@ -2239,16 +2240,75 @@  public:
   }
 };
 
+/* Implements vgmul/vaes*. */
+template<int UNSPEC>
+class crypto_vv : public function_base
+{
+public:
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    poly_uint64 nunits = 0U;
+    switch (e.op_info->op)
+    {
+      case OP_TYPE_vv:
+        if (UNSPEC == UNSPEC_VGMUL)
+          return e.use_exact_insn (code_for_pred_crypto_vv (UNSPEC, UNSPEC, e.vector_mode ()));
+        else
+          return e.use_exact_insn (code_for_pred_crypto_vv
+                                 (UNSPEC + 1, UNSPEC + 1, e.vector_mode ()));
+      case OP_TYPE_vs:
+        /* Calculate the ratio between arg0 and arg1.  */
+        multiple_p (GET_MODE_BITSIZE (e.arg_mode (0)),
+                    GET_MODE_BITSIZE (e.arg_mode (1)), &nunits);
+        if (maybe_eq (nunits, 1U))
+          return e.use_exact_insn (code_for_pred_crypto_vvx1_scalar
                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+        else if (maybe_eq (nunits, 2U))
+          return e.use_exact_insn (code_for_pred_crypto_vvx2_scalar
+                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+        else if (maybe_eq (nunits, 4U))
+          return e.use_exact_insn (code_for_pred_crypto_vvx4_scalar
+                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+        else if (maybe_eq (nunits, 8U))
+          return e.use_exact_insn (code_for_pred_crypto_vvx8_scalar
+                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+        else
+          return e.use_exact_insn (code_for_pred_crypto_vvx16_scalar
+                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+    }
+  }
+};
+
+/* Implements vaeskf1. */
+class vaeskf1 : public function_base
+{
+public:
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
 
-class vgmul : public function_base
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_vaeskf1_scalar (e.vector_mode ()));
+  }
+};
+
+/* Implements vaeskf2. */
+class vaeskf2 : public function_base
 {
 public:
   bool apply_mask_policy_p () const override { return false; }
   bool use_mask_predication_p () const override { return false; }
   bool has_merge_operand_p () const override { return false; }
+
   rtx expand (function_expander &e) const override
   {
-      return e.use_exact_insn (code_for_pred_vgmul (e.vector_mode ()));
+    return e.use_exact_insn (code_for_pred_vaeskf2_scalar (e.vector_mode ()));
   }
 };
 
@@ -2522,7 +2582,14 @@  static CONSTEXPR const vwsll vwsll_obj;
 static CONSTEXPR const clmul<UNSPEC_VCLMUL>      vclmul_obj;
 static CONSTEXPR const clmul<UNSPEC_VCLMULH>     vclmulh_obj;
 static CONSTEXPR const vghsh vghsh_obj;
-static CONSTEXPR const vgmul vgmul_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VGMUL>   vgmul_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESEF>  vaesef_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESEM>  vaesem_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESDF>  vaesdf_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESDM>  vaesdm_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESZ>   vaesz_obj;
+static CONSTEXPR const vaeskf1 vaeskf1_obj;
+static CONSTEXPR const vaeskf2 vaeskf2_obj;
 
 /* Declare the function base NAME, pointing it to an instance
    of class <NAME>_obj.  */
@@ -2799,4 +2866,11 @@  BASE (vclmul)
 BASE (vclmulh)
 BASE (vghsh)
 BASE (vgmul)
+BASE (vaesef)
+BASE (vaesem)
+BASE (vaesdf)
+BASE (vaesdm)
+BASE (vaesz)
+BASE (vaeskf1)
+BASE (vaeskf2)
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 6a389113e1f..a420d9acd2c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -294,6 +294,13 @@  extern const function_base *const vclmul;
 extern const function_base *const vclmulh;
 extern const function_base *const vghsh;
 extern const function_base *const vgmul;
+extern const function_base *const vaesef;
+extern const function_base *const vaesem;
+extern const function_base *const vaesdf;
+extern const function_base *const vaesdm;
+extern const function_base *const vaesz;
+extern const function_base *const vaeskf1;
+extern const function_base *const vaeskf2;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index dd62d8b11b6..22a2689eae5 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -1009,7 +1009,7 @@  struct zvbb_zvbc_def : public build_base
   }
 };
 
-/* vghsh/vgmul class.  */
+/* vghsh/vgmul/vaes* class.  */
 struct crypto_vv_def : public build_base
 {
   char *get_name (function_builder &b, const function_instance &instance,
@@ -1019,13 +1019,49 @@  struct crypto_vv_def : public build_base
     if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
       return nullptr;
     b.append_base_name (instance.base_name);
+    /* There is no op_type name in the vghsh/vgmul/vaesz overloaded intrinsics.  */
+    if (!((strcmp (instance.base_name, "vghsh") == 0
+          || strcmp (instance.base_name, "vgmul") == 0
+          || strcmp (instance.base_name, "vaesz") == 0)
+          && overloaded_p))
+      b.append_name (operand_suffixes[instance.op_info->op]);
+    if (!overloaded_p)
+    {
+      if (instance.op_info->op == OP_TYPE_vv)
+        b.append_name (type_suffixes[instance.type.index].vector);
+      else
+      {
+        vector_type_index arg0_type_idx
+          = instance.op_info->args[1].get_function_type_index
+            (instance.type.index);
+        b.append_name (type_suffixes[arg0_type_idx].vector);
+        vector_type_index ret_type_idx
+          = instance.op_info->ret.get_function_type_index
+            (instance.type.index);
+        b.append_name (type_suffixes[ret_type_idx].vector);
+      }
+    }
+
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
 
+/* vaeskf1/vaeskf2 class.  */
+struct crypto_vi_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+                  bool overloaded_p) const override
+  {
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+    b.append_base_name (instance.base_name);
     if (!overloaded_p)
     {
       b.append_name (operand_suffixes[instance.op_info->op]);
       b.append_name (type_suffixes[instance.type.index].vector);
     }
-
     b.append_name (predication_suffixes[instance.pred]);
     return b.finish_name ();
   }
@@ -1061,4 +1097,5 @@  SHAPE(seg_indexed_loadstore, seg_indexed_loadstore)
 SHAPE(seg_fault_load, seg_fault_load)
 SHAPE(zvbb_zvbc, zvbb_zvbc)
 SHAPE(crypto_vv, crypto_vv)
+SHAPE(crypto_vi, crypto_vi)
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index 37b7077a3b1..3bb89b575ac 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -55,6 +55,7 @@  extern const function_shape *const seg_fault_load;
 /* Below function_shapes are for Vector Crypto.  */
 extern const function_shape *const zvbb_zvbc;
 extern const function_shape *const crypto_vv;
+extern const function_shape *const crypto_vi;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index eaefb0f18cc..45162b289ec 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -2642,6 +2642,22 @@  static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
 /* A static operand information for vector_type func (vector_type).
    Some instructions just support SEW=32, such as the crypto vector Zvkg
    extension.  */
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x2_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x2),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x4_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x4),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x8_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x8),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x16_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x16),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
 static CONSTEXPR const rvv_op_info u_vvv_crypto_sew32_ops
   = {crypto_sew32_ops,			   /* Types */
      OP_TYPE_vv,					   /* Suffix */
@@ -2654,6 +2670,48 @@  static CONSTEXPR const rvv_op_info u_vvvv_crypto_sew32_ops
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
      vvv_args /* Args */};
 
+static CONSTEXPR const rvv_op_info u_vvv_size_crypto_sew32_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vi,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vv_size_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vv_size_crypto_sew32_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vi,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_size_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x2_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
+     vs_lmul_x2_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x4_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x4), /* Return type */
+     vs_lmul_x4_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x8_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x8), /* Return type */
+     vs_lmul_x8_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x16_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x16), /* Return type */
+     vs_lmul_x16_args /* Args */};
+
 /* A static operand information for vector_type func (vector_type).
   Some instructions just support SEW=64, such as the crypto vector Zvbc extension
    vclmul.vv, vclmul.vx.
@@ -4250,7 +4308,9 @@  registered_function::overloaded_hash (const vec<tree, va_gc> &arglist)
        __riscv_vset(vint8m2_t dest, size_t index, vint8m1_t value); The reason
        is the same as above. */
       if ((instance.base == bases::vget && (i == (len - 1)))
-	  || (instance.base == bases::vset && (i == (len - 2))))
+	  || ((instance.base == bases::vset
+               || instance.shape == shapes::crypto_vi)
+             && (i == (len - 2))))
 	argument_types.safe_push (size_type_node);
       /* Vector fixed-point arithmetic instructions requiring argument vxrm.
 	     For example: vuint32m4_t __riscv_vaaddu(vuint32m4_t vs2,
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index 6661629aad8..0c3ee3b2986 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -558,6 +558,7 @@  DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, RVVM8DF, _f64m8,
 
 DEF_RVV_OP_TYPE (vv)
 DEF_RVV_OP_TYPE (vx)
+DEF_RVV_OP_TYPE (vi)
 DEF_RVV_OP_TYPE (v)
 DEF_RVV_OP_TYPE (wv)
 DEF_RVV_OP_TYPE (wx)
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
index fb1f195bf9b..8b993fb31f5 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -16,5 +16,6 @@  AVAIL (zvbb, TARGET_ZVBB)
 AVAIL (zvbc, TARGET_ZVBC)
 AVAIL (zvkb_or_zvbb, TARGET_ZVKB || TARGET_ZVBB)
 AVAIL (zvkg, TARGET_ZVKG)
+AVAIL (zvkned, TARGET_ZVKNED)
 }
 #endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
index c2ed9353e24..9fea9f1a757 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -24,4 +24,36 @@  DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvv_crypto_sew64_o
 DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvx_crypto_sew64_ops, zvbc)
 //ZVKG
 DEF_VECTOR_CRYPTO_FUNCTION(vghsh, crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvkg)
-DEF_VECTOR_CRYPTO_FUNCTION(vgmul, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops,  zvkg)
\ No newline at end of file
+DEF_VECTOR_CRYPTO_FUNCTION(vgmul, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops,  zvkg)
+//ZVKNED
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaeskf1,  crypto_vi, none_tu_preds, u_vv_size_crypto_sew32_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaeskf2,  crypto_vi, none_tu_preds, u_vvv_size_crypto_sew32_ops, zvkned)
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 1ead762e552..39b4e4b2f6a 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -441,6 +441,12 @@ 
 ;; vclmulh      vector crypto carry-less multiply - return high half instructions
 ;; vghsh        vector crypto add-multiply over GHASH Galois-Field instructions
 ;; vgmul        vector crypto multiply over GHASH Galois-Field instructions
+;; vaesef       vector crypto AES final-round encryption instructions
+;; vaesem       vector crypto AES middle-round encryption instructions
+;; vaesdf       vector crypto AES final-round decryption instructions
+;; vaesdm       vector crypto AES middle-round decryption instructions
+;; vaeskf1      vector crypto AES-128 Forward KeySchedule generation instructions
+;; vaeskf2      vector crypto AES-256 Forward KeySchedule generation instructions
 (define_attr "type"
   "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -461,7 +467,7 @@ 
    vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
    vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
    vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,
-   vclmul,vclmulh,vghsh,vgmul"
+   vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz"
   (cond [(eq_attr "got" "load") (const_string "load")
 
 	 ;; If a doubleword move uses these expensive instructions,
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
index edc7dc9d432..e84aaf50fc0 100755
--- a/gcc/config/riscv/vector-crypto.md
+++ b/gcc/config/riscv/vector-crypto.md
@@ -13,6 +13,23 @@ 
     UNSPEC_VCLMULH
     UNSPEC_VGHSH
     UNSPEC_VGMUL
+    UNSPEC_VAESEF
+    UNSPEC_VAESEFVV
+    UNSPEC_VAESEFVS
+    UNSPEC_VAESEM
+    UNSPEC_VAESEMVV
+    UNSPEC_VAESEMVS
+    UNSPEC_VAESDF
+    UNSPEC_VAESDFVV
+    UNSPEC_VAESDFVS
+    UNSPEC_VAESDM
+    UNSPEC_VAESDMVV
+    UNSPEC_VAESDMVS
+    UNSPEC_VAESZ
+    UNSPEC_VAESZVVNULL
+    UNSPEC_VAESZVS
+    UNSPEC_VAESKF1
+    UNSPEC_VAESKF2
 ])
 
 (define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
@@ -23,6 +40,18 @@ 
 
 (define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
 
+(define_int_attr vv_ins_name [(UNSPEC_VGMUL    "gmul" ) (UNSPEC_VAESEFVV "aesef")
+                              (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
+                              (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
+                              (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
+                              (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS  "aesz" )])
+
+(define_int_attr ins_type [(UNSPEC_VGMUL    "vv") (UNSPEC_VAESEFVV "vv")
+                           (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
+                           (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
+                           (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
+                           (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS  "vs")])
+
 (define_int_iterator UNSPEC_VRORL [UNSPEC_VROL UNSPEC_VROR])
 
 (define_int_iterator UNSPEC_VCLTZ [UNSPEC_VCLZ UNSPEC_VCTZ])
@@ -31,6 +60,11 @@ 
 
 (define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
 
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL    UNSPEC_VAESEFVV UNSPEC_VAESEMVV
+                                       UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
+                                       UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
+                                       UNSPEC_VAESZVS])
+
 ;; zvbb instructions patterns.
 ;; vandn.vv vandn.vx vrol.vv vrol.vx
 ;; vror.vv vror.vx vror.vi
@@ -296,3 +330,153 @@ 
   "vgmul.vv\t%0,%2"
   [(set_attr "type" "vgmul")
    (set_attr "mode" "<VSI:MODE>")])
+
+;; zvkg and zvkned instructions patterns.
+;; vgmul.vv       vaesz.vs
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 1 "register_operand" " 0")
+              (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 1 "register_operand" " 0")
+              (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+  [(set (match_operand:<VSIX2> 0 "register_operand"           "=vd")
+        (if_then_else:<VSIX2>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX2>
+             [(match_operand:<VSIX2> 1  "register_operand" " 0")
+              (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VLMULX2_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+ [(set (match_operand:<VSIX4> 0 "register_operand"           "=vd")
+       (if_then_else:<VSIX4>
+         (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+            (reg:SI VL_REGNUM)
+            (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+         (unspec:<VSIX4>
+            [(match_operand:<VSIX4> 1 "register_operand" " 0")
+             (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+         (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+  (set_attr "mode" "<VLMULX4_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+ [(set (match_operand:<VSIX8> 0 "register_operand"           "=vd")
+       (if_then_else:<VSIX8>
+         (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+            (reg:SI VL_REGNUM)
+            (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+         (unspec:<VSIX8>
+            [(match_operand:<VSIX8> 1 "register_operand" " 0")
+             (match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+         (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+  (set_attr "mode" "<VLMULX8_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+ [(set (match_operand:<VSIX16> 0 "register_operand"           "=vd")
+       (if_then_else:<VSIX16>
+         (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+            (reg:SI VL_REGNUM)
+            (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+         (unspec:<VSIX16>
+            [(match_operand:<VSIX16> 1 "register_operand" " 0")
+             (match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+         (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+  (set_attr "mode" "<VLMULX16_SI:MODE>")])
+
+;; vaeskf1.vi
+(define_insn "@pred_vaeskf1<mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd, vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand"     "rK, rK")
+             (match_operand 5 "const_int_operand"         " i,  i")
+             (match_operand 6 "const_int_operand"         " i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 2       "register_operand"  "vr, vr")
+              (match_operand:<VEL> 3 "const_int_operand" " i,  i")] UNSPEC_VAESKF1)
+          (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVKNED"
+  "vaeskf1.vi\t%0,%2,%3"
+  [(set_attr "type" "vaeskf1")
+   (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi
+(define_insn "@pred_vaeskf2<mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand"     "rK")
+             (match_operand 5 "const_int_operand"         " i")
+             (match_operand 6 "const_int_operand"         " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 1   "register_operand"  "0")
+              (match_operand:VSI 2   "register_operand"  "vr")
+              (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_VAESKF2)
+          (match_dup 1)))]
+  "TARGET_ZVKNED"
+  "vaeskf2.vi\t%0,%2,%3"
+  [(set_attr "type" "vaeskf2")
+   (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index fea84a3f54c..1b16b476035 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3921,6 +3921,38 @@ 
   RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
 ])
 
+(define_mode_iterator VLMULX2_SI [
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+  RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+  RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_attr VSIX2 [
+  (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+  (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+  (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+  (RVVMF2SI "RVVM8SI")
+])
+
 (define_mode_iterator VDI [
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index aa529d6378f..66a2e9358cb 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -53,7 +53,8 @@ 
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
 			  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
-                          vror,vwsll,vclmul,vclmulh,vghsh,vgmul")
+                          vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+                          vaeskf1,vaeskf2,vaesz")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -76,7 +77,8 @@ 
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
 			  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
-                          vror,vwsll,vclmul,vclmulh,vghsh,vgmul")
+                          vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+                          vaeskf1,vaeskf2,vaesz")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -704,7 +706,8 @@ 
                                 vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,vclmul,vclmulh")
 	       (const_int 2)
 
-	       (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul")
+	       (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+                                vaeskf1,vaeskf2,vaesz")
 	       (const_int 1)
 
 	       (eq_attr "type" "vimuladd,vfmuladd")
@@ -744,7 +747,7 @@ 
 			  vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
 			  vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
 			  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
-                          vghsh")
+                          vghsh,vaeskf1,vaeskf2")
 	   (const_int 4)
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -766,7 +769,8 @@ 
 	 (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
 	   (const_int 6)
 
-	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul")
+	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+                          vaesz")
 	   (const_int 3)]
   (const_int INVALID_ATTRIBUTE)))
 
@@ -775,7 +779,8 @@ 
   (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
 			  vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
 			  vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
-			  vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh")
+			  vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
+                          vaeskf1,vaeskf2")
 	   (symbol_ref "riscv_vector::get_ta(operands[5])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -797,7 +802,7 @@ 
 	 (eq_attr "type" "vimuladd,vfmuladd")
 	   (symbol_ref "riscv_vector::get_ta(operands[7])")
 
-	 (eq_attr "type" "vmidx,vgmul")
+	 (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz")
 	   (symbol_ref "riscv_vector::get_ta(operands[4])")]
 	(const_int INVALID_ATTRIBUTE)))
 
@@ -839,7 +844,7 @@ 
 			  vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
 			  vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (const_int 7)
-	 (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul")
+	 (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz")
 	   (const_int 5)
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -862,7 +867,7 @@ 
 	 (eq_attr "type" "vimuladd,vfmuladd")
 	   (const_int 9)
 
-	 (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh")
+	 (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2")
 	   (const_int 6)
 
 	 (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
index c1b9eede6ba..b47602e1c83 100644
--- a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -40,6 +40,8 @@  dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbc/*.\[cS\]]] \
         "" $DEFAULT_CFLAGS
 dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkg/*.\[cS\]]] \
         "" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkned/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
 
 # All done.
 dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
new file mode 100644
index 00000000000..8fcfd493f2f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
@@ -0,0 +1,169 @@ 
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+/* non-policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
new file mode 100644
index 00000000000..b8570818358
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
@@ -0,0 +1,169 @@ 
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+/* non-policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
new file mode 100644
index 00000000000..1d4a1711cc9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
@@ -0,0 +1,170 @@ 
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
new file mode 100644
index 00000000000..4247ba3901b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
new file mode 100644
index 00000000000..93a79ffa51c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
new file mode 100644
index 00000000000..9e3998ef055
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
new file mode 100644
index 00000000000..43e468c6f0e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
new file mode 100644
index 00000000000..bb2e7dea733
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
new file mode 100644
index 00000000000..0edbb6d9108
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m2(vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m4(vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf1\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
new file mode 100644
index 00000000000..63e3537a06b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf1\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
new file mode 100644
index 00000000000..06fed681d6a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32mf2(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m1(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m2(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m4(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m8(vd, vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32mf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m1_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m4_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf2\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
new file mode 100644
index 00000000000..da7f42aef88
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf2\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
new file mode 100644
index 00000000000..fbbbeaa78ed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vaesz\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
new file mode 100644
index 00000000000..9130fbdc4ef
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vaesz\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
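---

For reviewers unfamiliar with the Zvkned instruction flow, here is a minimal
usage sketch (not part of the patch) showing how the intrinsics exercised
above compose into AES-128 key expansion and single-block encryption. It
assumes VLEN >= 128 so that one u32m1 register group holds a full 128-bit
block; the helper names aes128_expand_key and aes128_encrypt_block are
hypothetical. vaeskf2 plays the analogous key-schedule role for AES-256, and
vaesdm/vaesdf mirror vaesem/vaesef for the decryption rounds.

#include <riscv_vector.h>
#include <stdint.h>

/* Derive the AES-128 round keys with vaeskf1.vi.  The round index must
   be a compile-time immediate, so the loop is expressed as a macro.  */
#define EXPAND_STEP(i)                                  \
  do {                                                  \
    k = __riscv_vaeskf1_vi_u32m1 (k, (i), vl);          \
    __riscv_vse32_v_u32m1 (rk + 4 * (i), k, vl);        \
  } while (0)

void
aes128_expand_key (const uint32_t key[4], uint32_t rk[44])
{
  size_t vl = 4;                      /* 4 x 32-bit words = 128 bits.  */
  vuint32m1_t k = __riscv_vle32_v_u32m1 (key, vl);
  __riscv_vse32_v_u32m1 (rk, k, vl);  /* Round key 0 is the cipher key.  */
  EXPAND_STEP (1); EXPAND_STEP (2); EXPAND_STEP (3); EXPAND_STEP (4);
  EXPAND_STEP (5); EXPAND_STEP (6); EXPAND_STEP (7); EXPAND_STEP (8);
  EXPAND_STEP (9); EXPAND_STEP (10);
}

void
aes128_encrypt_block (const uint32_t rk[44], const uint32_t in[4],
                      uint32_t out[4])
{
  size_t vl = 4;
  vuint32m1_t state = __riscv_vle32_v_u32m1 (in, vl);
  /* Round 0: AddRoundKey only.  */
  state = __riscv_vaesz_vs_u32m1_u32m1 (state,
                                        __riscv_vle32_v_u32m1 (rk, vl), vl);
  /* Rounds 1-9: SubBytes/ShiftRows/MixColumns/AddRoundKey.  */
  for (int r = 1; r <= 9; r++)
    state = __riscv_vaesem_vv_u32m1 (state,
                                     __riscv_vle32_v_u32m1 (rk + 4 * r, vl),
                                     vl);
  /* Round 10: final round, no MixColumns.  */
  state = __riscv_vaesef_vv_u32m1 (state,
                                   __riscv_vle32_v_u32m1 (rk + 40, vl), vl);
  __riscv_vse32_v_u32m1 (out, state, vl);
}

With vl fixed at one block this is just scalar AES; the .vs forms covered by
the tests instead broadcast the single round key in vs2 across all element
groups of a wider vd (e.g. u32m4), which is what lets these intrinsics
vectorize multi-block ECB/CTR-style workloads and is why every mixed-LMUL
vd/vs2 combination above gets its own test.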