[14/23] arm: [MVE intrinsics] rework vmaxq vminq

Message ID 20230505083930.101210-14-christophe.lyon@arm.com
State Accepted
Series [01/23] arm: [MVE intrinsics] add binary_round_lshift shape

Checks

Context Check Description
snail/gcc-patch-check success Github commit url

Commit Message

Christophe Lyon May 5, 2023, 8:39 a.m. UTC
  Implement vmaxq and vminq using the new MVE builtins framework.

2022-09-08  Christophe Lyon  <christophe.lyon@arm.com>

	gcc/
	* config/arm/arm-mve-builtins-base.cc (FUNCTION_WITH_RTX_M_NO_F): New.
	(vmaxq, vminq): New.
	* config/arm/arm-mve-builtins-base.def (vmaxq, vminq): New.
	* config/arm/arm-mve-builtins-base.h (vmaxq, vminq): New.
	* config/arm/arm_mve.h (vminq): Remove.
	(vmaxq): Remove.
	(vmaxq_m): Remove.
	(vminq_m): Remove.
	(vminq_x): Remove.
	(vmaxq_x): Remove.
	(vminq_u8): Remove.
	(vmaxq_u8): Remove.
	(vminq_s8): Remove.
	(vmaxq_s8): Remove.
	(vminq_u16): Remove.
	(vmaxq_u16): Remove.
	(vminq_s16): Remove.
	(vmaxq_s16): Remove.
	(vminq_u32): Remove.
	(vmaxq_u32): Remove.
	(vminq_s32): Remove.
	(vmaxq_s32): Remove.
	(vmaxq_m_s8): Remove.
	(vmaxq_m_s32): Remove.
	(vmaxq_m_s16): Remove.
	(vmaxq_m_u8): Remove.
	(vmaxq_m_u32): Remove.
	(vmaxq_m_u16): Remove.
	(vminq_m_s8): Remove.
	(vminq_m_s32): Remove.
	(vminq_m_s16): Remove.
	(vminq_m_u8): Remove.
	(vminq_m_u32): Remove.
	(vminq_m_u16): Remove.
	(vminq_x_s8): Remove.
	(vminq_x_s16): Remove.
	(vminq_x_s32): Remove.
	(vminq_x_u8): Remove.
	(vminq_x_u16): Remove.
	(vminq_x_u32): Remove.
	(vmaxq_x_s8): Remove.
	(vmaxq_x_s16): Remove.
	(vmaxq_x_s32): Remove.
	(vmaxq_x_u8): Remove.
	(vmaxq_x_u16): Remove.
	(vmaxq_x_u32): Remove.
	(__arm_vminq_u8): Remove.
	(__arm_vmaxq_u8): Remove.
	(__arm_vminq_s8): Remove.
	(__arm_vmaxq_s8): Remove.
	(__arm_vminq_u16): Remove.
	(__arm_vmaxq_u16): Remove.
	(__arm_vminq_s16): Remove.
	(__arm_vmaxq_s16): Remove.
	(__arm_vminq_u32): Remove.
	(__arm_vmaxq_u32): Remove.
	(__arm_vminq_s32): Remove.
	(__arm_vmaxq_s32): Remove.
	(__arm_vmaxq_m_s8): Remove.
	(__arm_vmaxq_m_s32): Remove.
	(__arm_vmaxq_m_s16): Remove.
	(__arm_vmaxq_m_u8): Remove.
	(__arm_vmaxq_m_u32): Remove.
	(__arm_vmaxq_m_u16): Remove.
	(__arm_vminq_m_s8): Remove.
	(__arm_vminq_m_s32): Remove.
	(__arm_vminq_m_s16): Remove.
	(__arm_vminq_m_u8): Remove.
	(__arm_vminq_m_u32): Remove.
	(__arm_vminq_m_u16): Remove.
	(__arm_vminq_x_s8): Remove.
	(__arm_vminq_x_s16): Remove.
	(__arm_vminq_x_s32): Remove.
	(__arm_vminq_x_u8): Remove.
	(__arm_vminq_x_u16): Remove.
	(__arm_vminq_x_u32): Remove.
	(__arm_vmaxq_x_s8): Remove.
	(__arm_vmaxq_x_s16): Remove.
	(__arm_vmaxq_x_s32): Remove.
	(__arm_vmaxq_x_u8): Remove.
	(__arm_vmaxq_x_u16): Remove.
	(__arm_vmaxq_x_u32): Remove.
	(__arm_vminq): Remove.
	(__arm_vmaxq): Remove.
	(__arm_vmaxq_m): Remove.
	(__arm_vminq_m): Remove.
	(__arm_vminq_x): Remove.
	(__arm_vmaxq_x): Remove.
---
 gcc/config/arm/arm-mve-builtins-base.cc  |  11 +
 gcc/config/arm/arm-mve-builtins-base.def |   2 +
 gcc/config/arm/arm-mve-builtins-base.h   |   2 +
 gcc/config/arm/arm_mve.h                 | 628 -----------------------
 4 files changed, 15 insertions(+), 628 deletions(-)
  

Comments

Kyrylo Tkachov May 5, 2023, 10:59 a.m. UTC | #1
> -----Original Message-----
> From: Christophe Lyon <christophe.lyon@arm.com>
> Sent: Friday, May 5, 2023 9:39 AM
> To: gcc-patches@gcc.gnu.org; Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>;
> Richard Earnshaw <Richard.Earnshaw@arm.com>; Richard Sandiford
> <Richard.Sandiford@arm.com>
> Cc: Christophe Lyon <Christophe.Lyon@arm.com>
> Subject: [PATCH 14/23] arm: [MVE intrinsics] rework vmaxq vminq
> 
> Implement vmaxq and vminq using the new MVE builtins framework.

Ok.
Thanks,
Kyrill

> 
> 2022-09-08  Christophe Lyon  <christophe.lyon@arm.com>
> 
> 	gcc/
> 	* config/arm/arm-mve-builtins-base.cc
> (FUNCTION_WITH_RTX_M_NO_F): New.
> 	(vmaxq, vminq): New.
> 	* config/arm/arm-mve-builtins-base.def (vmaxq, vminq): New.
> 	* config/arm/arm-mve-builtins-base.h (vmaxq, vminq): New.
> 	* config/arm/arm_mve.h (vminq): Remove.
> 	(vmaxq): Remove.
> 	(vmaxq_m): Remove.
> 	(vminq_m): Remove.
> 	(vminq_x): Remove.
> 	(vmaxq_x): Remove.
> 	(vminq_u8): Remove.
> 	(vmaxq_u8): Remove.
> 	(vminq_s8): Remove.
> 	(vmaxq_s8): Remove.
> 	(vminq_u16): Remove.
> 	(vmaxq_u16): Remove.
> 	(vminq_s16): Remove.
> 	(vmaxq_s16): Remove.
> 	(vminq_u32): Remove.
> 	(vmaxq_u32): Remove.
> 	(vminq_s32): Remove.
> 	(vmaxq_s32): Remove.
> 	(vmaxq_m_s8): Remove.
> 	(vmaxq_m_s32): Remove.
> 	(vmaxq_m_s16): Remove.
> 	(vmaxq_m_u8): Remove.
> 	(vmaxq_m_u32): Remove.
> 	(vmaxq_m_u16): Remove.
> 	(vminq_m_s8): Remove.
> 	(vminq_m_s32): Remove.
> 	(vminq_m_s16): Remove.
> 	(vminq_m_u8): Remove.
> 	(vminq_m_u32): Remove.
> 	(vminq_m_u16): Remove.
> 	(vminq_x_s8): Remove.
> 	(vminq_x_s16): Remove.
> 	(vminq_x_s32): Remove.
> 	(vminq_x_u8): Remove.
> 	(vminq_x_u16): Remove.
> 	(vminq_x_u32): Remove.
> 	(vmaxq_x_s8): Remove.
> 	(vmaxq_x_s16): Remove.
> 	(vmaxq_x_s32): Remove.
> 	(vmaxq_x_u8): Remove.
> 	(vmaxq_x_u16): Remove.
> 	(vmaxq_x_u32): Remove.
> 	(__arm_vminq_u8): Remove.
> 	(__arm_vmaxq_u8): Remove.
> 	(__arm_vminq_s8): Remove.
> 	(__arm_vmaxq_s8): Remove.
> 	(__arm_vminq_u16): Remove.
> 	(__arm_vmaxq_u16): Remove.
> 	(__arm_vminq_s16): Remove.
> 	(__arm_vmaxq_s16): Remove.
> 	(__arm_vminq_u32): Remove.
> 	(__arm_vmaxq_u32): Remove.
> 	(__arm_vminq_s32): Remove.
> 	(__arm_vmaxq_s32): Remove.
> 	(__arm_vmaxq_m_s8): Remove.
> 	(__arm_vmaxq_m_s32): Remove.
> 	(__arm_vmaxq_m_s16): Remove.
> 	(__arm_vmaxq_m_u8): Remove.
> 	(__arm_vmaxq_m_u32): Remove.
> 	(__arm_vmaxq_m_u16): Remove.
> 	(__arm_vminq_m_s8): Remove.
> 	(__arm_vminq_m_s32): Remove.
> 	(__arm_vminq_m_s16): Remove.
> 	(__arm_vminq_m_u8): Remove.
> 	(__arm_vminq_m_u32): Remove.
> 	(__arm_vminq_m_u16): Remove.
> 	(__arm_vminq_x_s8): Remove.
> 	(__arm_vminq_x_s16): Remove.
> 	(__arm_vminq_x_s32): Remove.
> 	(__arm_vminq_x_u8): Remove.
> 	(__arm_vminq_x_u16): Remove.
> 	(__arm_vminq_x_u32): Remove.
> 	(__arm_vmaxq_x_s8): Remove.
> 	(__arm_vmaxq_x_s16): Remove.
> 	(__arm_vmaxq_x_s32): Remove.
> 	(__arm_vmaxq_x_u8): Remove.
> 	(__arm_vmaxq_x_u16): Remove.
> 	(__arm_vmaxq_x_u32): Remove.
> 	(__arm_vminq): Remove.
> 	(__arm_vmaxq): Remove.
> 	(__arm_vmaxq_m): Remove.
> 	(__arm_vminq_m): Remove.
> 	(__arm_vminq_x): Remove.
> 	(__arm_vmaxq_x): Remove.
> ---
>  gcc/config/arm/arm-mve-builtins-base.cc  |  11 +
>  gcc/config/arm/arm-mve-builtins-base.def |   2 +
>  gcc/config/arm/arm-mve-builtins-base.h   |   2 +
>  gcc/config/arm/arm_mve.h                 | 628 -----------------------
>  4 files changed, 15 insertions(+), 628 deletions(-)
> 
> diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc
> index 4bebf86f784..1839d5cb1a5 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.cc
> +++ b/gcc/config/arm/arm-mve-builtins-base.cc
> @@ -110,6 +110,15 @@ namespace arm_mve {
>      UNSPEC##_M_S, UNSPEC##_M_U, UNSPEC##_M_F,			\
>      UNSPEC##_M_N_S, UNSPEC##_M_N_U, -1))
> 
> +  /* Helper for builtins with RTX codes, _m predicated override, but
> +     no floating-point versions.  */
> +#define FUNCTION_WITH_RTX_M_NO_F(NAME, RTX_S, RTX_U, UNSPEC) FUNCTION	\
> +  (NAME, unspec_based_mve_function_exact_insn,				\
> +   (RTX_S, RTX_U, UNKNOWN,						\
> +    -1, -1, -1,								\
> +    UNSPEC##_M_S, UNSPEC##_M_U, -1,					\
> +    -1, -1, -1))
> +
>    /* Helper for builtins without RTX codes, no _m predicated and no _n
>       overrides.  */
>  #define FUNCTION_WITHOUT_M_N(NAME, UNSPEC) FUNCTION			\
> @@ -173,6 +182,8 @@ FUNCTION_WITHOUT_M_N (vcreateq, VCREATEQ)
>  FUNCTION_WITH_RTX_M (veorq, XOR, VEORQ)
>  FUNCTION_WITH_M_N_NO_F (vhaddq, VHADDQ)
>  FUNCTION_WITH_M_N_NO_F (vhsubq, VHSUBQ)
> +FUNCTION_WITH_RTX_M_NO_F (vmaxq, SMAX, UMAX, VMAXQ)
> +FUNCTION_WITH_RTX_M_NO_F (vminq, SMIN, UMIN, VMINQ)
>  FUNCTION_WITHOUT_N_NO_F (vmulhq, VMULHQ)
>  FUNCTION_WITH_RTX_M_N (vmulq, MULT, VMULQ)
>  FUNCTION_WITH_RTX_M_N_NO_N_F (vorrq, IOR, VORRQ)
> diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def
> index f2e40cda2af..3b42bf46e81 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.def
> +++ b/gcc/config/arm/arm-mve-builtins-base.def
> @@ -25,6 +25,8 @@ DEF_MVE_FUNCTION (vcreateq, create, all_integer_with_64, none)
>  DEF_MVE_FUNCTION (veorq, binary, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vhaddq, binary_opt_n, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vhsubq, binary_opt_n, all_integer, mx_or_none)
> +DEF_MVE_FUNCTION (vmaxq, binary, all_integer, mx_or_none)
> +DEF_MVE_FUNCTION (vminq, binary, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vmulhq, binary, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vmulq, binary_opt_n, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vorrq, binary_orrq, all_integer, mx_or_none)
> diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h
> index 5b62de6a922..81d10f4a8f4 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.h
> +++ b/gcc/config/arm/arm-mve-builtins-base.h
> @@ -30,6 +30,8 @@ extern const function_base *const vcreateq;
>  extern const function_base *const veorq;
>  extern const function_base *const vhaddq;
>  extern const function_base *const vhsubq;
> +extern const function_base *const vmaxq;
> +extern const function_base *const vminq;
>  extern const function_base *const vmulhq;
>  extern const function_base *const vmulq;
>  extern const function_base *const vorrq;
> diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
> index ad67dcfd024..5fbea52c8ef 100644
> --- a/gcc/config/arm/arm_mve.h
> +++ b/gcc/config/arm/arm_mve.h
> @@ -65,9 +65,7 @@
>  #define vmullbq_int(__a, __b) __arm_vmullbq_int(__a, __b)
>  #define vmladavq(__a, __b) __arm_vmladavq(__a, __b)
>  #define vminvq(__a, __b) __arm_vminvq(__a, __b)
> -#define vminq(__a, __b) __arm_vminq(__a, __b)
>  #define vmaxvq(__a, __b) __arm_vmaxvq(__a, __b)
> -#define vmaxq(__a, __b) __arm_vmaxq(__a, __b)
>  #define vcmphiq(__a, __b) __arm_vcmphiq(__a, __b)
>  #define vcmpeqq(__a, __b) __arm_vcmpeqq(__a, __b)
>  #define vcmpcsq(__a, __b) __arm_vcmpcsq(__a, __b)
> @@ -214,8 +212,6 @@
>  #define vcaddq_rot90_m(__inactive, __a, __b, __p) __arm_vcaddq_rot90_m(__inactive, __a, __b, __p)
>  #define vhcaddq_rot270_m(__inactive, __a, __b, __p) __arm_vhcaddq_rot270_m(__inactive, __a, __b, __p)
>  #define vhcaddq_rot90_m(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m(__inactive, __a, __b, __p)
> -#define vmaxq_m(__inactive, __a, __b, __p) __arm_vmaxq_m(__inactive, __a, __b, __p)
> -#define vminq_m(__inactive, __a, __b, __p) __arm_vminq_m(__inactive, __a, __b, __p)
>  #define vmladavaq_p(__a, __b, __c, __p) __arm_vmladavaq_p(__a, __b, __c, __p)
>  #define vmladavaxq_p(__a, __b, __c, __p) __arm_vmladavaxq_p(__a, __b, __c, __p)
>  #define vmlaq_m(__a, __b, __c, __p) __arm_vmlaq_m(__a, __b, __c, __p)
> @@ -339,8 +335,6 @@
>  #define viwdupq_x_u8(__a, __b, __imm, __p) __arm_viwdupq_x_u8(__a, __b, __imm, __p)
>  #define viwdupq_x_u16(__a, __b, __imm, __p) __arm_viwdupq_x_u16(__a, __b, __imm, __p)
>  #define viwdupq_x_u32(__a, __b, __imm, __p) __arm_viwdupq_x_u32(__a, __b, __imm, __p)
> -#define vminq_x(__a, __b, __p) __arm_vminq_x(__a, __b, __p)
> -#define vmaxq_x(__a, __b, __p) __arm_vmaxq_x(__a, __b, __p)
>  #define vabsq_x(__a, __p) __arm_vabsq_x(__a, __p)
>  #define vclsq_x(__a, __p) __arm_vclsq_x(__a, __p)
>  #define vclzq_x(__a, __p) __arm_vclzq_x(__a, __p)
> @@ -614,9 +608,7 @@
>  #define vmullbq_int_u8(__a, __b) __arm_vmullbq_int_u8(__a, __b)
>  #define vmladavq_u8(__a, __b) __arm_vmladavq_u8(__a, __b)
>  #define vminvq_u8(__a, __b) __arm_vminvq_u8(__a, __b)
> -#define vminq_u8(__a, __b) __arm_vminq_u8(__a, __b)
>  #define vmaxvq_u8(__a, __b) __arm_vmaxvq_u8(__a, __b)
> -#define vmaxq_u8(__a, __b) __arm_vmaxq_u8(__a, __b)
>  #define vcmpneq_n_u8(__a, __b) __arm_vcmpneq_n_u8(__a, __b)
>  #define vcmphiq_u8(__a, __b) __arm_vcmphiq_u8(__a, __b)
>  #define vcmphiq_n_u8(__a, __b) __arm_vcmphiq_n_u8(__a, __b)
> @@ -656,9 +648,7 @@
>  #define vmladavxq_s8(__a, __b) __arm_vmladavxq_s8(__a, __b)
>  #define vmladavq_s8(__a, __b) __arm_vmladavq_s8(__a, __b)
>  #define vminvq_s8(__a, __b) __arm_vminvq_s8(__a, __b)
> -#define vminq_s8(__a, __b) __arm_vminq_s8(__a, __b)
>  #define vmaxvq_s8(__a, __b) __arm_vmaxvq_s8(__a, __b)
> -#define vmaxq_s8(__a, __b) __arm_vmaxq_s8(__a, __b)
>  #define vhcaddq_rot90_s8(__a, __b) __arm_vhcaddq_rot90_s8(__a, __b)
>  #define vhcaddq_rot270_s8(__a, __b) __arm_vhcaddq_rot270_s8(__a, __b)
>  #define vcaddq_rot90_s8(__a, __b) __arm_vcaddq_rot90_s8(__a, __b)
> @@ -672,9 +662,7 @@
>  #define vmullbq_int_u16(__a, __b) __arm_vmullbq_int_u16(__a, __b)
>  #define vmladavq_u16(__a, __b) __arm_vmladavq_u16(__a, __b)
>  #define vminvq_u16(__a, __b) __arm_vminvq_u16(__a, __b)
> -#define vminq_u16(__a, __b) __arm_vminq_u16(__a, __b)
>  #define vmaxvq_u16(__a, __b) __arm_vmaxvq_u16(__a, __b)
> -#define vmaxq_u16(__a, __b) __arm_vmaxq_u16(__a, __b)
>  #define vcmpneq_n_u16(__a, __b) __arm_vcmpneq_n_u16(__a, __b)
>  #define vcmphiq_u16(__a, __b) __arm_vcmphiq_u16(__a, __b)
>  #define vcmphiq_n_u16(__a, __b) __arm_vcmphiq_n_u16(__a, __b)
> @@ -714,9 +702,7 @@
>  #define vmladavxq_s16(__a, __b) __arm_vmladavxq_s16(__a, __b)
>  #define vmladavq_s16(__a, __b) __arm_vmladavq_s16(__a, __b)
>  #define vminvq_s16(__a, __b) __arm_vminvq_s16(__a, __b)
> -#define vminq_s16(__a, __b) __arm_vminq_s16(__a, __b)
>  #define vmaxvq_s16(__a, __b) __arm_vmaxvq_s16(__a, __b)
> -#define vmaxq_s16(__a, __b) __arm_vmaxq_s16(__a, __b)
>  #define vhcaddq_rot90_s16(__a, __b) __arm_vhcaddq_rot90_s16(__a, __b)
>  #define vhcaddq_rot270_s16(__a, __b) __arm_vhcaddq_rot270_s16(__a, __b)
>  #define vcaddq_rot90_s16(__a, __b) __arm_vcaddq_rot90_s16(__a, __b)
> @@ -730,9 +716,7 @@
>  #define vmullbq_int_u32(__a, __b) __arm_vmullbq_int_u32(__a, __b)
>  #define vmladavq_u32(__a, __b) __arm_vmladavq_u32(__a, __b)
>  #define vminvq_u32(__a, __b) __arm_vminvq_u32(__a, __b)
> -#define vminq_u32(__a, __b) __arm_vminq_u32(__a, __b)
>  #define vmaxvq_u32(__a, __b) __arm_vmaxvq_u32(__a, __b)
> -#define vmaxq_u32(__a, __b) __arm_vmaxq_u32(__a, __b)
>  #define vcmpneq_n_u32(__a, __b) __arm_vcmpneq_n_u32(__a, __b)
>  #define vcmphiq_u32(__a, __b) __arm_vcmphiq_u32(__a, __b)
>  #define vcmphiq_n_u32(__a, __b) __arm_vcmphiq_n_u32(__a, __b)
> @@ -772,9 +756,7 @@
>  #define vmladavxq_s32(__a, __b) __arm_vmladavxq_s32(__a, __b)
>  #define vmladavq_s32(__a, __b) __arm_vmladavq_s32(__a, __b)
>  #define vminvq_s32(__a, __b) __arm_vminvq_s32(__a, __b)
> -#define vminq_s32(__a, __b) __arm_vminq_s32(__a, __b)
>  #define vmaxvq_s32(__a, __b) __arm_vmaxvq_s32(__a, __b)
> -#define vmaxq_s32(__a, __b) __arm_vmaxq_s32(__a, __b)
>  #define vhcaddq_rot90_s32(__a, __b) __arm_vhcaddq_rot90_s32(__a, __b)
>  #define vhcaddq_rot270_s32(__a, __b) __arm_vhcaddq_rot270_s32(__a, __b)
>  #define vcaddq_rot90_s32(__a, __b) __arm_vcaddq_rot90_s32(__a, __b)
> @@ -1411,18 +1393,6 @@
>  #define vhcaddq_rot90_m_s8(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s8(__inactive, __a, __b, __p)
>  #define vhcaddq_rot90_m_s32(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s32(__inactive, __a, __b, __p)
>  #define vhcaddq_rot90_m_s16(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s16(__inactive, __a, __b, __p)
> -#define vmaxq_m_s8(__inactive, __a, __b, __p) __arm_vmaxq_m_s8(__inactive, __a, __b, __p)
> -#define vmaxq_m_s32(__inactive, __a, __b, __p) __arm_vmaxq_m_s32(__inactive, __a, __b, __p)
> -#define vmaxq_m_s16(__inactive, __a, __b, __p) __arm_vmaxq_m_s16(__inactive, __a, __b, __p)
> -#define vmaxq_m_u8(__inactive, __a, __b, __p) __arm_vmaxq_m_u8(__inactive, __a, __b, __p)
> -#define vmaxq_m_u32(__inactive, __a, __b, __p) __arm_vmaxq_m_u32(__inactive, __a, __b, __p)
> -#define vmaxq_m_u16(__inactive, __a, __b, __p) __arm_vmaxq_m_u16(__inactive, __a, __b, __p)
> -#define vminq_m_s8(__inactive, __a, __b, __p) __arm_vminq_m_s8(__inactive, __a, __b, __p)
> -#define vminq_m_s32(__inactive, __a, __b, __p) __arm_vminq_m_s32(__inactive, __a, __b, __p)
> -#define vminq_m_s16(__inactive, __a, __b, __p) __arm_vminq_m_s16(__inactive, __a, __b, __p)
> -#define vminq_m_u8(__inactive, __a, __b, __p) __arm_vminq_m_u8(__inactive, __a, __b, __p)
> -#define vminq_m_u32(__inactive, __a, __b, __p) __arm_vminq_m_u32(__inactive, __a, __b, __p)
> -#define vminq_m_u16(__inactive, __a, __b, __p) __arm_vminq_m_u16(__inactive, __a, __b, __p)
>  #define vmladavaq_p_s8(__a, __b, __c, __p) __arm_vmladavaq_p_s8(__a, __b, __c, __p)
>  #define vmladavaq_p_s32(__a, __b, __c, __p) __arm_vmladavaq_p_s32(__a, __b, __c, __p)
>  #define vmladavaq_p_s16(__a, __b, __c, __p) __arm_vmladavaq_p_s16(__a, __b, __c, __p)
> @@ -1943,18 +1913,6 @@
>  #define vdupq_x_n_u8(__a, __p) __arm_vdupq_x_n_u8(__a, __p)
>  #define vdupq_x_n_u16(__a, __p) __arm_vdupq_x_n_u16(__a, __p)
>  #define vdupq_x_n_u32(__a, __p) __arm_vdupq_x_n_u32(__a, __p)
> -#define vminq_x_s8(__a, __b, __p) __arm_vminq_x_s8(__a, __b, __p)
> -#define vminq_x_s16(__a, __b, __p) __arm_vminq_x_s16(__a, __b, __p)
> -#define vminq_x_s32(__a, __b, __p) __arm_vminq_x_s32(__a, __b, __p)
> -#define vminq_x_u8(__a, __b, __p) __arm_vminq_x_u8(__a, __b, __p)
> -#define vminq_x_u16(__a, __b, __p) __arm_vminq_x_u16(__a, __b, __p)
> -#define vminq_x_u32(__a, __b, __p) __arm_vminq_x_u32(__a, __b, __p)
> -#define vmaxq_x_s8(__a, __b, __p) __arm_vmaxq_x_s8(__a, __b, __p)
> -#define vmaxq_x_s16(__a, __b, __p) __arm_vmaxq_x_s16(__a, __b, __p)
> -#define vmaxq_x_s32(__a, __b, __p) __arm_vmaxq_x_s32(__a, __b, __p)
> -#define vmaxq_x_u8(__a, __b, __p) __arm_vmaxq_x_u8(__a, __b, __p)
> -#define vmaxq_x_u16(__a, __b, __p) __arm_vmaxq_x_u16(__a, __b, __p)
> -#define vmaxq_x_u32(__a, __b, __p) __arm_vmaxq_x_u32(__a, __b, __p)
>  #define vabsq_x_s8(__a, __p) __arm_vabsq_x_s8(__a, __p)
>  #define vabsq_x_s16(__a, __p) __arm_vabsq_x_s16(__a, __p)
>  #define vabsq_x_s32(__a, __p) __arm_vabsq_x_s32(__a, __p)
> @@ -2937,13 +2895,6 @@ __arm_vminvq_u8 (uint8_t __a, uint8x16_t __b)
>    return __builtin_mve_vminvq_uv16qi (__a, __b);
>  }
> 
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_u8 (uint8x16_t __a, uint8x16_t __b)
> -{
> -  return __builtin_mve_vminq_uv16qi (__a, __b);
> -}
> -
>  __extension__ extern __inline uint8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq_u8 (uint8_t __a, uint8x16_t __b)
> @@ -2951,13 +2902,6 @@ __arm_vmaxvq_u8 (uint8_t __a, uint8x16_t __b)
>    return __builtin_mve_vmaxvq_uv16qi (__a, __b);
>  }
> 
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_u8 (uint8x16_t __a, uint8x16_t __b)
> -{
> -  return __builtin_mve_vmaxq_uv16qi (__a, __b);
> -}
> -
>  __extension__ extern __inline mve_pred16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vcmpneq_n_u8 (uint8x16_t __a, uint8_t __b)
> @@ -3233,13 +3177,6 @@ __arm_vminvq_s8 (int8_t __a, int8x16_t __b)
>    return __builtin_mve_vminvq_sv16qi (__a, __b);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_s8 (int8x16_t __a, int8x16_t __b)
> -{
> -  return __builtin_mve_vminq_sv16qi (__a, __b);
> -}
> -
>  __extension__ extern __inline int8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq_s8 (int8_t __a, int8x16_t __b)
> @@ -3247,13 +3184,6 @@ __arm_vmaxvq_s8 (int8_t __a, int8x16_t __b)
>    return __builtin_mve_vmaxvq_sv16qi (__a, __b);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_s8 (int8x16_t __a, int8x16_t __b)
> -{
> -  return __builtin_mve_vmaxq_sv16qi (__a, __b);
> -}
> -
>  __extension__ extern __inline int8x16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vhcaddq_rot90_s8 (int8x16_t __a, int8x16_t __b)
> @@ -3345,13 +3275,6 @@ __arm_vminvq_u16 (uint16_t __a, uint16x8_t __b)
>    return __builtin_mve_vminvq_uv8hi (__a, __b);
>  }
> 
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_u16 (uint16x8_t __a, uint16x8_t __b)
> -{
> -  return __builtin_mve_vminq_uv8hi (__a, __b);
> -}
> -
>  __extension__ extern __inline uint16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq_u16 (uint16_t __a, uint16x8_t __b)
> @@ -3359,13 +3282,6 @@ __arm_vmaxvq_u16 (uint16_t __a, uint16x8_t __b)
>    return __builtin_mve_vmaxvq_uv8hi (__a, __b);
>  }
> 
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_u16 (uint16x8_t __a, uint16x8_t __b)
> -{
> -  return __builtin_mve_vmaxq_uv8hi (__a, __b);
> -}
> -
>  __extension__ extern __inline mve_pred16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vcmpneq_n_u16 (uint16x8_t __a, uint16_t __b)
> @@ -3641,13 +3557,6 @@ __arm_vminvq_s16 (int16_t __a, int16x8_t __b)
>    return __builtin_mve_vminvq_sv8hi (__a, __b);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_s16 (int16x8_t __a, int16x8_t __b)
> -{
> -  return __builtin_mve_vminq_sv8hi (__a, __b);
> -}
> -
>  __extension__ extern __inline int16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq_s16 (int16_t __a, int16x8_t __b)
> @@ -3655,13 +3564,6 @@ __arm_vmaxvq_s16 (int16_t __a, int16x8_t __b)
>    return __builtin_mve_vmaxvq_sv8hi (__a, __b);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_s16 (int16x8_t __a, int16x8_t __b)
> -{
> -  return __builtin_mve_vmaxq_sv8hi (__a, __b);
> -}
> -
>  __extension__ extern __inline int16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vhcaddq_rot90_s16 (int16x8_t __a, int16x8_t __b)
> @@ -3753,13 +3655,6 @@ __arm_vminvq_u32 (uint32_t __a, uint32x4_t __b)
>    return __builtin_mve_vminvq_uv4si (__a, __b);
>  }
> 
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_u32 (uint32x4_t __a, uint32x4_t __b)
> -{
> -  return __builtin_mve_vminq_uv4si (__a, __b);
> -}
> -
>  __extension__ extern __inline uint32_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq_u32 (uint32_t __a, uint32x4_t __b)
> @@ -3767,13 +3662,6 @@ __arm_vmaxvq_u32 (uint32_t __a, uint32x4_t __b)
>    return __builtin_mve_vmaxvq_uv4si (__a, __b);
>  }
> 
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_u32 (uint32x4_t __a, uint32x4_t __b)
> -{
> -  return __builtin_mve_vmaxq_uv4si (__a, __b);
> -}
> -
>  __extension__ extern __inline mve_pred16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vcmpneq_n_u32 (uint32x4_t __a, uint32_t __b)
> @@ -4049,13 +3937,6 @@ __arm_vminvq_s32 (int32_t __a, int32x4_t __b)
>    return __builtin_mve_vminvq_sv4si (__a, __b);
>  }
> 
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_s32 (int32x4_t __a, int32x4_t __b)
> -{
> -  return __builtin_mve_vminq_sv4si (__a, __b);
> -}
> -
>  __extension__ extern __inline int32_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq_s32 (int32_t __a, int32x4_t __b)
> @@ -4063,13 +3944,6 @@ __arm_vmaxvq_s32 (int32_t __a, int32x4_t __b)
>    return __builtin_mve_vmaxvq_sv4si (__a, __b);
>  }
> 
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_s32 (int32x4_t __a, int32x4_t __b)
> -{
> -  return __builtin_mve_vmaxq_sv4si (__a, __b);
> -}
> -
>  __extension__ extern __inline int32x4_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vhcaddq_rot90_s32 (int32x4_t __a, int32x4_t __b)
> @@ -7380,90 +7254,6 @@ __arm_vhcaddq_rot90_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, m
>    return __builtin_mve_vhcaddq_rot90_m_sv8hi (__inactive, __a, __b, __p);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_sv16qi (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_sv4si (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_sv8hi (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_uv16qi (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_uv4si (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_uv8hi (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_sv16qi (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_sv4si (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_sv8hi (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_uv16qi (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_uv4si (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_uv8hi (__inactive, __a, __b, __p);
> -}
> -
>  __extension__ extern __inline int32_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmladavaq_p_s8 (int32_t __a, int8x16_t __b, int8x16_t __c, mve_pred16_t __p)
> @@ -10635,90 +10425,6 @@ __arm_vdupq_x_n_u32 (uint32_t __a, mve_pred16_t __p)
>    return __builtin_mve_vdupq_m_n_uv4si (__arm_vuninitializedq_u32 (), __a, __p);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vminq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vmaxq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __b, __p);
> -}
> -
>  __extension__ extern __inline int8x16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vabsq_x_s8 (int8x16_t __a, mve_pred16_t __p)
> @@ -15624,13 +15330,6 @@ __arm_vminvq (uint8_t __a, uint8x16_t __b)
>   return __arm_vminvq_u8 (__a, __b);
>  }
> 
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq (uint8x16_t __a, uint8x16_t __b)
> -{
> - return __arm_vminq_u8 (__a, __b);
> -}
> -
>  __extension__ extern __inline uint8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq (uint8_t __a, uint8x16_t __b)
> @@ -15638,13 +15337,6 @@ __arm_vmaxvq (uint8_t __a, uint8x16_t __b)
>   return __arm_vmaxvq_u8 (__a, __b);
>  }
> 
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq (uint8x16_t __a, uint8x16_t __b)
> -{
> - return __arm_vmaxq_u8 (__a, __b);
> -}
> -
>  __extension__ extern __inline mve_pred16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vcmpneq (uint8x16_t __a, uint8_t __b)
> @@ -15918,13 +15610,6 @@ __arm_vminvq (int8_t __a, int8x16_t __b)
>   return __arm_vminvq_s8 (__a, __b);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq (int8x16_t __a, int8x16_t __b)
> -{
> - return __arm_vminq_s8 (__a, __b);
> -}
> -
>  __extension__ extern __inline int8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq (int8_t __a, int8x16_t __b)
> @@ -15932,13 +15617,6 @@ __arm_vmaxvq (int8_t __a, int8x16_t __b)
>   return __arm_vmaxvq_s8 (__a, __b);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq (int8x16_t __a, int8x16_t __b)
> -{
> - return __arm_vmaxq_s8 (__a, __b);
> -}
> -
>  __extension__ extern __inline int8x16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vhcaddq_rot90 (int8x16_t __a, int8x16_t __b)
> @@ -16030,13 +15708,6 @@ __arm_vminvq (uint16_t __a, uint16x8_t __b)
>   return __arm_vminvq_u16 (__a, __b);
>  }
> 
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq (uint16x8_t __a, uint16x8_t __b)
> -{
> - return __arm_vminq_u16 (__a, __b);
> -}
> -
>  __extension__ extern __inline uint16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq (uint16_t __a, uint16x8_t __b)
> @@ -16044,13 +15715,6 @@ __arm_vmaxvq (uint16_t __a, uint16x8_t __b)
>   return __arm_vmaxvq_u16 (__a, __b);
>  }
> 
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq (uint16x8_t __a, uint16x8_t __b)
> -{
> - return __arm_vmaxq_u16 (__a, __b);
> -}
> -
>  __extension__ extern __inline mve_pred16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vcmpneq (uint16x8_t __a, uint16_t __b)
> @@ -16324,13 +15988,6 @@ __arm_vminvq (int16_t __a, int16x8_t __b)
>   return __arm_vminvq_s16 (__a, __b);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq (int16x8_t __a, int16x8_t __b)
> -{
> - return __arm_vminq_s16 (__a, __b);
> -}
> -
>  __extension__ extern __inline int16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq (int16_t __a, int16x8_t __b)
> @@ -16338,13 +15995,6 @@ __arm_vmaxvq (int16_t __a, int16x8_t __b)
>   return __arm_vmaxvq_s16 (__a, __b);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq (int16x8_t __a, int16x8_t __b)
> -{
> - return __arm_vmaxq_s16 (__a, __b);
> -}
> -
>  __extension__ extern __inline int16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vhcaddq_rot90 (int16x8_t __a, int16x8_t __b)
> @@ -16436,13 +16086,6 @@ __arm_vminvq (uint32_t __a, uint32x4_t __b)
>   return __arm_vminvq_u32 (__a, __b);
>  }
> 
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq (uint32x4_t __a, uint32x4_t __b)
> -{
> - return __arm_vminq_u32 (__a, __b);
> -}
> -
>  __extension__ extern __inline uint32_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq (uint32_t __a, uint32x4_t __b)
> @@ -16450,13 +16093,6 @@ __arm_vmaxvq (uint32_t __a, uint32x4_t __b)
>   return __arm_vmaxvq_u32 (__a, __b);
>  }
> 
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq (uint32x4_t __a, uint32x4_t __b)
> -{
> - return __arm_vmaxq_u32 (__a, __b);
> -}
> -
>  __extension__ extern __inline mve_pred16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vcmpneq (uint32x4_t __a, uint32_t __b)
> @@ -16730,13 +16366,6 @@ __arm_vminvq (int32_t __a, int32x4_t __b)
>   return __arm_vminvq_s32 (__a, __b);
>  }
> 
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq (int32x4_t __a, int32x4_t __b)
> -{
> - return __arm_vminq_s32 (__a, __b);
> -}
> -
>  __extension__ extern __inline int32_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmaxvq (int32_t __a, int32x4_t __b)
> @@ -16744,13 +16373,6 @@ __arm_vmaxvq (int32_t __a, int32x4_t __b)
>   return __arm_vmaxvq_s32 (__a, __b);
>  }
> 
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq (int32x4_t __a, int32x4_t __b)
> -{
> - return __arm_vmaxq_s32 (__a, __b);
> -}
> -
>  __extension__ extern __inline int32x4_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vhcaddq_rot90 (int32x4_t __a, int32x4_t __b)
> @@ -20020,90 +19642,6 @@ __arm_vhcaddq_rot90_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_p
>   return __arm_vhcaddq_rot90_m_s16 (__inactive, __a, __b, __p);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_m_s8 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_m_s32 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_m_s16 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_m_u8 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_m_u32 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_m_u16 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_m_s8 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_m_s32 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_m_s16 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_m_u8 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_m_u32 (__inactive, __a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_m_u16 (__inactive, __a, __b, __p);
> -}
> -
>  __extension__ extern __inline int32_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmladavaq_p (int32_t __a, int8x16_t __b, int8x16_t __c, mve_pred16_t __p)
> @@ -22806,90 +22344,6 @@ __arm_viwdupq_x_u32 (uint32_t *__a, uint32_t __b, const int __imm, mve_pred16_t
>   return __arm_viwdupq_x_wb_u32 (__a, __b, __imm, __p);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_x_s8 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_x_s16 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_x_s32 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_x_u8 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_x_u16 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vminq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vminq_x_u32 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_x_s8 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_x_s16 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline int32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_x_s32 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_x_u8 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_x_u16 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmaxq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vmaxq_x_u32 (__a, __b, __p);
> -}
> -
> -
>  __extension__ extern __inline int8x16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vabsq_x (int8x16_t __a, mve_pred16_t __p)
> @@ -27274,16 +26728,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vhcaddq_rot90_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
>    int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vhcaddq_rot90_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));})
> 
> -#define __arm_vminq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
> -
>  #define __arm_vminaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -27291,16 +26735,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));})
> 
> -#define __arm_vmaxq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
> -
>  #define __arm_vmaxaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -28867,16 +28301,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmullbq_int_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmullbq_int_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
> 
> -#define __arm_vminq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
> -
>  #define __arm_vminaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -28884,16 +28308,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));})
> 
> -#define __arm_vmaxq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
> -
>  #define __arm_vmaxaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -30608,28 +30022,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vhcaddq_rot90_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
>    int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vhcaddq_rot90_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3));})
> 
> -#define __arm_vmaxq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  __typeof(p2) __p2 = (p2); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
> -
> -#define __arm_vminq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  __typeof(p2) __p2 = (p2); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
> -
>  #define __arm_vmlaq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    __typeof(p2) __p2 = (p2); \
> @@ -31068,26 +30460,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_int_n][__ARM_mve_type_int16x8_t]: __arm_vminavq_p_s16 (__p0, __ARM_mve_coerce(__p1, int16x8_t), p2), \
>    int (*)[__ARM_mve_type_int_n][__ARM_mve_type_int32x4_t]: __arm_vminavq_p_s32 (__p0, __ARM_mve_coerce(__p1, int32x4_t), p2));})
> 
> -#define __arm_vmaxq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \
> -  __typeof(p2) __p2 = (p2); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
> -
> -#define __arm_vminq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \
> -  __typeof(p2) __p2 = (p2); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
> -
>  #define __arm_vminvq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> --
> 2.34.1
  

Patch

diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc
index 4bebf86f784..1839d5cb1a5 100644
--- a/gcc/config/arm/arm-mve-builtins-base.cc
+++ b/gcc/config/arm/arm-mve-builtins-base.cc
@@ -110,6 +110,15 @@  namespace arm_mve {
     UNSPEC##_M_S, UNSPEC##_M_U, UNSPEC##_M_F,				\
     UNSPEC##_M_N_S, UNSPEC##_M_N_U, -1))
 
+  /* Helper for builtins with RTX codes, _m predicated override, but
+     no floating-point versions.  */
+#define FUNCTION_WITH_RTX_M_NO_F(NAME, RTX_S, RTX_U, UNSPEC) FUNCTION	\
+  (NAME, unspec_based_mve_function_exact_insn,				\
+   (RTX_S, RTX_U, UNKNOWN,						\
+    -1, -1, -1,								\
+    UNSPEC##_M_S, UNSPEC##_M_U, -1,					\
+    -1, -1, -1))
+
   /* Helper for builtins without RTX codes, no _m predicated and no _n
      overrides.  */
 #define FUNCTION_WITHOUT_M_N(NAME, UNSPEC) FUNCTION			\
@@ -173,6 +182,8 @@  FUNCTION_WITHOUT_M_N (vcreateq, VCREATEQ)
 FUNCTION_WITH_RTX_M (veorq, XOR, VEORQ)
 FUNCTION_WITH_M_N_NO_F (vhaddq, VHADDQ)
 FUNCTION_WITH_M_N_NO_F (vhsubq, VHSUBQ)
+FUNCTION_WITH_RTX_M_NO_F (vmaxq, SMAX, UMAX, VMAXQ)
+FUNCTION_WITH_RTX_M_NO_F (vminq, SMIN, UMIN, VMINQ)
 FUNCTION_WITHOUT_N_NO_F (vmulhq, VMULHQ)
 FUNCTION_WITH_RTX_M_N (vmulq, MULT, VMULQ)
 FUNCTION_WITH_RTX_M_N_NO_N_F (vorrq, IOR, VORRQ)
diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def
index f2e40cda2af..3b42bf46e81 100644
--- a/gcc/config/arm/arm-mve-builtins-base.def
+++ b/gcc/config/arm/arm-mve-builtins-base.def
@@ -25,6 +25,8 @@  DEF_MVE_FUNCTION (vcreateq, create, all_integer_with_64, none)
 DEF_MVE_FUNCTION (veorq, binary, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vhaddq, binary_opt_n, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vhsubq, binary_opt_n, all_integer, mx_or_none)
+DEF_MVE_FUNCTION (vmaxq, binary, all_integer, mx_or_none)
+DEF_MVE_FUNCTION (vminq, binary, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vmulhq, binary, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vmulq, binary_opt_n, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vorrq, binary_orrq, all_integer, mx_or_none)
diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h
index 5b62de6a922..81d10f4a8f4 100644
--- a/gcc/config/arm/arm-mve-builtins-base.h
+++ b/gcc/config/arm/arm-mve-builtins-base.h
@@ -30,6 +30,8 @@  extern const function_base *const vcreateq;
 extern const function_base *const veorq;
 extern const function_base *const vhaddq;
 extern const function_base *const vhsubq;
+extern const function_base *const vmaxq;
+extern const function_base *const vminq;
 extern const function_base *const vmulhq;
 extern const function_base *const vmulq;
 extern const function_base *const vorrq;
diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
index ad67dcfd024..5fbea52c8ef 100644
--- a/gcc/config/arm/arm_mve.h
+++ b/gcc/config/arm/arm_mve.h
@@ -65,9 +65,7 @@ 
 #define vmullbq_int(__a, __b) __arm_vmullbq_int(__a, __b)
 #define vmladavq(__a, __b) __arm_vmladavq(__a, __b)
 #define vminvq(__a, __b) __arm_vminvq(__a, __b)
-#define vminq(__a, __b) __arm_vminq(__a, __b)
 #define vmaxvq(__a, __b) __arm_vmaxvq(__a, __b)
-#define vmaxq(__a, __b) __arm_vmaxq(__a, __b)
 #define vcmphiq(__a, __b) __arm_vcmphiq(__a, __b)
 #define vcmpeqq(__a, __b) __arm_vcmpeqq(__a, __b)
 #define vcmpcsq(__a, __b) __arm_vcmpcsq(__a, __b)
@@ -214,8 +212,6 @@ 
 #define vcaddq_rot90_m(__inactive, __a, __b, __p) __arm_vcaddq_rot90_m(__inactive, __a, __b, __p)
 #define vhcaddq_rot270_m(__inactive, __a, __b, __p) __arm_vhcaddq_rot270_m(__inactive, __a, __b, __p)
 #define vhcaddq_rot90_m(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m(__inactive, __a, __b, __p)
-#define vmaxq_m(__inactive, __a, __b, __p) __arm_vmaxq_m(__inactive, __a, __b, __p)
-#define vminq_m(__inactive, __a, __b, __p) __arm_vminq_m(__inactive, __a, __b, __p)
 #define vmladavaq_p(__a, __b, __c, __p) __arm_vmladavaq_p(__a, __b, __c, __p)
 #define vmladavaxq_p(__a, __b, __c, __p) __arm_vmladavaxq_p(__a, __b, __c, __p)
 #define vmlaq_m(__a, __b, __c, __p) __arm_vmlaq_m(__a, __b, __c, __p)
@@ -339,8 +335,6 @@ 
 #define viwdupq_x_u8(__a, __b, __imm, __p) __arm_viwdupq_x_u8(__a, __b, __imm, __p)
 #define viwdupq_x_u16(__a, __b, __imm, __p) __arm_viwdupq_x_u16(__a, __b, __imm, __p)
 #define viwdupq_x_u32(__a, __b, __imm, __p) __arm_viwdupq_x_u32(__a, __b, __imm, __p)
-#define vminq_x(__a, __b, __p) __arm_vminq_x(__a, __b, __p)
-#define vmaxq_x(__a, __b, __p) __arm_vmaxq_x(__a, __b, __p)
 #define vabsq_x(__a, __p) __arm_vabsq_x(__a, __p)
 #define vclsq_x(__a, __p) __arm_vclsq_x(__a, __p)
 #define vclzq_x(__a, __p) __arm_vclzq_x(__a, __p)
@@ -614,9 +608,7 @@ 
 #define vmullbq_int_u8(__a, __b) __arm_vmullbq_int_u8(__a, __b)
 #define vmladavq_u8(__a, __b) __arm_vmladavq_u8(__a, __b)
 #define vminvq_u8(__a, __b) __arm_vminvq_u8(__a, __b)
-#define vminq_u8(__a, __b) __arm_vminq_u8(__a, __b)
 #define vmaxvq_u8(__a, __b) __arm_vmaxvq_u8(__a, __b)
-#define vmaxq_u8(__a, __b) __arm_vmaxq_u8(__a, __b)
 #define vcmpneq_n_u8(__a, __b) __arm_vcmpneq_n_u8(__a, __b)
 #define vcmphiq_u8(__a, __b) __arm_vcmphiq_u8(__a, __b)
 #define vcmphiq_n_u8(__a, __b) __arm_vcmphiq_n_u8(__a, __b)
@@ -656,9 +648,7 @@ 
 #define vmladavxq_s8(__a, __b) __arm_vmladavxq_s8(__a, __b)
 #define vmladavq_s8(__a, __b) __arm_vmladavq_s8(__a, __b)
 #define vminvq_s8(__a, __b) __arm_vminvq_s8(__a, __b)
-#define vminq_s8(__a, __b) __arm_vminq_s8(__a, __b)
 #define vmaxvq_s8(__a, __b) __arm_vmaxvq_s8(__a, __b)
-#define vmaxq_s8(__a, __b) __arm_vmaxq_s8(__a, __b)
 #define vhcaddq_rot90_s8(__a, __b) __arm_vhcaddq_rot90_s8(__a, __b)
 #define vhcaddq_rot270_s8(__a, __b) __arm_vhcaddq_rot270_s8(__a, __b)
 #define vcaddq_rot90_s8(__a, __b) __arm_vcaddq_rot90_s8(__a, __b)
@@ -672,9 +662,7 @@ 
 #define vmullbq_int_u16(__a, __b) __arm_vmullbq_int_u16(__a, __b)
 #define vmladavq_u16(__a, __b) __arm_vmladavq_u16(__a, __b)
 #define vminvq_u16(__a, __b) __arm_vminvq_u16(__a, __b)
-#define vminq_u16(__a, __b) __arm_vminq_u16(__a, __b)
 #define vmaxvq_u16(__a, __b) __arm_vmaxvq_u16(__a, __b)
-#define vmaxq_u16(__a, __b) __arm_vmaxq_u16(__a, __b)
 #define vcmpneq_n_u16(__a, __b) __arm_vcmpneq_n_u16(__a, __b)
 #define vcmphiq_u16(__a, __b) __arm_vcmphiq_u16(__a, __b)
 #define vcmphiq_n_u16(__a, __b) __arm_vcmphiq_n_u16(__a, __b)
@@ -714,9 +702,7 @@ 
 #define vmladavxq_s16(__a, __b) __arm_vmladavxq_s16(__a, __b)
 #define vmladavq_s16(__a, __b) __arm_vmladavq_s16(__a, __b)
 #define vminvq_s16(__a, __b) __arm_vminvq_s16(__a, __b)
-#define vminq_s16(__a, __b) __arm_vminq_s16(__a, __b)
 #define vmaxvq_s16(__a, __b) __arm_vmaxvq_s16(__a, __b)
-#define vmaxq_s16(__a, __b) __arm_vmaxq_s16(__a, __b)
 #define vhcaddq_rot90_s16(__a, __b) __arm_vhcaddq_rot90_s16(__a, __b)
 #define vhcaddq_rot270_s16(__a, __b) __arm_vhcaddq_rot270_s16(__a, __b)
 #define vcaddq_rot90_s16(__a, __b) __arm_vcaddq_rot90_s16(__a, __b)
@@ -730,9 +716,7 @@ 
 #define vmullbq_int_u32(__a, __b) __arm_vmullbq_int_u32(__a, __b)
 #define vmladavq_u32(__a, __b) __arm_vmladavq_u32(__a, __b)
 #define vminvq_u32(__a, __b) __arm_vminvq_u32(__a, __b)
-#define vminq_u32(__a, __b) __arm_vminq_u32(__a, __b)
 #define vmaxvq_u32(__a, __b) __arm_vmaxvq_u32(__a, __b)
-#define vmaxq_u32(__a, __b) __arm_vmaxq_u32(__a, __b)
 #define vcmpneq_n_u32(__a, __b) __arm_vcmpneq_n_u32(__a, __b)
 #define vcmphiq_u32(__a, __b) __arm_vcmphiq_u32(__a, __b)
 #define vcmphiq_n_u32(__a, __b) __arm_vcmphiq_n_u32(__a, __b)
@@ -772,9 +756,7 @@ 
 #define vmladavxq_s32(__a, __b) __arm_vmladavxq_s32(__a, __b)
 #define vmladavq_s32(__a, __b) __arm_vmladavq_s32(__a, __b)
 #define vminvq_s32(__a, __b) __arm_vminvq_s32(__a, __b)
-#define vminq_s32(__a, __b) __arm_vminq_s32(__a, __b)
 #define vmaxvq_s32(__a, __b) __arm_vmaxvq_s32(__a, __b)
-#define vmaxq_s32(__a, __b) __arm_vmaxq_s32(__a, __b)
 #define vhcaddq_rot90_s32(__a, __b) __arm_vhcaddq_rot90_s32(__a, __b)
 #define vhcaddq_rot270_s32(__a, __b) __arm_vhcaddq_rot270_s32(__a, __b)
 #define vcaddq_rot90_s32(__a, __b) __arm_vcaddq_rot90_s32(__a, __b)
@@ -1411,18 +1393,6 @@ 
 #define vhcaddq_rot90_m_s8(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s8(__inactive, __a, __b, __p)
 #define vhcaddq_rot90_m_s32(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s32(__inactive, __a, __b, __p)
 #define vhcaddq_rot90_m_s16(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s16(__inactive, __a, __b, __p)
-#define vmaxq_m_s8(__inactive, __a, __b, __p) __arm_vmaxq_m_s8(__inactive, __a, __b, __p)
-#define vmaxq_m_s32(__inactive, __a, __b, __p) __arm_vmaxq_m_s32(__inactive, __a, __b, __p)
-#define vmaxq_m_s16(__inactive, __a, __b, __p) __arm_vmaxq_m_s16(__inactive, __a, __b, __p)
-#define vmaxq_m_u8(__inactive, __a, __b, __p) __arm_vmaxq_m_u8(__inactive, __a, __b, __p)
-#define vmaxq_m_u32(__inactive, __a, __b, __p) __arm_vmaxq_m_u32(__inactive, __a, __b, __p)
-#define vmaxq_m_u16(__inactive, __a, __b, __p) __arm_vmaxq_m_u16(__inactive, __a, __b, __p)
-#define vminq_m_s8(__inactive, __a, __b, __p) __arm_vminq_m_s8(__inactive, __a, __b, __p)
-#define vminq_m_s32(__inactive, __a, __b, __p) __arm_vminq_m_s32(__inactive, __a, __b, __p)
-#define vminq_m_s16(__inactive, __a, __b, __p) __arm_vminq_m_s16(__inactive, __a, __b, __p)
-#define vminq_m_u8(__inactive, __a, __b, __p) __arm_vminq_m_u8(__inactive, __a, __b, __p)
-#define vminq_m_u32(__inactive, __a, __b, __p) __arm_vminq_m_u32(__inactive, __a, __b, __p)
-#define vminq_m_u16(__inactive, __a, __b, __p) __arm_vminq_m_u16(__inactive, __a, __b, __p)
 #define vmladavaq_p_s8(__a, __b, __c, __p) __arm_vmladavaq_p_s8(__a, __b, __c, __p)
 #define vmladavaq_p_s32(__a, __b, __c, __p) __arm_vmladavaq_p_s32(__a, __b, __c, __p)
 #define vmladavaq_p_s16(__a, __b, __c, __p) __arm_vmladavaq_p_s16(__a, __b, __c, __p)
@@ -1943,18 +1913,6 @@ 
 #define vdupq_x_n_u8(__a, __p) __arm_vdupq_x_n_u8(__a, __p)
 #define vdupq_x_n_u16(__a, __p) __arm_vdupq_x_n_u16(__a, __p)
 #define vdupq_x_n_u32(__a, __p) __arm_vdupq_x_n_u32(__a, __p)
-#define vminq_x_s8(__a, __b, __p) __arm_vminq_x_s8(__a, __b, __p)
-#define vminq_x_s16(__a, __b, __p) __arm_vminq_x_s16(__a, __b, __p)
-#define vminq_x_s32(__a, __b, __p) __arm_vminq_x_s32(__a, __b, __p)
-#define vminq_x_u8(__a, __b, __p) __arm_vminq_x_u8(__a, __b, __p)
-#define vminq_x_u16(__a, __b, __p) __arm_vminq_x_u16(__a, __b, __p)
-#define vminq_x_u32(__a, __b, __p) __arm_vminq_x_u32(__a, __b, __p)
-#define vmaxq_x_s8(__a, __b, __p) __arm_vmaxq_x_s8(__a, __b, __p)
-#define vmaxq_x_s16(__a, __b, __p) __arm_vmaxq_x_s16(__a, __b, __p)
-#define vmaxq_x_s32(__a, __b, __p) __arm_vmaxq_x_s32(__a, __b, __p)
-#define vmaxq_x_u8(__a, __b, __p) __arm_vmaxq_x_u8(__a, __b, __p)
-#define vmaxq_x_u16(__a, __b, __p) __arm_vmaxq_x_u16(__a, __b, __p)
-#define vmaxq_x_u32(__a, __b, __p) __arm_vmaxq_x_u32(__a, __b, __p)
 #define vabsq_x_s8(__a, __p) __arm_vabsq_x_s8(__a, __p)
 #define vabsq_x_s16(__a, __p) __arm_vabsq_x_s16(__a, __p)
 #define vabsq_x_s32(__a, __p) __arm_vabsq_x_s32(__a, __p)
@@ -2937,13 +2895,6 @@  __arm_vminvq_u8 (uint8_t __a, uint8x16_t __b)
   return __builtin_mve_vminvq_uv16qi (__a, __b);
 }
 
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_u8 (uint8x16_t __a, uint8x16_t __b)
-{
-  return __builtin_mve_vminq_uv16qi (__a, __b);
-}
-
 __extension__ extern __inline uint8_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq_u8 (uint8_t __a, uint8x16_t __b)
@@ -2951,13 +2902,6 @@  __arm_vmaxvq_u8 (uint8_t __a, uint8x16_t __b)
   return __builtin_mve_vmaxvq_uv16qi (__a, __b);
 }
 
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_u8 (uint8x16_t __a, uint8x16_t __b)
-{
-  return __builtin_mve_vmaxq_uv16qi (__a, __b);
-}
-
 __extension__ extern __inline mve_pred16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vcmpneq_n_u8 (uint8x16_t __a, uint8_t __b)
@@ -3233,13 +3177,6 @@  __arm_vminvq_s8 (int8_t __a, int8x16_t __b)
   return __builtin_mve_vminvq_sv16qi (__a, __b);
 }
 
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_s8 (int8x16_t __a, int8x16_t __b)
-{
-  return __builtin_mve_vminq_sv16qi (__a, __b);
-}
-
 __extension__ extern __inline int8_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq_s8 (int8_t __a, int8x16_t __b)
@@ -3247,13 +3184,6 @@  __arm_vmaxvq_s8 (int8_t __a, int8x16_t __b)
   return __builtin_mve_vmaxvq_sv16qi (__a, __b);
 }
 
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_s8 (int8x16_t __a, int8x16_t __b)
-{
-  return __builtin_mve_vmaxq_sv16qi (__a, __b);
-}
-
 __extension__ extern __inline int8x16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vhcaddq_rot90_s8 (int8x16_t __a, int8x16_t __b)
@@ -3345,13 +3275,6 @@  __arm_vminvq_u16 (uint16_t __a, uint16x8_t __b)
   return __builtin_mve_vminvq_uv8hi (__a, __b);
 }
 
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_u16 (uint16x8_t __a, uint16x8_t __b)
-{
-  return __builtin_mve_vminq_uv8hi (__a, __b);
-}
-
 __extension__ extern __inline uint16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq_u16 (uint16_t __a, uint16x8_t __b)
@@ -3359,13 +3282,6 @@  __arm_vmaxvq_u16 (uint16_t __a, uint16x8_t __b)
   return __builtin_mve_vmaxvq_uv8hi (__a, __b);
 }
 
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_u16 (uint16x8_t __a, uint16x8_t __b)
-{
-  return __builtin_mve_vmaxq_uv8hi (__a, __b);
-}
-
 __extension__ extern __inline mve_pred16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vcmpneq_n_u16 (uint16x8_t __a, uint16_t __b)
@@ -3641,13 +3557,6 @@  __arm_vminvq_s16 (int16_t __a, int16x8_t __b)
   return __builtin_mve_vminvq_sv8hi (__a, __b);
 }
 
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_s16 (int16x8_t __a, int16x8_t __b)
-{
-  return __builtin_mve_vminq_sv8hi (__a, __b);
-}
-
 __extension__ extern __inline int16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq_s16 (int16_t __a, int16x8_t __b)
@@ -3655,13 +3564,6 @@  __arm_vmaxvq_s16 (int16_t __a, int16x8_t __b)
   return __builtin_mve_vmaxvq_sv8hi (__a, __b);
 }
 
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_s16 (int16x8_t __a, int16x8_t __b)
-{
-  return __builtin_mve_vmaxq_sv8hi (__a, __b);
-}
-
 __extension__ extern __inline int16x8_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vhcaddq_rot90_s16 (int16x8_t __a, int16x8_t __b)
@@ -3753,13 +3655,6 @@  __arm_vminvq_u32 (uint32_t __a, uint32x4_t __b)
   return __builtin_mve_vminvq_uv4si (__a, __b);
 }
 
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_u32 (uint32x4_t __a, uint32x4_t __b)
-{
-  return __builtin_mve_vminq_uv4si (__a, __b);
-}
-
 __extension__ extern __inline uint32_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq_u32 (uint32_t __a, uint32x4_t __b)
@@ -3767,13 +3662,6 @@  __arm_vmaxvq_u32 (uint32_t __a, uint32x4_t __b)
   return __builtin_mve_vmaxvq_uv4si (__a, __b);
 }
 
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_u32 (uint32x4_t __a, uint32x4_t __b)
-{
-  return __builtin_mve_vmaxq_uv4si (__a, __b);
-}
-
 __extension__ extern __inline mve_pred16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vcmpneq_n_u32 (uint32x4_t __a, uint32_t __b)
@@ -4049,13 +3937,6 @@  __arm_vminvq_s32 (int32_t __a, int32x4_t __b)
   return __builtin_mve_vminvq_sv4si (__a, __b);
 }
 
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_s32 (int32x4_t __a, int32x4_t __b)
-{
-  return __builtin_mve_vminq_sv4si (__a, __b);
-}
-
 __extension__ extern __inline int32_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq_s32 (int32_t __a, int32x4_t __b)
@@ -4063,13 +3944,6 @@  __arm_vmaxvq_s32 (int32_t __a, int32x4_t __b)
   return __builtin_mve_vmaxvq_sv4si (__a, __b);
 }
 
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_s32 (int32x4_t __a, int32x4_t __b)
-{
-  return __builtin_mve_vmaxq_sv4si (__a, __b);
-}
-
 __extension__ extern __inline int32x4_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vhcaddq_rot90_s32 (int32x4_t __a, int32x4_t __b)
@@ -7380,90 +7254,6 @@  __arm_vhcaddq_rot90_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, m
   return __builtin_mve_vhcaddq_rot90_m_sv8hi (__inactive, __a, __b, __p);
 }
 
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_sv16qi (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_sv4si (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_sv8hi (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_uv16qi (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_uv4si (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_uv8hi (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_sv16qi (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_sv4si (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_sv8hi (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_uv16qi (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_uv4si (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_uv8hi (__inactive, __a, __b, __p);
-}
-
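(For readers unfamiliar with the predicated "_m" variants being moved into the new
framework: their semantics can be sketched with a host-runnable scalar model.  This
is illustrative only and not part of the patch; `model_vmaxq_m_s8` is a hypothetical
name, and it models the per-lane behaviour of `vmaxq_m_s8` on a 16-lane int8 vector,
where lanes with a clear predicate bit take their value from the inactive operand.)

```c
#include <stdint.h>

/* Illustrative model (not from the patch): the predicated "_m" variants
   compute a per-lane max, taking the lane from the inactive operand when
   the corresponding predicate bit is clear.  For int8 vectors the 16-bit
   MVE predicate carries one bit per byte lane.  */
static void
model_vmaxq_m_s8 (int8_t *dst, const int8_t *inactive,
		  const int8_t *a, const int8_t *b, uint16_t p)
{
  for (int lane = 0; lane < 16; lane++)
    dst[lane] = ((p >> lane) & 1)
		? (a[lane] > b[lane] ? a[lane] : b[lane])
		: inactive[lane];
}
```

The "_x" variants removed further down reduce to the same operation with an
uninitialized inactive vector, so inactive lanes hold unspecified values.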
 __extension__ extern __inline int32_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmladavaq_p_s8 (int32_t __a, int8x16_t __b, int8x16_t __c, mve_pred16_t __p)
@@ -10635,90 +10425,6 @@  __arm_vdupq_x_n_u32 (uint32_t __a, mve_pred16_t __p)
   return __builtin_mve_vdupq_m_n_uv4si (__arm_vuninitializedq_u32 (), __a, __p);
 }
 
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __b, __p);
-}
-
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __b, __p);
-}
-
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __b, __p);
-}
-
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, __b, __p);
-}
-
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, __b, __p);
-}
-
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vminq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __b, __p);
-}
-
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __b, __p);
-}
-
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __b, __p);
-}
-
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __b, __p);
-}
-
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, __b, __p);
-}
-
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, __b, __p);
-}
-
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
-{
-  return __builtin_mve_vmaxq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __b, __p);
-}
-
 __extension__ extern __inline int8x16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vabsq_x_s8 (int8x16_t __a, mve_pred16_t __p)
@@ -15624,13 +15330,6 @@  __arm_vminvq (uint8_t __a, uint8x16_t __b)
  return __arm_vminvq_u8 (__a, __b);
 }
 
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq (uint8x16_t __a, uint8x16_t __b)
-{
- return __arm_vminq_u8 (__a, __b);
-}
-
 __extension__ extern __inline uint8_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq (uint8_t __a, uint8x16_t __b)
@@ -15638,13 +15337,6 @@  __arm_vmaxvq (uint8_t __a, uint8x16_t __b)
  return __arm_vmaxvq_u8 (__a, __b);
 }
 
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq (uint8x16_t __a, uint8x16_t __b)
-{
- return __arm_vmaxq_u8 (__a, __b);
-}
-
 __extension__ extern __inline mve_pred16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vcmpneq (uint8x16_t __a, uint8_t __b)
@@ -15918,13 +15610,6 @@  __arm_vminvq (int8_t __a, int8x16_t __b)
  return __arm_vminvq_s8 (__a, __b);
 }
 
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq (int8x16_t __a, int8x16_t __b)
-{
- return __arm_vminq_s8 (__a, __b);
-}
-
 __extension__ extern __inline int8_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq (int8_t __a, int8x16_t __b)
@@ -15932,13 +15617,6 @@  __arm_vmaxvq (int8_t __a, int8x16_t __b)
  return __arm_vmaxvq_s8 (__a, __b);
 }
 
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq (int8x16_t __a, int8x16_t __b)
-{
- return __arm_vmaxq_s8 (__a, __b);
-}
-
 __extension__ extern __inline int8x16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vhcaddq_rot90 (int8x16_t __a, int8x16_t __b)
@@ -16030,13 +15708,6 @@  __arm_vminvq (uint16_t __a, uint16x8_t __b)
  return __arm_vminvq_u16 (__a, __b);
 }
 
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq (uint16x8_t __a, uint16x8_t __b)
-{
- return __arm_vminq_u16 (__a, __b);
-}
-
 __extension__ extern __inline uint16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq (uint16_t __a, uint16x8_t __b)
@@ -16044,13 +15715,6 @@  __arm_vmaxvq (uint16_t __a, uint16x8_t __b)
  return __arm_vmaxvq_u16 (__a, __b);
 }
 
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq (uint16x8_t __a, uint16x8_t __b)
-{
- return __arm_vmaxq_u16 (__a, __b);
-}
-
 __extension__ extern __inline mve_pred16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vcmpneq (uint16x8_t __a, uint16_t __b)
@@ -16324,13 +15988,6 @@  __arm_vminvq (int16_t __a, int16x8_t __b)
  return __arm_vminvq_s16 (__a, __b);
 }
 
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq (int16x8_t __a, int16x8_t __b)
-{
- return __arm_vminq_s16 (__a, __b);
-}
-
 __extension__ extern __inline int16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq (int16_t __a, int16x8_t __b)
@@ -16338,13 +15995,6 @@  __arm_vmaxvq (int16_t __a, int16x8_t __b)
  return __arm_vmaxvq_s16 (__a, __b);
 }
 
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq (int16x8_t __a, int16x8_t __b)
-{
- return __arm_vmaxq_s16 (__a, __b);
-}
-
 __extension__ extern __inline int16x8_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vhcaddq_rot90 (int16x8_t __a, int16x8_t __b)
@@ -16436,13 +16086,6 @@  __arm_vminvq (uint32_t __a, uint32x4_t __b)
  return __arm_vminvq_u32 (__a, __b);
 }
 
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq (uint32x4_t __a, uint32x4_t __b)
-{
- return __arm_vminq_u32 (__a, __b);
-}
-
 __extension__ extern __inline uint32_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq (uint32_t __a, uint32x4_t __b)
@@ -16450,13 +16093,6 @@  __arm_vmaxvq (uint32_t __a, uint32x4_t __b)
  return __arm_vmaxvq_u32 (__a, __b);
 }
 
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq (uint32x4_t __a, uint32x4_t __b)
-{
- return __arm_vmaxq_u32 (__a, __b);
-}
-
 __extension__ extern __inline mve_pred16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vcmpneq (uint32x4_t __a, uint32_t __b)
@@ -16730,13 +16366,6 @@  __arm_vminvq (int32_t __a, int32x4_t __b)
  return __arm_vminvq_s32 (__a, __b);
 }
 
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq (int32x4_t __a, int32x4_t __b)
-{
- return __arm_vminq_s32 (__a, __b);
-}
-
 __extension__ extern __inline int32_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmaxvq (int32_t __a, int32x4_t __b)
@@ -16744,13 +16373,6 @@  __arm_vmaxvq (int32_t __a, int32x4_t __b)
  return __arm_vmaxvq_s32 (__a, __b);
 }
 
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq (int32x4_t __a, int32x4_t __b)
-{
- return __arm_vmaxq_s32 (__a, __b);
-}
-
 __extension__ extern __inline int32x4_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vhcaddq_rot90 (int32x4_t __a, int32x4_t __b)
@@ -20020,90 +19642,6 @@  __arm_vhcaddq_rot90_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_p
  return __arm_vhcaddq_rot90_m_s16 (__inactive, __a, __b, __p);
 }
 
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_m_s8 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_m_s32 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_m_s16 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_m_u8 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_m_u32 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_m_u16 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_m_s8 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_m_s32 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_m_s16 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_m_u8 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_m_u32 (__inactive, __a, __b, __p);
-}
-
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_m_u16 (__inactive, __a, __b, __p);
-}
-
 __extension__ extern __inline int32_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vmladavaq_p (int32_t __a, int8x16_t __b, int8x16_t __c, mve_pred16_t __p)
@@ -22806,90 +22344,6 @@  __arm_viwdupq_x_u32 (uint32_t *__a, uint32_t __b, const int __imm, mve_pred16_t
  return __arm_viwdupq_x_wb_u32 (__a, __b, __imm, __p);
 }
 
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_x_s8 (__a, __b, __p);
-}
-
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_x_s16 (__a, __b, __p);
-}
-
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_x_s32 (__a, __b, __p);
-}
-
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_x_u8 (__a, __b, __p);
-}
-
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_x_u16 (__a, __b, __p);
-}
-
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vminq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
-{
- return __arm_vminq_x_u32 (__a, __b, __p);
-}
-
-__extension__ extern __inline int8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_x_s8 (__a, __b, __p);
-}
-
-__extension__ extern __inline int16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_x_s16 (__a, __b, __p);
-}
-
-__extension__ extern __inline int32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_x_s32 (__a, __b, __p);
-}
-
-__extension__ extern __inline uint8x16_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_x_u8 (__a, __b, __p);
-}
-
-__extension__ extern __inline uint16x8_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_x_u16 (__a, __b, __p);
-}
-
-__extension__ extern __inline uint32x4_t
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
-__arm_vmaxq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
-{
- return __arm_vmaxq_x_u32 (__a, __b, __p);
-}
-
 __extension__ extern __inline int8x16_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
 __arm_vabsq_x (int8x16_t __a, mve_pred16_t __p)
@@ -27274,16 +26728,6 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vhcaddq_rot90_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
   int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vhcaddq_rot90_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));})
 
-#define __arm_vminq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
-
 #define __arm_vminaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
   _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
@@ -27291,16 +26735,6 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
   int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));})
 
-#define __arm_vmaxq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
-
 #define __arm_vmaxaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
   _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
@@ -28867,16 +28301,6 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmullbq_int_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
   int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmullbq_int_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
 
-#define __arm_vminq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
-
 #define __arm_vminaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
   _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
@@ -28884,16 +28308,6 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
   int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));})
 
-#define __arm_vmaxq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));})
-
 #define __arm_vmaxaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
   _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
@@ -30608,28 +30022,6 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vhcaddq_rot90_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
   int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vhcaddq_rot90_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3));})
 
-#define __arm_vmaxq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  __typeof(p2) __p2 = (p2); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
-
-#define __arm_vminq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  __typeof(p2) __p2 = (p2); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
-
 #define __arm_vmlaq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
   __typeof(p2) __p2 = (p2); \
@@ -31068,26 +30460,6 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_int_n][__ARM_mve_type_int16x8_t]: __arm_vminavq_p_s16 (__p0, __ARM_mve_coerce(__p1, int16x8_t), p2), \
   int (*)[__ARM_mve_type_int_n][__ARM_mve_type_int32x4_t]: __arm_vminavq_p_s32 (__p0, __ARM_mve_coerce(__p1, int32x4_t), p2));})
 
-#define __arm_vmaxq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \
-  __typeof(p2) __p2 = (p2); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
-
-#define __arm_vminq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \
-  __typeof(p2) __p2 = (p2); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
-
 #define __arm_vminvq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
   _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \