[v2,ARM,1/4x] : MVE intrinsics with quaternary operands.

Message ID AM0PR08MB5380CA181E4721D6DF3F2CAD9BF70@AM0PR08MB5380.eurprd08.prod.outlook.com
State: New

Commit Message

Srinath Parvathaneni March 18, 2020, 11:29 a.m.
Hello Kyrill,

The following patch is the rebased version of v1.
(v1: https://gcc.gnu.org/pipermail/gcc-patches/2019-November/534332.html)

####

Hello,

This patch adds support for the following MVE ACLE intrinsics with quaternary operands.

vsriq_m_n_s8, vsubq_m_s8, vsubq_x_s8, vcvtq_m_n_f16_u16, vcvtq_x_n_f16_u16,
vqshluq_m_n_s8, vabavq_p_s8, vsriq_m_n_u8, vshlq_m_u8, vshlq_x_u8, vsubq_m_u8,
vsubq_x_u8, vabavq_p_u8, vshlq_m_s8, vshlq_x_s8, vcvtq_m_n_f16_s16,
vcvtq_x_n_f16_s16, vsriq_m_n_s16, vsubq_m_s16, vsubq_x_s16, vcvtq_m_n_f32_u32,
vcvtq_x_n_f32_u32, vqshluq_m_n_s16, vabavq_p_s16, vsriq_m_n_u16,
vshlq_m_u16, vshlq_x_u16, vsubq_m_u16, vsubq_x_u16, vabavq_p_u16, vshlq_m_s16,
vshlq_x_s16, vcvtq_m_n_f32_s32, vcvtq_x_n_f32_s32, vsriq_m_n_s32, vsubq_m_s32,
vsubq_x_s32, vqshluq_m_n_s32, vabavq_p_s32, vsriq_m_n_u32, vshlq_m_u32,
vshlq_x_u32, vsubq_m_u32, vsubq_x_u32, vabavq_p_u32, vshlq_m_s32, vshlq_x_s32.

Please refer to the M-profile Vector Extension (MVE) intrinsics documentation [1] for more details.
[1] https://developer.arm.com/architectures/instruction-sets/simd-isas/helium/mve-intrinsics
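
As a rough illustration (a minimal sketch based on the signatures added by this
patch, not code taken from it; the function names below are made up), the four
operands are typically an inactive or accumulator value, one or two vector
inputs or an immediate, and a predicate of type mve_pred16_t:

#include "arm_mve.h"

/* Predicated subtraction: lanes where the predicate bit is clear take the
   corresponding lane of 'inactive' (merging predication).  */
int8x16_t
predicated_sub (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)
{
  return vsubq_m_s8 (inactive, a, b, p);
}

/* Predicated absolute-difference-and-accumulate, using the polymorphic
   variant; vabavq_p resolves to vabavq_p_s8 from the argument types.  */
uint32_t
predicated_abav (uint32_t acc, int8x16_t b, int8x16_t c, mve_pred16_t p)
{
  return vabavq_p (acc, b, c, p);
}

Both the explicitly typed names and the polymorphic variants used above are
provided by this patch.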

Regression tested on arm-none-eabi with no regressions found.

Ok for trunk?

Thanks,
Srinath.

gcc/ChangeLog:

2019-10-29  Andre Vieira  <andre.simoesdiasvieira@arm.com>
            Mihail Ionescu  <mihail.ionescu@arm.com>
            Srinath Parvathaneni  <srinath.parvathaneni@arm.com>

	* config/arm/arm-builtins.c (QUADOP_UNONE_UNONE_NONE_NONE_UNONE_QUALIFIERS):
	Define builtin qualifier.
	(QUADOP_NONE_NONE_NONE_NONE_UNONE_QUALIFIERS): Likewise.
	(QUADOP_NONE_NONE_NONE_IMM_UNONE_QUALIFIERS): Likewise.
	(QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE_QUALIFIERS): Likewise.
	(QUADOP_UNONE_UNONE_NONE_IMM_UNONE_QUALIFIERS): Likewise.
	(QUADOP_NONE_NONE_UNONE_IMM_UNONE_QUALIFIERS): Likewise.
	(QUADOP_UNONE_UNONE_UNONE_IMM_UNONE_QUALIFIERS): Likewise.
	(QUADOP_UNONE_UNONE_UNONE_NONE_UNONE_QUALIFIERS): Likewise.
	* config/arm/arm_mve.h (vsriq_m_n_s8): Define macro.
	(vsubq_m_s8): Likewise.
	(vcvtq_m_n_f16_u16): Likewise.
	(vqshluq_m_n_s8): Likewise.
	(vabavq_p_s8): Likewise.
	(vsriq_m_n_u8): Likewise.
	(vshlq_m_u8): Likewise.
	(vsubq_m_u8): Likewise.
	(vabavq_p_u8): Likewise.
	(vshlq_m_s8): Likewise.
	(vcvtq_m_n_f16_s16): Likewise.
	(vsriq_m_n_s16): Likewise.
	(vsubq_m_s16): Likewise.
	(vcvtq_m_n_f32_u32): Likewise.
	(vqshluq_m_n_s16): Likewise.
	(vabavq_p_s16): Likewise.
	(vsriq_m_n_u16): Likewise.
	(vshlq_m_u16): Likewise.
	(vsubq_m_u16): Likewise.
	(vabavq_p_u16): Likewise.
	(vshlq_m_s16): Likewise.
	(vcvtq_m_n_f32_s32): Likewise.
	(vsriq_m_n_s32): Likewise.
	(vsubq_m_s32): Likewise.
	(vqshluq_m_n_s32): Likewise.
	(vabavq_p_s32): Likewise.
	(vsriq_m_n_u32): Likewise.
	(vshlq_m_u32): Likewise.
	(vsubq_m_u32): Likewise.
	(vabavq_p_u32): Likewise.
	(vshlq_m_s32): Likewise.
	(__arm_vsriq_m_n_s8): Define intrinsic.
	(__arm_vsubq_m_s8): Likewise.
	(__arm_vqshluq_m_n_s8): Likewise.
	(__arm_vabavq_p_s8): Likewise.
	(__arm_vsriq_m_n_u8): Likewise.
	(__arm_vshlq_m_u8): Likewise.
	(__arm_vsubq_m_u8): Likewise.
	(__arm_vabavq_p_u8): Likewise.
	(__arm_vshlq_m_s8): Likewise.
	(__arm_vsriq_m_n_s16): Likewise.
	(__arm_vsubq_m_s16): Likewise.
	(__arm_vqshluq_m_n_s16): Likewise.
	(__arm_vabavq_p_s16): Likewise.
	(__arm_vsriq_m_n_u16): Likewise.
	(__arm_vshlq_m_u16): Likewise.
	(__arm_vsubq_m_u16): Likewise.
	(__arm_vabavq_p_u16): Likewise.
	(__arm_vshlq_m_s16): Likewise.
	(__arm_vsriq_m_n_s32): Likewise.
	(__arm_vsubq_m_s32): Likewise.
	(__arm_vqshluq_m_n_s32): Likewise.
	(__arm_vabavq_p_s32): Likewise.
	(__arm_vsriq_m_n_u32): Likewise.
	(__arm_vshlq_m_u32): Likewise.
	(__arm_vsubq_m_u32): Likewise.
	(__arm_vabavq_p_u32): Likewise.
	(__arm_vshlq_m_s32): Likewise.
	(__arm_vcvtq_m_n_f16_u16): Likewise.
	(__arm_vcvtq_m_n_f16_s16): Likewise.
	(__arm_vcvtq_m_n_f32_u32): Likewise.
	(__arm_vcvtq_m_n_f32_s32): Likewise.
	(vcvtq_m_n): Define polymorphic variant.
	(vqshluq_m_n): Likewise.
	(vshlq_m): Likewise.
	(vsriq_m_n): Likewise.
	(vsubq_m): Likewise.
	(vabavq_p): Likewise.
	* config/arm/arm_mve_builtins.def
	(QUADOP_UNONE_UNONE_NONE_NONE_UNONE_QUALIFIERS): Use builtin qualifier.
	(QUADOP_NONE_NONE_NONE_NONE_UNONE_QUALIFIERS): Likewise.
	(QUADOP_NONE_NONE_NONE_IMM_UNONE_QUALIFIERS): Likewise.
	(QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE_QUALIFIERS): Likewise.
	(QUADOP_UNONE_UNONE_NONE_IMM_UNONE_QUALIFIERS): Likewise.
	(QUADOP_NONE_NONE_UNONE_IMM_UNONE_QUALIFIERS): Likewise.
	(QUADOP_UNONE_UNONE_UNONE_IMM_UNONE_QUALIFIERS): Likewise.
	(QUADOP_UNONE_UNONE_UNONE_NONE_UNONE_QUALIFIERS): Likewise.
	* config/arm/mve.md (VABAVQ_P): Define iterator.
	(VSHLQ_M): Likewise.
	(VSRIQ_M_N): Likewise.
	(VSUBQ_M): Likewise.
	(VCVTQ_M_N_TO_F): Likewise.
	(mve_vabavq_p_<supf><mode>): Define RTL pattern.
	(mve_vqshluq_m_n_s<mode>): Likewise.
	(mve_vshlq_m_<supf><mode>): Likewise.
	(mve_vsriq_m_n_<supf><mode>): Likewise.
	(mve_vsubq_m_<supf><mode>): Likewise.
	(mve_vcvtq_m_n_to_f_<supf><mode>): Likewise.

gcc/testsuite/ChangeLog:

2019-10-29  Andre Vieira  <andre.simoesdiasvieira@arm.com>
            Mihail Ionescu  <mihail.ionescu@arm.com>
            Srinath Parvathaneni  <srinath.parvathaneni@arm.com>

	* gcc.target/arm/mve/intrinsics/vabavq_p_s16.c: New test.
	* gcc.target/arm/mve/intrinsics/vabavq_p_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vabavq_p_s8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vabavq_p_u16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vabavq_p_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vabavq_p_u8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_s16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_u16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vqshluq_m_n_s16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vqshluq_m_n_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vqshluq_m_n_s8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vshlq_m_s16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vshlq_m_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vshlq_m_s8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vshlq_m_u16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vshlq_m_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vshlq_m_u8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsriq_m_n_s16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsriq_m_n_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsriq_m_n_s8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsriq_m_n_u16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsriq_m_n_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsriq_m_n_u8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsubq_m_s16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsubq_m_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsubq_m_s8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsubq_m_u16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsubq_m_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsubq_m_u8.c: Likewise.


###############     Attachment also inlined for ease of reply    ###############

Comments

Kyrylo Tkachov March 18, 2020, 4:48 p.m. | #1
Hi Srinath,

> -----Original Message-----
> From: Srinath Parvathaneni <Srinath.Parvathaneni@arm.com>
> Sent: 18 March 2020 11:29
> To: gcc-patches@gcc.gnu.org
> Cc: Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>
> Subject: [PATCH v2][ARM][GCC][1/4x]: MVE intrinsics with quaternary operands.

>
> [Patch description trimmed; quoted in full in the original message above.]
>
> Ok for trunk?


Thanks, I've pushed this patch to master.
Kyrill


>
> [Sign-off and ChangeLog entries trimmed; quoted in full in the original message above.]
>
> ###############     Attachment also inlined for ease of reply    ###############
>

> diff --git a/gcc/config/arm/arm-builtins.c b/gcc/config/arm/arm-builtins.c
> index af4f3b6dddf72cb73e87aa42b8b09b7dc9a89ebe..26f0379f62b95886414d2eb4d7c6a6c4fc235e60 100644
> --- a/gcc/config/arm/arm-builtins.c
> +++ b/gcc/config/arm/arm-builtins.c
> @@ -523,6 +523,62 @@ arm_ternop_none_none_none_none_qualifiers[SIMD_MAX_BUILTIN_ARGS]
>  #define TERNOP_NONE_NONE_NONE_NONE_QUALIFIERS \
>    (arm_ternop_none_none_none_none_qualifiers)
> 
> +static enum arm_type_qualifiers
> +arm_quadop_unone_unone_none_none_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
> +  = { qualifier_unsigned, qualifier_unsigned, qualifier_none, qualifier_none,
> +    qualifier_unsigned };
> +#define QUADOP_UNONE_UNONE_NONE_NONE_UNONE_QUALIFIERS \
> +  (arm_quadop_unone_unone_none_none_unone_qualifiers)
> +
> +static enum arm_type_qualifiers
> +arm_quadop_none_none_none_none_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
> +  = { qualifier_none, qualifier_none, qualifier_none, qualifier_none,
> +    qualifier_unsigned };
> +#define QUADOP_NONE_NONE_NONE_NONE_UNONE_QUALIFIERS \
> +  (arm_quadop_none_none_none_none_unone_qualifiers)
> +
> +static enum arm_type_qualifiers
> +arm_quadop_none_none_none_imm_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
> +  = { qualifier_none, qualifier_none, qualifier_none, qualifier_immediate,
> +    qualifier_unsigned };
> +#define QUADOP_NONE_NONE_NONE_IMM_UNONE_QUALIFIERS \
> +  (arm_quadop_none_none_none_imm_unone_qualifiers)
> +
> +static enum arm_type_qualifiers
> +arm_quadop_unone_unone_unone_unone_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
> +  = { qualifier_unsigned, qualifier_unsigned, qualifier_unsigned,
> +    qualifier_unsigned, qualifier_unsigned };
> +#define QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE_QUALIFIERS \
> +  (arm_quadop_unone_unone_unone_unone_unone_qualifiers)
> +
> +static enum arm_type_qualifiers
> +arm_quadop_unone_unone_none_imm_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
> +  = { qualifier_unsigned, qualifier_unsigned, qualifier_none,
> +    qualifier_immediate, qualifier_unsigned };
> +#define QUADOP_UNONE_UNONE_NONE_IMM_UNONE_QUALIFIERS \
> +  (arm_quadop_unone_unone_none_imm_unone_qualifiers)
> +
> +static enum arm_type_qualifiers
> +arm_quadop_none_none_unone_imm_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
> +  = { qualifier_none, qualifier_none, qualifier_unsigned, qualifier_immediate,
> +    qualifier_unsigned };
> +#define QUADOP_NONE_NONE_UNONE_IMM_UNONE_QUALIFIERS \
> +  (arm_quadop_none_none_unone_imm_unone_qualifiers)
> +
> +static enum arm_type_qualifiers
> +arm_quadop_unone_unone_unone_imm_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
> +  = { qualifier_unsigned, qualifier_unsigned, qualifier_unsigned,
> +    qualifier_immediate, qualifier_unsigned };
> +#define QUADOP_UNONE_UNONE_UNONE_IMM_UNONE_QUALIFIERS \
> +  (arm_quadop_unone_unone_unone_imm_unone_qualifiers)
> +
> +static enum arm_type_qualifiers
> +arm_quadop_unone_unone_unone_none_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
> +  = { qualifier_unsigned, qualifier_unsigned, qualifier_unsigned,
> +    qualifier_none, qualifier_unsigned };
> +#define QUADOP_UNONE_UNONE_UNONE_NONE_UNONE_QUALIFIERS \
> +  (arm_quadop_unone_unone_unone_none_unone_qualifiers)
> +
>  /* End of Qualifier for MVE builtins.  */
> 
>     /* void ([T element type] *, T, immediate).  */

> diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h

> index

> 224583aa63d5d003f9d2b469b4830007bee92f0a..e236bffa31b4c9cc48efe150f

> 9f82a54b9fcae82 100644

> --- a/gcc/config/arm/arm_mve.h

> +++ b/gcc/config/arm/arm_mve.h

> @@ -1232,6 +1232,37 @@ typedef struct { uint8x16_t val[4]; } uint8x16x4_t;

>  #define vqmovnbq_m_u32(__a, __b, __p) __arm_vqmovnbq_m_u32(__a,

> __b, __p)

>  #define vqmovntq_m_u32(__a, __b, __p) __arm_vqmovntq_m_u32(__a,

> __b, __p)

>  #define vrev32q_m_u16(__inactive, __a, __p)

> __arm_vrev32q_m_u16(__inactive, __a, __p)

> +#define vsriq_m_n_s8(__a, __b,  __imm, __p) __arm_vsriq_m_n_s8(__a,

> __b,  __imm, __p)

> +#define vsubq_m_s8(__inactive, __a, __b, __p)

> __arm_vsubq_m_s8(__inactive, __a, __b, __p)

> +#define vcvtq_m_n_f16_u16(__inactive, __a,  __imm6, __p)

> __arm_vcvtq_m_n_f16_u16(__inactive, __a,  __imm6, __p)

> +#define vqshluq_m_n_s8(__inactive, __a,  __imm, __p)

> __arm_vqshluq_m_n_s8(__inactive, __a,  __imm, __p)

> +#define vabavq_p_s8(__a, __b, __c, __p) __arm_vabavq_p_s8(__a, __b,

> __c, __p)

> +#define vsriq_m_n_u8(__a, __b,  __imm, __p) __arm_vsriq_m_n_u8(__a,

> __b,  __imm, __p)

> +#define vshlq_m_u8(__inactive, __a, __b, __p)

> __arm_vshlq_m_u8(__inactive, __a, __b, __p)

> +#define vsubq_m_u8(__inactive, __a, __b, __p)

> __arm_vsubq_m_u8(__inactive, __a, __b, __p)

> +#define vabavq_p_u8(__a, __b, __c, __p) __arm_vabavq_p_u8(__a, __b,

> __c, __p)

> +#define vshlq_m_s8(__inactive, __a, __b, __p)

> __arm_vshlq_m_s8(__inactive, __a, __b, __p)

> +#define vcvtq_m_n_f16_s16(__inactive, __a,  __imm6, __p)

> __arm_vcvtq_m_n_f16_s16(__inactive, __a,  __imm6, __p)

> +#define vsriq_m_n_s16(__a, __b,  __imm, __p) __arm_vsriq_m_n_s16(__a,

> __b,  __imm, __p)

> +#define vsubq_m_s16(__inactive, __a, __b, __p)

> __arm_vsubq_m_s16(__inactive, __a, __b, __p)

> +#define vcvtq_m_n_f32_u32(__inactive, __a,  __imm6, __p)

> __arm_vcvtq_m_n_f32_u32(__inactive, __a,  __imm6, __p)

> +#define vqshluq_m_n_s16(__inactive, __a,  __imm, __p)

> __arm_vqshluq_m_n_s16(__inactive, __a,  __imm, __p)

> +#define vabavq_p_s16(__a, __b, __c, __p) __arm_vabavq_p_s16(__a, __b,

> __c, __p)

> +#define vsriq_m_n_u16(__a, __b,  __imm, __p) __arm_vsriq_m_n_u16(__a,

> __b,  __imm, __p)

> +#define vshlq_m_u16(__inactive, __a, __b, __p)

> __arm_vshlq_m_u16(__inactive, __a, __b, __p)

> +#define vsubq_m_u16(__inactive, __a, __b, __p)

> __arm_vsubq_m_u16(__inactive, __a, __b, __p)

> +#define vabavq_p_u16(__a, __b, __c, __p) __arm_vabavq_p_u16(__a, __b,

> __c, __p)

> +#define vshlq_m_s16(__inactive, __a, __b, __p)

> __arm_vshlq_m_s16(__inactive, __a, __b, __p)

> +#define vcvtq_m_n_f32_s32(__inactive, __a,  __imm6, __p)

> __arm_vcvtq_m_n_f32_s32(__inactive, __a,  __imm6, __p)

> +#define vsriq_m_n_s32(__a, __b,  __imm, __p) __arm_vsriq_m_n_s32(__a,

> __b,  __imm, __p)

> +#define vsubq_m_s32(__inactive, __a, __b, __p)

> __arm_vsubq_m_s32(__inactive, __a, __b, __p)

> +#define vqshluq_m_n_s32(__inactive, __a,  __imm, __p)

> __arm_vqshluq_m_n_s32(__inactive, __a,  __imm, __p)

> +#define vabavq_p_s32(__a, __b, __c, __p) __arm_vabavq_p_s32(__a, __b,

> __c, __p)

> +#define vsriq_m_n_u32(__a, __b,  __imm, __p) __arm_vsriq_m_n_u32(__a,

> __b,  __imm, __p)

> +#define vshlq_m_u32(__inactive, __a, __b, __p)

> __arm_vshlq_m_u32(__inactive, __a, __b, __p)

> +#define vsubq_m_u32(__inactive, __a, __b, __p)

> __arm_vsubq_m_u32(__inactive, __a, __b, __p)

> +#define vabavq_p_u32(__a, __b, __c, __p) __arm_vabavq_p_u32(__a, __b,

> __c, __p)

> +#define vshlq_m_s32(__inactive, __a, __b, __p)

> __arm_vshlq_m_s32(__inactive, __a, __b, __p)

>  #endif

> 

>  __extension__ extern __inline void

> @@ -7696,6 +7727,196 @@ __arm_vrev32q_m_u16 (uint16x8_t __inactive,

> uint16x8_t __a, mve_pred16_t __p)

>  {

>    return __builtin_mve_vrev32q_m_uv8hi (__inactive, __a, __p);

>  }

> +

> +__extension__ extern __inline int8x16_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsriq_m_n_s8 (int8x16_t __a, int8x16_t __b, const int __imm,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsriq_m_n_sv16qi (__a, __b, __imm, __p);

> +}

> +

> +__extension__ extern __inline int8x16_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsubq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsubq_m_sv16qi (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint8x16_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vqshluq_m_n_s8 (uint8x16_t __inactive, int8x16_t __a, const int

> __imm, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vqshluq_m_n_sv16qi (__inactive, __a, __imm, __p);

> +}

> +

> +__extension__ extern __inline uint32_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vabavq_p_s8 (uint32_t __a, int8x16_t __b, int8x16_t __c,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vabavq_p_sv16qi (__a, __b, __c, __p);

> +}

> +

> +__extension__ extern __inline uint8x16_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsriq_m_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __imm,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsriq_m_n_uv16qi (__a, __b, __imm, __p);

> +}

> +

> +__extension__ extern __inline uint8x16_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vshlq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, int8x16_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vshlq_m_uv16qi (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint8x16_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsubq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsubq_m_uv16qi (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint32_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vabavq_p_u8 (uint32_t __a, uint8x16_t __b, uint8x16_t __c,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vabavq_p_uv16qi (__a, __b, __c, __p);

> +}

> +

> +__extension__ extern __inline int8x16_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vshlq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vshlq_m_sv16qi (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline int16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsriq_m_n_s16 (int16x8_t __a, int16x8_t __b, const int __imm,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsriq_m_n_sv8hi (__a, __b, __imm, __p);

> +}

> +

> +__extension__ extern __inline int16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsubq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsubq_m_sv8hi (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vqshluq_m_n_s16 (uint16x8_t __inactive, int16x8_t __a, const int

> __imm, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vqshluq_m_n_sv8hi (__inactive, __a, __imm, __p);

> +}

> +

> +__extension__ extern __inline uint32_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vabavq_p_s16 (uint32_t __a, int16x8_t __b, int16x8_t __c,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vabavq_p_sv8hi (__a, __b, __c, __p);

> +}

> +

> +__extension__ extern __inline uint16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsriq_m_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __imm,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsriq_m_n_uv8hi (__a, __b, __imm, __p);

> +}

> +

> +__extension__ extern __inline uint16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vshlq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, int16x8_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vshlq_m_uv8hi (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsubq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t

> __b, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsubq_m_uv8hi (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint32_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vabavq_p_u16 (uint32_t __a, uint16x8_t __b, uint16x8_t __c,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vabavq_p_uv8hi (__a, __b, __c, __p);

> +}

> +

> +__extension__ extern __inline int16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vshlq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vshlq_m_sv8hi (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline int32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsriq_m_n_s32 (int32x4_t __a, int32x4_t __b, const int __imm,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsriq_m_n_sv4si (__a, __b, __imm, __p);

> +}

> +

> +__extension__ extern __inline int32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsubq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsubq_m_sv4si (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vqshluq_m_n_s32 (uint32x4_t __inactive, int32x4_t __a, const int

> __imm, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vqshluq_m_n_sv4si (__inactive, __a, __imm, __p);

> +}

> +

> +__extension__ extern __inline uint32_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vabavq_p_s32 (uint32_t __a, int32x4_t __b, int32x4_t __c,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vabavq_p_sv4si (__a, __b, __c, __p);

> +}

> +

> +__extension__ extern __inline uint32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsriq_m_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __imm,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsriq_m_n_uv4si (__a, __b, __imm, __p);

> +}

> +

> +__extension__ extern __inline uint32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vshlq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, int32x4_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vshlq_m_uv4si (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vsubq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t

> __b, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vsubq_m_uv4si (__inactive, __a, __b, __p);

> +}

> +

> +__extension__ extern __inline uint32_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vabavq_p_u32 (uint32_t __a, uint32x4_t __b, uint32x4_t __c,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vabavq_p_uv4si (__a, __b, __c, __p);

> +}

> +

> +__extension__ extern __inline int32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vshlq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b,

> mve_pred16_t __p)

> +{

> +  return __builtin_mve_vshlq_m_sv4si (__inactive, __a, __b, __p);

> +}

> +

>  #if (__ARM_FEATURE_MVE & 2) /* MVE Floating point.  */

> 

>  __extension__ extern __inline void

> @@ -9376,6 +9597,34 @@ __arm_vcvtq_m_u32_f32 (uint32x4_t __inactive,

> float32x4_t __a, mve_pred16_t __p)

>    return __builtin_mve_vcvtq_m_from_f_uv4si (__inactive, __a, __p);

>  }

> 

> +__extension__ extern __inline float16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vcvtq_m_n_f16_u16 (float16x8_t __inactive, uint16x8_t __a, const

> int __imm6, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vcvtq_m_n_to_f_uv8hf (__inactive, __a, __imm6,

> __p);

> +}

> +

> +__extension__ extern __inline float16x8_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vcvtq_m_n_f16_s16 (float16x8_t __inactive, int16x8_t __a, const int

> __imm6, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vcvtq_m_n_to_f_sv8hf (__inactive, __a, __imm6,

> __p);

> +}

> +

> +__extension__ extern __inline float32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vcvtq_m_n_f32_u32 (float32x4_t __inactive, uint32x4_t __a, const

> int __imm6, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vcvtq_m_n_to_f_uv4sf (__inactive, __a, __imm6,

> __p);

> +}

> +

> +__extension__ extern __inline float32x4_t

> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))

> +__arm_vcvtq_m_n_f32_s32 (float32x4_t __inactive, int32x4_t __a, const int

> __imm6, mve_pred16_t __p)

> +{

> +  return __builtin_mve_vcvtq_m_n_to_f_sv4sf (__inactive, __a, __imm6,

> __p);

> +}

> +

>  #endif

> 

>  enum {

> @@ -11008,6 +11257,15 @@ extern void *__ARM_undef;

>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_float16x8_t]:

> __arm_vcvtq_m_u16_f16 (__ARM_mve_coerce(__p0, uint16x8_t),

> __ARM_mve_coerce(__p1, float16x8_t), p2), \

>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_float32x4_t]:

> __arm_vcvtq_m_u32_f32 (__ARM_mve_coerce(__p0, uint32x4_t),

> __ARM_mve_coerce(__p1, float32x4_t), p2));})

> 

> +#define vcvtq_m_n(p0,p1,p2,p3) __arm_vcvtq_m_n(p0,p1,p2,p3)

> +#define __arm_vcvtq_m_n(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \

> +  __typeof(p1) __p1 = (p1); \

> +  _Generic( (int

> (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \

> +  int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_int16x8_t]:

> __arm_vcvtq_m_n_f16_s16 (__ARM_mve_coerce(__p0, float16x8_t),

> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \

> +  int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_int32x4_t]:

> __arm_vcvtq_m_n_f32_s32 (__ARM_mve_coerce(__p0, float32x4_t),

> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \

> +  int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_uint16x8_t]:

> __arm_vcvtq_m_n_f16_u16 (__ARM_mve_coerce(__p0, float16x8_t),

> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \

> +  int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_uint32x4_t]:

> __arm_vcvtq_m_n_f32_u32 (__ARM_mve_coerce(__p0, float32x4_t),

> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})

> +

>  #define vabsq_m(p0,p1,p2) __arm_vabsq_m(p0,p1,p2)

>  #define __arm_vabsq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \

>    __typeof(p1) __p1 = (p1); \

> @@ -11050,19 +11308,6 @@ extern void *__ARM_undef;

>    int

> (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t][__ARM_

> mve_type_float16x8_t]: __arm_vcmlaq_rot90_f16

> (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1,

> float16x8_t), __ARM_mve_coerce(__p2, float16x8_t)), \

>    int

> (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t][__ARM_

> mve_type_float32x4_t]: __arm_vcmlaq_rot90_f32

> (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1,

> float32x4_t), __ARM_mve_coerce(__p2, float32x4_t)));})

> 

> -#define vcmpeqq_m_n(p0,p1,p2) __arm_vcmpeqq_m_n(p0,p1,p2)

> -#define __arm_vcmpeqq_m_n(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \

> -  __typeof(p1) __p1 = (p1); \

> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,

> \

> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8_t]:

> __arm_vcmpeqq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t),

> __ARM_mve_coerce(__p1, int8_t), p2), \

> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16_t]:

> __arm_vcmpeqq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t),

> __ARM_mve_coerce(__p1, int16_t), p2), \

> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32_t]:

> __arm_vcmpeqq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t),

> __ARM_mve_coerce(__p1, int32_t), p2), \

> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8_t]:

> __arm_vcmpeqq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t),

> __ARM_mve_coerce(__p1, uint8_t), p2), \

> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16_t]:

> __arm_vcmpeqq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t),

> __ARM_mve_coerce(__p1, uint16_t), p2), \

> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32_t]:

> __arm_vcmpeqq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t),

> __ARM_mve_coerce(__p1, uint32_t), p2), \

> -  int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16_t]:

> __arm_vcmpeqq_m_n_f16 (__ARM_mve_coerce(__p0, float16x8_t),

> __ARM_mve_coerce(__p1, float16_t), p2), \

> -  int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32_t]:

> __arm_vcmpeqq_m_n_f32 (__ARM_mve_coerce(__p0, float32x4_t),

> __ARM_mve_coerce(__p1, float32_t), p2));})

> -

>  #define vrndxq_m(p0,p1,p2) __arm_vrndxq_m(p0,p1,p2)

>  #define __arm_vrndxq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \

>    __typeof(p1) __p1 = (p1); \

> @@ -13005,28 +13250,6 @@ extern void *__ARM_undef;

>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]:

> __arm_vcmpcsq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t),

> __ARM_mve_coerce(__p1, uint16x8_t), p2), \

>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]:

> __arm_vcmpcsq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t),

> __ARM_mve_coerce(__p1, uint32x4_t), p2));})

> 

> -#define vcmpeqq_m_n(p0,p1,p2) __arm_vcmpeqq_m_n(p0,p1,p2)

> -#define __arm_vcmpeqq_m_n(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \

> -  __typeof(p1) __p1 = (p1); \

> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,

> \

> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8_t]:

> __arm_vcmpeqq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t),

> __ARM_mve_coerce(__p1, int8_t), p2), \

> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16_t]:

> __arm_vcmpeqq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t),

> __ARM_mve_coerce(__p1, int16_t), p2), \

> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32_t]:

> __arm_vcmpeqq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t),

> __ARM_mve_coerce(__p1, int32_t), p2), \

> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8_t]:

> __arm_vcmpeqq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t),

> __ARM_mve_coerce(__p1, uint8_t), p2), \

> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16_t]:

> __arm_vcmpeqq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t),

> __ARM_mve_coerce(__p1, uint16_t), p2), \

> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32_t]:

> __arm_vcmpeqq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t),

> __ARM_mve_coerce(__p1, uint32_t), p2));})

> -

> -#define vcmpeqq_m(p0,p1,p2) __arm_vcmpeqq_m(p0,p1,p2)

> -#define __arm_vcmpeqq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \

> -  __typeof(p1) __p1 = (p1); \

> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,

> \

> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]:

> __arm_vcmpeqq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t),

> __ARM_mve_coerce(__p1, int8x16_t), p2), \

> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]:

> __arm_vcmpeqq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t),

> __ARM_mve_coerce(__p1, int16x8_t), p2), \

> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]:

> __arm_vcmpeqq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t),

> __ARM_mve_coerce(__p1, int32x4_t), p2), \

> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]:

> __arm_vcmpeqq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t),

> __ARM_mve_coerce(__p1, uint8x16_t), p2), \

> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]:

> __arm_vcmpeqq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t),

> __ARM_mve_coerce(__p1, uint16x8_t), p2), \

> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]:

> __arm_vcmpeqq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t),

> __ARM_mve_coerce(__p1, uint32x4_t), p2));})

> -

>  #define vmladavxq_p(p0,p1,p2) __arm_vmladavxq_p(p0,p1,p2)

>  #define __arm_vmladavxq_p(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \

>    __typeof(p1) __p1 = (p1); \

> @@ -13409,6 +13632,30 @@ extern void *__ARM_undef;

>  #define vrmlsldavhxq_p(p0,p1,p2) __arm_vrmlsldavhxq_p(p0,p1,p2)

>  #define __arm_vrmlsldavhxq_p(p0,p1,p2)

> __arm_vrmlsldavhxq_p_s32(p0,p1,p2)

> 

> +#define vsubq_m(p0,p1,p2,p3) __arm_vsubq_m(p0,p1,p2,p3)

> +#define __arm_vsubq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \

> +  __typeof(p1) __p1 = (p1); \

> +  __typeof(p2) __p2 = (p2); \

> +  _Generic( (int

> (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typ

> eid(__p2)])0, \

> +  int

> (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve

> _type_int8x16_t]: __arm_vsubq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t),

> __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t),

> p3), \

> +  int

> (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve

> _type_int16x8_t]: __arm_vsubq_m_s16 (__ARM_mve_coerce(__p0,

> int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2,

> int16x8_t), p3), \

> +  int

> (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve

> _type_int32x4_t]: __arm_vsubq_m_s32 (__ARM_mve_coerce(__p0,

> int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2,

> int32x4_t), p3), \

> +  int

> (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_m

> ve_type_uint8x16_t]: __arm_vsubq_m_u8 (__ARM_mve_coerce(__p0,

> uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t),

> __ARM_mve_coerce(__p2, uint8x16_t), p3), \

> +  int

> (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_m

> ve_type_uint16x8_t]: __arm_vsubq_m_u16 (__ARM_mve_coerce(__p0,

> uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t),

> __ARM_mve_coerce(__p2, uint16x8_t), p3), \

> +  int

> (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_m

> ve_type_uint32x4_t]: __arm_vsubq_m_u32 (__ARM_mve_coerce(__p0,

> uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t),

> __ARM_mve_coerce(__p2, uint32x4_t), p3));})

> +

> +#define vabavq_p(p0,p1,p2,p3) __arm_vabavq_p(p0,p1,p2,p3)

> +#define __arm_vabavq_p(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \

> +  __typeof(p1) __p1 = (p1); \

> +  __typeof(p2) __p2 = (p2); \

> +  _Generic( (int

> (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \

> +  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]:

> __arm_vabavq_p_s8(__p0, __ARM_mve_coerce(__p1, int8x16_t),

> __ARM_mve_coerce(__p2, int8x16_t), p3), \

> +  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]:

> __arm_vabavq_p_s16(__p0, __ARM_mve_coerce(__p1, int16x8_t),

> __ARM_mve_coerce(__p2, int16x8_t), p3), \

> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]:

> __arm_vabavq_p_s32(__p0, __ARM_mve_coerce(__p1, int32x4_t),

> __ARM_mve_coerce(__p2, int32x4_t), p3), \

> +  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]:

> __arm_vabavq_p_u8(__p0, __ARM_mve_coerce(__p1, uint8x16_t),

> __ARM_mve_coerce(__p2, uint8x16_t), p3), \

> +  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]:

> __arm_vabavq_p_u16(__p0, __ARM_mve_coerce(__p1, uint16x8_t),

> __ARM_mve_coerce(__p2, uint16x8_t), p3), \

> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]:

> __arm_vabavq_p_u32(__p0, __ARM_mve_coerce(__p1, uint32x4_t),

> __ARM_mve_coerce(__p2, uint32x4_t), p3));})

> +

>  #endif /* MVE Integer.  */

> 

>  #define vqabsq_m(p0,p1,p2) __arm_vqabsq_m(p0,p1,p2)

> @@ -13449,6 +13696,37 @@ extern void *__ARM_undef;

>    int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]:

> __arm_vqshrunbq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t),

> __ARM_mve_coerce(__p1, int16x8_t), p2), \

>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]:

> __arm_vqshrunbq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t),

> __ARM_mve_coerce(__p1, int32x4_t), p2));})

> 

> +#define vqshluq_m(p0,p1,p2,p3) __arm_vqshluq_m(p0,p1,p2,p3)

> +#define __arm_vqshluq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \

> +  __typeof(p1) __p1 = (p1); \

> +  _Generic( (int

> (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \

> +  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int8x16_t]:

> __arm_vqshluq_m_n_s8 (__ARM_mve_coerce(__p0, uint8x16_t),

> __ARM_mve_coerce(__p1, int8x16_t), p2, p3), \

> +  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]:

> __arm_vqshluq_m_n_s16 (__ARM_mve_coerce(__p0, uint16x8_t),

> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \

> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]:

> __arm_vqshluq_m_n_s32 (__ARM_mve_coerce(__p0, uint32x4_t),

> __ARM_mve_coerce(__p1, int32x4_t), p2, p3));})

> +

> +#define vshlq_m(p0,p1,p2,p3) __arm_vshlq_m(p0,p1,p2,p3)

> +#define __arm_vshlq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \

> +  __typeof(p1) __p1 = (p1); \

> +  __typeof(p2) __p2 = (p2); \

> +  _Generic( (int

> (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typ

> eid(__p2)])0, \

> +  int

> (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve

> _type_int8x16_t]: __arm_vshlq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t),

> __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t),

> p3), \

> +  int

> (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve

> _type_int16x8_t]: __arm_vshlq_m_s16 (__ARM_mve_coerce(__p0,

> int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2,

> int16x8_t), p3), \

> +  int

> (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve

> _type_int32x4_t]: __arm_vshlq_m_s32 (__ARM_mve_coerce(__p0,

> int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2,

> int32x4_t), p3), \

> +  int

> (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_m

> ve_type_int8x16_t]: __arm_vshlq_m_u8 (__ARM_mve_coerce(__p0,

> uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t),

> __ARM_mve_coerce(__p2, int8x16_t), p3), \

> +  int

> (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_m

> ve_type_int16x8_t]: __arm_vshlq_m_u16 (__ARM_mve_coerce(__p0,

> uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t),

> __ARM_mve_coerce(__p2, int16x8_t), p3), \

> +  int

> (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_m

> ve_type_int32x4_t]: __arm_vshlq_m_u32 (__ARM_mve_coerce(__p0,

> uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t),

> __ARM_mve_coerce(__p2, int32x4_t), p3));})

> +

> +#define vsriq_m(p0,p1,p2,p3) __arm_vsriq_m(p0,p1,p2,p3)

> +#define __arm_vsriq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \

> +  __typeof(p1) __p1 = (p1); \

> +  _Generic( (int

> (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \

> +  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]:

> __arm_vsriq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t),

> __ARM_mve_coerce(__p1, int8x16_t), p2, p3), \

> +  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]:

> __arm_vsriq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t),

> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \

> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]:

> __arm_vsriq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t),

> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \

> +  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]:

> __arm_vsriq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t),

> __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \

> +  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]:

> __arm_vsriq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t),

> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \

> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]:

> __arm_vsriq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t),

> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})

> +

>  #ifdef __cplusplus

>  }

>  #endif

> diff --git a/gcc/config/arm/arm_mve_builtins.def b/gcc/config/arm/arm_mve_builtins.def
> index f625eed1b3cd4e9f558d7e531bba41473c5ad8d5..c7d64ff7858c7cbc2539ac09504ff512331ae1ca 100644
> --- a/gcc/config/arm/arm_mve_builtins.def
> +++ b/gcc/config/arm/arm_mve_builtins.def
> @@ -502,3 +502,14 @@ VAR1 (TERNOP_NONE_NONE_NONE_UNONE, vaddlvaq_p_s, v4si)
>  VAR1 (TERNOP_NONE_NONE_NONE_NONE, vrmlsldavhaxq_s, v4si)
>  VAR1 (TERNOP_NONE_NONE_NONE_NONE, vrmlsldavhaq_s, v4si)
>  VAR1 (TERNOP_NONE_NONE_NONE_NONE, vrmlaldavhaxq_s, v4si)
> +VAR3 (QUADOP_NONE_NONE_NONE_IMM_UNONE, vsriq_m_n_s, v16qi, v8hi, v4si)
> +VAR3 (QUADOP_UNONE_UNONE_UNONE_IMM_UNONE, vsriq_m_n_u, v16qi, v8hi, v4si)
> +VAR3 (QUADOP_NONE_NONE_NONE_NONE_UNONE, vsubq_m_s, v16qi, v8hi, v4si)
> +VAR3 (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE, vsubq_m_u, v16qi, v8hi, v4si)
> +VAR2 (QUADOP_NONE_NONE_UNONE_IMM_UNONE, vcvtq_m_n_to_f_u, v8hf, v4sf)
> +VAR2 (QUADOP_NONE_NONE_NONE_IMM_UNONE, vcvtq_m_n_to_f_s, v8hf, v4sf)
> +VAR3 (QUADOP_UNONE_UNONE_NONE_IMM_UNONE, vqshluq_m_n_s, v16qi, v8hi, v4si)
> +VAR3 (QUADOP_UNONE_UNONE_NONE_NONE_UNONE, vabavq_p_s, v16qi, v8hi, v4si)
> +VAR3 (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE, vabavq_p_u, v16qi, v8hi, v4si)
> +VAR3 (QUADOP_UNONE_UNONE_UNONE_NONE_UNONE, vshlq_m_u, v16qi, v8hi, v4si)
> +VAR3 (QUADOP_NONE_NONE_NONE_NONE_UNONE, vshlq_m_s, v16qi, v8hi, v4si)

> diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md

> index

> dc7c3cb75172e7455497b76eee194397034521be..b65849cc54a063ffc2dea713

> 7c76a9ec9cf8bbdf 100644

> --- a/gcc/config/arm/mve.md

> +++ b/gcc/config/arm/mve.md

> @@ -140,7 +140,10 @@

>  			 VCVTPQ_M_S VCVTPQ_M_U

> VCVTQ_M_N_FROM_F_S VCVTNQ_M_U

>  			 VREV16Q_M_S VREV16Q_M_U VREV32Q_M

> VCVTQ_M_FROM_F_U

>  			 VCVTQ_M_FROM_F_S VRMLALDAVHQ_P_U

> VADDLVAQ_P_U

> -			 VCVTQ_M_N_FROM_F_U])

> +			 VCVTQ_M_N_FROM_F_U VQSHLUQ_M_N_S

> VABAVQ_P_S

> +			 VABAVQ_P_U VSHLQ_M_S VSHLQ_M_U

> VSRIQ_M_N_S

> +			 VSRIQ_M_N_U VSUBQ_M_U VSUBQ_M_S

> VCVTQ_M_N_TO_F_U

> +			 VCVTQ_M_N_TO_F_S])

> 

>  (define_mode_attr MVE_CNVT [(V8HI "V8HF") (V4SI "V4SF")

>  			    (V8HF "V8HI") (V4SF "V4SI")])

> @@ -244,7 +247,11 @@

>  		       (VCVTQ_M_N_FROM_F_U "u") (VCVTQ_M_FROM_F_S

> "s")

>  		       (VCVTQ_M_FROM_F_U "u") (VRMLALDAVHQ_P_U "u")

>  		       (VRMLALDAVHQ_P_S "s") (VADDLVAQ_P_U "u")

> -		       (VCVTQ_M_N_FROM_F_S "s")])

> +		       (VCVTQ_M_N_FROM_F_S "s") (VABAVQ_P_U "u")

> +		       (VABAVQ_P_S "s") (VSHLQ_M_S "s") (VSHLQ_M_U "u")

> +		       (VSRIQ_M_N_S "s") (VSRIQ_M_N_U "u") (VSUBQ_M_S

> "s")

> +		       (VSUBQ_M_U "u") (VCVTQ_M_N_TO_F_S "s")

> +		       (VCVTQ_M_N_TO_F_U "u")])

> 

>  (define_int_attr mode1 [(VCTP8Q "8") (VCTP16Q "16") (VCTP32Q "32")

>  			(VCTP64Q "64") (VCTP8Q_M "8") (VCTP16Q_M "16")

> @@ -407,6 +414,11 @@

>  (define_int_iterator VCVTQ_M_FROM_F [VCVTQ_M_FROM_F_U

> VCVTQ_M_FROM_F_S])

>  (define_int_iterator VRMLALDAVHQ_P [VRMLALDAVHQ_P_S

> VRMLALDAVHQ_P_U])

>  (define_int_iterator VADDLVAQ_P [VADDLVAQ_P_U VADDLVAQ_P_S])

> +(define_int_iterator VABAVQ_P [VABAVQ_P_S VABAVQ_P_U])

> +(define_int_iterator VSHLQ_M [VSHLQ_M_S VSHLQ_M_U])

> +(define_int_iterator VSRIQ_M_N [VSRIQ_M_N_S VSRIQ_M_N_U])

> +(define_int_iterator VSUBQ_M [VSUBQ_M_U VSUBQ_M_S])

> +(define_int_iterator VCVTQ_M_N_TO_F [VCVTQ_M_N_TO_F_U

> VCVTQ_M_N_TO_F_S])

> 

>  (define_insn "*mve_mov<mode>"

>    [(set (match_operand:MVE_types 0 "nonimmediate_operand"

> "=w,w,r,w,w,r,w,Us")

> @@ -5551,7 +5563,7 @@

>  	 VSHRNTQ_N))

>    ]

>    "TARGET_HAVE_MVE"

> -  "vshrnt.i%#<V_sz_elem>	%q0, %q2, %3"

> +  "vshrnt.i%#<V_sz_elem>\t%q0, %q2, %3"

>    [(set_attr "type" "mve_move")

>  ])

> 

> @@ -5567,7 +5579,7 @@

>  	 VCVTMQ_M))

>    ]

>    "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"

> -  "vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>       %q0, %q2"

> +  "vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"

>    [(set_attr "type" "mve_move")

>     (set_attr "length""8")])

> 

> @@ -5583,7 +5595,7 @@

>  	 VCVTPQ_M))

>    ]

>    "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"

> -  "vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>       %q0, %q2"

> +  "vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"

>    [(set_attr "type" "mve_move")

>     (set_attr "length""8")])

> 

> @@ -5599,7 +5611,7 @@

>  	 VCVTNQ_M))

>    ]

>    "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"

> -  "vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>       %q0, %q2"

> +  "vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"

>    [(set_attr "type" "mve_move")

>     (set_attr "length""8")])

> 

> @@ -5616,7 +5628,7 @@

>  	 VCVTQ_M_N_FROM_F))

>    ]

>    "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"

> -  "vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>	%q0, %q2, %3"

> +  "vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"

>    [(set_attr "type" "mve_move")

>     (set_attr "length""8")])

> 

> @@ -5648,7 +5660,7 @@

>  	 VCVTQ_M_FROM_F))

>    ]

>    "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"

> -  "vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>	%q0, %q2"

> +  "vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"

>    [(set_attr "type" "mve_move")

>     (set_attr "length""8")])

> 

> @@ -5683,3 +5695,101 @@

>    "vrmlsldavha.s32 %Q0, %R0, %q2, %q3"

>    [(set_attr "type" "mve_move")

>  ])

> +

> +;;

> +;; [vabavq_p_s, vabavq_p_u])

> +;;

> +(define_insn "mve_vabavq_p_<supf><mode>"

> +  [

> +   (set (match_operand:SI 0 "s_register_operand" "=r")

> +	(unspec:SI [(match_operand:SI 1 "s_register_operand" "0")

> +		    (match_operand:MVE_2 2 "s_register_operand" "w")

> +		    (match_operand:MVE_2 3 "s_register_operand" "w")

> +		    (match_operand:HI 4 "vpr_register_operand" "Up")]

> +	 VABAVQ_P))

> +  ]

> +  "TARGET_HAVE_MVE"

> +  "vpst\;vabavt.<supf>%#<V_sz_elem>\t%0, %q2, %q3"

> +  [(set_attr "type" "mve_move")

> +])

> +

> +;;

> +;; [vqshluq_m_n_s])

> +;;

> +(define_insn "mve_vqshluq_m_n_s<mode>"

> +  [

> +   (set (match_operand:MVE_2 0 "s_register_operand" "=w")

> +	(unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0")

> +		       (match_operand:MVE_2 2 "s_register_operand" "w")

> +		       (match_operand:SI 3 "mve_imm_7" "Ra")

> +		       (match_operand:HI 4 "vpr_register_operand" "Up")]

> +	 VQSHLUQ_M_N_S))

> +  ]

> +  "TARGET_HAVE_MVE"

> +  "vpst\n\tvqshlut.s%#<V_sz_elem>\t%q0, %q2, %3"

> +  [(set_attr "type" "mve_move")])

> +

> +;;

> +;; [vshlq_m_s, vshlq_m_u])

> +;;

> +(define_insn "mve_vshlq_m_<supf><mode>"

> +  [

> +   (set (match_operand:MVE_2 0 "s_register_operand" "=w")

> +	(unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0")

> +		       (match_operand:MVE_2 2 "s_register_operand" "w")

> +		       (match_operand:MVE_2 3 "s_register_operand" "w")

> +		       (match_operand:HI 4 "vpr_register_operand" "Up")]

> +	 VSHLQ_M))

> +  ]

> +  "TARGET_HAVE_MVE"

> +  "vpst\;vshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"

> +  [(set_attr "type" "mve_move")])

> +

> +;;

> +;; [vsriq_m_n_s, vsriq_m_n_u])

> +;;

> +(define_insn "mve_vsriq_m_n_<supf><mode>"

> +  [

> +   (set (match_operand:MVE_2 0 "s_register_operand" "=w")

> +	(unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0")

> +		       (match_operand:MVE_2 2 "s_register_operand" "w")

> +		       (match_operand:SI 3 "mve_imm_selective_upto_8" "Rg")

> +		       (match_operand:HI 4 "vpr_register_operand" "Up")]

> +	 VSRIQ_M_N))

> +  ]

> +  "TARGET_HAVE_MVE"

> +  "vpst\;vsrit.%#<V_sz_elem>\t%q0, %q2, %3"

> +  [(set_attr "type" "mve_move")])

> +

> +;;

> +;; [vsubq_m_u, vsubq_m_s])

> +;;

> +(define_insn "mve_vsubq_m_<supf><mode>"

> +  [

> +   (set (match_operand:MVE_2 0 "s_register_operand" "=w")

> +	(unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0")

> +		       (match_operand:MVE_2 2 "s_register_operand" "w")

> +		       (match_operand:MVE_2 3 "s_register_operand" "w")

> +		       (match_operand:HI 4 "vpr_register_operand" "Up")]

> +	 VSUBQ_M))

> +  ]

> +  "TARGET_HAVE_MVE"

> +  "vpst\;vsubt.i%#<V_sz_elem>\t%q0, %q2, %q3"

> +  [(set_attr "type" "mve_move")])

> +

> +;;

> +;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])

> +;;

> +(define_insn "mve_vcvtq_m_n_to_f_<supf><mode>"

> +  [

> +   (set (match_operand:MVE_0 0 "s_register_operand" "=w")

> +	(unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0")

> +		       (match_operand:<MVE_CNVT> 2 "s_register_operand"

> "w")

> +		       (match_operand:SI 3 "mve_imm_16" "Rd")

> +		       (match_operand:HI 4 "vpr_register_operand" "Up")]

> +	 VCVTQ_M_N_TO_F))

> +  ]

> +  "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"

> +  "vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"

> +  [(set_attr "type" "mve_move")

> +   (set_attr "length""8")])

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s16.c

> b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s16.c

> new file mode 100644

> index

> 0000000000000000000000000000000000000000..c9d9f836dbffe82cdbf82070

> 3a9a72dac0f2591d

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s16.c

> @@ -0,0 +1,22 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32_t

> +foo (uint32_t a, int16x8_t b, int16x8_t c, mve_pred16_t p)

> +{

> +  return vabavq_p_s16 (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.s16"  }  } */

> +

> +uint32_t

> +foo1 (uint32_t a, int16x8_t b, int16x8_t c, mve_pred16_t p)

> +{

> +  return vabavq_p (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.s16"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..a5b1da8d61c7518694c7c092f03ca88962f6b92e

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s32.c

> @@ -0,0 +1,22 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32_t

> +foo (uint32_t a, int32x4_t b, int32x4_t c, mve_pred16_t p)

> +{

> +  return vabavq_p_s32 (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.s32"  }  } */

> +

> +uint32_t

> +foo1 (uint32_t a, int32x4_t b, int32x4_t c, mve_pred16_t p)

> +{

> +  return vabavq_p (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.s32"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..15b95521976766c6ab99041b1bd3cce4ede7c665

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s8.c

> @@ -0,0 +1,22 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32_t

> +foo (uint32_t a, int8x16_t b, int8x16_t c, mve_pred16_t p)

> +{

> +  return vabavq_p_s8 (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.s8"  }  } */

> +

> +uint32_t

> +foo1 (uint32_t a, int8x16_t b, int8x16_t c, mve_pred16_t p)

> +{

> +  return vabavq_p (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.s8"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..1c27b6b46f700145bb02403d54804963e934358a

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u16.c

> @@ -0,0 +1,22 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32_t

> +foo (uint32_t a, uint16x8_t b, uint16x8_t c, mve_pred16_t p)

> +{

> +  return vabavq_p_u16 (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.u16"  }  } */

> +

> +uint32_t

> +foo1 (uint32_t a, uint16x8_t b, uint16x8_t c, mve_pred16_t p)

> +{

> +  return vabavq_p (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.u16"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..c50fe7c4e8083e1b1dc51af4c2152ecfe214d9bd

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u32.c

> @@ -0,0 +1,22 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32_t

> +foo (uint32_t a, uint32x4_t b, uint32x4_t c, mve_pred16_t p)

> +{

> +  return vabavq_p_u32 (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.u32"  }  } */

> +

> +uint32_t

> +foo1 (uint32_t a, uint32x4_t b, uint32x4_t c, mve_pred16_t p)

> +{

> +  return vabavq_p (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.u32"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..0566222e96b904cbf90529a1c3017e29d6927b0e

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u8.c

> @@ -0,0 +1,22 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32_t

> +foo (uint32_t a, uint8x16_t b, uint8x16_t c, mve_pred16_t p)

> +{

> +  return vabavq_p_u8 (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.u8"  }  } */

> +

> +uint32_t

> +foo1 (uint32_t a, uint8x16_t b, uint8x16_t c, mve_pred16_t p)

> +{

> +  return vabavq_p (a, b, c, p);

> +}

> +

> +/* { dg-final { scan-assembler "vabavt.u8"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_s16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..e5b5e9befaad0e09e649205d0b137596995b55d6

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_s16.c

> @@ -0,0 +1,24 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */

> +/* { dg-add-options arm_v8_1m_mve_fp } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +float16x8_t

> +foo (float16x8_t inactive, int16x8_t a, mve_pred16_t p)

> +{

> +  return vcvtq_m_n_f16_s16 (inactive, a, 1, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vcvtt.f16.s16"  }  } */

> +

> +float16x8_t

> +foo1 (float16x8_t inactive, int16x8_t a, mve_pred16_t p)

> +{

> +  return vcvtq_m_n (inactive, a, 1, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vcvtt.f16.s16"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_u16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..271fb1b6ea04e3ebac505f21118ccb4a575db351

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_u16.c

> @@ -0,0 +1,24 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */

> +/* { dg-add-options arm_v8_1m_mve_fp } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +float16x8_t

> +foo (float16x8_t inactive, uint16x8_t a, mve_pred16_t p)

> +{

> +  return vcvtq_m_n_f16_u16 (inactive, a, 1, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vcvtt.f16.u16"  }  } */

> +

> +float16x8_t

> +foo1 (float16x8_t inactive, uint16x8_t a, mve_pred16_t p)

> +{

> +  return vcvtq_m_n (inactive, a, 1, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vcvtt.f16.u16"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_s32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..280c5105b7eebb52a0635dc8ead518720ba95da4

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_s32.c

> @@ -0,0 +1,24 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */

> +/* { dg-add-options arm_v8_1m_mve_fp } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +float32x4_t

> +foo (float32x4_t inactive, int32x4_t a, mve_pred16_t p)

> +{

> +  return vcvtq_m_n_f32_s32 (inactive, a, 1, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vcvtt.f32.s32"  }  } */

> +

> +float32x4_t

> +foo1 (float32x4_t inactive, int32x4_t a, mve_pred16_t p)

> +{

> +  return vcvtq_m_n (inactive, a, 1, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vcvtt.f32.s32"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_u32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..691756b077e973d7fd9b8945af48888a870e5361

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_u32.c

> @@ -0,0 +1,24 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */

> +/* { dg-add-options arm_v8_1m_mve_fp } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +float32x4_t

> +foo (float32x4_t inactive, uint32x4_t a, mve_pred16_t p)

> +{

> +  return vcvtq_m_n_f32_u32 (inactive, a, 16, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vcvtt.f32.u32"  }  } */

> +

> +float32x4_t

> +foo1 (float32x4_t inactive, uint32x4_t a, mve_pred16_t p)

> +{

> +  return vcvtq_m_n (inactive, a, 16, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vcvtt.f32.u32"  }  } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..03016b0beec1fbd9c306038b1012d497a66fdc8e

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s16.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint16x8_t

> +foo (uint16x8_t inactive, int16x8_t a, mve_pred16_t p)

> +{

> +  return vqshluq_m_n_s16 (inactive, a, 7, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vqshlut.s16"  }  } */

> +

> +uint16x8_t

> +foo1 (uint16x8_t inactive, int16x8_t a, mve_pred16_t p)

> +{

> +  return vqshluq_m (inactive, a, 7, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..3f812e1e374a4d47f99970ceb048e9e67da329e1

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s32.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32x4_t

> +foo (uint32x4_t inactive, int32x4_t a, mve_pred16_t p)

> +{

> +  return vqshluq_m_n_s32 (inactive, a, 7, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vqshlut.s32"  }  } */

> +

> +uint32x4_t

> +foo1 (uint32x4_t inactive, int32x4_t a, mve_pred16_t p)

> +{

> +  return vqshluq_m (inactive, a, 7, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..59c0108fa670093cdacb3343e979359d91f563c1

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s8.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint8x16_t

> +foo (uint8x16_t inactive, int8x16_t a, mve_pred16_t p)

> +{

> +  return vqshluq_m_n_s8 (inactive, a, 7, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vqshlut.s8"  }  } */

> +

> +uint8x16_t

> +foo1 (uint8x16_t inactive, int8x16_t a, mve_pred16_t p)

> +{

> +  return vqshluq_m (inactive, a, 7, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..26b664d923cf6e5610a4aa74590d68a6b565a0f7

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s16.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int16x8_t

> +foo (int16x8_t inactive, int16x8_t a, int16x8_t b, mve_pred16_t p)

> +{

> +  return vshlq_m_s16 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vshlt.s16"  }  } */

> +

> +int16x8_t

> +foo1 (int16x8_t inactive, int16x8_t a, int16x8_t b, mve_pred16_t p)

> +{

> +  return vshlq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..2bc83361ee1ea35355a64ccc9469e57afc486dc7

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s32.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int32x4_t

> +foo (int32x4_t inactive, int32x4_t a, int32x4_t b, mve_pred16_t p)

> +{

> +  return vshlq_m_s32 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vshlt.s32"  }  } */

> +

> +int32x4_t

> +foo1 (int32x4_t inactive, int32x4_t a, int32x4_t b, mve_pred16_t p)

> +{

> +  return vshlq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..5dec31eb5232220ed7b8fdbc70247ad671917911

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s8.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int8x16_t

> +foo (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)

> +{

> +  return vshlq_m_s8 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vshlt.s8"  }  } */

> +

> +int8x16_t

> +foo1 (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)

> +{

> +  return vshlq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..d4e42d83387a18188e83b20e4d0750579b4ba71d

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u16.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint16x8_t

> +foo (uint16x8_t inactive, uint16x8_t a, int16x8_t b, mve_pred16_t p)

> +{

> +  return vshlq_m_u16 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vshlt.u16"  }  } */

> +

> +uint16x8_t

> +foo1 (uint16x8_t inactive, uint16x8_t a, int16x8_t b, mve_pred16_t p)

> +{

> +  return vshlq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..8c0b62dc2add3dfe97efd985f724cdbb8dccba92

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u32.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32x4_t

> +foo (uint32x4_t inactive, uint32x4_t a, int32x4_t b, mve_pred16_t p)

> +{

> +  return vshlq_m_u32 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vshlt.u32"  }  } */

> +

> +uint32x4_t

> +foo1 (uint32x4_t inactive, uint32x4_t a, int32x4_t b, mve_pred16_t p)

> +{

> +  return vshlq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..429b2f4a8518c170d1f29eb5af311340a1f8e93a

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u8.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint8x16_t

> +foo (uint8x16_t inactive, uint8x16_t a, int8x16_t b, mve_pred16_t p)

> +{

> +  return vshlq_m_u8 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vshlt.u8"  }  } */

> +

> +uint8x16_t

> +foo1 (uint8x16_t inactive, uint8x16_t a, int8x16_t b, mve_pred16_t p)

> +{

> +  return vshlq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..041cc7249dea8f85034a4ffca4dd8c61335d89b9

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s16.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int16x8_t

> +foo (int16x8_t a, int16x8_t b, mve_pred16_t p)

> +{

> +  return vsriq_m_n_s16 (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsrit.16"  }  } */

> +

> +int16x8_t

> +foo1 (int16x8_t a, int16x8_t b, mve_pred16_t p)

> +{

> +  return vsriq_m (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..52cd978239d54376a17cbc45ff4e8120e735be9d

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s32.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int32x4_t

> +foo (int32x4_t a, int32x4_t b, mve_pred16_t p)

> +{

> +  return vsriq_m_n_s32 (a, b, 2, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsrit.32"  }  } */

> +

> +int32x4_t

> +foo1 (int32x4_t a, int32x4_t b, mve_pred16_t p)

> +{

> +  return vsriq_m (a, b, 2, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..208f8dc9a69f437aee69140f469eb2643f18d2ff

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s8.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int8x16_t

> +foo (int8x16_t a, int8x16_t b, mve_pred16_t p)

> +{

> +  return vsriq_m_n_s8 (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsrit.8"  }  } */

> +

> +int8x16_t

> +foo1 (int8x16_t a, int8x16_t b, mve_pred16_t p)

> +{

> +  return vsriq_m (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..c1a1c4eeb19dd75e0e89cf6d2fd222cfa3c93500

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u16.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint16x8_t

> +foo (uint16x8_t a, uint16x8_t b, mve_pred16_t p)

> +{

> +  return vsriq_m_n_u16 (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsrit.16"  }  } */

> +

> +uint16x8_t

> +foo1 (uint16x8_t a, uint16x8_t b, mve_pred16_t p)

> +{

> +  return vsriq_m (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..3524c502f4dfc0a377301d72f194942c9b81f837

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u32.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32x4_t

> +foo (uint32x4_t a, uint32x4_t b, mve_pred16_t p)

> +{

> +  return vsriq_m_n_u32 (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsrit.32"  }  } */

> +

> +uint32x4_t

> +foo1 (uint32x4_t a, uint32x4_t b, mve_pred16_t p)

> +{

> +  return vsriq_m (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..4636544ea238955cb5f3097923c75de7047df988

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u8.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint8x16_t

> +foo (uint8x16_t a, uint8x16_t b, mve_pred16_t p)

> +{

> +  return vsriq_m_n_u8 (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsrit.8"  }  } */

> +

> +uint8x16_t

> +foo1 (uint8x16_t a, uint8x16_t b, mve_pred16_t p)

> +{

> +  return vsriq_m (a, b, 4, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..142b91f0d2ebf2aee46ffeade1a98652017ec63f

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s16.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int16x8_t

> +foo (int16x8_t inactive, int16x8_t a, int16x8_t b, mve_pred16_t p)

> +{

> +  return vsubq_m_s16 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsubt.i16"  }  } */

> +

> +int16x8_t

> +foo1 (int16x8_t inactive, int16x8_t a, int16x8_t b, mve_pred16_t p)

> +{

> +  return vsubq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..d82af8a0d1014b6f8a81b467ba701759dc02217b

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s32.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int32x4_t

> +foo (int32x4_t inactive, int32x4_t a, int32x4_t b, mve_pred16_t p)

> +{

> +  return vsubq_m_s32 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsubt.i32"  }  } */

> +

> +int32x4_t

> +foo1 (int32x4_t inactive, int32x4_t a, int32x4_t b, mve_pred16_t p)

> +{

> +  return vsubq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..182b7c9759b224d6ceb988ba56daccb879f5c81d

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s8.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +int8x16_t

> +foo (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)

> +{

> +  return vsubq_m_s8 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsubt.i8"  }  } */

> +

> +int8x16_t

> +foo1 (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)

> +{

> +  return vsubq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u16.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..abafd6c9ad30c4b9519d3f9e4063ae998386683e

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u16.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint16x8_t

> +foo (uint16x8_t inactive, uint16x8_t a, uint16x8_t b, mve_pred16_t p)

> +{

> +  return vsubq_m_u16 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsubt.i16"  }  } */

> +

> +uint16x8_t

> +foo1 (uint16x8_t inactive, uint16x8_t a, uint16x8_t b, mve_pred16_t p)

> +{

> +  return vsubq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u32.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..dbd8341c793c6a1bbf7181b9cac4396647b4c91f

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u32.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint32x4_t

> +foo (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, mve_pred16_t p)

> +{

> +  return vsubq_m_u32 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsubt.i32"  }  } */

> +

> +uint32x4_t

> +foo1 (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, mve_pred16_t p)

> +{

> +  return vsubq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u8.c

> new file mode 100644

> index 0000000000000000000000000000000000000000..3acbefb60889e01f69f6aeb1c613e4c0dea6bfa3

> --- /dev/null

> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u8.c

> @@ -0,0 +1,23 @@

> +/* { dg-do compile  } */

> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */

> +/* { dg-add-options arm_v8_1m_mve } */

> +/* { dg-additional-options "-O2" } */

> +

> +#include "arm_mve.h"

> +

> +uint8x16_t

> +foo (uint8x16_t inactive, uint8x16_t a, uint8x16_t b, mve_pred16_t p)

> +{

> +  return vsubq_m_u8 (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

> +/* { dg-final { scan-assembler "vsubt.i8"  }  } */

> +

> +uint8x16_t

> +foo1 (uint8x16_t inactive, uint8x16_t a, uint8x16_t b, mve_pred16_t p)

> +{

> +  return vsubq_m (inactive, a, b, p);

> +}

> +

> +/* { dg-final { scan-assembler "vpst" } } */

Patch

diff --git a/gcc/config/arm/arm-builtins.c b/gcc/config/arm/arm-builtins.c
index af4f3b6dddf72cb73e87aa42b8b09b7dc9a89ebe..26f0379f62b95886414d2eb4d7c6a6c4fc235e60 100644
--- a/gcc/config/arm/arm-builtins.c
+++ b/gcc/config/arm/arm-builtins.c
@@ -523,6 +523,62 @@  arm_ternop_none_none_none_none_qualifiers[SIMD_MAX_BUILTIN_ARGS]
 #define TERNOP_NONE_NONE_NONE_NONE_QUALIFIERS \
   (arm_ternop_none_none_none_none_qualifiers)
 
+static enum arm_type_qualifiers
+arm_quadop_unone_unone_none_none_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_unsigned, qualifier_unsigned, qualifier_none, qualifier_none,
+    qualifier_unsigned };
+#define QUADOP_UNONE_UNONE_NONE_NONE_UNONE_QUALIFIERS \
+  (arm_quadop_unone_unone_none_none_unone_qualifiers)
+
+static enum arm_type_qualifiers
+arm_quadop_none_none_none_none_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_none, qualifier_none, qualifier_none, qualifier_none,
+    qualifier_unsigned };
+#define QUADOP_NONE_NONE_NONE_NONE_UNONE_QUALIFIERS \
+  (arm_quadop_none_none_none_none_unone_qualifiers)
+
+static enum arm_type_qualifiers
+arm_quadop_none_none_none_imm_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_none, qualifier_none, qualifier_none, qualifier_immediate,
+    qualifier_unsigned };
+#define QUADOP_NONE_NONE_NONE_IMM_UNONE_QUALIFIERS \
+  (arm_quadop_none_none_none_imm_unone_qualifiers)
+
+static enum arm_type_qualifiers
+arm_quadop_unone_unone_unone_unone_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_unsigned, qualifier_unsigned, qualifier_unsigned,
+    qualifier_unsigned, qualifier_unsigned };
+#define QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE_QUALIFIERS \
+  (arm_quadop_unone_unone_unone_unone_unone_qualifiers)
+
+static enum arm_type_qualifiers
+arm_quadop_unone_unone_none_imm_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_unsigned, qualifier_unsigned, qualifier_none,
+    qualifier_immediate, qualifier_unsigned };
+#define QUADOP_UNONE_UNONE_NONE_IMM_UNONE_QUALIFIERS \
+  (arm_quadop_unone_unone_none_imm_unone_qualifiers)
+
+static enum arm_type_qualifiers
+arm_quadop_none_none_unone_imm_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_none, qualifier_none, qualifier_unsigned, qualifier_immediate,
+    qualifier_unsigned };
+#define QUADOP_NONE_NONE_UNONE_IMM_UNONE_QUALIFIERS \
+  (arm_quadop_none_none_unone_imm_unone_qualifiers)
+
+static enum arm_type_qualifiers
+arm_quadop_unone_unone_unone_imm_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_unsigned, qualifier_unsigned, qualifier_unsigned,
+    qualifier_immediate, qualifier_unsigned };
+#define QUADOP_UNONE_UNONE_UNONE_IMM_UNONE_QUALIFIERS \
+  (arm_quadop_unone_unone_unone_imm_unone_qualifiers)
+
+static enum arm_type_qualifiers
+arm_quadop_unone_unone_unone_none_unone_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_unsigned, qualifier_unsigned, qualifier_unsigned,
+    qualifier_none, qualifier_unsigned };
+#define QUADOP_UNONE_UNONE_UNONE_NONE_UNONE_QUALIFIERS \
+  (arm_quadop_unone_unone_unone_none_unone_qualifiers)
+
 /* End of Qualifier for MVE builtins.  */
 
    /* void ([T element type] *, T, immediate).  */
diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
index 224583aa63d5d003f9d2b469b4830007bee92f0a..e236bffa31b4c9cc48efe150f9f82a54b9fcae82 100644
--- a/gcc/config/arm/arm_mve.h
+++ b/gcc/config/arm/arm_mve.h
@@ -1232,6 +1232,37 @@  typedef struct { uint8x16_t val[4]; } uint8x16x4_t;
 #define vqmovnbq_m_u32(__a, __b, __p) __arm_vqmovnbq_m_u32(__a, __b, __p)
 #define vqmovntq_m_u32(__a, __b, __p) __arm_vqmovntq_m_u32(__a, __b, __p)
 #define vrev32q_m_u16(__inactive, __a, __p) __arm_vrev32q_m_u16(__inactive, __a, __p)
+#define vsriq_m_n_s8(__a, __b,  __imm, __p) __arm_vsriq_m_n_s8(__a, __b,  __imm, __p)
+#define vsubq_m_s8(__inactive, __a, __b, __p) __arm_vsubq_m_s8(__inactive, __a, __b, __p)
+#define vcvtq_m_n_f16_u16(__inactive, __a,  __imm6, __p) __arm_vcvtq_m_n_f16_u16(__inactive, __a,  __imm6, __p)
+#define vqshluq_m_n_s8(__inactive, __a,  __imm, __p) __arm_vqshluq_m_n_s8(__inactive, __a,  __imm, __p)
+#define vabavq_p_s8(__a, __b, __c, __p) __arm_vabavq_p_s8(__a, __b, __c, __p)
+#define vsriq_m_n_u8(__a, __b,  __imm, __p) __arm_vsriq_m_n_u8(__a, __b,  __imm, __p)
+#define vshlq_m_u8(__inactive, __a, __b, __p) __arm_vshlq_m_u8(__inactive, __a, __b, __p)
+#define vsubq_m_u8(__inactive, __a, __b, __p) __arm_vsubq_m_u8(__inactive, __a, __b, __p)
+#define vabavq_p_u8(__a, __b, __c, __p) __arm_vabavq_p_u8(__a, __b, __c, __p)
+#define vshlq_m_s8(__inactive, __a, __b, __p) __arm_vshlq_m_s8(__inactive, __a, __b, __p)
+#define vcvtq_m_n_f16_s16(__inactive, __a,  __imm6, __p) __arm_vcvtq_m_n_f16_s16(__inactive, __a,  __imm6, __p)
+#define vsriq_m_n_s16(__a, __b,  __imm, __p) __arm_vsriq_m_n_s16(__a, __b,  __imm, __p)
+#define vsubq_m_s16(__inactive, __a, __b, __p) __arm_vsubq_m_s16(__inactive, __a, __b, __p)
+#define vcvtq_m_n_f32_u32(__inactive, __a,  __imm6, __p) __arm_vcvtq_m_n_f32_u32(__inactive, __a,  __imm6, __p)
+#define vqshluq_m_n_s16(__inactive, __a,  __imm, __p) __arm_vqshluq_m_n_s16(__inactive, __a,  __imm, __p)
+#define vabavq_p_s16(__a, __b, __c, __p) __arm_vabavq_p_s16(__a, __b, __c, __p)
+#define vsriq_m_n_u16(__a, __b,  __imm, __p) __arm_vsriq_m_n_u16(__a, __b,  __imm, __p)
+#define vshlq_m_u16(__inactive, __a, __b, __p) __arm_vshlq_m_u16(__inactive, __a, __b, __p)
+#define vsubq_m_u16(__inactive, __a, __b, __p) __arm_vsubq_m_u16(__inactive, __a, __b, __p)
+#define vabavq_p_u16(__a, __b, __c, __p) __arm_vabavq_p_u16(__a, __b, __c, __p)
+#define vshlq_m_s16(__inactive, __a, __b, __p) __arm_vshlq_m_s16(__inactive, __a, __b, __p)
+#define vcvtq_m_n_f32_s32(__inactive, __a,  __imm6, __p) __arm_vcvtq_m_n_f32_s32(__inactive, __a,  __imm6, __p)
+#define vsriq_m_n_s32(__a, __b,  __imm, __p) __arm_vsriq_m_n_s32(__a, __b,  __imm, __p)
+#define vsubq_m_s32(__inactive, __a, __b, __p) __arm_vsubq_m_s32(__inactive, __a, __b, __p)
+#define vqshluq_m_n_s32(__inactive, __a,  __imm, __p) __arm_vqshluq_m_n_s32(__inactive, __a,  __imm, __p)
+#define vabavq_p_s32(__a, __b, __c, __p) __arm_vabavq_p_s32(__a, __b, __c, __p)
+#define vsriq_m_n_u32(__a, __b,  __imm, __p) __arm_vsriq_m_n_u32(__a, __b,  __imm, __p)
+#define vshlq_m_u32(__inactive, __a, __b, __p) __arm_vshlq_m_u32(__inactive, __a, __b, __p)
+#define vsubq_m_u32(__inactive, __a, __b, __p) __arm_vsubq_m_u32(__inactive, __a, __b, __p)
+#define vabavq_p_u32(__a, __b, __c, __p) __arm_vabavq_p_u32(__a, __b, __c, __p)
+#define vshlq_m_s32(__inactive, __a, __b, __p) __arm_vshlq_m_s32(__inactive, __a, __b, __p)
 #endif
 
 __extension__ extern __inline void
@@ -7696,6 +7727,196 @@  __arm_vrev32q_m_u16 (uint16x8_t __inactive, uint16x8_t __a, mve_pred16_t __p)
 {
   return __builtin_mve_vrev32q_m_uv8hi (__inactive, __a, __p);
 }
+
+__extension__ extern __inline int8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsriq_m_n_s8 (int8x16_t __a, int8x16_t __b, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vsriq_m_n_sv16qi (__a, __b, __imm, __p);
+}
+
+__extension__ extern __inline int8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsubq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vsubq_m_sv16qi (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vqshluq_m_n_s8 (uint8x16_t __inactive, int8x16_t __a, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vqshluq_m_n_sv16qi (__inactive, __a, __imm, __p);
+}
+
+__extension__ extern __inline uint32_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vabavq_p_s8 (uint32_t __a, int8x16_t __b, int8x16_t __c, mve_pred16_t __p)
+{
+  return __builtin_mve_vabavq_p_sv16qi (__a, __b, __c, __p);
+}
+
+__extension__ extern __inline uint8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsriq_m_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vsriq_m_n_uv16qi (__a, __b, __imm, __p);
+}
+
+__extension__ extern __inline uint8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vshlq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, int8x16_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vshlq_m_uv16qi (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsubq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vsubq_m_uv16qi (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint32_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vabavq_p_u8 (uint32_t __a, uint8x16_t __b, uint8x16_t __c, mve_pred16_t __p)
+{
+  return __builtin_mve_vabavq_p_uv16qi (__a, __b, __c, __p);
+}
+
+__extension__ extern __inline int8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vshlq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vshlq_m_sv16qi (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline int16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsriq_m_n_s16 (int16x8_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vsriq_m_n_sv8hi (__a, __b, __imm, __p);
+}
+
+__extension__ extern __inline int16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsubq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vsubq_m_sv8hi (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vqshluq_m_n_s16 (uint16x8_t __inactive, int16x8_t __a, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vqshluq_m_n_sv8hi (__inactive, __a, __imm, __p);
+}
+
+__extension__ extern __inline uint32_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vabavq_p_s16 (uint32_t __a, int16x8_t __b, int16x8_t __c, mve_pred16_t __p)
+{
+  return __builtin_mve_vabavq_p_sv8hi (__a, __b, __c, __p);
+}
+
+__extension__ extern __inline uint16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsriq_m_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vsriq_m_n_uv8hi (__a, __b, __imm, __p);
+}
+
+__extension__ extern __inline uint16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vshlq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, int16x8_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vshlq_m_uv8hi (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsubq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vsubq_m_uv8hi (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint32_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vabavq_p_u16 (uint32_t __a, uint16x8_t __b, uint16x8_t __c, mve_pred16_t __p)
+{
+  return __builtin_mve_vabavq_p_uv8hi (__a, __b, __c, __p);
+}
+
+__extension__ extern __inline int16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vshlq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vshlq_m_sv8hi (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsriq_m_n_s32 (int32x4_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vsriq_m_n_sv4si (__a, __b, __imm, __p);
+}
+
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsubq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vsubq_m_sv4si (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vqshluq_m_n_s32 (uint32x4_t __inactive, int32x4_t __a, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vqshluq_m_n_sv4si (__inactive, __a, __imm, __p);
+}
+
+__extension__ extern __inline uint32_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vabavq_p_s32 (uint32_t __a, int32x4_t __b, int32x4_t __c, mve_pred16_t __p)
+{
+  return __builtin_mve_vabavq_p_sv4si (__a, __b, __c, __p);
+}
+
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsriq_m_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __imm, mve_pred16_t __p)
+{
+  return __builtin_mve_vsriq_m_n_uv4si (__a, __b, __imm, __p);
+}
+
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vshlq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, int32x4_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vshlq_m_uv4si (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vsubq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vsubq_m_uv4si (__inactive, __a, __b, __p);
+}
+
+__extension__ extern __inline uint32_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vabavq_p_u32 (uint32_t __a, uint32x4_t __b, uint32x4_t __c, mve_pred16_t __p)
+{
+  return __builtin_mve_vabavq_p_uv4si (__a, __b, __c, __p);
+}
+
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vshlq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p)
+{
+  return __builtin_mve_vshlq_m_sv4si (__inactive, __a, __b, __p);
+}
+
 #if (__ARM_FEATURE_MVE & 2) /* MVE Floating point.  */
 
 __extension__ extern __inline void
@@ -9376,6 +9597,34 @@  __arm_vcvtq_m_u32_f32 (uint32x4_t __inactive, float32x4_t __a, mve_pred16_t __p)
   return __builtin_mve_vcvtq_m_from_f_uv4si (__inactive, __a, __p);
 }
 
+__extension__ extern __inline float16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vcvtq_m_n_f16_u16 (float16x8_t __inactive, uint16x8_t __a, const int __imm6, mve_pred16_t __p)
+{
+  return __builtin_mve_vcvtq_m_n_to_f_uv8hf (__inactive, __a, __imm6, __p);
+}
+
+__extension__ extern __inline float16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vcvtq_m_n_f16_s16 (float16x8_t __inactive, int16x8_t __a, const int __imm6, mve_pred16_t __p)
+{
+  return __builtin_mve_vcvtq_m_n_to_f_sv8hf (__inactive, __a, __imm6, __p);
+}
+
+__extension__ extern __inline float32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vcvtq_m_n_f32_u32 (float32x4_t __inactive, uint32x4_t __a, const int __imm6, mve_pred16_t __p)
+{
+  return __builtin_mve_vcvtq_m_n_to_f_uv4sf (__inactive, __a, __imm6, __p);
+}
+
+__extension__ extern __inline float32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vcvtq_m_n_f32_s32 (float32x4_t __inactive, int32x4_t __a, const int __imm6, mve_pred16_t __p)
+{
+  return __builtin_mve_vcvtq_m_n_to_f_sv4sf (__inactive, __a, __imm6, __p);
+}
+
 #endif
 
 enum {
@@ -11008,6 +11257,15 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcvtq_m_u16_f16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \
   int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcvtq_m_u32_f32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));})
 
+#define vcvtq_m_n(p0,p1,p2,p3) __arm_vcvtq_m_n(p0,p1,p2,p3)
+#define __arm_vcvtq_m_n(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
+  __typeof(p1) __p1 = (p1); \
+  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
+  int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcvtq_m_n_f16_s16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
+  int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcvtq_m_n_f32_s32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
+  int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcvtq_m_n_f16_u16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
+  int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcvtq_m_n_f32_u32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
+
 #define vabsq_m(p0,p1,p2) __arm_vabsq_m(p0,p1,p2)
 #define __arm_vabsq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
@@ -11050,19 +11308,6 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmlaq_rot90_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), __ARM_mve_coerce(__p2, float16x8_t)), \
   int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmlaq_rot90_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), __ARM_mve_coerce(__p2, float32x4_t)));})
 
-#define vcmpeqq_m_n(p0,p1,p2) __arm_vcmpeqq_m_n(p0,p1,p2)
-#define __arm_vcmpeqq_m_n(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8_t]: __arm_vcmpeqq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8_t), p2), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16_t]: __arm_vcmpeqq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16_t), p2), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32_t]: __arm_vcmpeqq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32_t), p2), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8_t]: __arm_vcmpeqq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8_t), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16_t]: __arm_vcmpeqq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16_t), p2), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32_t]: __arm_vcmpeqq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32_t), p2), \
-  int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16_t]: __arm_vcmpeqq_m_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16_t), p2), \
-  int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32_t]: __arm_vcmpeqq_m_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32_t), p2));})
-
 #define vrndxq_m(p0,p1,p2) __arm_vrndxq_m(p0,p1,p2)
 #define __arm_vrndxq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
@@ -13005,28 +13250,6 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpcsq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \
   int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpcsq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));})
 
-#define vcmpeqq_m_n(p0,p1,p2) __arm_vcmpeqq_m_n(p0,p1,p2)
-#define __arm_vcmpeqq_m_n(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8_t]: __arm_vcmpeqq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8_t), p2), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16_t]: __arm_vcmpeqq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16_t), p2), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32_t]: __arm_vcmpeqq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32_t), p2), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8_t]: __arm_vcmpeqq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8_t), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16_t]: __arm_vcmpeqq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16_t), p2), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32_t]: __arm_vcmpeqq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32_t), p2));})
-
-#define vcmpeqq_m(p0,p1,p2) __arm_vcmpeqq_m(p0,p1,p2)
-#define __arm_vcmpeqq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpeqq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpeqq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpeqq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpeqq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpeqq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpeqq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));})
-
 #define vmladavxq_p(p0,p1,p2) __arm_vmladavxq_p(p0,p1,p2)
 #define __arm_vmladavxq_p(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
@@ -13409,6 +13632,30 @@  extern void *__ARM_undef;
 #define vrmlsldavhxq_p(p0,p1,p2) __arm_vrmlsldavhxq_p(p0,p1,p2)
 #define __arm_vrmlsldavhxq_p(p0,p1,p2) __arm_vrmlsldavhxq_p_s32(p0,p1,p2)
 
+#define vsubq_m(p0,p1,p2,p3) __arm_vsubq_m(p0,p1,p2,p3)
+#define __arm_vsubq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
+  __typeof(p1) __p1 = (p1); \
+  __typeof(p2) __p2 = (p2); \
+  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
+  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vsubq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
+  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vsubq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
+  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vsubq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
+  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vsubq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
+  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vsubq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
+  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vsubq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
+
+#define vabavq_p(p0,p1,p2,p3) __arm_vabavq_p(p0,p1,p2,p3)
+#define __arm_vabavq_p(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
+  __typeof(p1) __p1 = (p1); \
+  __typeof(p2) __p2 = (p2); \
+  _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
+  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabavq_p_s8(__p0, __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
+  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabavq_p_s16(__p0, __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
+  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabavq_p_s32(__p0, __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
+  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vabavq_p_u8(__p0, __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
+  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vabavq_p_u16(__p0, __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
+  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vabavq_p_u32(__p0, __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
+
 #endif /* MVE Integer.  */
 
 #define vqabsq_m(p0,p1,p2) __arm_vqabsq_m(p0,p1,p2)
@@ -13449,6 +13696,37 @@  extern void *__ARM_undef;
   int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqshrunbq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
   int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqshrunbq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
 
+#define vqshluq_m(p0,p1,p2,p3) __arm_vqshluq_m(p0,p1,p2,p3)
+#define __arm_vqshluq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
+  __typeof(p1) __p1 = (p1); \
+  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
+  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int8x16_t]: __arm_vqshluq_m_n_s8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2, p3), \
+  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqshluq_m_n_s16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
+  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqshluq_m_n_s32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2, p3));})
+
+#define vshlq_m(p0,p1,p2,p3) __arm_vshlq_m(p0,p1,p2,p3)
+#define __arm_vshlq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
+  __typeof(p1) __p1 = (p1); \
+  __typeof(p2) __p2 = (p2); \
+  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
+  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vshlq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
+  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vshlq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
+  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vshlq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
+  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_int8x16_t]: __arm_vshlq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
+  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vshlq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
+  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vshlq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3));})
+
+#define vsriq_m(p0,p1,p2,p3) __arm_vsriq_m(p0,p1,p2,p3)
+#define __arm_vsriq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
+  __typeof(p1) __p1 = (p1); \
+  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
+  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vsriq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2, p3), \
+  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vsriq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
+  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vsriq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
+  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vsriq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \
+  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vsriq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
+  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vsriq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/gcc/config/arm/arm_mve_builtins.def b/gcc/config/arm/arm_mve_builtins.def
index f625eed1b3cd4e9f558d7e531bba41473c5ad8d5..c7d64ff7858c7cbc2539ac09504ff512331ae1ca 100644
--- a/gcc/config/arm/arm_mve_builtins.def
+++ b/gcc/config/arm/arm_mve_builtins.def
@@ -502,3 +502,14 @@  VAR1 (TERNOP_NONE_NONE_NONE_UNONE, vaddlvaq_p_s, v4si)
 VAR1 (TERNOP_NONE_NONE_NONE_NONE, vrmlsldavhaxq_s, v4si)
 VAR1 (TERNOP_NONE_NONE_NONE_NONE, vrmlsldavhaq_s, v4si)
 VAR1 (TERNOP_NONE_NONE_NONE_NONE, vrmlaldavhaxq_s, v4si)
+VAR3 (QUADOP_NONE_NONE_NONE_IMM_UNONE, vsriq_m_n_s, v16qi, v8hi, v4si)
+VAR3 (QUADOP_UNONE_UNONE_UNONE_IMM_UNONE, vsriq_m_n_u, v16qi, v8hi, v4si)
+VAR3 (QUADOP_NONE_NONE_NONE_NONE_UNONE, vsubq_m_s, v16qi, v8hi, v4si)
+VAR3 (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE, vsubq_m_u, v16qi, v8hi, v4si)
+VAR2 (QUADOP_NONE_NONE_UNONE_IMM_UNONE, vcvtq_m_n_to_f_u, v8hf, v4sf)
+VAR2 (QUADOP_NONE_NONE_NONE_IMM_UNONE, vcvtq_m_n_to_f_s, v8hf, v4sf)
+VAR3 (QUADOP_UNONE_UNONE_NONE_IMM_UNONE, vqshluq_m_n_s, v16qi, v8hi, v4si)
+VAR3 (QUADOP_UNONE_UNONE_NONE_NONE_UNONE, vabavq_p_s, v16qi, v8hi, v4si)
+VAR3 (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE, vabavq_p_u, v16qi, v8hi, v4si)
+VAR3 (QUADOP_UNONE_UNONE_UNONE_NONE_UNONE, vshlq_m_u, v16qi, v8hi, v4si)
+VAR3 (QUADOP_NONE_NONE_NONE_NONE_UNONE, vshlq_m_s, v16qi, v8hi, v4si)
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index dc7c3cb75172e7455497b76eee194397034521be..b65849cc54a063ffc2dea7137c76a9ec9cf8bbdf 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -140,7 +140,10 @@ 
 			 VCVTPQ_M_S VCVTPQ_M_U VCVTQ_M_N_FROM_F_S VCVTNQ_M_U
 			 VREV16Q_M_S VREV16Q_M_U VREV32Q_M VCVTQ_M_FROM_F_U
 			 VCVTQ_M_FROM_F_S VRMLALDAVHQ_P_U VADDLVAQ_P_U
-			 VCVTQ_M_N_FROM_F_U])
+			 VCVTQ_M_N_FROM_F_U VQSHLUQ_M_N_S VABAVQ_P_S
+			 VABAVQ_P_U VSHLQ_M_S VSHLQ_M_U VSRIQ_M_N_S
+			 VSRIQ_M_N_U VSUBQ_M_U VSUBQ_M_S VCVTQ_M_N_TO_F_U
+			 VCVTQ_M_N_TO_F_S])
 
 (define_mode_attr MVE_CNVT [(V8HI "V8HF") (V4SI "V4SF")
 			    (V8HF "V8HI") (V4SF "V4SI")])
@@ -244,7 +247,11 @@ 
 		       (VCVTQ_M_N_FROM_F_U "u") (VCVTQ_M_FROM_F_S "s")
 		       (VCVTQ_M_FROM_F_U "u") (VRMLALDAVHQ_P_U "u")
 		       (VRMLALDAVHQ_P_S "s") (VADDLVAQ_P_U "u")
-		       (VCVTQ_M_N_FROM_F_S "s")])
+		       (VCVTQ_M_N_FROM_F_S "s") (VABAVQ_P_U "u")
+		       (VABAVQ_P_S "s") (VSHLQ_M_S "s") (VSHLQ_M_U "u")
+		       (VSRIQ_M_N_S "s") (VSRIQ_M_N_U "u") (VSUBQ_M_S "s")
+		       (VSUBQ_M_U "u") (VCVTQ_M_N_TO_F_S "s")
+		       (VCVTQ_M_N_TO_F_U "u")])
 
 (define_int_attr mode1 [(VCTP8Q "8") (VCTP16Q "16") (VCTP32Q "32")
 			(VCTP64Q "64") (VCTP8Q_M "8") (VCTP16Q_M "16")
@@ -407,6 +414,11 @@ 
 (define_int_iterator VCVTQ_M_FROM_F [VCVTQ_M_FROM_F_U VCVTQ_M_FROM_F_S])
 (define_int_iterator VRMLALDAVHQ_P [VRMLALDAVHQ_P_S VRMLALDAVHQ_P_U])
 (define_int_iterator VADDLVAQ_P [VADDLVAQ_P_U VADDLVAQ_P_S])
+(define_int_iterator VABAVQ_P [VABAVQ_P_S VABAVQ_P_U])
+(define_int_iterator VSHLQ_M [VSHLQ_M_S VSHLQ_M_U])
+(define_int_iterator VSRIQ_M_N [VSRIQ_M_N_S VSRIQ_M_N_U])
+(define_int_iterator VSUBQ_M [VSUBQ_M_U VSUBQ_M_S])
+(define_int_iterator VCVTQ_M_N_TO_F [VCVTQ_M_N_TO_F_U VCVTQ_M_N_TO_F_S])
 
 (define_insn "*mve_mov<mode>"
   [(set (match_operand:MVE_types 0 "nonimmediate_operand" "=w,w,r,w,w,r,w,Us")
@@ -5551,7 +5563,7 @@ 
 	 VSHRNTQ_N))
   ]
   "TARGET_HAVE_MVE"
-  "vshrnt.i%#<V_sz_elem>	%q0, %q2, %3"
+  "vshrnt.i%#<V_sz_elem>\t%q0, %q2, %3"
   [(set_attr "type" "mve_move")
 ])
 
@@ -5567,7 +5579,7 @@ 
 	 VCVTMQ_M))
   ]
   "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
-  "vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>       %q0, %q2"
+  "vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
   [(set_attr "type" "mve_move")
    (set_attr "length""8")])
 
@@ -5583,7 +5595,7 @@ 
 	 VCVTPQ_M))
   ]
   "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
-  "vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>       %q0, %q2"
+  "vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
   [(set_attr "type" "mve_move")
    (set_attr "length""8")])
 
@@ -5599,7 +5611,7 @@ 
 	 VCVTNQ_M))
   ]
   "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
-  "vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>       %q0, %q2"
+  "vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
   [(set_attr "type" "mve_move")
    (set_attr "length""8")])
 
@@ -5616,7 +5628,7 @@ 
 	 VCVTQ_M_N_FROM_F))
   ]
   "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
-  "vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>	%q0, %q2, %3"
+  "vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"
   [(set_attr "type" "mve_move")
    (set_attr "length""8")])
 
@@ -5648,7 +5660,7 @@ 
 	 VCVTQ_M_FROM_F))
   ]
   "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
-  "vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>	%q0, %q2"
+  "vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
   [(set_attr "type" "mve_move")
    (set_attr "length""8")])
 
@@ -5683,3 +5695,101 @@ 
   "vrmlsldavha.s32 %Q0, %R0, %q2, %q3"
   [(set_attr "type" "mve_move")
 ])
+
+;;
+;; [vabavq_p_s, vabavq_p_u])
+;;
+(define_insn "mve_vabavq_p_<supf><mode>"
+  [
+   (set (match_operand:SI 0 "s_register_operand" "=r")
+	(unspec:SI [(match_operand:SI 1 "s_register_operand" "0")
+		    (match_operand:MVE_2 2 "s_register_operand" "w")
+		    (match_operand:MVE_2 3 "s_register_operand" "w")
+		    (match_operand:HI 4 "vpr_register_operand" "Up")]
+	 VABAVQ_P))
+  ]
+  "TARGET_HAVE_MVE"
+  "vpst\;vabavt.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
+  [(set_attr "type" "mve_move")
+])
+
+;;
+;; [vqshluq_m_n_s])
+;;
+(define_insn "mve_vqshluq_m_n_s<mode>"
+  [
+   (set (match_operand:MVE_2 0 "s_register_operand" "=w")
+	(unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0")
+		       (match_operand:MVE_2 2 "s_register_operand" "w")
+		       (match_operand:SI 3 "mve_imm_7" "Ra")
+		       (match_operand:HI 4 "vpr_register_operand" "Up")]
+	 VQSHLUQ_M_N_S))
+  ]
+  "TARGET_HAVE_MVE"
+  "vpst\n\tvqshlut.s%#<V_sz_elem>\t%q0, %q2, %3"
+  [(set_attr "type" "mve_move")])
+
+;;
+;; [vshlq_m_s, vshlq_m_u])
+;;
+(define_insn "mve_vshlq_m_<supf><mode>"
+  [
+   (set (match_operand:MVE_2 0 "s_register_operand" "=w")
+	(unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0")
+		       (match_operand:MVE_2 2 "s_register_operand" "w")
+		       (match_operand:MVE_2 3 "s_register_operand" "w")
+		       (match_operand:HI 4 "vpr_register_operand" "Up")]
+	 VSHLQ_M))
+  ]
+  "TARGET_HAVE_MVE"
+  "vpst\;vshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
+  [(set_attr "type" "mve_move")])
+
+;;
+;; [vsriq_m_n_s, vsriq_m_n_u])
+;;
+(define_insn "mve_vsriq_m_n_<supf><mode>"
+  [
+   (set (match_operand:MVE_2 0 "s_register_operand" "=w")
+	(unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0")
+		       (match_operand:MVE_2 2 "s_register_operand" "w")
+		       (match_operand:SI 3 "mve_imm_selective_upto_8" "Rg")
+		       (match_operand:HI 4 "vpr_register_operand" "Up")]
+	 VSRIQ_M_N))
+  ]
+  "TARGET_HAVE_MVE"
+  "vpst\;vsrit.%#<V_sz_elem>\t%q0, %q2, %3"
+  [(set_attr "type" "mve_move")])
+
+;;
+;; [vsubq_m_u, vsubq_m_s])
+;;
+(define_insn "mve_vsubq_m_<supf><mode>"
+  [
+   (set (match_operand:MVE_2 0 "s_register_operand" "=w")
+	(unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0")
+		       (match_operand:MVE_2 2 "s_register_operand" "w")
+		       (match_operand:MVE_2 3 "s_register_operand" "w")
+		       (match_operand:HI 4 "vpr_register_operand" "Up")]
+	 VSUBQ_M))
+  ]
+  "TARGET_HAVE_MVE"
+  "vpst\;vsubt.i%#<V_sz_elem>\t%q0, %q2, %q3"
+  [(set_attr "type" "mve_move")])
+
+;;
+;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
+;;
+(define_insn "mve_vcvtq_m_n_to_f_<supf><mode>"
+  [
+   (set (match_operand:MVE_0 0 "s_register_operand" "=w")
+	(unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0")
+		       (match_operand:<MVE_CNVT> 2 "s_register_operand" "w")
+		       (match_operand:SI 3 "mve_imm_16" "Rd")
+		       (match_operand:HI 4 "vpr_register_operand" "Up")]
+	 VCVTQ_M_N_TO_F))
+  ]
+  "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
+  "vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
+  [(set_attr "type" "mve_move")
+   (set_attr "length""8")])
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s16.c
new file mode 100644
index 0000000000000000000000000000000000000000..c9d9f836dbffe82cdbf820703a9a72dac0f2591d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s16.c
@@ -0,0 +1,22 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32_t
+foo (uint32_t a, int16x8_t b, int16x8_t c, mve_pred16_t p)
+{
+  return vabavq_p_s16 (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.s16"  }  } */
+
+uint32_t
+foo1 (uint32_t a, int16x8_t b, int16x8_t c, mve_pred16_t p)
+{
+  return vabavq_p (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.s16"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s32.c
new file mode 100644
index 0000000000000000000000000000000000000000..a5b1da8d61c7518694c7c092f03ca88962f6b92e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s32.c
@@ -0,0 +1,22 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32_t
+foo (uint32_t a, int32x4_t b, int32x4_t c, mve_pred16_t p)
+{
+  return vabavq_p_s32 (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.s32"  }  } */
+
+uint32_t
+foo1 (uint32_t a, int32x4_t b, int32x4_t c, mve_pred16_t p)
+{
+  return vabavq_p (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.s32"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s8.c
new file mode 100644
index 0000000000000000000000000000000000000000..15b95521976766c6ab99041b1bd3cce4ede7c665
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_s8.c
@@ -0,0 +1,22 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32_t
+foo (uint32_t a, int8x16_t b, int8x16_t c, mve_pred16_t p)
+{
+  return vabavq_p_s8 (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.s8"  }  } */
+
+uint32_t
+foo1 (uint32_t a, int8x16_t b, int8x16_t c, mve_pred16_t p)
+{
+  return vabavq_p (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.s8"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u16.c
new file mode 100644
index 0000000000000000000000000000000000000000..1c27b6b46f700145bb02403d54804963e934358a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u16.c
@@ -0,0 +1,22 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32_t
+foo (uint32_t a, uint16x8_t b, uint16x8_t c, mve_pred16_t p)
+{
+  return vabavq_p_u16 (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.u16"  }  } */
+
+uint32_t
+foo1 (uint32_t a, uint16x8_t b, uint16x8_t c, mve_pred16_t p)
+{
+  return vabavq_p (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.u16"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u32.c
new file mode 100644
index 0000000000000000000000000000000000000000..c50fe7c4e8083e1b1dc51af4c2152ecfe214d9bd
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u32.c
@@ -0,0 +1,22 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32_t
+foo (uint32_t a, uint32x4_t b, uint32x4_t c, mve_pred16_t p)
+{
+  return vabavq_p_u32 (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.u32"  }  } */
+
+uint32_t
+foo1 (uint32_t a, uint32x4_t b, uint32x4_t c, mve_pred16_t p)
+{
+  return vabavq_p (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.u32"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u8.c
new file mode 100644
index 0000000000000000000000000000000000000000..0566222e96b904cbf90529a1c3017e29d6927b0e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vabavq_p_u8.c
@@ -0,0 +1,22 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32_t
+foo (uint32_t a, uint8x16_t b, uint8x16_t c, mve_pred16_t p)
+{
+  return vabavq_p_u8 (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.u8"  }  } */
+
+uint32_t
+foo1 (uint32_t a, uint8x16_t b, uint8x16_t c, mve_pred16_t p)
+{
+  return vabavq_p (a, b, c, p);
+}
+
+/* { dg-final { scan-assembler "vabavt.u8"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_s16.c
new file mode 100644
index 0000000000000000000000000000000000000000..e5b5e9befaad0e09e649205d0b137596995b55d6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_s16.c
@@ -0,0 +1,24 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */
+/* { dg-add-options arm_v8_1m_mve_fp } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+float16x8_t
+foo (float16x8_t inactive, int16x8_t a, mve_pred16_t p)
+{
+  return vcvtq_m_n_f16_s16 (inactive, a, 1, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vcvtt.f16.s16"  }  } */
+
+float16x8_t
+foo1 (float16x8_t inactive, int16x8_t a, mve_pred16_t p)
+{
+  return vcvtq_m_n (inactive, a, 1, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vcvtt.f16.s16"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_u16.c
new file mode 100644
index 0000000000000000000000000000000000000000..271fb1b6ea04e3ebac505f21118ccb4a575db351
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f16_u16.c
@@ -0,0 +1,24 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */
+/* { dg-add-options arm_v8_1m_mve_fp } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+float16x8_t
+foo (float16x8_t inactive, uint16x8_t a, mve_pred16_t p)
+{
+  return vcvtq_m_n_f16_u16 (inactive, a, 1, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vcvtt.f16.u16"  }  } */
+
+float16x8_t
+foo1 (float16x8_t inactive, uint16x8_t a, mve_pred16_t p)
+{
+  return vcvtq_m_n (inactive, a, 1, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vcvtt.f16.u16"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_s32.c
new file mode 100644
index 0000000000000000000000000000000000000000..280c5105b7eebb52a0635dc8ead518720ba95da4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_s32.c
@@ -0,0 +1,24 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */
+/* { dg-add-options arm_v8_1m_mve_fp } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+float32x4_t
+foo (float32x4_t inactive, int32x4_t a, mve_pred16_t p)
+{
+  return vcvtq_m_n_f32_s32 (inactive, a, 1, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vcvtt.f32.s32"  }  } */
+
+float32x4_t
+foo1 (float32x4_t inactive, int32x4_t a, mve_pred16_t p)
+{
+  return vcvtq_m_n (inactive, a, 1, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vcvtt.f32.s32"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_u32.c
new file mode 100644
index 0000000000000000000000000000000000000000..691756b077e973d7fd9b8945af48888a870e5361
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vcvtq_m_n_f32_u32.c
@@ -0,0 +1,24 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */
+/* { dg-add-options arm_v8_1m_mve_fp } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+float32x4_t
+foo (float32x4_t inactive, uint32x4_t a, mve_pred16_t p)
+{
+  return vcvtq_m_n_f32_u32 (inactive, a, 16, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vcvtt.f32.u32"  }  } */
+
+float32x4_t
+foo1 (float32x4_t inactive, uint32x4_t a, mve_pred16_t p)
+{
+  return vcvtq_m_n (inactive, a, 16, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vcvtt.f32.u32"  }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s16.c
new file mode 100644
index 0000000000000000000000000000000000000000..03016b0beec1fbd9c306038b1012d497a66fdc8e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s16.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint16x8_t
+foo (uint16x8_t inactive, int16x8_t a, mve_pred16_t p)
+{
+  return vqshluq_m_n_s16 (inactive, a, 7, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vqshlut.s16"  }  } */
+
+uint16x8_t
+foo1 (uint16x8_t inactive, int16x8_t a, mve_pred16_t p)
+{
+  return vqshluq_m (inactive, a, 7, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s32.c
new file mode 100644
index 0000000000000000000000000000000000000000..3f812e1e374a4d47f99970ceb048e9e67da329e1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s32.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32x4_t
+foo (uint32x4_t inactive, int32x4_t a, mve_pred16_t p)
+{
+  return vqshluq_m_n_s32 (inactive, a, 7, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vqshlut.s32"  }  } */
+
+uint32x4_t
+foo1 (uint32x4_t inactive, int32x4_t a, mve_pred16_t p)
+{
+  return vqshluq_m (inactive, a, 7, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s8.c
new file mode 100644
index 0000000000000000000000000000000000000000..59c0108fa670093cdacb3343e979359d91f563c1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vqshluq_m_n_s8.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint8x16_t
+foo (uint8x16_t inactive, int8x16_t a, mve_pred16_t p)
+{
+  return vqshluq_m_n_s8 (inactive, a, 7, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vqshlut.s8"  }  } */
+
+uint8x16_t
+foo1 (uint8x16_t inactive, int8x16_t a, mve_pred16_t p)
+{
+  return vqshluq_m (inactive, a, 7, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s16.c
new file mode 100644
index 0000000000000000000000000000000000000000..26b664d923cf6e5610a4aa74590d68a6b565a0f7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s16.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int16x8_t
+foo (int16x8_t inactive, int16x8_t a, int16x8_t b, mve_pred16_t p)
+{
+  return vshlq_m_s16 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vshlt.s16"  }  } */
+
+int16x8_t
+foo1 (int16x8_t inactive, int16x8_t a, int16x8_t b, mve_pred16_t p)
+{
+  return vshlq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s32.c
new file mode 100644
index 0000000000000000000000000000000000000000..2bc83361ee1ea35355a64ccc9469e57afc486dc7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s32.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int32x4_t
+foo (int32x4_t inactive, int32x4_t a, int32x4_t b, mve_pred16_t p)
+{
+  return vshlq_m_s32 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vshlt.s32"  }  } */
+
+int32x4_t
+foo1 (int32x4_t inactive, int32x4_t a, int32x4_t b, mve_pred16_t p)
+{
+  return vshlq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s8.c
new file mode 100644
index 0000000000000000000000000000000000000000..5dec31eb5232220ed7b8fdbc70247ad671917911
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_s8.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int8x16_t
+foo (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)
+{
+  return vshlq_m_s8 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vshlt.s8"  }  } */
+
+int8x16_t
+foo1 (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)
+{
+  return vshlq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u16.c
new file mode 100644
index 0000000000000000000000000000000000000000..d4e42d83387a18188e83b20e4d0750579b4ba71d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u16.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint16x8_t
+foo (uint16x8_t inactive, uint16x8_t a, int16x8_t b, mve_pred16_t p)
+{
+  return vshlq_m_u16 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vshlt.u16"  }  } */
+
+uint16x8_t
+foo1 (uint16x8_t inactive, uint16x8_t a, int16x8_t b, mve_pred16_t p)
+{
+  return vshlq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u32.c
new file mode 100644
index 0000000000000000000000000000000000000000..8c0b62dc2add3dfe97efd985f724cdbb8dccba92
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u32.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32x4_t
+foo (uint32x4_t inactive, uint32x4_t a, int32x4_t b, mve_pred16_t p)
+{
+  return vshlq_m_u32 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vshlt.u32"  }  } */
+
+uint32x4_t
+foo1 (uint32x4_t inactive, uint32x4_t a, int32x4_t b, mve_pred16_t p)
+{
+  return vshlq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u8.c
new file mode 100644
index 0000000000000000000000000000000000000000..429b2f4a8518c170d1f29eb5af311340a1f8e93a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vshlq_m_u8.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint8x16_t
+foo (uint8x16_t inactive, uint8x16_t a, int8x16_t b, mve_pred16_t p)
+{
+  return vshlq_m_u8 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vshlt.u8"  }  } */
+
+uint8x16_t
+foo1 (uint8x16_t inactive, uint8x16_t a, int8x16_t b, mve_pred16_t p)
+{
+  return vshlq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s16.c
new file mode 100644
index 0000000000000000000000000000000000000000..041cc7249dea8f85034a4ffca4dd8c61335d89b9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s16.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int16x8_t
+foo (int16x8_t a, int16x8_t b, mve_pred16_t p)
+{
+  return vsriq_m_n_s16 (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsrit.16"  }  } */
+
+int16x8_t
+foo1 (int16x8_t a, int16x8_t b, mve_pred16_t p)
+{
+  return vsriq_m (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s32.c
new file mode 100644
index 0000000000000000000000000000000000000000..52cd978239d54376a17cbc45ff4e8120e735be9d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s32.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int32x4_t
+foo (int32x4_t a, int32x4_t b, mve_pred16_t p)
+{
+  return vsriq_m_n_s32 (a, b, 2, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsrit.32"  }  } */
+
+int32x4_t
+foo1 (int32x4_t a, int32x4_t b, mve_pred16_t p)
+{
+  return vsriq_m (a, b, 2, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s8.c
new file mode 100644
index 0000000000000000000000000000000000000000..208f8dc9a69f437aee69140f469eb2643f18d2ff
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_s8.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int8x16_t
+foo (int8x16_t a, int8x16_t b, mve_pred16_t p)
+{
+  return vsriq_m_n_s8 (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsrit.8"  }  } */
+
+int8x16_t
+foo1 (int8x16_t a, int8x16_t b, mve_pred16_t p)
+{
+  return vsriq_m (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u16.c
new file mode 100644
index 0000000000000000000000000000000000000000..c1a1c4eeb19dd75e0e89cf6d2fd222cfa3c93500
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u16.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint16x8_t
+foo (uint16x8_t a, uint16x8_t b, mve_pred16_t p)
+{
+  return vsriq_m_n_u16 (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsrit.16"  }  } */
+
+uint16x8_t
+foo1 (uint16x8_t a, uint16x8_t b, mve_pred16_t p)
+{
+  return vsriq_m (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u32.c
new file mode 100644
index 0000000000000000000000000000000000000000..3524c502f4dfc0a377301d72f194942c9b81f837
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u32.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32x4_t
+foo (uint32x4_t a, uint32x4_t b, mve_pred16_t p)
+{
+  return vsriq_m_n_u32 (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsrit.32"  }  } */
+
+uint32x4_t
+foo1 (uint32x4_t a, uint32x4_t b, mve_pred16_t p)
+{
+  return vsriq_m (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u8.c
new file mode 100644
index 0000000000000000000000000000000000000000..4636544ea238955cb5f3097923c75de7047df988
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsriq_m_n_u8.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint8x16_t
+foo (uint8x16_t a, uint8x16_t b, mve_pred16_t p)
+{
+  return vsriq_m_n_u8 (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsrit.8"  }  } */
+
+uint8x16_t
+foo1 (uint8x16_t a, uint8x16_t b, mve_pred16_t p)
+{
+  return vsriq_m (a, b, 4, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s16.c
new file mode 100644
index 0000000000000000000000000000000000000000..142b91f0d2ebf2aee46ffeade1a98652017ec63f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s16.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int16x8_t
+foo (int16x8_t inactive, int16x8_t a, int16x8_t b, mve_pred16_t p)
+{
+  return vsubq_m_s16 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsubt.i16"  }  } */
+
+int16x8_t
+foo1 (int16x8_t inactive, int16x8_t a, int16x8_t b, mve_pred16_t p)
+{
+  return vsubq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s32.c
new file mode 100644
index 0000000000000000000000000000000000000000..d82af8a0d1014b6f8a81b467ba701759dc02217b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s32.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int32x4_t
+foo (int32x4_t inactive, int32x4_t a, int32x4_t b, mve_pred16_t p)
+{
+  return vsubq_m_s32 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsubt.i32"  }  } */
+
+int32x4_t
+foo1 (int32x4_t inactive, int32x4_t a, int32x4_t b, mve_pred16_t p)
+{
+  return vsubq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s8.c
new file mode 100644
index 0000000000000000000000000000000000000000..182b7c9759b224d6ceb988ba56daccb879f5c81d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_s8.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int8x16_t
+foo (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)
+{
+  return vsubq_m_s8 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsubt.i8"  }  } */
+
+int8x16_t
+foo1 (int8x16_t inactive, int8x16_t a, int8x16_t b, mve_pred16_t p)
+{
+  return vsubq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u16.c
new file mode 100644
index 0000000000000000000000000000000000000000..abafd6c9ad30c4b9519d3f9e4063ae998386683e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u16.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint16x8_t
+foo (uint16x8_t inactive, uint16x8_t a, uint16x8_t b, mve_pred16_t p)
+{
+  return vsubq_m_u16 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsubt.i16"  }  } */
+
+uint16x8_t
+foo1 (uint16x8_t inactive, uint16x8_t a, uint16x8_t b, mve_pred16_t p)
+{
+  return vsubq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u32.c
new file mode 100644
index 0000000000000000000000000000000000000000..dbd8341c793c6a1bbf7181b9cac4396647b4c91f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u32.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32x4_t
+foo (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, mve_pred16_t p)
+{
+  return vsubq_m_u32 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsubt.i32"  }  } */
+
+uint32x4_t
+foo1 (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, mve_pred16_t p)
+{
+  return vsubq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u8.c
new file mode 100644
index 0000000000000000000000000000000000000000..3acbefb60889e01f69f6aeb1c613e4c0dea6bfa3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsubq_m_u8.c
@@ -0,0 +1,23 @@ 
+/* { dg-do compile  } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint8x16_t
+foo (uint8x16_t inactive, uint8x16_t a, uint8x16_t b, mve_pred16_t p)
+{
+  return vsubq_m_u8 (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */
+/* { dg-final { scan-assembler "vsubt.i8"  }  } */
+
+uint8x16_t
+foo1 (uint8x16_t inactive, uint8x16_t a, uint8x16_t b, mve_pred16_t p)
+{
+  return vsubq_m (inactive, a, b, p);
+}
+
+/* { dg-final { scan-assembler "vpst" } } */