From patchwork Sun Jun 4 08:51:47 2023
X-Patchwork-Submitter: "juzhe.zhong@rivai.ai"
X-Patchwork-Id: 102969
From: juzhe.zhong@rivai.ai
To: gcc-patches@gcc.gnu.org
Cc: kito.cheng@sifive.com, palmer@rivosinc.com, rdapp.gcc@gmail.com,
    jeffreyalaw@gmail.com, Juzhe-Zhong
Subject: [PATCH] RISC-V: Remove redundant vlmul_ext_* patterns to fix PR110109
Date: Sun, 4 Jun 2023 16:51:47 +0800
Message-Id: <20230604085147.3989859-1-juzhe.zhong@rivai.ai>
X-Mailer: git-send-email 2.36.3

From: Juzhe-Zhong

PR target/110109

This patch fixes PR110109.
The issue happens because of this pattern:

(define_insn_and_split "*vlmul_extx2"
  [(set (match_operand: 0 "register_operand" "=vr, ?&vr")
        (subreg:
          (match_operand:VLMULEXT2 1 "register_operand" " 0, vr") 0))]
  "TARGET_VECTOR"
  "#"
  "&& reload_completed"
  [(const_int 0)]
{
  emit_insn (gen_rtx_SET (gen_lowpart (mode, operands[0]), operands[1]));
  DONE;
})

Such a pattern generates the following code in insn-recog.cc:

static int
pattern57 (rtx x1)
{
  rtx * const operands ATTRIBUTE_UNUSED = &recog_data.operand[0];
  rtx x2;
  int res ATTRIBUTE_UNUSED;
  if (maybe_ne (SUBREG_BYTE (x1).to_constant (), 0))
    return -1;
  ...

The PR110109 ICE is triggered at maybe_ne (SUBREG_BYTE (x1).to_constant (), 0),
because for scalable RVV modes the subreg byte offset is not a compile-time
constant and therefore cannot be accessed with SUBREG_BYTE (x1).to_constant ().

I created these patterns to optimize the following test:

vfloat32m2_t test_vlmul_ext_v_f32mf2_f32m2(vfloat32mf2_t op1) {
  return __riscv_vlmul_ext_v_f32mf2_f32m2(op1);
}

codegen:

test_vlmul_ext_v_f32mf2_f32m2:
        vsetvli a5,zero,e32,m2,ta,ma
        vmv.v.i v2,0
        vsetvli a5,zero,e32,mf2,ta,ma
        vle32.v v2,0(a1)
        vs2r.v  v2,0(a0)
        ret

There is a redundant 'vmv.v.i' here, because GCC has no undefined value in its
IR (unlike LLVM, which has undef/poison), so for the vlmul_ext_* RVV intrinsics
GCC first initializes the whole destination register with zeros.  However, I
think this will no longer be a big issue once we support subreg liveness
tracking.

gcc/ChangeLog:

	* config/riscv/riscv-vector-builtins-bases.cc: Change expand approach.
	* config/riscv/vector.md (@vlmul_extx2): Remove it.
	(@vlmul_extx4): Ditto.
	(@vlmul_extx8): Ditto.
	(@vlmul_extx16): Ditto.
	(@vlmul_extx32): Ditto.
	(@vlmul_extx64): Ditto.
	(*vlmul_extx2): Ditto.
	(*vlmul_extx4): Ditto.
	(*vlmul_extx8): Ditto.
	(*vlmul_extx16): Ditto.
	(*vlmul_extx32): Ditto.
	(*vlmul_extx64): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pr110109-1.c: New test.
	* gcc.target/riscv/rvv/base/pr110109-2.c: New test.
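To illustrate why that generated check cannot work for scalable modes, here is
a minimal, self-contained C++ sketch.  It only mimics GCC's two-coefficient
poly_int representation; the names poly_sketch and maybe_ne_zero are made up
for this example, while poly_int, maybe_ne and to_constant are the real GCC
facilities being imitated.  The point is that a maybe_ne (x, 0) style query
can be answered without knowing VL, whereas to_constant () has to abort as
soon as the value depends on VL:

// Simplified stand-in for GCC's poly_int: value = coeffs[0] + coeffs[1] * VL,
// where VL (the runtime vector length of a scalable mode) is unknown at
// compile time.
#include <cassert>
#include <cstdint>
#include <cstdio>

struct poly_sketch
{
  int64_t coeffs[2];  /* constant term, VL multiplier */

  bool is_constant () const { return coeffs[1] == 0; }

  /* Analogue of poly_int::to_constant (): only meaningful when the value
     does not depend on VL; otherwise it is a hard error -- the ICE.  */
  int64_t to_constant () const
  {
    assert (is_constant () && "to_constant () on a VL-dependent value");
    return coeffs[0];
  }
};

/* Analogue of maybe_ne (x, 0): "can the value be non-zero for some VL?"
   This never needs to force the value to a constant.  */
static bool
maybe_ne_zero (const poly_sketch &x)
{
  return x.coeffs[0] != 0 || x.coeffs[1] != 0;
}

int
main ()
{
  poly_sketch scalable_subreg_byte = {{0, 4}};  /* e.g. 4 bytes per VL unit */

  /* Safe: answers "maybe non-zero" without knowing VL.  */
  printf ("maybe_ne 0: %d\n", (int) maybe_ne_zero (scalable_subreg_byte));

  /* Aborts, just as SUBREG_BYTE (x1).to_constant () does in the generated
     insn-recog.cc code once the subreg byte offset is VL-dependent.  */
  printf ("%lld\n", (long long) scalable_subreg_byte.to_constant ());
  return 0;
}

With the vlmul_ext_* subreg patterns removed, the recognizer no longer emits
this to_constant () check at all, and the intrinsic is expanded directly via
gen_lowpart, as the change below shows.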
---
 .../riscv/riscv-vector-builtins-bases.cc      |  28 +-
 gcc/config/riscv/vector.md                    | 120 -----
 .../gcc.target/riscv/rvv/base/pr110109-1.c    |  40 ++
 .../gcc.target/riscv/rvv/base/pr110109-2.c    | 485 ++++++++++++++++++
 4 files changed, 529 insertions(+), 144 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr110109-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr110109-2.c

diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 09870c327fa..87a684dd127 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -1565,30 +1565,10 @@ public:
   rtx expand (function_expander &e) const override
   {
-    e.add_input_operand (0);
-    switch (e.op_info->ret.base_type)
-      {
-      case RVV_BASE_vlmul_ext_x2:
-        return e.generate_insn (
-          code_for_vlmul_extx2 (e.vector_mode ()));
-      case RVV_BASE_vlmul_ext_x4:
-        return e.generate_insn (
-          code_for_vlmul_extx4 (e.vector_mode ()));
-      case RVV_BASE_vlmul_ext_x8:
-        return e.generate_insn (
-          code_for_vlmul_extx8 (e.vector_mode ()));
-      case RVV_BASE_vlmul_ext_x16:
-        return e.generate_insn (
-          code_for_vlmul_extx16 (e.vector_mode ()));
-      case RVV_BASE_vlmul_ext_x32:
-        return e.generate_insn (
-          code_for_vlmul_extx32 (e.vector_mode ()));
-      case RVV_BASE_vlmul_ext_x64:
-        return e.generate_insn (
-          code_for_vlmul_extx64 (e.vector_mode ()));
-      default:
-        gcc_unreachable ();
-      }
-  }
+    tree arg = CALL_EXPR_ARG (e.exp, 0);
+    rtx src = expand_normal (arg);
+    emit_insn (gen_rtx_SET (gen_lowpart (e.vector_mode (), e.target), src));
+    return e.target;
+  }
 };
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 79f1644732a..2496eff7874 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -498,126 +498,6 @@
 }
 )
 
-(define_expand "@vlmul_extx2"
-  [(set (match_operand: 0 "register_operand")
-        (subreg:
-          (match_operand:VLMULEXT2 1 "register_operand") 0))]
-  "TARGET_VECTOR"
-{})
-
-(define_expand "@vlmul_extx4"
-  [(set (match_operand: 0 "register_operand")
-        (subreg:
-          (match_operand:VLMULEXT4 1 "register_operand") 0))]
-  "TARGET_VECTOR"
-{})
-
-(define_expand "@vlmul_extx8"
-  [(set (match_operand: 0 "register_operand")
-        (subreg:
-          (match_operand:VLMULEXT8 1 "register_operand") 0))]
-  "TARGET_VECTOR"
-{})
-
-(define_expand "@vlmul_extx16"
-  [(set (match_operand: 0 "register_operand")
-        (subreg:
-          (match_operand:VLMULEXT16 1 "register_operand") 0))]
-  "TARGET_VECTOR"
-{})
-
-(define_expand "@vlmul_extx32"
-  [(set (match_operand: 0 "register_operand")
-        (subreg:
-          (match_operand:VLMULEXT32 1 "register_operand") 0))]
-  "TARGET_VECTOR"
-{})
-
-(define_expand "@vlmul_extx64"
-  [(set (match_operand: 0 "register_operand")
-        (subreg:
-          (match_operand:VLMULEXT64 1 "register_operand") 0))]
-  "TARGET_VECTOR"
-{})
-
-(define_insn_and_split "*vlmul_extx2"
-  [(set (match_operand: 0 "register_operand" "=vr, ?&vr")
-        (subreg:
-          (match_operand:VLMULEXT2 1 "register_operand" " 0, vr") 0))]
-  "TARGET_VECTOR"
-  "#"
-  "&& reload_completed"
-  [(const_int 0)]
-{
-  emit_insn (gen_rtx_SET (gen_lowpart (mode, operands[0]), operands[1]));
-  DONE;
-})
-
-(define_insn_and_split "*vlmul_extx4"
-  [(set (match_operand: 0 "register_operand" "=vr, ?&vr")
-        (subreg:
-          (match_operand:VLMULEXT4 1 "register_operand" " 0, vr") 0))]
-  "TARGET_VECTOR"
-  "#"
-  "&& reload_completed"
-  [(const_int 0)]
-{
-  emit_insn (gen_rtx_SET (gen_lowpart (mode, operands[0]), operands[1]));
-  DONE;
-})
-
-(define_insn_and_split "*vlmul_extx8"
-  [(set (match_operand: 0 "register_operand" "=vr, ?&vr")
-        (subreg:
-          (match_operand:VLMULEXT8 1 "register_operand" " 0, vr") 0))]
-  "TARGET_VECTOR"
-  "#"
-  "&& reload_completed"
-  [(const_int 0)]
-{
-  emit_insn (gen_rtx_SET (gen_lowpart (mode, operands[0]), operands[1]));
-  DONE;
-})
-
-(define_insn_and_split "*vlmul_extx16"
-  [(set (match_operand: 0 "register_operand" "=vr, ?&vr")
-        (subreg:
-          (match_operand:VLMULEXT16 1 "register_operand" " 0, vr") 0))]
-  "TARGET_VECTOR"
-  "#"
-  "&& reload_completed"
-  [(const_int 0)]
-{
-  emit_insn (gen_rtx_SET (gen_lowpart (mode, operands[0]), operands[1]));
-  DONE;
-})
-
-(define_insn_and_split "*vlmul_extx32"
-  [(set (match_operand: 0 "register_operand" "=vr, ?&vr")
-        (subreg:
-          (match_operand:VLMULEXT32 1 "register_operand" " 0, vr") 0))]
-  "TARGET_VECTOR"
-  "#"
-  "&& reload_completed"
-  [(const_int 0)]
-{
-  emit_insn (gen_rtx_SET (gen_lowpart (mode, operands[0]), operands[1]));
-  DONE;
-})
-
-(define_insn_and_split "*vlmul_extx64"
-  [(set (match_operand: 0 "register_operand" "=vr, ?&vr")
-        (subreg:
-          (match_operand:VLMULEXT64 1 "register_operand" " 0, vr") 0))]
-  "TARGET_VECTOR"
-  "#"
-  "&& reload_completed"
-  [(const_int 0)]
-{
-  emit_insn (gen_rtx_SET (gen_lowpart (mode, operands[0]), operands[1]));
-  DONE;
-})
-
 ;; This pattern is used to hold the AVL operand for
 ;; RVV instructions that implicity use VLMAX AVL.
 ;; RVV instruction implicitly use GPR that is ultimately
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr110109-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr110109-1.c
new file mode 100644
index 00000000000..e921c431c2b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr110109-1.c
@@ -0,0 +1,40 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void __attribute__ ((noinline, noclone))
+clean_subreg (int32_t *in, int32_t *out, size_t m)
+{
+  vint16m8_t v24, v8, v16;
+  vint32m8_t result = __riscv_vle32_v_i32m8 (in, 32);
+  vint32m1_t v0 = __riscv_vget_v_i32m8_i32m1 (result, 0);
+  vint32m1_t v1 = __riscv_vget_v_i32m8_i32m1 (result, 1);
+  vint32m1_t v2 = __riscv_vget_v_i32m8_i32m1 (result, 2);
+  vint32m1_t v3 = __riscv_vget_v_i32m8_i32m1 (result, 3);
+  vint32m1_t v4 = __riscv_vget_v_i32m8_i32m1 (result, 4);
+  vint32m1_t v5 = __riscv_vget_v_i32m8_i32m1 (result, 5);
+  vint32m1_t v6 = __riscv_vget_v_i32m8_i32m1 (result, 6);
+  vint32m1_t v7 = __riscv_vget_v_i32m8_i32m1 (result, 7);
+  for (size_t i = 0; i < m; i++)
+    {
+      v0 = __riscv_vadd_vv_i32m1(v0, v0, 4);
+      v1 = __riscv_vadd_vv_i32m1(v1, v1, 4);
+      v2 = __riscv_vadd_vv_i32m1(v2, v2, 4);
+      v3 = __riscv_vadd_vv_i32m1(v3, v3, 4);
+      v4 = __riscv_vadd_vv_i32m1(v4, v4, 4);
+      v5 = __riscv_vadd_vv_i32m1(v5, v5, 4);
+      v6 = __riscv_vadd_vv_i32m1(v6, v6, 4);
+      v7 = __riscv_vadd_vv_i32m1(v7, v7, 4);
+    }
+  vint32m8_t result2 = __riscv_vundefined_i32m8 ();
+  result2 = __riscv_vset_v_i32m1_i32m8 (result2, 0, v0);
+  result2 = __riscv_vset_v_i32m1_i32m8 (result2, 1, v1);
+  result2 = __riscv_vset_v_i32m1_i32m8 (result2, 2, v2);
+  result2 = __riscv_vset_v_i32m1_i32m8 (result2, 3, v3);
+  result2 = __riscv_vset_v_i32m1_i32m8 (result2, 4, v4);
+  result2 = __riscv_vset_v_i32m1_i32m8 (result2, 5, v5);
+  result2 = __riscv_vset_v_i32m1_i32m8 (result2, 6, v6);
+  result2 = __riscv_vset_v_i32m1_i32m8 (result2, 7, v7);
+  __riscv_vse32_v_i32m8((out), result2, 64);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr110109-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr110109-2.c
new file mode 100644
index 00000000000..e8b5bf8c714
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr110109-2.c
@@ -0,0 +1,485 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+vfloat32m1_t test_vlmul_ext_v_f32mf2_f32m1(vfloat32mf2_t op1) {
+  return __riscv_vlmul_ext_v_f32mf2_f32m1(op1);
+}
+
+vfloat32m2_t test_vlmul_ext_v_f32mf2_f32m2(vfloat32mf2_t op1) {
+  return __riscv_vlmul_ext_v_f32mf2_f32m2(op1);
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32mf2_f32m4(vfloat32mf2_t op1) {
+  return __riscv_vlmul_ext_v_f32mf2_f32m4(op1);
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32mf2_f32m8(vfloat32mf2_t op1) {
+  return __riscv_vlmul_ext_v_f32mf2_f32m8(op1);
+}
+
+vfloat32m2_t test_vlmul_ext_v_f32m1_f32m2(vfloat32m1_t op1) {
+  return __riscv_vlmul_ext_v_f32m1_f32m2(op1);
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32m1_f32m4(vfloat32m1_t op1) {
+  return __riscv_vlmul_ext_v_f32m1_f32m4(op1);
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m1_f32m8(vfloat32m1_t op1) {
+  return __riscv_vlmul_ext_v_f32m1_f32m8(op1);
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32m2_f32m4(vfloat32m2_t op1) {
+  return __riscv_vlmul_ext_v_f32m2_f32m4(op1);
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m2_f32m8(vfloat32m2_t op1) {
+  return __riscv_vlmul_ext_v_f32m2_f32m8(op1);
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m4_f32m8(vfloat32m4_t op1) {
+  return __riscv_vlmul_ext_v_f32m4_f32m8(op1);
+}
+
+vfloat64m2_t test_vlmul_ext_v_f64m1_f64m2(vfloat64m1_t op1) {
+  return __riscv_vlmul_ext_v_f64m1_f64m2(op1);
+}
+
+vfloat64m4_t test_vlmul_ext_v_f64m1_f64m4(vfloat64m1_t op1) {
+  return __riscv_vlmul_ext_v_f64m1_f64m4(op1);
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m1_f64m8(vfloat64m1_t op1) {
+  return __riscv_vlmul_ext_v_f64m1_f64m8(op1);
+}
+
+vfloat64m4_t test_vlmul_ext_v_f64m2_f64m4(vfloat64m2_t op1) {
+  return __riscv_vlmul_ext_v_f64m2_f64m4(op1);
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m2_f64m8(vfloat64m2_t op1) {
+  return __riscv_vlmul_ext_v_f64m2_f64m8(op1);
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m4_f64m8(vfloat64m4_t op1) {
+  return __riscv_vlmul_ext_v_f64m4_f64m8(op1);
+}
+
+vint8mf4_t test_vlmul_ext_v_i8mf8_i8mf4(vint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_i8mf8_i8mf4(op1);
+}
+
+vint8mf2_t test_vlmul_ext_v_i8mf8_i8mf2(vint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_i8mf8_i8mf2(op1);
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf8_i8m1(vint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_i8mf8_i8m1(op1);
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf8_i8m2(vint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_i8mf8_i8m2(op1);
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf8_i8m4(vint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_i8mf8_i8m4(op1);
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf8_i8m8(vint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_i8mf8_i8m8(op1);
+}
+
+vint8mf2_t test_vlmul_ext_v_i8mf4_i8mf2(vint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_i8mf4_i8mf2(op1);
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf4_i8m1(vint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_i8mf4_i8m1(op1);
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf4_i8m2(vint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_i8mf4_i8m2(op1);
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf4_i8m4(vint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_i8mf4_i8m4(op1);
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf4_i8m8(vint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_i8mf4_i8m8(op1);
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf2_i8m1(vint8mf2_t op1) {
+  return __riscv_vlmul_ext_v_i8mf2_i8m1(op1);
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf2_i8m2(vint8mf2_t op1) {
+  return __riscv_vlmul_ext_v_i8mf2_i8m2(op1);
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf2_i8m4(vint8mf2_t op1) {
+  return __riscv_vlmul_ext_v_i8mf2_i8m4(op1);
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf2_i8m8(vint8mf2_t op1) {
+  return __riscv_vlmul_ext_v_i8mf2_i8m8(op1);
+}
+
+vint8m2_t test_vlmul_ext_v_i8m1_i8m2(vint8m1_t op1) {
+  return __riscv_vlmul_ext_v_i8m1_i8m2(op1);
+}
+
+vint8m4_t test_vlmul_ext_v_i8m1_i8m4(vint8m1_t op1) {
+  return __riscv_vlmul_ext_v_i8m1_i8m4(op1);
+}
+
+vint8m8_t test_vlmul_ext_v_i8m1_i8m8(vint8m1_t op1) {
+  return __riscv_vlmul_ext_v_i8m1_i8m8(op1);
+}
+
+vint8m4_t test_vlmul_ext_v_i8m2_i8m4(vint8m2_t op1) {
+  return __riscv_vlmul_ext_v_i8m2_i8m4(op1);
+}
+
+vint8m8_t test_vlmul_ext_v_i8m2_i8m8(vint8m2_t op1) {
+  return __riscv_vlmul_ext_v_i8m2_i8m8(op1);
+}
+
+vint8m8_t test_vlmul_ext_v_i8m4_i8m8(vint8m4_t op1) {
+  return __riscv_vlmul_ext_v_i8m4_i8m8(op1);
+}
+
+vint16mf2_t test_vlmul_ext_v_i16mf4_i16mf2(vint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_i16mf4_i16mf2(op1);
+}
+
+vint16m1_t test_vlmul_ext_v_i16mf4_i16m1(vint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_i16mf4_i16m1(op1);
+}
+
+vint16m2_t test_vlmul_ext_v_i16mf4_i16m2(vint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_i16mf4_i16m2(op1);
+}
+
+vint16m4_t test_vlmul_ext_v_i16mf4_i16m4(vint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_i16mf4_i16m4(op1);
+}
+
+vint16m8_t test_vlmul_ext_v_i16mf4_i16m8(vint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_i16mf4_i16m8(op1);
+}
+
+vint16m1_t test_vlmul_ext_v_i16mf2_i16m1(vint16mf2_t op1) {
+  return __riscv_vlmul_ext_v_i16mf2_i16m1(op1);
+}
+
+vint16m2_t test_vlmul_ext_v_i16mf2_i16m2(vint16mf2_t op1) {
+  return __riscv_vlmul_ext_v_i16mf2_i16m2(op1);
+}
+
+vint16m4_t test_vlmul_ext_v_i16mf2_i16m4(vint16mf2_t op1) {
+  return __riscv_vlmul_ext_v_i16mf2_i16m4(op1);
+}
+
+vint16m8_t test_vlmul_ext_v_i16mf2_i16m8(vint16mf2_t op1) {
+  return __riscv_vlmul_ext_v_i16mf2_i16m8(op1);
+}
+
+vint16m2_t test_vlmul_ext_v_i16m1_i16m2(vint16m1_t op1) {
+  return __riscv_vlmul_ext_v_i16m1_i16m2(op1);
+}
+
+vint16m4_t test_vlmul_ext_v_i16m1_i16m4(vint16m1_t op1) {
+  return __riscv_vlmul_ext_v_i16m1_i16m4(op1);
+}
+
+vint16m8_t test_vlmul_ext_v_i16m1_i16m8(vint16m1_t op1) {
+  return __riscv_vlmul_ext_v_i16m1_i16m8(op1);
+}
+
+vint16m4_t test_vlmul_ext_v_i16m2_i16m4(vint16m2_t op1) {
+  return __riscv_vlmul_ext_v_i16m2_i16m4(op1);
+}
+
+vint16m8_t test_vlmul_ext_v_i16m2_i16m8(vint16m2_t op1) {
+  return __riscv_vlmul_ext_v_i16m2_i16m8(op1);
+}
+
+vint16m8_t test_vlmul_ext_v_i16m4_i16m8(vint16m4_t op1) {
+  return __riscv_vlmul_ext_v_i16m4_i16m8(op1);
+}
+
+vint32m1_t test_vlmul_ext_v_i32mf2_i32m1(vint32mf2_t op1) {
+  return __riscv_vlmul_ext_v_i32mf2_i32m1(op1);
+}
+
+vint32m2_t test_vlmul_ext_v_i32mf2_i32m2(vint32mf2_t op1) {
+  return __riscv_vlmul_ext_v_i32mf2_i32m2(op1);
+}
+
+vint32m4_t test_vlmul_ext_v_i32mf2_i32m4(vint32mf2_t op1) {
+  return __riscv_vlmul_ext_v_i32mf2_i32m4(op1);
+}
+
+vint32m8_t test_vlmul_ext_v_i32mf2_i32m8(vint32mf2_t op1) {
+  return __riscv_vlmul_ext_v_i32mf2_i32m8(op1);
+}
+
+vint32m2_t test_vlmul_ext_v_i32m1_i32m2(vint32m1_t op1) {
+  return __riscv_vlmul_ext_v_i32m1_i32m2(op1);
+}
+
+vint32m4_t test_vlmul_ext_v_i32m1_i32m4(vint32m1_t op1) {
+  return __riscv_vlmul_ext_v_i32m1_i32m4(op1);
+}
+
+vint32m8_t test_vlmul_ext_v_i32m1_i32m8(vint32m1_t op1) {
+  return __riscv_vlmul_ext_v_i32m1_i32m8(op1);
+}
+
+vint32m4_t test_vlmul_ext_v_i32m2_i32m4(vint32m2_t op1) {
+  return __riscv_vlmul_ext_v_i32m2_i32m4(op1);
+}
+
+vint32m8_t test_vlmul_ext_v_i32m2_i32m8(vint32m2_t op1) {
+  return __riscv_vlmul_ext_v_i32m2_i32m8(op1);
+}
+
+vint32m8_t test_vlmul_ext_v_i32m4_i32m8(vint32m4_t op1) {
+  return __riscv_vlmul_ext_v_i32m4_i32m8(op1);
+}
+
+vint64m2_t test_vlmul_ext_v_i64m1_i64m2(vint64m1_t op1) {
+  return __riscv_vlmul_ext_v_i64m1_i64m2(op1);
+}
+
+vint64m4_t test_vlmul_ext_v_i64m1_i64m4(vint64m1_t op1) {
+  return __riscv_vlmul_ext_v_i64m1_i64m4(op1);
+}
+
+vint64m8_t test_vlmul_ext_v_i64m1_i64m8(vint64m1_t op1) {
+  return __riscv_vlmul_ext_v_i64m1_i64m8(op1);
+}
+
+vint64m4_t test_vlmul_ext_v_i64m2_i64m4(vint64m2_t op1) {
+  return __riscv_vlmul_ext_v_i64m2_i64m4(op1);
+}
+
+vint64m8_t test_vlmul_ext_v_i64m2_i64m8(vint64m2_t op1) {
+  return __riscv_vlmul_ext_v_i64m2_i64m8(op1);
+}
+
+vint64m8_t test_vlmul_ext_v_i64m4_i64m8(vint64m4_t op1) {
+  return __riscv_vlmul_ext_v_i64m4_i64m8(op1);
+}
+
+vuint8mf4_t test_vlmul_ext_v_u8mf8_u8mf4(vuint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_u8mf8_u8mf4(op1);
+}
+
+vuint8mf2_t test_vlmul_ext_v_u8mf8_u8mf2(vuint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_u8mf8_u8mf2(op1);
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf8_u8m1(vuint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_u8mf8_u8m1(op1);
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf8_u8m2(vuint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_u8mf8_u8m2(op1);
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf8_u8m4(vuint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_u8mf8_u8m4(op1);
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf8_u8m8(vuint8mf8_t op1) {
+  return __riscv_vlmul_ext_v_u8mf8_u8m8(op1);
+}
+
+vuint8mf2_t test_vlmul_ext_v_u8mf4_u8mf2(vuint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_u8mf4_u8mf2(op1);
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf4_u8m1(vuint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_u8mf4_u8m1(op1);
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf4_u8m2(vuint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_u8mf4_u8m2(op1);
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf4_u8m4(vuint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_u8mf4_u8m4(op1);
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf4_u8m8(vuint8mf4_t op1) {
+  return __riscv_vlmul_ext_v_u8mf4_u8m8(op1);
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf2_u8m1(vuint8mf2_t op1) {
+  return __riscv_vlmul_ext_v_u8mf2_u8m1(op1);
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf2_u8m2(vuint8mf2_t op1) {
+  return __riscv_vlmul_ext_v_u8mf2_u8m2(op1);
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf2_u8m4(vuint8mf2_t op1) {
+  return __riscv_vlmul_ext_v_u8mf2_u8m4(op1);
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf2_u8m8(vuint8mf2_t op1) {
+  return __riscv_vlmul_ext_v_u8mf2_u8m8(op1);
+}
+
+vuint8m2_t test_vlmul_ext_v_u8m1_u8m2(vuint8m1_t op1) {
+  return __riscv_vlmul_ext_v_u8m1_u8m2(op1);
+}
+
+vuint8m4_t test_vlmul_ext_v_u8m1_u8m4(vuint8m1_t op1) {
+  return __riscv_vlmul_ext_v_u8m1_u8m4(op1);
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m1_u8m8(vuint8m1_t op1) {
+  return __riscv_vlmul_ext_v_u8m1_u8m8(op1);
+}
+
+vuint8m4_t test_vlmul_ext_v_u8m2_u8m4(vuint8m2_t op1) {
+  return __riscv_vlmul_ext_v_u8m2_u8m4(op1);
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m2_u8m8(vuint8m2_t op1) {
+  return __riscv_vlmul_ext_v_u8m2_u8m8(op1);
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m4_u8m8(vuint8m4_t op1) {
+  return __riscv_vlmul_ext_v_u8m4_u8m8(op1);
+}
+
+vuint16mf2_t test_vlmul_ext_v_u16mf4_u16mf2(vuint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_u16mf4_u16mf2(op1);
+}
+
+vuint16m1_t test_vlmul_ext_v_u16mf4_u16m1(vuint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_u16mf4_u16m1(op1);
+}
+
+vuint16m2_t test_vlmul_ext_v_u16mf4_u16m2(vuint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_u16mf4_u16m2(op1);
+}
+
+vuint16m4_t test_vlmul_ext_v_u16mf4_u16m4(vuint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_u16mf4_u16m4(op1);
+}
+
+vuint16m8_t test_vlmul_ext_v_u16mf4_u16m8(vuint16mf4_t op1) {
+  return __riscv_vlmul_ext_v_u16mf4_u16m8(op1);
+}
+
+vuint16m1_t test_vlmul_ext_v_u16mf2_u16m1(vuint16mf2_t op1) {
+  return __riscv_vlmul_ext_v_u16mf2_u16m1(op1);
+}
+
+vuint16m2_t test_vlmul_ext_v_u16mf2_u16m2(vuint16mf2_t op1) {
+  return __riscv_vlmul_ext_v_u16mf2_u16m2(op1);
+}
+
+vuint16m4_t test_vlmul_ext_v_u16mf2_u16m4(vuint16mf2_t op1) {
+  return __riscv_vlmul_ext_v_u16mf2_u16m4(op1);
+}
+
+vuint16m8_t test_vlmul_ext_v_u16mf2_u16m8(vuint16mf2_t op1) {
+  return __riscv_vlmul_ext_v_u16mf2_u16m8(op1);
+}
+
+vuint16m2_t test_vlmul_ext_v_u16m1_u16m2(vuint16m1_t op1) {
+  return __riscv_vlmul_ext_v_u16m1_u16m2(op1);
+}
+
+vuint16m4_t test_vlmul_ext_v_u16m1_u16m4(vuint16m1_t op1) {
+  return __riscv_vlmul_ext_v_u16m1_u16m4(op1);
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m1_u16m8(vuint16m1_t op1) {
+  return __riscv_vlmul_ext_v_u16m1_u16m8(op1);
+}
+
+vuint16m4_t test_vlmul_ext_v_u16m2_u16m4(vuint16m2_t op1) {
+  return __riscv_vlmul_ext_v_u16m2_u16m4(op1);
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m2_u16m8(vuint16m2_t op1) {
+  return __riscv_vlmul_ext_v_u16m2_u16m8(op1);
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m4_u16m8(vuint16m4_t op1) {
+  return __riscv_vlmul_ext_v_u16m4_u16m8(op1);
+}
+
+vuint32m1_t test_vlmul_ext_v_u32mf2_u32m1(vuint32mf2_t op1) {
+  return __riscv_vlmul_ext_v_u32mf2_u32m1(op1);
+}
+
+vuint32m2_t test_vlmul_ext_v_u32mf2_u32m2(vuint32mf2_t op1) {
+  return __riscv_vlmul_ext_v_u32mf2_u32m2(op1);
+}
+
+vuint32m4_t test_vlmul_ext_v_u32mf2_u32m4(vuint32mf2_t op1) {
+  return __riscv_vlmul_ext_v_u32mf2_u32m4(op1);
+}
+
+vuint32m8_t test_vlmul_ext_v_u32mf2_u32m8(vuint32mf2_t op1) {
+  return __riscv_vlmul_ext_v_u32mf2_u32m8(op1);
+}
+
+vuint32m2_t test_vlmul_ext_v_u32m1_u32m2(vuint32m1_t op1) {
+  return __riscv_vlmul_ext_v_u32m1_u32m2(op1);
+}
+
+vuint32m4_t test_vlmul_ext_v_u32m1_u32m4(vuint32m1_t op1) {
+  return __riscv_vlmul_ext_v_u32m1_u32m4(op1);
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m1_u32m8(vuint32m1_t op1) {
+  return __riscv_vlmul_ext_v_u32m1_u32m8(op1);
+}
+
+vuint32m4_t test_vlmul_ext_v_u32m2_u32m4(vuint32m2_t op1) {
+  return __riscv_vlmul_ext_v_u32m2_u32m4(op1);
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m2_u32m8(vuint32m2_t op1) {
+  return __riscv_vlmul_ext_v_u32m2_u32m8(op1);
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m4_u32m8(vuint32m4_t op1) {
+  return __riscv_vlmul_ext_v_u32m4_u32m8(op1);
+}
+
+vuint64m2_t test_vlmul_ext_v_u64m1_u64m2(vuint64m1_t op1) {
+  return __riscv_vlmul_ext_v_u64m1_u64m2(op1);
+}
+
+vuint64m4_t test_vlmul_ext_v_u64m1_u64m4(vuint64m1_t op1) {
+  return __riscv_vlmul_ext_v_u64m1_u64m4(op1);
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m1_u64m8(vuint64m1_t op1) {
+  return __riscv_vlmul_ext_v_u64m1_u64m8(op1);
+}
+
+vuint64m4_t test_vlmul_ext_v_u64m2_u64m4(vuint64m2_t op1) {
+  return __riscv_vlmul_ext_v_u64m2_u64m4(op1);
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m2_u64m8(vuint64m2_t op1) {
+  return __riscv_vlmul_ext_v_u64m2_u64m8(op1);
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m4_u64m8(vuint64m4_t op1) {
+  return __riscv_vlmul_ext_v_u64m4_u64m8(op1);
+}
+