From patchwork Wed Apr 19 11:23:07 2023
X-Patchwork-Submitter: "Li, Pan2 via Gcc-patches"
X-Patchwork-Id: 85343
From: "Li, Pan2 via Gcc-patches"
Reply-To: pan2.li@intel.com
To: gcc-patches@gcc.gnu.org
Cc: juzhe.zhong@rivai.ai, kito.cheng@sifive.com, pan2.li@intel.com,
 yanzhang.wang@intel.com
Subject: [PATCH v2] RISC-V: Allow VMS{Compare} (V1, V1) shortcut optimization
Date: Wed, 19 Apr 2023 19:23:07 +0800
Message-Id: <20230419112307.3805682-1-pan2.li@intel.com>
In-Reply-To: <20230419032117.930737-1-pan2.li@intel.com>
References: <20230419032117.930737-1-pan2.li@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
From: Pan Li <pan2.li@intel.com>

This patch adjusts the RISC-V Vector RTL to enable the generic shortcut
optimization for RVV integer compares.  It covers the compare operators
eq, ne, ltu, lt, leu, le, gtu, gt, geu and ge.  Assume we have the test
code below.

vbool1_t
test_shortcut_for_riscv_vmslt_case_0(vint8m8_t v1, size_t vl)
{
  return __riscv_vmslt_vv_i8m8_b1(v1, v1, vl);
}

Before this patch:
vsetvli  zero,a2,e8,m8,ta,ma
vl8re8.v v24,0(a1)
vmslt.vv v8,v24,v24
vsetvli  a5,zero,e8,m8,ta,ma
vsm.v    v8,0(a0)
ret

After this patch:
vsetvli  zero,a2,e8,mf8,ta,ma
vmclr.m  v24                    <- optimized to vmclr.m
vsetvli  zero,a5,e8,mf8,ta,ma
vsm.v    v24,0(a0)
ret

We would like this optimization to happen in a generic way.  The patch
adds one more operand (the tail policy) to the VMS{Compare} patterns so
that they match the pred_mov (aka vmset/vmclr) pattern.  GCC can then
recognize (lt:(reg v) (reg v)), lower it to (const_vector:0), and map
it onto the pred_mov and VMS{Compare} patterns for both the tail policy
and the avl operand.

The pred_mov may look like:

...(unspec:
     [(match_operand 1 ...)
      (match_operand 4 ...)
+     (match_operand 5 ...)               <- added policy tail
      (reg:SI VL)
      (reg:SI VTYPE)] ...)
   (match_operand 3 "vector_move_operand" ...)  <-------+
   (match_operand 2 "vector_undef_operand" ...)         |
                                                        |
The pred_cmp may look like:                             |
                                                        |
...(unspec:                                             |
     [(match_operand 1 ...)                             |
      (match_operand 6 ...)                             |
      (match_operand 7 ...)                             |
      (match_operand 8 ...)  <- existing policy tail    |
      (reg:SI VL)                                       |
      (reg:SI VTYPE)] ...)                     lower to |
   (match_operator 3 ...)  ----+                        |
   [(match_operator 4 ...)     +------------------------+
    (match_operator 5 "vector_arith_operand"])]  ----+
   (match_operand 2 "vector_undef_operand" ...)

However, some cases in the test file cannot be optimized yet.  We will
file separate patches to handle them.

gcc/ChangeLog:

	* config/riscv/riscv-v.cc (emit_pred_op): Add APIs to add the
	policy tail or policy mask separately.
	* config/riscv/riscv-vector-builtins-bases.cc: Change the
	VMS{Compare} default tail policy from false to true.
	* config/riscv/vector.md: Add the policy tail operand for the
	pred_mov, pred_cmp.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c:
	New test.

Signed-off-by: Pan Li <pan2.li@intel.com>
Co-authored-by: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>
---
 gcc/config/riscv/riscv-v.cc                   |  15 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   6 +-
 gcc/config/riscv/vector.md                    |  14 +-
 .../rvv/base/integer_compare_insn_shortcut.c  | 291 ++++++++++++++++++
 4 files changed, 319 insertions(+), 7 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c

diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 392f5d02e17..c3881920812 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -71,12 +71,23 @@ public:
     add_input_operand (RVV_VUNDEF (mode), mode);
   }
   void add_policy_operand (enum tail_policy vta, enum mask_policy vma)
+  {
+    add_tail_policy_operand (vta);
+    add_mask_policy_operand (vma);
+  }
+
+  void add_tail_policy_operand (enum tail_policy vta)
   {
     rtx tail_policy_rtx = gen_int_mode (vta, Pmode);
-    rtx mask_policy_rtx = gen_int_mode (vma, Pmode);
     add_input_operand (tail_policy_rtx, Pmode);
+  }
+
+  void add_mask_policy_operand (enum mask_policy vma)
+  {
+    rtx mask_policy_rtx = gen_int_mode (vma, Pmode);
     add_input_operand (mask_policy_rtx, Pmode);
   }
+
   void add_avl_type_operand (avl_type type)
   {
     add_input_operand (gen_int_mode (type, Pmode), Pmode);
@@ -206,6 +217,8 @@ emit_pred_op (unsigned icode, rtx mask, rtx dest, rtx src, rtx len,
   if (GET_MODE_CLASS (mode) != MODE_VECTOR_BOOL)
     e.add_policy_operand (get_prefer_tail_policy (), get_prefer_mask_policy ());
+  else
+    e.add_tail_policy_operand (get_prefer_tail_policy ());
 
   if (vlmax_p)
     e.add_avl_type_operand (avl_type::VLMAX);
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 52467bbc961..7c6064a5a24 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -756,7 +756,7 @@ template
 class mask_logic : public function_base
 {
 public:
-  bool apply_tail_policy_p () const override { return false; }
+  bool apply_tail_policy_p () const override { return true; }
   bool apply_mask_policy_p () const override { return false; }
 
   rtx expand (function_expander &e) const override
@@ -768,7 +768,7 @@ template
 class mask_nlogic : public function_base
 {
 public:
-  bool apply_tail_policy_p () const override { return false; }
+  bool apply_tail_policy_p () const override { return true; }
   bool apply_mask_policy_p () const override { return false; }
 
   rtx expand (function_expander &e) const override
@@ -780,7 +780,7 @@ template
 class mask_notlogic : public function_base
 {
 public:
-  bool apply_tail_policy_p () const override { return false; }
+  bool apply_tail_policy_p () const override { return true; }
   bool apply_mask_policy_p () const override { return false; }
 
   rtx expand (function_expander &e) const override
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 0ecca98f20c..6819363b9ff 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -1032,6 +1032,7 @@ (define_insn_and_split "@pred_mov<mode>"
 	   [(match_operand:VB 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")
 	    (match_operand 4 "vector_length_operand"            " rK,  rK,  rK,  rK,  rK")
 	    (match_operand 5 "const_int_operand"                "  i,   i,   i,   i,   i")
+	    (match_operand 6 "const_int_operand"                "  i,   i,   i,   i,   i")
 	    (reg:SI VL_REGNUM)
 	    (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
 	 (match_operand:VB 3 "vector_move_operand"              "  m,  vr,  vr, Wc0, Wc1")
@@ -4113,7 +4114,8 @@ (define_expand "@pred_ge<mode>_scalar"
   if (satisfies_constraint_Wc1 (operands[1]))
     emit_insn (
       gen_pred_mov (<MODE>mode, operands[0], CONSTM1_RTX (<MODE>mode), undef,
-		    CONSTM1_RTX (<MODE>mode), operands[6], operands[8]));
+		    CONSTM1_RTX (<MODE>mode), operands[6], operands[8],
+		    gen_int_mode (riscv_vector::get_prefer_mask_policy (), Pmode)));
   else
     {
       /* If vmsgeu_mask with 0 immediate, expand it to vmor mask, maskedoff.
@@ -4158,7 +4160,8 @@ (define_expand "@pred_ge<mode>_scalar"
 				operands[6], operands[7], operands[8]));
 	  emit_insn (gen_pred_nand<mode> (operands[0], CONSTM1_RTX (<MODE>mode),
 					  undef, operands[0], operands[0],
-					  operands[6], operands[8]));
+					  operands[6], operands[8],
+					  gen_int_mode (riscv_vector::get_prefer_mask_policy (), Pmode)));
 	}
       else
 	{
@@ -4173,7 +4176,8 @@ (define_expand "@pred_ge<mode>_scalar"
 				operands[5], operands[6], operands[7], operands[8]));
 	  emit_insn (
 	    gen_pred_andnot<mode> (operands[0], CONSTM1_RTX (<MODE>mode), undef,
-				   operands[1], reg, operands[6], operands[8]));
+				   operands[1], reg, operands[6], operands[8],
+				   gen_int_mode (riscv_vector::get_prefer_mask_policy (), Pmode)));
 	}
       else
 	{
@@ -5196,6 +5200,7 @@ (define_insn "@pred_<optab><mode>"
 	  [(match_operand:VB 1 "vector_all_trues_mask_operand" "Wc1")
 	   (match_operand 5 "vector_length_operand"            " rK")
 	   (match_operand 6 "const_int_operand"                "  i")
+	   (match_operand 7 "const_int_operand"                "  i")
 	   (reg:SI VL_REGNUM)
 	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
 	(any_bitwise:VB
@@ -5216,6 +5221,7 @@ (define_insn "@pred_n<optab><mode>"
 	  [(match_operand:VB 1 "vector_all_trues_mask_operand" "Wc1")
 	   (match_operand 5 "vector_length_operand"            " rK")
 	   (match_operand 6 "const_int_operand"                "  i")
+	   (match_operand 7 "const_int_operand"                "  i")
 	   (reg:SI VL_REGNUM)
 	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
 	(not:VB
@@ -5237,6 +5243,7 @@ (define_insn "@pred_<optab>not<mode>"
 	  [(match_operand:VB 1 "vector_all_trues_mask_operand" "Wc1")
 	   (match_operand 5 "vector_length_operand"            " rK")
 	   (match_operand 6 "const_int_operand"                "  i")
+	   (match_operand 7 "const_int_operand"                "  i")
 	   (reg:SI VL_REGNUM)
 	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
 	(and_ior:VB
@@ -5258,6 +5265,7 @@ (define_insn "@pred_not<mode>"
 	  [(match_operand:VB 1 "vector_all_trues_mask_operand" "Wc1")
 	   (match_operand 4 "vector_length_operand"            " rK")
 	   (match_operand 5 "const_int_operand"                "  i")
+	   (match_operand 6 "const_int_operand"                "  i")
 	   (reg:SI VL_REGNUM)
 	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
 	(not:VB
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c b/gcc/testsuite/gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c
new file mode 100644
index 00000000000..495a0f11440
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c
@@ -0,0 +1,291 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64 -O3" } */
+
+#include "riscv_vector.h"
+
+vbool1_t test_shortcut_for_riscv_vmseq_case_0(vint8m8_t v1, size_t vl) {
+  return __riscv_vmseq_vv_i8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmseq_case_1(vint8m4_t v1, size_t vl) {
+  return __riscv_vmseq_vv_i8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmseq_case_2(vint8m2_t v1, size_t vl) {
+  return __riscv_vmseq_vv_i8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmseq_case_3(vint8m1_t v1, size_t vl) {
+  return __riscv_vmseq_vv_i8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmseq_case_4(vint8mf2_t v1, size_t vl) {
+  return __riscv_vmseq_vv_i8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmseq_case_5(vint8mf4_t v1, size_t vl) {
+  return __riscv_vmseq_vv_i8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmseq_case_6(vint8mf8_t v1, size_t vl) {
+  return __riscv_vmseq_vv_i8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmsne_case_0(vint8m8_t v1, size_t vl) {
+  return __riscv_vmsne_vv_i8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmsne_case_1(vint8m4_t v1, size_t vl) {
+  return __riscv_vmsne_vv_i8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmsne_case_2(vint8m2_t v1, size_t vl) {
+  return __riscv_vmsne_vv_i8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmsne_case_3(vint8m1_t v1, size_t vl) {
+  return __riscv_vmsne_vv_i8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmsne_case_4(vint8mf2_t v1, size_t vl) {
+  return __riscv_vmsne_vv_i8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmsne_case_5(vint8mf4_t v1, size_t vl) {
+  return __riscv_vmsne_vv_i8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmsne_case_6(vint8mf8_t v1, size_t vl) {
+  return __riscv_vmsne_vv_i8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmslt_case_0(vint8m8_t v1, size_t vl) {
+  return __riscv_vmslt_vv_i8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmslt_case_1(vint8m4_t v1, size_t vl) {
+  return __riscv_vmslt_vv_i8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmslt_case_2(vint8m2_t v1, size_t vl) {
+  return __riscv_vmslt_vv_i8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmslt_case_3(vint8m1_t v1, size_t vl) {
+  return __riscv_vmslt_vv_i8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmslt_case_4(vint8mf2_t v1, size_t vl) {
+  return __riscv_vmslt_vv_i8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmslt_case_5(vint8mf4_t v1, size_t vl) {
+  return __riscv_vmslt_vv_i8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmslt_case_6(vint8mf8_t v1, size_t vl) {
+  return __riscv_vmslt_vv_i8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmsltu_case_0(vuint8m8_t v1, size_t vl) {
+  return __riscv_vmsltu_vv_u8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmsltu_case_1(vuint8m4_t v1, size_t vl) {
+  return __riscv_vmsltu_vv_u8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmsltu_case_2(vuint8m2_t v1, size_t vl) {
+  return __riscv_vmsltu_vv_u8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmsltu_case_3(vuint8m1_t v1, size_t vl) {
+  return __riscv_vmsltu_vv_u8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmsltu_case_4(vuint8mf2_t v1, size_t vl) {
+  return __riscv_vmsltu_vv_u8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmsltu_case_5(vuint8mf4_t v1, size_t vl) {
+  return __riscv_vmsltu_vv_u8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmsltu_case_6(vuint8mf8_t v1, size_t vl) {
+  return __riscv_vmsltu_vv_u8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmsle_case_0(vint8m8_t v1, size_t vl) {
+  return __riscv_vmsle_vv_i8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmsle_case_1(vint8m4_t v1, size_t vl) {
+  return __riscv_vmsle_vv_i8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmsle_case_2(vint8m2_t v1, size_t vl) {
+  return __riscv_vmsle_vv_i8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmsle_case_3(vint8m1_t v1, size_t vl) {
+  return __riscv_vmsle_vv_i8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmsle_case_4(vint8mf2_t v1, size_t vl) {
+  return __riscv_vmsle_vv_i8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmsle_case_5(vint8mf4_t v1, size_t vl) {
+  return __riscv_vmsle_vv_i8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmsle_case_6(vint8mf8_t v1, size_t vl) {
+  return __riscv_vmsle_vv_i8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmsleu_case_0(vuint8m8_t v1, size_t vl) {
+  return __riscv_vmsleu_vv_u8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmsleu_case_1(vuint8m4_t v1, size_t vl) {
+  return __riscv_vmsleu_vv_u8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmsleu_case_2(vuint8m2_t v1, size_t vl) {
+  return __riscv_vmsleu_vv_u8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmsleu_case_3(vuint8m1_t v1, size_t vl) {
+  return __riscv_vmsleu_vv_u8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmsleu_case_4(vuint8mf2_t v1, size_t vl) {
+  return __riscv_vmsleu_vv_u8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmsleu_case_5(vuint8mf4_t v1, size_t vl) {
+  return __riscv_vmsleu_vv_u8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmsleu_case_6(vuint8mf8_t v1, size_t vl) {
+  return __riscv_vmsleu_vv_u8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmsgt_case_0(vint8m8_t v1, size_t vl) {
+  return __riscv_vmsgt_vv_i8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmsgt_case_1(vint8m4_t v1, size_t vl) {
+  return __riscv_vmsgt_vv_i8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmsgt_case_2(vint8m2_t v1, size_t vl) {
+  return __riscv_vmsgt_vv_i8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmsgt_case_3(vint8m1_t v1, size_t vl) {
+  return __riscv_vmsgt_vv_i8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmsgt_case_4(vint8mf2_t v1, size_t vl) {
+  return __riscv_vmsgt_vv_i8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmsgt_case_5(vint8mf4_t v1, size_t vl) {
+  return __riscv_vmsgt_vv_i8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmsgt_case_6(vint8mf8_t v1, size_t vl) {
+  return __riscv_vmsgt_vv_i8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmsgtu_case_0(vuint8m8_t v1, size_t vl) {
+  return __riscv_vmsgtu_vv_u8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmsgtu_case_1(vuint8m4_t v1, size_t vl) {
+  return __riscv_vmsgtu_vv_u8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmsgtu_case_2(vuint8m2_t v1, size_t vl) {
+  return __riscv_vmsgtu_vv_u8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmsgtu_case_3(vuint8m1_t v1, size_t vl) {
+  return __riscv_vmsgtu_vv_u8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmsgtu_case_4(vuint8mf2_t v1, size_t vl) {
+  return __riscv_vmsgtu_vv_u8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmsgtu_case_5(vuint8mf4_t v1, size_t vl) {
+  return __riscv_vmsgtu_vv_u8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmsgtu_case_6(vuint8mf8_t v1, size_t vl) {
+  return __riscv_vmsgtu_vv_u8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmsge_case_0(vint8m8_t v1, size_t vl) {
+  return __riscv_vmsge_vv_i8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmsge_case_1(vint8m4_t v1, size_t vl) {
+  return __riscv_vmsge_vv_i8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmsge_case_2(vint8m2_t v1, size_t vl) {
+  return __riscv_vmsge_vv_i8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmsge_case_3(vint8m1_t v1, size_t vl) {
+  return __riscv_vmsge_vv_i8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmsge_case_4(vint8mf2_t v1, size_t vl) {
+  return __riscv_vmsge_vv_i8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmsge_case_5(vint8mf4_t v1, size_t vl) {
+  return __riscv_vmsge_vv_i8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmsge_case_6(vint8mf8_t v1, size_t vl) {
+  return __riscv_vmsge_vv_i8mf8_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmsgeu_case_0(vuint8m8_t v1, size_t vl) {
+  return __riscv_vmsgeu_vv_u8m8_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmsgeu_case_1(vuint8m4_t v1, size_t vl) {
+  return __riscv_vmsgeu_vv_u8m4_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmsgeu_case_2(vuint8m2_t v1, size_t vl) {
+  return __riscv_vmsgeu_vv_u8m2_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmsgeu_case_3(vuint8m1_t v1, size_t vl) {
+  return __riscv_vmsgeu_vv_u8m1_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmsgeu_case_4(vuint8mf2_t v1, size_t vl) {
+  return __riscv_vmsgeu_vv_u8mf2_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmsgeu_case_5(vuint8mf4_t v1, size_t vl) {
+  return __riscv_vmsgeu_vv_u8mf4_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmsgeu_case_6(vuint8mf8_t v1, size_t vl) {
+  return __riscv_vmsgeu_vv_u8mf8_b64(v1, v1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vmseq\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 7 } } */
+/* { dg-final { scan-assembler-times {vmsle\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 7 } } */
+/* { dg-final { scan-assembler-times {vmsleu\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 7 } } */
+/* { dg-final { scan-assembler-times {vmsge\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 7 } } */
+/* { dg-final { scan-assembler-times {vmsgeu\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 7 } } */
+/* { dg-final { scan-assembler-times {vmclr\.m\s+v[0-9]+} 35 } } */