From patchwork Fri Feb 3 07:34:31 2023
X-Patchwork-Submitter: "juzhe.zhong@rivai.ai"
X-Patchwork-Id: 52356
From: juzhe.zhong@rivai.ai
To: gcc-patches@gcc.gnu.org
Cc: kito.cheng@gmail.com, Ju-Zhe Zhong
Subject: [PATCH] RISC-V: Add vand.vx C API tests
Date: Fri, 3 Feb 2023 15:34:31 +0800
Message-Id: <20230203073431.190895-1-juzhe.zhong@rivai.ai>
X-Mailer: git-send-email 2.36.1

From: Ju-Zhe Zhong

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/vand_vx_m_rv32-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_m_rv32-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_m_rv32-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_m_rv64-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_m_rv64-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_m_rv64-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_mu_rv32-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_mu_rv32-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_mu_rv32-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_mu_rv64-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_mu_rv64-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_mu_rv64-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_rv32-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_rv32-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_rv32-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_rv64-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_rv64-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_rv64-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tu_rv32-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tu_rv32-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tu_rv32-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tu_rv64-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tu_rv64-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tu_rv64-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tum_rv32-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tum_rv32-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tum_rv32-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tum_rv64-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tum_rv64-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tum_rv64-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-3.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-1.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-2.c: New test.
	* gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-3.c: New test.
---
 .../riscv/rvv/base/vand_vx_m_rv32-1.c    | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_m_rv32-2.c    | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_m_rv32-3.c    | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_m_rv64-1.c    | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_m_rv64-2.c    | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_m_rv64-3.c    | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_mu_rv32-1.c   | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_mu_rv32-2.c   | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_mu_rv32-3.c   | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_mu_rv64-1.c   | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_mu_rv64-2.c   | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_mu_rv64-3.c   | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_rv32-1.c      | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_rv32-2.c      | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_rv32-3.c      | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_rv64-1.c      | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_rv64-2.c      | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_rv64-3.c      | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tu_rv32-1.c   | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tu_rv32-2.c   | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tu_rv32-3.c   | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tu_rv64-1.c   | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tu_rv64-2.c   | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tu_rv64-3.c   | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tum_rv32-1.c  | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tum_rv32-2.c  | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tum_rv32-3.c  | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tum_rv64-1.c  | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tum_rv64-2.c  | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tum_rv64-3.c  | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tumu_rv32-1.c | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tumu_rv32-2.c | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tumu_rv32-3.c | 289 +++++++++++++++++
 .../riscv/rvv/base/vand_vx_tumu_rv64-1.c | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tumu_rv64-2.c | 292 ++++++++++++++++++
 .../riscv/rvv/base/vand_vx_tumu_rv64-3.c | 292 ++++++++++++++++++
 36 files changed, 10458 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-3.c
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-1.c
new file mode 100644
index 00000000000..8f4df862c97
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-1.c
@@ -0,0 +1,289 @@
+/*
{ dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_m(mask,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_m(mask,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_m(mask,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_m(mask,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_m(mask,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_m(mask,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_m(mask,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_m(mask,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_m(mask,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_m(mask,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_m(mask,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_m(mask,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_m(mask,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_m(mask,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_m(mask,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_m(mask,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_m(mask,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_m(mask,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_m(mask,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_m(mask,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_m(mask,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_m(mask,op1,op2,vl); +} + + +vuint8mf8_t 
test___riscv_vand_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_m(mask,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_m(mask,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_m(mask,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_m(mask,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_m(mask,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_m(mask,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_m(mask,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_m(mask,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_m(mask,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_m(mask,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_m(mask,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_m(mask,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_m(mask,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_m(mask,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_m(mask,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_m(mask,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_m(mask,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_m(mask,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_m(mask,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_m(mask,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_m(mask,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_m(mask,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-2.c new file mode 100644 index 00000000000..f43197197da --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_m(mask,op1,op2,31); +} + + 
+vint8mf4_t test___riscv_vand_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_m(mask,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_m(mask,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_m(mask,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_m(mask,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_m(mask,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_m(mask,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_m(mask,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_m(mask,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_m(mask,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_m(mask,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_m(mask,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_m(mask,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_m(mask,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_m(mask,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_m(mask,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_m(mask,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_m(mask,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_m(mask,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_m(mask,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_m(mask,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_m(mask,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_m(mask,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_m(mask,op1,op2,31); +} + + 
+vuint8mf2_t test___riscv_vand_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_m(mask,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_m(mask,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_m(mask,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_m(mask,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_m(mask,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_m(mask,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_m(mask,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_m(mask,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_m(mask,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_m(mask,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_m(mask,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_m(mask,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_m(mask,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_m(mask,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_m(mask,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_m(mask,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_m(mask,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_m(mask,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_m(mask,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_m(mask,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-3.c new file mode 100644 index 00000000000..3143a13b1ba --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_m(mask,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_m(mask,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_m(mask,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i8m1_m(mask,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_m(mask,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_m(mask,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_m(mask,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_m(mask,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_m(mask,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_m(mask,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_m(mask,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_m(mask,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_m(mask,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_m(mask,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_m(mask,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_m(mask,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_m(mask,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_m(mask,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_m(mask,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_m(mask,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_m(mask,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_m(mask,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_m(mask,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_m(mask,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_m(mask,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_m(mask,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u8m2_m(mask,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_m(mask,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_m(mask,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_m(mask,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_m(mask,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_m(mask,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_m(mask,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_m(mask,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_m(mask,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_m(mask,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_m(mask,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_m(mask,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_m(mask,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_m(mask,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_m(mask,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_m(mask,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_m(mask,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_m(mask,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-1.c new file mode 100644 index 00000000000..e446326f5b2 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_m(mask,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_m(mask,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_m(mask,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_m(mask,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_m(mask,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_m(mask,op1,op2,vl); +} + + +vint8m8_t 
test___riscv_vand_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_m(mask,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_m(mask,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_m(mask,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_m(mask,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_m(mask,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_m(mask,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_m(mask,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_m(mask,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_m(mask,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_m(mask,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_m(mask,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_m(mask,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_m(mask,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_m(mask,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_m(mask,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_m(mask,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_m(mask,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_m(mask,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_m(mask,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_m(mask,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_m(mask,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_m(mask,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_m(mask,op1,op2,vl); +} + + 
+vuint16mf4_t test___riscv_vand_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_m(mask,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_m(mask,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_m(mask,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_m(mask,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_m(mask,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_m(mask,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_m(mask,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_m(mask,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_m(mask,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_m(mask,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_m(mask,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_m(mask,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_m(mask,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_m(mask,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_m(mask,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-2.c new file mode 100644 index 00000000000..7bec6587894 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_m(mask,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_m(mask,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_m(mask,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_m(mask,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i8m2_m(mask,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_m(mask,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_m(mask,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_m(mask,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_m(mask,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_m(mask,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_m(mask,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_m(mask,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_m(mask,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_m(mask,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_m(mask,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_m(mask,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_m(mask,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_m(mask,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_m(mask,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_m(mask,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_m(mask,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_m(mask,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_m(mask,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_m(mask,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_m(mask,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_m(mask,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_m(mask,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + 
return __riscv_vand_vx_u8m4_m(mask,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_m(mask,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_m(mask,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_m(mask,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_m(mask,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_m(mask,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_m(mask,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_m(mask,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_m(mask,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_m(mask,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_m(mask,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_m(mask,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_m(mask,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_m(mask,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_m(mask,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_m(mask,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_m(mask,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-3.c new file mode 100644 index 00000000000..5782012ac5c --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_m_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_m(mask,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_m(mask,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_m(mask,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_m(mask,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_m(vbool4_t mask,vint8m2_t 
op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_m(mask,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_m(mask,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_m(mask,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_m(mask,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_m(mask,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_m(mask,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_m(mask,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_m(mask,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_m(mask,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_m(mask,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_m(mask,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_m(mask,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_m(mask,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_m(mask,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_m(mask,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_m(mask,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_m(mask,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_m(mask,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_m(mask,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_m(mask,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_m(mask,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_m(mask,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_m(mask,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_m(vbool2_t mask,vuint8m4_t 
op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_m(mask,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_m(mask,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_m(mask,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_m(mask,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_m(mask,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_m(mask,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_m(mask,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_m(mask,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_m(mask,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_m(mask,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_m(mask,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_m(mask,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_m(mask,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_m(mask,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_m(mask,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_m(mask,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_m(mask,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-1.c new file mode 100644 index 00000000000..b7c707ff2fe --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_mu(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_mu(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_mu(vbool16_t 
mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_mu(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_mu(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_mu(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_mu(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_mu(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_mu(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_mu(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_mu(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_mu(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_mu(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_mu(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_mu(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_mu(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_mu(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_mu(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_mu(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_mu(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_mu(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_mu(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t 
test___riscv_vand_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_mu(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_mu(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_mu(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_mu(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_mu(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_mu(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_mu(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_mu(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_mu(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_mu(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_mu(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_mu(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_mu(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_mu(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t 
op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_mu(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_mu(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_mu(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-2.c new file mode 
100644 index 00000000000..4838f943bc8 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_mu(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_mu(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_mu(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_mu(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_mu(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_mu(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_mu(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_mu(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_mu(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_mu(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_mu(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_mu(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_mu(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_mu(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_mu(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_mu(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_mu(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_mu(mask,merge,op1,op2,31); +} + + +vint64m1_t 
test___riscv_vand_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_mu(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_mu(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_mu(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_mu(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_mu(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_mu(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_mu(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_mu(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_mu(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_mu(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_mu(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_mu(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_mu(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_mu(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_mu(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_mu(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_mu(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_mu(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_mu(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t 
op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_mu(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_mu(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_mu(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_mu(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_mu(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_mu(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_mu(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-3.c new file mode 100644 index 00000000000..ba42a0a2803 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_mu(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_mu(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_mu(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_mu(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_mu(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_mu(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_mu(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_mu(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_mu(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_mu(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_mu(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_mu(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_mu(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_mu(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t 
op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_mu(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_mu(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_mu(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_mu(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_mu(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_mu(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_mu(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_mu(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_mu(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_mu(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_mu(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_mu(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_mu(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_mu(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_mu(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_mu(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_mu(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_mu(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_mu(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_mu(mask,merge,op1,op2,32); +} + + +vuint16m8_t 
test___riscv_vand_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_mu(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_mu(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_mu(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_mu(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_mu(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_mu(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_mu(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_mu(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_mu(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_mu(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-1.c new file mode 100644 index 00000000000..19f2c420b2c --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_mu(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_mu(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_mu(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_mu(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_mu(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_mu(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_mu(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_mu(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_mu(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_mu(mask,merge,op1,op2,vl); +} + + +vint16m2_t 
test___riscv_vand_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_mu(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_mu(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_mu(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_mu(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_mu(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_mu(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_mu(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_mu(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_mu(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_mu(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_mu(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_mu(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_mu(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_mu(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_mu(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u16mf4_mu(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_mu(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_mu(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_mu(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_mu(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_mu(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_mu(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_mu(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_mu(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_mu(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_mu(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_mu(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_mu(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-2.c new file mode 100644 index 00000000000..f81636824a9 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_mu(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_mu(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) 
+{ + return __riscv_vand_vx_i8mf2_mu(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_mu(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_mu(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_mu(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_mu(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_mu(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_mu(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_mu(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_mu(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_mu(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_mu(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_mu(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_mu(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_mu(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_mu(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_mu(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_mu(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_mu(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_mu(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_mu(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t 
merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_mu(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_mu(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_mu(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_mu(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_mu(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_mu(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_mu(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_mu(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_mu(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_mu(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_mu(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_mu(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_mu(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_mu(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_mu(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_mu(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_mu(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_mu(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_mu(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u64m2_mu(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_mu(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_mu(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-3.c new file mode 100644 index 00000000000..e6feee2a86b --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_mu_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_mu(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_mu(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_mu(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_mu(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_mu(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_mu(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_mu(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_mu(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_mu(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_mu(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_mu(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_mu(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_mu(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_mu(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_mu(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i32m2_mu(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_mu(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_mu(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_mu(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_mu(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_mu(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_mu(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_mu(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_mu(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_mu(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_mu(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_mu(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_mu(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_mu(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_mu(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_mu(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_mu(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_mu(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_mu(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_mu(mask,merge,op1,op2,32); +} + + +vuint32mf2_t 
test___riscv_vand_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_mu(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_mu(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_mu(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_mu(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_mu(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_mu(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_mu(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_mu(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_mu(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final 
{ scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-1.c new file mode 100644 index 00000000000..7411e96f901 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8(op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4(op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2(op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1(op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2(op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4(op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8(op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4(op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2(op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1(op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2(op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4(op1,op2,vl); +} + + 
+vint16m8_t test___riscv_vand_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8(op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2(op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1(op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2(op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4(op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8(op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1(op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2(op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4(op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8(op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8(op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4(op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2(op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1(op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2(op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4(op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8(op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4(op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2(op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1(op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2(op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4(op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8(op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2(op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1(op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2(op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u32m4(op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8(op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1(op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2(op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4(op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8(op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } 
*/ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-2.c new file mode 100644 index 00000000000..568635cc899 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8(op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4(op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2(op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1(op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2(op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4(op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8(op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4(op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2(op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1(op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2(op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4(op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8(op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2(op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1(op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2(op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4(op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8(op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1(op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2(op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4(op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8(op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8(op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u8mf4(op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2(op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1(op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2(op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4(op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8(op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4(op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2(op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1(op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2(op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4(op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8(op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2(op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1(op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2(op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4(op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8(op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1(op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2(op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4(op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8(op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-3.c new file mode 100644 index 00000000000..f523d471828 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8(op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4(op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2(op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1(op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2(op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4(op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8(op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4(op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2(op1,op2,32); +} + 
+ +vint16m1_t test___riscv_vand_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1(op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2(op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4(op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8(op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2(op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1(op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2(op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4(op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8(op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1(op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2(op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4(op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8(op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8(op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4(op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2(op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1(op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2(op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4(op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8(op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4(op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2(op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1(op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2(op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4(op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8(op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u32mf2(op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1(op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2(op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4(op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8(op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1(op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2(op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4(op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8(op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 
2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-1.c new file mode 100644 index 00000000000..9fde661ef01 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8(op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4(op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2(op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1(op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2(op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4(op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8(op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4(op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2(op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1(op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2(op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4(op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8(op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2(op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1(op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2(op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4(op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8(op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1(op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2(op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i64m4(op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8(op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8(op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4(op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2(op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1(op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2(op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4(op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8(op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4(op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2(op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1(op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2(op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4(op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8(op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2(op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1(op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2(op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4(op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8(op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1(op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2(op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4(op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8(op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-2.c new file mode 100644 index 00000000000..3da692dea08 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i8mf8(op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4(op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2(op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1(op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2(op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4(op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8(op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4(op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2(op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1(op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2(op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4(op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8(op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2(op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1(op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2(op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4(op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8(op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1(op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2(op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4(op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8(op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8(op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4(op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2(op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1(op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2(op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u8m4(op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8(op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4(op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2(op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1(op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2(op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4(op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8(op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2(op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1(op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2(op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4(op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8(op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1(op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2(op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4(op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8(op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-3.c new file mode 100644 index 00000000000..f3b498605f8 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8(op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4(op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2(op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1(op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2(op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4(op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8(op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4(op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2(op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i16m1(op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2(op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4(op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8(op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2(op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1(op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2(op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4(op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8(op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1(op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2(op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4(op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8(op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8(op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4(op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2(op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1(op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2(op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4(op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8(op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4(op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2(op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1(op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2(op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4(op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8(op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2(op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1(vuint32m1_t op1,uint32_t 
op2,size_t vl) +{ + return __riscv_vand_vx_u32m1(op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2(op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4(op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8(op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1(op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2(op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4(op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8(op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-1.c new file mode 100644 index 00000000000..77f083bed0b --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tu(merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tu(merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tu(merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tu(merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tu(merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tu(merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tu(merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tu(merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tu(merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tu(merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tu(merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tu(merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tu(merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tu(merge,op1,op2,vl); +} + + 
+vint32m1_t test___riscv_vand_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tu(merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tu(merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tu(merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tu(merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tu(merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tu(merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tu(merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tu(merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tu(merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tu(merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tu(merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tu(merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tu(merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tu(merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tu(merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tu(merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tu(merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tu(merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tu(merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tu(merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tu(merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u32mf2_tu(merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tu(merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tu(merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tu(merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tu(merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tu(merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tu(merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tu(merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tu(merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-2.c new file mode 100644 index 00000000000..a90c7b6ee7f --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tu(merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tu(merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tu(merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tu(merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tu(merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tu(merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tu(merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tu(merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tu(merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tu(merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tu(merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tu(merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tu(merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tu(merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tu(merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tu(vint32m2_t merge,vint32m2_t 
op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tu(merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tu(merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tu(merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tu(merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tu(merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tu(merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tu(merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tu(merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tu(merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tu(merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tu(merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tu(merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tu(merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tu(merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tu(merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tu(merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tu(merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tu(merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tu(merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tu(merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tu(merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tu(merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tu(vuint32m2_t 
merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tu(merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tu(merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tu(merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tu(merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tu(merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tu(merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tu(merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { 
dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-3.c new file mode 100644 index 00000000000..f8953243e36 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tu(merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tu(merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tu(merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tu(merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tu(merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tu(merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tu(merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tu(merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tu(merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tu(merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tu(merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tu(merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tu(merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tu(merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tu(merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tu(merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tu(merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tu(merge,op1,op2,32); +} + + +vint64m1_t 
test___riscv_vand_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tu(merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tu(merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tu(merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tu(merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tu(merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tu(merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tu(merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tu(merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tu(merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tu(merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tu(merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tu(merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tu(merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tu(merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tu(merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tu(merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tu(merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tu(merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tu(merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tu(merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tu(merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u32m8_tu(merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tu(merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tu(merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tu(merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tu(merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git 
a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-1.c new file mode 100644 index 00000000000..374a05767b0 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tu(merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tu(merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tu(merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tu(merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tu(merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tu(merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tu(merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tu(merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tu(merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tu(merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tu(merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tu(merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tu(merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tu(merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tu(merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tu(merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tu(merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tu(merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tu(merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ 
+ return __riscv_vand_vx_i64m2_tu(merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tu(merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tu(merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tu(merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tu(merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tu(merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tu(merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tu(merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tu(merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tu(merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tu(merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tu(merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tu(merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tu(merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tu(merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tu(merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tu(merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tu(merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tu(merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tu(merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tu(merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tu(merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tu(vuint64m2_t 
merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tu(merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tu(merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tu(merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-2.c new file mode 100644 index 00000000000..6f51c255ddb --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tu(merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tu(merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tu(merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tu(merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tu(merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tu(merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tu(merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tu(merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tu(merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tu(merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tu(merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tu(merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tu(merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tu(merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tu(merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tu(merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tu(merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i32m8_tu(merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tu(merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tu(merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tu(merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tu(merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tu(merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tu(merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tu(merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tu(merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tu(merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tu(merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tu(merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tu(merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tu(merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tu(merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tu(merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tu(merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tu(merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tu(merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tu(merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tu(merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tu(merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t 
op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tu(merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tu(merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tu(merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tu(merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tu(merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-3.c new file mode 100644 index 00000000000..17d66c9295b --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tu_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tu(merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tu(merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tu(merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tu(merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tu(merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tu(merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tu(merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tu(merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tu(merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tu(merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tu(merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tu(merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tu(merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tu(merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tu(merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tu(merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tu(merge,op1,op2,32); +} + + +vint32m8_t 
test___riscv_vand_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tu(merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tu(merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tu(merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tu(merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tu(merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tu(merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tu(merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tu(merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tu(merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tu(merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tu(merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tu(merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tu(merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tu(merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tu(merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tu(merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tu(merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tu(merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tu(merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tu(merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tu(merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u32m4_tu(merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tu(merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tu(merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tu(merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tu(merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tu(merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-1.c new file mode 100644 index 00000000000..dff21684dd8 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tum(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tum(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tum(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tum(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tum(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tum(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tum(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tum(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tum(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tum(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tum(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tum(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tum(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tum(vbool64_t 
mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tum(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tum(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tum(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tum(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tum(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tum(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tum(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tum(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tum(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tum(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tum(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tum(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tum(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tum(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u16m2_tum(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tum(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tum(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tum(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tum(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tum(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tum(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tum(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tum(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tum(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tum(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-2.c new file mode 100644 index 00000000000..58c2e507d2c --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tum(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tum(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tum(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tum(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tum(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tum(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tum(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tum(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tum(mask,merge,op1,op2,31); 
+} + + +vint16m1_t test___riscv_vand_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tum(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tum(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tum(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tum(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tum(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tum(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tum(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tum(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tum(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tum(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tum(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tum(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tum(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tum(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tum(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tum(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tum(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tum(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tum(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t 
op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tum(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tum(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tum(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tum(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tum(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tum(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tum(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tum(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tum(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tum(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tum(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tum(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tum(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tum(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tum(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tum(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-3.c new file mode 100644 index 00000000000..9e2f1d22b4e --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tum(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tum(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tum(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tum(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tum(mask,merge,op1,op2,32); +} + + +vint8m4_t 
test___riscv_vand_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tum(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tum(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tum(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tum(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tum(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tum(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tum(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tum(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tum(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tum(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tum(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tum(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tum(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tum(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tum(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tum(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tum(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tum(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tum(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t 
op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tum(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tum(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tum(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tum(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tum(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tum(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tum(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tum(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tum(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tum(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tum(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tum(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tum(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tum(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tum(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tum(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tum(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tum(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tum(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u64m8_tum(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-1.c new file mode 100644 index 00000000000..a49807a937d --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ 
+ return __riscv_vand_vx_i8mf8_tum(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tum(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tum(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tum(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tum(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tum(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tum(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tum(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tum(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tum(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tum(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tum(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tum(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tum(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tum(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tum(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tum(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tum(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tum(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tum(mask,merge,op1,op2,vl); +} + + +vint64m4_t 
test___riscv_vand_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tum(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tum(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tum(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tum(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tum(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tum(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tum(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tum(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tum(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tum(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tum(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tum(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tum(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tum(vbool4_t 
mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tum(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tum(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tum(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tum(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tum(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 
} } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-2.c new file mode 100644 index 00000000000..adbd279fc2a --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tum(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tum(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tum(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tum(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tum(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tum(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tum(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tum(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tum(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tum(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tum(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tum(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i16m8_tum(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tum(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tum(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tum(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tum(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tum(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tum(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tum(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tum(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tum(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tum(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tum(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tum(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tum(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tum(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tum(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tum(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tum(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tum(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tum(mask,merge,op1,op2,31); +} + + +vuint16m2_t 
test___riscv_vand_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tum(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tum(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tum(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tum(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tum(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tum(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tum(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tum(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tum(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tum(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tum(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tum(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-3.c new file mode 100644 index 00000000000..cad7cdfc7c8 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tum_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tum(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tum(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tum(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tum(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tum(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i8m4_tum(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tum(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tum(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tum(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tum(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tum(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tum(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tum(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tum(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tum(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tum(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tum(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tum(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tum(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tum(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tum(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tum(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tum(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tum(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tum(mask,merge,op1,op2,32); +} + + +vuint8m1_t 
test___riscv_vand_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tum(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tum(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tum(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tum(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tum(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tum(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tum(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tum(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tum(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tum(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tum(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tum(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tum(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tum(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tum(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tum(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tum(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tum(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tum(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-1.c 
b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-1.c new file mode 100644 index 00000000000..8a28885f3cc --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tumu(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tumu(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tumu(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tumu(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tumu(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tumu(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tumu(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tumu(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tumu(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tumu(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tumu(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tumu(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tumu(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tumu(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tumu(vbool4_t mask,vint32m8_t 
merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tumu(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tumu(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tumu(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tumu(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tumu(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t 
op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tumu(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-2.c new file mode 100644 index 00000000000..667940d0178 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tumu(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tumu(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tumu(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tumu(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tumu(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tumu(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tumu(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tumu(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tumu(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tumu(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tumu(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tumu(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t 
op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tumu(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tumu(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tumu(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tumu(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tumu(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tumu(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tumu(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tumu(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tumu(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tumu(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tumu(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tumu(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tumu(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tumu(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + 
return __riscv_vand_vx_u16m1_tumu(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tumu(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tumu(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tumu(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tumu(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tumu(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tumu(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tumu(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tumu(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tumu(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tumu(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tumu(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-3.c new file mode 100644 index 00000000000..46093090ce0 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tumu(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tumu(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tumu(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tumu(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tumu(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tumu(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tumu(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tumu(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t 
op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tumu(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tumu(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tumu(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tumu(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tumu(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tumu(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tumu(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tumu(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tumu(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tumu(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tumu(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tumu(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tumu(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tumu(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tumu(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u8m4_tumu(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tumu(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tumu(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tumu(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tumu(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tumu(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tumu(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tumu(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tumu(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tumu(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tumu(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tumu(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tumu(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tumu(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tumu(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vand\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-1.c new file mode 100644 index 00000000000..09e45964fbc --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tumu(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tumu(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return 
__riscv_vand_vx_i8m1_tumu(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tumu(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tumu(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tumu(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tumu(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tumu(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tumu(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tumu(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tumu(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tumu(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tumu(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tumu(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tumu(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tumu(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tumu(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tumu(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tumu(mask,merge,op1,op2,vl); +} + + 
+vuint8mf4_t test___riscv_vand_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tumu(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tumu(mask,merge,op1,op2,vl); +} + + 
+vuint64m4_t test___riscv_vand_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tumu(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-2.c new file mode 100644 index 00000000000..a4d1d51eb92 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tumu(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tumu(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tumu(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tumu(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tumu(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tumu(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tumu(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tumu(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tumu(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vand_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tumu(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tumu(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tumu(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tumu(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tumu(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tumu(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t 
op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tumu(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tumu(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tumu(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tumu(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tumu(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tumu(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tumu(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tumu(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tumu(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tumu(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tumu(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tumu(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tumu(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tumu(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return 
__riscv_vand_vx_u16m8_tumu(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tumu(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tumu(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tumu(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tumu(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tumu(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tumu(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tumu(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tumu(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final 
{ scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-3.c new file mode 100644 index 00000000000..4a05fd04f93 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vand_vx_tumu_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vand_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf8_tumu(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vand_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf4_tumu(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vand_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8mf2_tumu(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vand_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m1_tumu(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vand_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m2_tumu(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vand_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m4_tumu(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vand_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vand_vx_i8m8_tumu(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vand_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf4_tumu(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vand_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16mf2_tumu(mask,merge,op1,op2,32); +} + + 
+vint16m1_t test___riscv_vand_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m1_tumu(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vand_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m2_tumu(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vand_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m4_tumu(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vand_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vand_vx_i16m8_tumu(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vand_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32mf2_tumu(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vand_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m1_tumu(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vand_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m2_tumu(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vand_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m4_tumu(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vand_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vand_vx_i32m8_tumu(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vand_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m1_tumu(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vand_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m2_tumu(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vand_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m4_tumu(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vand_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vand_vx_i64m8_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vand_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf8_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vand_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf4_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vand_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vand_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m1_tumu(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vand_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m2_tumu(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vand_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m4_tumu(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vand_vx_u8m8_tumu(vbool1_t 
mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vand_vx_u8m8_tumu(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vand_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf4_tumu(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vand_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vand_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m1_tumu(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vand_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m2_tumu(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vand_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m4_tumu(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vand_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vand_vx_u16m8_tumu(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vand_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vand_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m1_tumu(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vand_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m2_tumu(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vand_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m4_tumu(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vand_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vand_vx_u32m8_tumu(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vand_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m1_tumu(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vand_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m2_tumu(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vand_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m4_tumu(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vand_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vand_vx_u64m8_tumu(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*mu\s+vand\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */