From patchwork Wed Feb 8 02:32:29 2023
X-Patchwork-Submitter: "juzhe.zhong@rivai.ai"
X-Patchwork-Id: 54143
From: juzhe.zhong@rivai.ai
To: gcc-patches@gcc.gnu.org
Cc: kito.cheng@gmail.com, Ju-Zhe Zhong
Subject: [PATCH] RISC-V: Add vadc.vvm/vadc.vxm C API tests
Date: Wed, 8 Feb 2023 10:32:29 +0800
Message-Id: <20230208023229.225998-1-juzhe.zhong@rivai.ai>
X-Mailer: git-send-email 2.36.1

From: Ju-Zhe Zhong

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/rvv/base/vadc-1.c: New test.
        * gcc.target/riscv/rvv/base/vadc-2.c: New test.
        * gcc.target/riscv/rvv/base/vadc-3.c: New test.
        * gcc.target/riscv/rvv/base/vadc-4.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vvm-1.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vvm-2.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vvm-3.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vvm_tu-1.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vvm_tu-2.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vvm_tu-3.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_rv32-1.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_rv32-2.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_rv32-3.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_rv64-1.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_rv64-2.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_rv64-3.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-1.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-2.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-3.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-1.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-2.c: New test.
        * gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-3.c: New test.
---
 .../gcc.target/riscv/rvv/base/vadc-1.c        |  27 ++
 .../gcc.target/riscv/rvv/base/vadc-2.c        |  48 +++
 .../gcc.target/riscv/rvv/base/vadc-3.c        |  78 +++++
 .../gcc.target/riscv/rvv/base/vadc-4.c        |  79 +++++
 .../gcc.target/riscv/rvv/base/vadc_vvm-1.c    | 292 ++++++++++++++++++
 .../gcc.target/riscv/rvv/base/vadc_vvm-2.c    | 292 ++++++++++++++++++
 .../gcc.target/riscv/rvv/base/vadc_vvm-3.c    | 292 ++++++++++++++++++
 .../gcc.target/riscv/rvv/base/vadc_vvm_tu-1.c | 292 ++++++++++++++++++
 .../gcc.target/riscv/rvv/base/vadc_vvm_tu-2.c | 292 ++++++++++++++++++
 .../gcc.target/riscv/rvv/base/vadc_vvm_tu-3.c | 292 ++++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_rv32-1.c          | 289 +++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_rv32-2.c          | 289 +++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_rv32-3.c          | 289 +++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_rv64-1.c          | 292 ++++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_rv64-2.c          | 292 ++++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_rv64-3.c          | 292 ++++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_tu_rv32-1.c       | 289 +++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_tu_rv32-2.c       | 289 +++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_tu_rv32-3.c       | 289 +++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_tu_rv64-1.c       | 292 ++++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_tu_rv64-2.c       | 292 ++++++++++++++++++
 .../riscv/rvv/base/vadc_vxm_tu_rv64-3.c       | 292 ++++++++++++++++++
 22 files changed, 5470 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-3.c
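[Editor's note, not part of the patch] The intrinsics exercised below compute an element-wise add plus a per-element carry-in taken from a mask register; the result is written unconditionally (vadc has no mask-out). A minimal usage sketch follows; the function name, buffer arguments and scalar constants are illustrative only and are not taken from this patch:

#include <stdint.h>
#include <stddef.h>
#include "riscv_vector.h"

/* Illustrative only: out[i] = a[i] + b[i] + carry[i], then add two more
   scalar addends reusing the same carry-in bits.  */
void add_with_carry (int32_t *a, int32_t *b, uint8_t *carry, int32_t *out, size_t n)
{
  size_t vl = __riscv_vsetvl_e32m1 (n);
  vbool32_t cin = __riscv_vlm_v_b32 (carry, vl);              /* carry-in bits  */
  vint32m1_t va = __riscv_vle32_v_i32m1 (a, vl);
  vint32m1_t vb = __riscv_vle32_v_i32m1 (b, vl);
  vint32m1_t sum = __riscv_vadc_vvm_i32m1 (va, vb, cin, vl);  /* vadc.vvm       */
  sum = __riscv_vadc_vxm_i32m1 (sum, 100, cin, vl);           /* vadc.vxm       */
  sum = __riscv_vadc_vxm_i32m1 (sum, 15, cin, vl);            /* may fold to vadc.vim */
  __riscv_vse32_v_i32m1 (out, sum, vl);
}

As the vadc-2.c, vadc-3.c and vadc-4.c tests below check, scalar addends in the range [-16, 15] are expected to be emitted as the immediate form vadc.vim, while out-of-range scalars keep the register form vadc.vxm.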
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-1.c
new file mode 100644
index 00000000000..ed3c4edc858
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-1.c
@@ -0,0 +1,27 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3" } */
+#include "riscv_vector.h"
+
+void f1 (void * in, void *out)
+{
+  vbool32_t mask = __riscv_vlm_v_b32 (in + 100, 4);
+  vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+  vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+  vint32m1_t v3 = __riscv_vadc_vvm_i32m1 (v2, v2, mask, 4);
+  vint32m1_t v4 = __riscv_vadc_vvm_i32m1_tu (v3, v2, v2, mask, 4);
+  __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+void f2 (void * in, void *out)
+{
+  vbool32_t mask = *(vbool32_t*)in;
+  asm volatile ("":::"memory");
+  vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+  vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+  vint32m1_t v3 = __riscv_vadc_vvm_i32m1 (v2, v2, mask, 4);
+  vint32m1_t v4 = __riscv_vadc_vvm_i32m1_tu (v3, v2, v2, mask, 4);
+  __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 4 } } */
+/* { dg-final { scan-assembler-not {vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-2.c
new file mode 100644
index 00000000000..df83902f4a5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-2.c
@@ -0,0 +1,48 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3" } */
+#include "riscv_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+  vbool32_t mask = __riscv_vlm_v_b32 (in + 100, 4);
+  vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+  vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+  vint32m1_t v3 = __riscv_vadc_vxm_i32m1 (v2, -16, mask, 4);
+  vint32m1_t v4 = __riscv_vadc_vxm_i32m1_tu (v3, v2, -16, mask, 4);
+  __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+  vbool32_t mask = __riscv_vlm_v_b32 (in + 100, 4);
+  vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+  vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+  vint32m1_t v3 = __riscv_vadc_vxm_i32m1 (v2, 15, mask, 4);
+  vint32m1_t v4 = __riscv_vadc_vxm_i32m1_tu (v3, v2, 15, mask, 4);
+  __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+void f3 (void * in, void *out, int32_t x)
+{
+  vbool32_t mask = __riscv_vlm_v_b32 (in + 100, 4);
+  vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+  vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+  vint32m1_t v3 = __riscv_vadc_vxm_i32m1 (v2, -17, mask, 4);
+  vint32m1_t v4 = __riscv_vadc_vxm_i32m1_tu (v3, v2, -17, mask, 4);
+  __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+void f4 (void * in, void *out, int32_t x)
+{
+  vbool32_t mask = __riscv_vlm_v_b32 (in + 100, 4);
+  vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+  vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+  vint32m1_t v3 = __riscv_vadc_vxm_i32m1 (v2, 16, mask, 4);
+  vint32m1_t v4 = __riscv_vadc_vxm_i32m1_tu (v3, v2, 16, mask, 4);
+  __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {vadc\.vim\s+v[0-9]+,\s*v[0-9]+,\s*-16,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times
{vadc\.vim\s+v[0-9]+,\s*v[0-9]+,\s*15,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 4 } } */ + diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-3.c new file mode 100644 index 00000000000..a0c2e0de8d6 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-3.c @@ -0,0 +1,78 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */ + +#include "riscv_vector.h" + +void f0 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, -16, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, -16, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f1 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, 15, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, 15, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f2 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, -17, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, -17, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f3 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, 16, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, 16, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f4 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, 0xAAAAAAAA, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, 0xAAAAAAAA, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f5 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f6 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, x, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, x, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +/* { dg-final { scan-assembler-times {vadc\.vim\s+v[0-9]+,\s*v[0-9]+,\s*-16,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vim\s+v[0-9]+,\s*v[0-9]+,\s*15,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 10 } } 
*/ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-4.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-4.c new file mode 100644 index 00000000000..550834c1d30 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc-4.c @@ -0,0 +1,79 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32 -O3" } */ + +#include "riscv_vector.h" + +void f0 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, -16, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, -16, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f1 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, 15, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, 15, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f2 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, -17, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, -17, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f3 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, 16, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, 16, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f4 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, 0xAAAAAAA, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, 0xAAAAAAA, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f5 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +void f6 (void * in, void *out, int64_t x, int n) +{ + vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4); + vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4); + vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4); + vint64m1_t v3 = __riscv_vadc_vxm_i64m1 (v2, x, mask, 4); + vint64m1_t v4 = __riscv_vadc_vxm_i64m1 (v3, x, mask, 4); + __riscv_vse64_v_i64m1 (out + 2, v4, 4); +} + +/* { dg-final { scan-assembler-times {vadc\.vim\s+v[0-9]+,\s*v[0-9]+,\s*-16,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vim\s+v[0-9]+,\s*v[0-9]+,\s*15,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 6 } } */ +/* { dg-final { scan-assembler-times {vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 4 } } */ diff --git 
a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-1.c new file mode 100644 index 00000000000..dc9bf5c2c9c --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vvm_i8mf8(vint8mf8_t op1,vint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf8(op1,op2,carryin,vl); +} + + +vint8mf4_t test___riscv_vadc_vvm_i8mf4(vint8mf4_t op1,vint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf4(op1,op2,carryin,vl); +} + + +vint8mf2_t test___riscv_vadc_vvm_i8mf2(vint8mf2_t op1,vint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf2(op1,op2,carryin,vl); +} + + +vint8m1_t test___riscv_vadc_vvm_i8m1(vint8m1_t op1,vint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m1(op1,op2,carryin,vl); +} + + +vint8m2_t test___riscv_vadc_vvm_i8m2(vint8m2_t op1,vint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m2(op1,op2,carryin,vl); +} + + +vint8m4_t test___riscv_vadc_vvm_i8m4(vint8m4_t op1,vint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m4(op1,op2,carryin,vl); +} + + +vint8m8_t test___riscv_vadc_vvm_i8m8(vint8m8_t op1,vint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m8(op1,op2,carryin,vl); +} + + +vint16mf4_t test___riscv_vadc_vvm_i16mf4(vint16mf4_t op1,vint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf4(op1,op2,carryin,vl); +} + + +vint16mf2_t test___riscv_vadc_vvm_i16mf2(vint16mf2_t op1,vint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf2(op1,op2,carryin,vl); +} + + +vint16m1_t test___riscv_vadc_vvm_i16m1(vint16m1_t op1,vint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m1(op1,op2,carryin,vl); +} + + +vint16m2_t test___riscv_vadc_vvm_i16m2(vint16m2_t op1,vint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m2(op1,op2,carryin,vl); +} + + +vint16m4_t test___riscv_vadc_vvm_i16m4(vint16m4_t op1,vint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m4(op1,op2,carryin,vl); +} + + +vint16m8_t test___riscv_vadc_vvm_i16m8(vint16m8_t op1,vint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m8(op1,op2,carryin,vl); +} + + +vint32mf2_t test___riscv_vadc_vvm_i32mf2(vint32mf2_t op1,vint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32mf2(op1,op2,carryin,vl); +} + + +vint32m1_t test___riscv_vadc_vvm_i32m1(vint32m1_t op1,vint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m1(op1,op2,carryin,vl); +} + + +vint32m2_t test___riscv_vadc_vvm_i32m2(vint32m2_t op1,vint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m2(op1,op2,carryin,vl); +} + + +vint32m4_t test___riscv_vadc_vvm_i32m4(vint32m4_t op1,vint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m4(op1,op2,carryin,vl); +} + + +vint32m8_t test___riscv_vadc_vvm_i32m8(vint32m8_t op1,vint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m8(op1,op2,carryin,vl); +} + + +vint64m1_t test___riscv_vadc_vvm_i64m1(vint64m1_t op1,vint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m1(op1,op2,carryin,vl); +} + + +vint64m2_t test___riscv_vadc_vvm_i64m2(vint64m2_t op1,vint64m2_t op2,vbool32_t 
carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m2(op1,op2,carryin,vl); +} + + +vint64m4_t test___riscv_vadc_vvm_i64m4(vint64m4_t op1,vint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m4(op1,op2,carryin,vl); +} + + +vint64m8_t test___riscv_vadc_vvm_i64m8(vint64m8_t op1,vint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m8(op1,op2,carryin,vl); +} + + +vuint8mf8_t test___riscv_vadc_vvm_u8mf8(vuint8mf8_t op1,vuint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf8(op1,op2,carryin,vl); +} + + +vuint8mf4_t test___riscv_vadc_vvm_u8mf4(vuint8mf4_t op1,vuint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf4(op1,op2,carryin,vl); +} + + +vuint8mf2_t test___riscv_vadc_vvm_u8mf2(vuint8mf2_t op1,vuint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf2(op1,op2,carryin,vl); +} + + +vuint8m1_t test___riscv_vadc_vvm_u8m1(vuint8m1_t op1,vuint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m1(op1,op2,carryin,vl); +} + + +vuint8m2_t test___riscv_vadc_vvm_u8m2(vuint8m2_t op1,vuint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m2(op1,op2,carryin,vl); +} + + +vuint8m4_t test___riscv_vadc_vvm_u8m4(vuint8m4_t op1,vuint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m4(op1,op2,carryin,vl); +} + + +vuint8m8_t test___riscv_vadc_vvm_u8m8(vuint8m8_t op1,vuint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m8(op1,op2,carryin,vl); +} + + +vuint16mf4_t test___riscv_vadc_vvm_u16mf4(vuint16mf4_t op1,vuint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf4(op1,op2,carryin,vl); +} + + +vuint16mf2_t test___riscv_vadc_vvm_u16mf2(vuint16mf2_t op1,vuint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf2(op1,op2,carryin,vl); +} + + +vuint16m1_t test___riscv_vadc_vvm_u16m1(vuint16m1_t op1,vuint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m1(op1,op2,carryin,vl); +} + + +vuint16m2_t test___riscv_vadc_vvm_u16m2(vuint16m2_t op1,vuint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m2(op1,op2,carryin,vl); +} + + +vuint16m4_t test___riscv_vadc_vvm_u16m4(vuint16m4_t op1,vuint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m4(op1,op2,carryin,vl); +} + + +vuint16m8_t test___riscv_vadc_vvm_u16m8(vuint16m8_t op1,vuint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m8(op1,op2,carryin,vl); +} + + +vuint32mf2_t test___riscv_vadc_vvm_u32mf2(vuint32mf2_t op1,vuint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32mf2(op1,op2,carryin,vl); +} + + +vuint32m1_t test___riscv_vadc_vvm_u32m1(vuint32m1_t op1,vuint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m1(op1,op2,carryin,vl); +} + + +vuint32m2_t test___riscv_vadc_vvm_u32m2(vuint32m2_t op1,vuint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m2(op1,op2,carryin,vl); +} + + +vuint32m4_t test___riscv_vadc_vvm_u32m4(vuint32m4_t op1,vuint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m4(op1,op2,carryin,vl); +} + + +vuint32m8_t test___riscv_vadc_vvm_u32m8(vuint32m8_t op1,vuint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m8(op1,op2,carryin,vl); +} + + +vuint64m1_t test___riscv_vadc_vvm_u64m1(vuint64m1_t op1,vuint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m1(op1,op2,carryin,vl); +} + + +vuint64m2_t 
test___riscv_vadc_vvm_u64m2(vuint64m2_t op1,vuint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m2(op1,op2,carryin,vl); +} + + +vuint64m4_t test___riscv_vadc_vvm_u64m4(vuint64m4_t op1,vuint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m4(op1,op2,carryin,vl); +} + + +vuint64m8_t test___riscv_vadc_vvm_u64m8(vuint64m8_t op1,vuint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m8(op1,op2,carryin,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-2.c new file mode 100644 index 00000000000..90e36172280 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vvm_i8mf8(vint8mf8_t op1,vint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf8(op1,op2,carryin,31); +} + + +vint8mf4_t test___riscv_vadc_vvm_i8mf4(vint8mf4_t op1,vint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf4(op1,op2,carryin,31); +} + + +vint8mf2_t test___riscv_vadc_vvm_i8mf2(vint8mf2_t op1,vint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf2(op1,op2,carryin,31); +} + + +vint8m1_t test___riscv_vadc_vvm_i8m1(vint8m1_t op1,vint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m1(op1,op2,carryin,31); +} + + +vint8m2_t test___riscv_vadc_vvm_i8m2(vint8m2_t op1,vint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m2(op1,op2,carryin,31); +} + + +vint8m4_t test___riscv_vadc_vvm_i8m4(vint8m4_t op1,vint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m4(op1,op2,carryin,31); +} + + +vint8m8_t test___riscv_vadc_vvm_i8m8(vint8m8_t op1,vint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m8(op1,op2,carryin,31); +} + + +vint16mf4_t test___riscv_vadc_vvm_i16mf4(vint16mf4_t op1,vint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf4(op1,op2,carryin,31); +} + + +vint16mf2_t test___riscv_vadc_vvm_i16mf2(vint16mf2_t op1,vint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf2(op1,op2,carryin,31); +} + + +vint16m1_t test___riscv_vadc_vvm_i16m1(vint16m1_t op1,vint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m1(op1,op2,carryin,31); +} + + +vint16m2_t test___riscv_vadc_vvm_i16m2(vint16m2_t op1,vint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m2(op1,op2,carryin,31); +} + + +vint16m4_t test___riscv_vadc_vvm_i16m4(vint16m4_t op1,vint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m4(op1,op2,carryin,31); +} + + +vint16m8_t test___riscv_vadc_vvm_i16m8(vint16m8_t op1,vint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m8(op1,op2,carryin,31); +} + + +vint32mf2_t test___riscv_vadc_vvm_i32mf2(vint32mf2_t op1,vint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32mf2(op1,op2,carryin,31); +} + + +vint32m1_t test___riscv_vadc_vvm_i32m1(vint32m1_t op1,vint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m1(op1,op2,carryin,31); +} + + +vint32m2_t test___riscv_vadc_vvm_i32m2(vint32m2_t op1,vint32m2_t op2,vbool16_t carryin,size_t vl) +{ + 
return __riscv_vadc_vvm_i32m2(op1,op2,carryin,31); +} + + +vint32m4_t test___riscv_vadc_vvm_i32m4(vint32m4_t op1,vint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m4(op1,op2,carryin,31); +} + + +vint32m8_t test___riscv_vadc_vvm_i32m8(vint32m8_t op1,vint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m8(op1,op2,carryin,31); +} + + +vint64m1_t test___riscv_vadc_vvm_i64m1(vint64m1_t op1,vint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m1(op1,op2,carryin,31); +} + + +vint64m2_t test___riscv_vadc_vvm_i64m2(vint64m2_t op1,vint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m2(op1,op2,carryin,31); +} + + +vint64m4_t test___riscv_vadc_vvm_i64m4(vint64m4_t op1,vint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m4(op1,op2,carryin,31); +} + + +vint64m8_t test___riscv_vadc_vvm_i64m8(vint64m8_t op1,vint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m8(op1,op2,carryin,31); +} + + +vuint8mf8_t test___riscv_vadc_vvm_u8mf8(vuint8mf8_t op1,vuint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf8(op1,op2,carryin,31); +} + + +vuint8mf4_t test___riscv_vadc_vvm_u8mf4(vuint8mf4_t op1,vuint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf4(op1,op2,carryin,31); +} + + +vuint8mf2_t test___riscv_vadc_vvm_u8mf2(vuint8mf2_t op1,vuint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf2(op1,op2,carryin,31); +} + + +vuint8m1_t test___riscv_vadc_vvm_u8m1(vuint8m1_t op1,vuint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m1(op1,op2,carryin,31); +} + + +vuint8m2_t test___riscv_vadc_vvm_u8m2(vuint8m2_t op1,vuint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m2(op1,op2,carryin,31); +} + + +vuint8m4_t test___riscv_vadc_vvm_u8m4(vuint8m4_t op1,vuint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m4(op1,op2,carryin,31); +} + + +vuint8m8_t test___riscv_vadc_vvm_u8m8(vuint8m8_t op1,vuint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m8(op1,op2,carryin,31); +} + + +vuint16mf4_t test___riscv_vadc_vvm_u16mf4(vuint16mf4_t op1,vuint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf4(op1,op2,carryin,31); +} + + +vuint16mf2_t test___riscv_vadc_vvm_u16mf2(vuint16mf2_t op1,vuint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf2(op1,op2,carryin,31); +} + + +vuint16m1_t test___riscv_vadc_vvm_u16m1(vuint16m1_t op1,vuint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m1(op1,op2,carryin,31); +} + + +vuint16m2_t test___riscv_vadc_vvm_u16m2(vuint16m2_t op1,vuint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m2(op1,op2,carryin,31); +} + + +vuint16m4_t test___riscv_vadc_vvm_u16m4(vuint16m4_t op1,vuint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m4(op1,op2,carryin,31); +} + + +vuint16m8_t test___riscv_vadc_vvm_u16m8(vuint16m8_t op1,vuint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m8(op1,op2,carryin,31); +} + + +vuint32mf2_t test___riscv_vadc_vvm_u32mf2(vuint32mf2_t op1,vuint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32mf2(op1,op2,carryin,31); +} + + +vuint32m1_t test___riscv_vadc_vvm_u32m1(vuint32m1_t op1,vuint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m1(op1,op2,carryin,31); +} + + +vuint32m2_t test___riscv_vadc_vvm_u32m2(vuint32m2_t 
op1,vuint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m2(op1,op2,carryin,31); +} + + +vuint32m4_t test___riscv_vadc_vvm_u32m4(vuint32m4_t op1,vuint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m4(op1,op2,carryin,31); +} + + +vuint32m8_t test___riscv_vadc_vvm_u32m8(vuint32m8_t op1,vuint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m8(op1,op2,carryin,31); +} + + +vuint64m1_t test___riscv_vadc_vvm_u64m1(vuint64m1_t op1,vuint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m1(op1,op2,carryin,31); +} + + +vuint64m2_t test___riscv_vadc_vvm_u64m2(vuint64m2_t op1,vuint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m2(op1,op2,carryin,31); +} + + +vuint64m4_t test___riscv_vadc_vvm_u64m4(vuint64m4_t op1,vuint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m4(op1,op2,carryin,31); +} + + +vuint64m8_t test___riscv_vadc_vvm_u64m8(vuint64m8_t op1,vuint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m8(op1,op2,carryin,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-3.c new file mode 100644 index 00000000000..c86399e4208 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vvm_i8mf8(vint8mf8_t op1,vint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf8(op1,op2,carryin,32); +} + + +vint8mf4_t test___riscv_vadc_vvm_i8mf4(vint8mf4_t op1,vint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf4(op1,op2,carryin,32); +} + + +vint8mf2_t test___riscv_vadc_vvm_i8mf2(vint8mf2_t op1,vint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf2(op1,op2,carryin,32); +} + + +vint8m1_t test___riscv_vadc_vvm_i8m1(vint8m1_t op1,vint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m1(op1,op2,carryin,32); +} + + +vint8m2_t test___riscv_vadc_vvm_i8m2(vint8m2_t op1,vint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m2(op1,op2,carryin,32); +} + + +vint8m4_t test___riscv_vadc_vvm_i8m4(vint8m4_t op1,vint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m4(op1,op2,carryin,32); +} + + +vint8m8_t test___riscv_vadc_vvm_i8m8(vint8m8_t op1,vint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m8(op1,op2,carryin,32); +} + + +vint16mf4_t test___riscv_vadc_vvm_i16mf4(vint16mf4_t op1,vint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf4(op1,op2,carryin,32); +} + + +vint16mf2_t test___riscv_vadc_vvm_i16mf2(vint16mf2_t op1,vint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf2(op1,op2,carryin,32); +} + + +vint16m1_t test___riscv_vadc_vvm_i16m1(vint16m1_t op1,vint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m1(op1,op2,carryin,32); +} + + +vint16m2_t test___riscv_vadc_vvm_i16m2(vint16m2_t op1,vint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m2(op1,op2,carryin,32); +} + + +vint16m4_t test___riscv_vadc_vvm_i16m4(vint16m4_t op1,vint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m4(op1,op2,carryin,32); +} + + +vint16m8_t test___riscv_vadc_vvm_i16m8(vint16m8_t op1,vint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return 
__riscv_vadc_vvm_i16m8(op1,op2,carryin,32); +} + + +vint32mf2_t test___riscv_vadc_vvm_i32mf2(vint32mf2_t op1,vint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32mf2(op1,op2,carryin,32); +} + + +vint32m1_t test___riscv_vadc_vvm_i32m1(vint32m1_t op1,vint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m1(op1,op2,carryin,32); +} + + +vint32m2_t test___riscv_vadc_vvm_i32m2(vint32m2_t op1,vint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m2(op1,op2,carryin,32); +} + + +vint32m4_t test___riscv_vadc_vvm_i32m4(vint32m4_t op1,vint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m4(op1,op2,carryin,32); +} + + +vint32m8_t test___riscv_vadc_vvm_i32m8(vint32m8_t op1,vint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m8(op1,op2,carryin,32); +} + + +vint64m1_t test___riscv_vadc_vvm_i64m1(vint64m1_t op1,vint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m1(op1,op2,carryin,32); +} + + +vint64m2_t test___riscv_vadc_vvm_i64m2(vint64m2_t op1,vint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m2(op1,op2,carryin,32); +} + + +vint64m4_t test___riscv_vadc_vvm_i64m4(vint64m4_t op1,vint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m4(op1,op2,carryin,32); +} + + +vint64m8_t test___riscv_vadc_vvm_i64m8(vint64m8_t op1,vint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m8(op1,op2,carryin,32); +} + + +vuint8mf8_t test___riscv_vadc_vvm_u8mf8(vuint8mf8_t op1,vuint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf8(op1,op2,carryin,32); +} + + +vuint8mf4_t test___riscv_vadc_vvm_u8mf4(vuint8mf4_t op1,vuint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf4(op1,op2,carryin,32); +} + + +vuint8mf2_t test___riscv_vadc_vvm_u8mf2(vuint8mf2_t op1,vuint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf2(op1,op2,carryin,32); +} + + +vuint8m1_t test___riscv_vadc_vvm_u8m1(vuint8m1_t op1,vuint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m1(op1,op2,carryin,32); +} + + +vuint8m2_t test___riscv_vadc_vvm_u8m2(vuint8m2_t op1,vuint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m2(op1,op2,carryin,32); +} + + +vuint8m4_t test___riscv_vadc_vvm_u8m4(vuint8m4_t op1,vuint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m4(op1,op2,carryin,32); +} + + +vuint8m8_t test___riscv_vadc_vvm_u8m8(vuint8m8_t op1,vuint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m8(op1,op2,carryin,32); +} + + +vuint16mf4_t test___riscv_vadc_vvm_u16mf4(vuint16mf4_t op1,vuint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf4(op1,op2,carryin,32); +} + + +vuint16mf2_t test___riscv_vadc_vvm_u16mf2(vuint16mf2_t op1,vuint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf2(op1,op2,carryin,32); +} + + +vuint16m1_t test___riscv_vadc_vvm_u16m1(vuint16m1_t op1,vuint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m1(op1,op2,carryin,32); +} + + +vuint16m2_t test___riscv_vadc_vvm_u16m2(vuint16m2_t op1,vuint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m2(op1,op2,carryin,32); +} + + +vuint16m4_t test___riscv_vadc_vvm_u16m4(vuint16m4_t op1,vuint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m4(op1,op2,carryin,32); +} + + +vuint16m8_t test___riscv_vadc_vvm_u16m8(vuint16m8_t op1,vuint16m8_t 
op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m8(op1,op2,carryin,32); +} + + +vuint32mf2_t test___riscv_vadc_vvm_u32mf2(vuint32mf2_t op1,vuint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32mf2(op1,op2,carryin,32); +} + + +vuint32m1_t test___riscv_vadc_vvm_u32m1(vuint32m1_t op1,vuint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m1(op1,op2,carryin,32); +} + + +vuint32m2_t test___riscv_vadc_vvm_u32m2(vuint32m2_t op1,vuint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m2(op1,op2,carryin,32); +} + + +vuint32m4_t test___riscv_vadc_vvm_u32m4(vuint32m4_t op1,vuint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m4(op1,op2,carryin,32); +} + + +vuint32m8_t test___riscv_vadc_vvm_u32m8(vuint32m8_t op1,vuint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m8(op1,op2,carryin,32); +} + + +vuint64m1_t test___riscv_vadc_vvm_u64m1(vuint64m1_t op1,vuint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m1(op1,op2,carryin,32); +} + + +vuint64m2_t test___riscv_vadc_vvm_u64m2(vuint64m2_t op1,vuint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m2(op1,op2,carryin,32); +} + + +vuint64m4_t test___riscv_vadc_vvm_u64m4(vuint64m4_t op1,vuint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m4(op1,op2,carryin,32); +} + + +vuint64m8_t test___riscv_vadc_vvm_u64m8(vuint64m8_t op1,vuint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m8(op1,op2,carryin,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-1.c new file mode 100644 index 00000000000..efca6b31c0e --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vvm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,vint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8mf4_t test___riscv_vadc_vvm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,vint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8mf2_t test___riscv_vadc_vvm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,vint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m1_t test___riscv_vadc_vvm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,vint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m2_t test___riscv_vadc_vvm_i8m2_tu(vint8m2_t maskedoff,vint8m2_t op1,vint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m4_t test___riscv_vadc_vvm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,vint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m8_t test___riscv_vadc_vvm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,vint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16mf4_t test___riscv_vadc_vvm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,vint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return 
__riscv_vadc_vvm_i16mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16mf2_t test___riscv_vadc_vvm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,vint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m1_t test___riscv_vadc_vvm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,vint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m2_t test___riscv_vadc_vvm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,vint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m4_t test___riscv_vadc_vvm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,vint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m8_t test___riscv_vadc_vvm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,vint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32mf2_t test___riscv_vadc_vvm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,vint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m1_t test___riscv_vadc_vvm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,vint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m2_t test___riscv_vadc_vvm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,vint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m4_t test___riscv_vadc_vvm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,vint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m8_t test___riscv_vadc_vvm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,vint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m1_t test___riscv_vadc_vvm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,vint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m2_t test___riscv_vadc_vvm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,vint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m4_t test___riscv_vadc_vvm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,vint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m8_t test___riscv_vadc_vvm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,vint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf8_t test___riscv_vadc_vvm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,vuint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf4_t test___riscv_vadc_vvm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,vuint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf2_t test___riscv_vadc_vvm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,vuint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m1_t test___riscv_vadc_vvm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,vuint8m1_t 
op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m2_t test___riscv_vadc_vvm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,vuint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m4_t test___riscv_vadc_vvm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,vuint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m8_t test___riscv_vadc_vvm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,vuint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16mf4_t test___riscv_vadc_vvm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,vuint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16mf2_t test___riscv_vadc_vvm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,vuint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m1_t test___riscv_vadc_vvm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,vuint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m2_t test___riscv_vadc_vvm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t op1,vuint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m4_t test___riscv_vadc_vvm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,vuint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m8_t test___riscv_vadc_vvm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,vuint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32mf2_t test___riscv_vadc_vvm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,vuint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m1_t test___riscv_vadc_vvm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,vuint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m2_t test___riscv_vadc_vvm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,vuint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m4_t test___riscv_vadc_vvm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,vuint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m8_t test___riscv_vadc_vvm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,vuint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m1_t test___riscv_vadc_vvm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,vuint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m2_t test___riscv_vadc_vvm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,vuint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m4_t test___riscv_vadc_vvm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,vuint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m4_tu(maskedoff,op1,op2,carryin,vl); +} + + 
+vuint64m8_t test___riscv_vadc_vvm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,vuint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m8_tu(maskedoff,op1,op2,carryin,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 
2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-2.c new file mode 100644 index 00000000000..94db1d88e2f --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vvm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,vint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8mf4_t test___riscv_vadc_vvm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,vint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8mf2_t test___riscv_vadc_vvm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,vint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m1_t test___riscv_vadc_vvm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,vint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m2_t test___riscv_vadc_vvm_i8m2_tu(vint8m2_t maskedoff,vint8m2_t op1,vint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m4_t test___riscv_vadc_vvm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,vint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m8_t test___riscv_vadc_vvm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,vint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16mf4_t test___riscv_vadc_vvm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,vint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16mf2_t test___riscv_vadc_vvm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,vint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m1_t test___riscv_vadc_vvm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,vint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m2_t test___riscv_vadc_vvm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,vint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m4_t test___riscv_vadc_vvm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,vint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m8_t test___riscv_vadc_vvm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,vint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32mf2_t test___riscv_vadc_vvm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,vint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m1_t test___riscv_vadc_vvm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,vint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return 
__riscv_vadc_vvm_i32m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m2_t test___riscv_vadc_vvm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,vint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m4_t test___riscv_vadc_vvm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,vint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m8_t test___riscv_vadc_vvm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,vint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m1_t test___riscv_vadc_vvm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,vint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m2_t test___riscv_vadc_vvm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,vint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m4_t test___riscv_vadc_vvm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,vint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m8_t test___riscv_vadc_vvm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,vint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf8_t test___riscv_vadc_vvm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,vuint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf4_t test___riscv_vadc_vvm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,vuint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf2_t test___riscv_vadc_vvm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,vuint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m1_t test___riscv_vadc_vvm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,vuint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m2_t test___riscv_vadc_vvm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,vuint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m4_t test___riscv_vadc_vvm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,vuint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m8_t test___riscv_vadc_vvm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,vuint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16mf4_t test___riscv_vadc_vvm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,vuint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16mf2_t test___riscv_vadc_vvm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,vuint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m1_t test___riscv_vadc_vvm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,vuint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m2_t test___riscv_vadc_vvm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t 
op1,vuint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m4_t test___riscv_vadc_vvm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,vuint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m8_t test___riscv_vadc_vvm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,vuint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32mf2_t test___riscv_vadc_vvm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,vuint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m1_t test___riscv_vadc_vvm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,vuint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m2_t test___riscv_vadc_vvm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,vuint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m4_t test___riscv_vadc_vvm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,vuint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m8_t test___riscv_vadc_vvm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,vuint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m1_t test___riscv_vadc_vvm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,vuint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m2_t test___riscv_vadc_vvm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,vuint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m4_t test___riscv_vadc_vvm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,vuint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m8_t test___riscv_vadc_vvm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,vuint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m8_tu(maskedoff,op1,op2,carryin,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-3.c new file mode 100644 index 00000000000..ae885200952 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vvm_tu-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vvm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,vint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8mf4_t test___riscv_vadc_vvm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,vint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8mf2_t test___riscv_vadc_vvm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,vint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m1_t test___riscv_vadc_vvm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,vint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m2_t test___riscv_vadc_vvm_i8m2_tu(vint8m2_t 
maskedoff,vint8m2_t op1,vint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m4_t test___riscv_vadc_vvm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,vint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m8_t test___riscv_vadc_vvm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,vint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i8m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16mf4_t test___riscv_vadc_vvm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,vint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16mf2_t test___riscv_vadc_vvm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,vint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m1_t test___riscv_vadc_vvm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,vint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m2_t test___riscv_vadc_vvm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,vint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m4_t test___riscv_vadc_vvm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,vint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m8_t test___riscv_vadc_vvm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,vint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i16m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32mf2_t test___riscv_vadc_vvm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,vint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m1_t test___riscv_vadc_vvm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,vint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m2_t test___riscv_vadc_vvm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,vint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m4_t test___riscv_vadc_vvm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,vint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m8_t test___riscv_vadc_vvm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,vint32m8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i32m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m1_t test___riscv_vadc_vvm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,vint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m2_t test___riscv_vadc_vvm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,vint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m4_t test___riscv_vadc_vvm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,vint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m8_t test___riscv_vadc_vvm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,vint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_i64m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf8_t 
test___riscv_vadc_vvm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,vuint8mf8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf4_t test___riscv_vadc_vvm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,vuint8mf4_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf2_t test___riscv_vadc_vvm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,vuint8mf2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m1_t test___riscv_vadc_vvm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,vuint8m1_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m2_t test___riscv_vadc_vvm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,vuint8m2_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m4_t test___riscv_vadc_vvm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,vuint8m4_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m8_t test___riscv_vadc_vvm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,vuint8m8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u8m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16mf4_t test___riscv_vadc_vvm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,vuint16mf4_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16mf2_t test___riscv_vadc_vvm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,vuint16mf2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m1_t test___riscv_vadc_vvm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,vuint16m1_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m2_t test___riscv_vadc_vvm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t op1,vuint16m2_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m4_t test___riscv_vadc_vvm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,vuint16m4_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m8_t test___riscv_vadc_vvm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,vuint16m8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u16m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32mf2_t test___riscv_vadc_vvm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,vuint32mf2_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m1_t test___riscv_vadc_vvm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,vuint32m1_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m2_t test___riscv_vadc_vvm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,vuint32m2_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m4_t test___riscv_vadc_vvm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,vuint32m4_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u32m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m8_t test___riscv_vadc_vvm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,vuint32m8_t op2,vbool4_t carryin,size_t vl) +{ + 
return __riscv_vadc_vvm_u32m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m1_t test___riscv_vadc_vvm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,vuint64m1_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m2_t test___riscv_vadc_vvm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,vuint64m2_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m4_t test___riscv_vadc_vvm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,vuint64m4_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m8_t test___riscv_vadc_vvm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,vuint64m8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vvm_u64m8_tu(maskedoff,op1,op2,carryin,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-1.c new file mode 100644 index 00000000000..6f1a9ff8c89 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8(vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8(op1,op2,carryin,vl); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4(vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4(op1,op2,carryin,vl); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2(vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2(op1,op2,carryin,vl); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1(vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1(op1,op2,carryin,vl); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2(vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2(op1,op2,carryin,vl); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4(vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4(op1,op2,carryin,vl); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8(vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8(op1,op2,carryin,vl); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4(vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4(op1,op2,carryin,vl); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2(vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2(op1,op2,carryin,vl); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1(vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1(op1,op2,carryin,vl); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2(vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2(op1,op2,carryin,vl); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4(vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4(op1,op2,carryin,vl); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8(vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8(op1,op2,carryin,vl); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2(vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2(op1,op2,carryin,vl); 
+} + + +vint32m1_t test___riscv_vadc_vxm_i32m1(vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1(op1,op2,carryin,vl); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2(vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2(op1,op2,carryin,vl); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4(vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4(op1,op2,carryin,vl); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8(vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8(op1,op2,carryin,vl); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1(vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1(op1,op2,carryin,vl); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2(vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2(op1,op2,carryin,vl); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4(vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4(op1,op2,carryin,vl); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8(vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8(op1,op2,carryin,vl); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8(vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8(op1,op2,carryin,vl); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4(vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4(op1,op2,carryin,vl); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2(vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2(op1,op2,carryin,vl); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1(vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1(op1,op2,carryin,vl); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2(vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2(op1,op2,carryin,vl); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4(vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4(op1,op2,carryin,vl); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8(vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8(op1,op2,carryin,vl); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4(vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4(op1,op2,carryin,vl); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2(vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2(op1,op2,carryin,vl); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1(vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1(op1,op2,carryin,vl); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2(vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2(op1,op2,carryin,vl); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4(vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4(op1,op2,carryin,vl); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8(vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8(op1,op2,carryin,vl); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2(vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2(op1,op2,carryin,vl); +} + + +vuint32m1_t 
test___riscv_vadc_vxm_u32m1(vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1(op1,op2,carryin,vl); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2(vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2(op1,op2,carryin,vl); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4(vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4(op1,op2,carryin,vl); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8(vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8(op1,op2,carryin,vl); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1(vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1(op1,op2,carryin,vl); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2(vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2(op1,op2,carryin,vl); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4(vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4(op1,op2,carryin,vl); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8(vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8(op1,op2,carryin,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-2.c new file mode 100644 index 00000000000..4821e55ef97 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8(vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8(op1,op2,carryin,31); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4(vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4(op1,op2,carryin,31); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2(vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2(op1,op2,carryin,31); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1(vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1(op1,op2,carryin,31); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2(vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2(op1,op2,carryin,31); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4(vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4(op1,op2,carryin,31); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8(vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8(op1,op2,carryin,31); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4(vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4(op1,op2,carryin,31); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2(vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2(op1,op2,carryin,31); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1(vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1(op1,op2,carryin,31); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2(vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2(op1,op2,carryin,31); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4(vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4(op1,op2,carryin,31); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8(vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8(op1,op2,carryin,31); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2(vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2(op1,op2,carryin,31); +} + + +vint32m1_t 
test___riscv_vadc_vxm_i32m1(vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1(op1,op2,carryin,31); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2(vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2(op1,op2,carryin,31); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4(vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4(op1,op2,carryin,31); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8(vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8(op1,op2,carryin,31); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1(vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1(op1,op2,carryin,31); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2(vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2(op1,op2,carryin,31); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4(vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4(op1,op2,carryin,31); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8(vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8(op1,op2,carryin,31); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8(vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8(op1,op2,carryin,31); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4(vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4(op1,op2,carryin,31); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2(vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2(op1,op2,carryin,31); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1(vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1(op1,op2,carryin,31); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2(vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2(op1,op2,carryin,31); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4(vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4(op1,op2,carryin,31); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8(vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8(op1,op2,carryin,31); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4(vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4(op1,op2,carryin,31); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2(vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2(op1,op2,carryin,31); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1(vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1(op1,op2,carryin,31); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2(vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2(op1,op2,carryin,31); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4(vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4(op1,op2,carryin,31); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8(vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8(op1,op2,carryin,31); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2(vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2(op1,op2,carryin,31); +} + + +vuint32m1_t 
test___riscv_vadc_vxm_u32m1(vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1(op1,op2,carryin,31); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2(vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2(op1,op2,carryin,31); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4(vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4(op1,op2,carryin,31); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8(vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8(op1,op2,carryin,31); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1(vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1(op1,op2,carryin,31); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2(vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2(op1,op2,carryin,31); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4(vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4(op1,op2,carryin,31); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8(vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8(op1,op2,carryin,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-3.c new file mode 100644 index 00000000000..c7650fc8bb2 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8(vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8(op1,op2,carryin,32); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4(vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4(op1,op2,carryin,32); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2(vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2(op1,op2,carryin,32); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1(vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1(op1,op2,carryin,32); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2(vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2(op1,op2,carryin,32); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4(vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4(op1,op2,carryin,32); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8(vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8(op1,op2,carryin,32); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4(vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4(op1,op2,carryin,32); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2(vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2(op1,op2,carryin,32); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1(vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1(op1,op2,carryin,32); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2(vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2(op1,op2,carryin,32); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4(vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4(op1,op2,carryin,32); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8(vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8(op1,op2,carryin,32); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2(vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2(op1,op2,carryin,32); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1(vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1(op1,op2,carryin,32); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2(vint32m2_t 
op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2(op1,op2,carryin,32); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4(vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4(op1,op2,carryin,32); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8(vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8(op1,op2,carryin,32); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1(vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1(op1,op2,carryin,32); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2(vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2(op1,op2,carryin,32); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4(vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4(op1,op2,carryin,32); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8(vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8(op1,op2,carryin,32); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8(vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8(op1,op2,carryin,32); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4(vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4(op1,op2,carryin,32); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2(vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2(op1,op2,carryin,32); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1(vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1(op1,op2,carryin,32); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2(vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2(op1,op2,carryin,32); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4(vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4(op1,op2,carryin,32); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8(vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8(op1,op2,carryin,32); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4(vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4(op1,op2,carryin,32); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2(vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2(op1,op2,carryin,32); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1(vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1(op1,op2,carryin,32); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2(vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2(op1,op2,carryin,32); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4(vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4(op1,op2,carryin,32); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8(vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8(op1,op2,carryin,32); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2(vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2(op1,op2,carryin,32); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1(vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1(op1,op2,carryin,32); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2(vuint32m2_t op1,uint32_t op2,vbool16_t 
carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2(op1,op2,carryin,32); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4(vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4(op1,op2,carryin,32); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8(vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8(op1,op2,carryin,32); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1(vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1(op1,op2,carryin,32); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2(vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2(op1,op2,carryin,32); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4(vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4(op1,op2,carryin,32); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8(vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8(op1,op2,carryin,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-1.c new file mode 100644 index 00000000000..15662f35f90 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8(vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8(op1,op2,carryin,vl); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4(vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4(op1,op2,carryin,vl); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2(vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2(op1,op2,carryin,vl); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1(vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1(op1,op2,carryin,vl); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2(vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2(op1,op2,carryin,vl); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4(vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4(op1,op2,carryin,vl); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8(vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8(op1,op2,carryin,vl); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4(vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4(op1,op2,carryin,vl); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2(vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2(op1,op2,carryin,vl); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1(vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1(op1,op2,carryin,vl); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2(vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2(op1,op2,carryin,vl); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4(vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4(op1,op2,carryin,vl); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8(vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8(op1,op2,carryin,vl); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2(vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2(op1,op2,carryin,vl); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1(vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1(op1,op2,carryin,vl); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2(vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2(op1,op2,carryin,vl); +} + + +vint32m4_t 
test___riscv_vadc_vxm_i32m4(vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4(op1,op2,carryin,vl); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8(vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8(op1,op2,carryin,vl); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1(vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1(op1,op2,carryin,vl); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2(vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2(op1,op2,carryin,vl); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4(vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4(op1,op2,carryin,vl); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8(vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8(op1,op2,carryin,vl); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8(vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8(op1,op2,carryin,vl); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4(vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4(op1,op2,carryin,vl); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2(vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2(op1,op2,carryin,vl); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1(vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1(op1,op2,carryin,vl); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2(vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2(op1,op2,carryin,vl); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4(vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4(op1,op2,carryin,vl); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8(vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8(op1,op2,carryin,vl); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4(vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4(op1,op2,carryin,vl); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2(vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2(op1,op2,carryin,vl); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1(vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1(op1,op2,carryin,vl); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2(vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2(op1,op2,carryin,vl); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4(vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4(op1,op2,carryin,vl); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8(vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8(op1,op2,carryin,vl); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2(vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2(op1,op2,carryin,vl); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1(vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1(op1,op2,carryin,vl); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2(vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2(op1,op2,carryin,vl); +} + + +vuint32m4_t 
test___riscv_vadc_vxm_u32m4(vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4(op1,op2,carryin,vl); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8(vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8(op1,op2,carryin,vl); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1(vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1(op1,op2,carryin,vl); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2(vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2(op1,op2,carryin,vl); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4(vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4(op1,op2,carryin,vl); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8(vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8(op1,op2,carryin,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-2.c new file mode 100644 index 00000000000..4816316519a --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8(vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8(op1,op2,carryin,31); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4(vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4(op1,op2,carryin,31); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2(vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2(op1,op2,carryin,31); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1(vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1(op1,op2,carryin,31); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2(vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2(op1,op2,carryin,31); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4(vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4(op1,op2,carryin,31); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8(vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8(op1,op2,carryin,31); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4(vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4(op1,op2,carryin,31); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2(vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2(op1,op2,carryin,31); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1(vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1(op1,op2,carryin,31); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2(vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2(op1,op2,carryin,31); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4(vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4(op1,op2,carryin,31); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8(vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_i16m8(op1,op2,carryin,31); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2(vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2(op1,op2,carryin,31); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1(vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1(op1,op2,carryin,31); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2(vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2(op1,op2,carryin,31); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4(vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4(op1,op2,carryin,31); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8(vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8(op1,op2,carryin,31); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1(vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1(op1,op2,carryin,31); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2(vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2(op1,op2,carryin,31); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4(vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4(op1,op2,carryin,31); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8(vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8(op1,op2,carryin,31); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8(vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8(op1,op2,carryin,31); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4(vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4(op1,op2,carryin,31); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2(vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2(op1,op2,carryin,31); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1(vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1(op1,op2,carryin,31); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2(vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2(op1,op2,carryin,31); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4(vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4(op1,op2,carryin,31); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8(vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8(op1,op2,carryin,31); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4(vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4(op1,op2,carryin,31); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2(vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2(op1,op2,carryin,31); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1(vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1(op1,op2,carryin,31); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2(vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2(op1,op2,carryin,31); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4(vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4(op1,op2,carryin,31); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8(vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_u16m8(op1,op2,carryin,31); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2(vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2(op1,op2,carryin,31); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1(vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1(op1,op2,carryin,31); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2(vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2(op1,op2,carryin,31); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4(vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4(op1,op2,carryin,31); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8(vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8(op1,op2,carryin,31); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1(vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1(op1,op2,carryin,31); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2(vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2(op1,op2,carryin,31); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4(vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4(op1,op2,carryin,31); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8(vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8(op1,op2,carryin,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-3.c new file mode 100644 index 00000000000..3aa0d4cbcbf --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8(vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8(op1,op2,carryin,32); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4(vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4(op1,op2,carryin,32); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2(vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2(op1,op2,carryin,32); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1(vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1(op1,op2,carryin,32); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2(vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2(op1,op2,carryin,32); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4(vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4(op1,op2,carryin,32); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8(vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8(op1,op2,carryin,32); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4(vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4(op1,op2,carryin,32); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2(vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2(op1,op2,carryin,32); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1(vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1(op1,op2,carryin,32); +} + + 
+vint16m2_t test___riscv_vadc_vxm_i16m2(vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2(op1,op2,carryin,32); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4(vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4(op1,op2,carryin,32); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8(vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8(op1,op2,carryin,32); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2(vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2(op1,op2,carryin,32); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1(vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1(op1,op2,carryin,32); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2(vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2(op1,op2,carryin,32); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4(vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4(op1,op2,carryin,32); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8(vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8(op1,op2,carryin,32); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1(vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1(op1,op2,carryin,32); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2(vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2(op1,op2,carryin,32); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4(vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4(op1,op2,carryin,32); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8(vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8(op1,op2,carryin,32); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8(vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8(op1,op2,carryin,32); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4(vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4(op1,op2,carryin,32); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2(vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2(op1,op2,carryin,32); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1(vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1(op1,op2,carryin,32); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2(vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2(op1,op2,carryin,32); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4(vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4(op1,op2,carryin,32); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8(vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8(op1,op2,carryin,32); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4(vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4(op1,op2,carryin,32); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2(vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2(op1,op2,carryin,32); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1(vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1(op1,op2,carryin,32); +} + + +vuint16m2_t 
test___riscv_vadc_vxm_u16m2(vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2(op1,op2,carryin,32); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4(vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4(op1,op2,carryin,32); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8(vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8(op1,op2,carryin,32); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2(vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2(op1,op2,carryin,32); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1(vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1(op1,op2,carryin,32); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2(vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2(op1,op2,carryin,32); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4(vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4(op1,op2,carryin,32); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8(vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8(op1,op2,carryin,32); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1(vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1(op1,op2,carryin,32); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2(vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2(op1,op2,carryin,32); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4(vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4(op1,op2,carryin,32); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8(vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8(op1,op2,carryin,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-1.c new file mode 100644 index 00000000000..fd4537d373a --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2_tu(vint8m2_t maskedoff,vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m4_t 
test___riscv_vadc_vxm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf4_t 
test___riscv_vadc_vxm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_u64m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8_tu(maskedoff,op1,op2,carryin,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-2.c new file mode 100644 index 00000000000..e4a0a03aa10 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2_tu(vint8m2_t maskedoff,vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_i32m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_u16m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8_tu(maskedoff,op1,op2,carryin,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-3.c new file mode 100644 index 00000000000..208abf6d892 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2_tu(vint8m2_t maskedoff,vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_i16mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_u8m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,uint64_t 
op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8_tu(maskedoff,op1,op2,carryin,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vadc\.vvm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-1.c new file mode 100644 index 00000000000..b28647f2b4e --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include 
"riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2_tu(vint8m2_t maskedoff,vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m1_t 
test___riscv_vadc_vxm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2_tu(maskedoff,op1,op2,carryin,vl); +} + + 
+vuint32m1_t test___riscv_vadc_vxm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4_tu(maskedoff,op1,op2,carryin,vl); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8_tu(maskedoff,op1,op2,carryin,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { 
dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-2.c new file mode 100644 index 00000000000..088061a7b60 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2_tu(vint8m2_t maskedoff,vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_i16mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m4_t test___riscv_vadc_vxm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return 
__riscv_vadc_vxm_u8m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint16m8_t test___riscv_vadc_vxm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4_tu(maskedoff,op1,op2,carryin,31); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,uint64_t 
op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8_tu(maskedoff,op1,op2,carryin,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ diff --git 
a/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-3.c new file mode 100644 index 00000000000..911707c261a --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vadc_vxm_tu_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vadc_vxm_i8mf8_tu(vint8mf8_t maskedoff,vint8mf8_t op1,int8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8mf4_t test___riscv_vadc_vxm_i8mf4_tu(vint8mf4_t maskedoff,vint8mf4_t op1,int8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8mf2_t test___riscv_vadc_vxm_i8mf2_tu(vint8mf2_t maskedoff,vint8mf2_t op1,int8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m1_t test___riscv_vadc_vxm_i8m1_tu(vint8m1_t maskedoff,vint8m1_t op1,int8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m2_t test___riscv_vadc_vxm_i8m2_tu(vint8m2_t maskedoff,vint8m2_t op1,int8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m4_t test___riscv_vadc_vxm_i8m4_tu(vint8m4_t maskedoff,vint8m4_t op1,int8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint8m8_t test___riscv_vadc_vxm_i8m8_tu(vint8m8_t maskedoff,vint8m8_t op1,int8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i8m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16mf4_t test___riscv_vadc_vxm_i16mf4_tu(vint16mf4_t maskedoff,vint16mf4_t op1,int16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16mf2_t test___riscv_vadc_vxm_i16mf2_tu(vint16mf2_t maskedoff,vint16mf2_t op1,int16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m1_t test___riscv_vadc_vxm_i16m1_tu(vint16m1_t maskedoff,vint16m1_t op1,int16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m2_t test___riscv_vadc_vxm_i16m2_tu(vint16m2_t maskedoff,vint16m2_t op1,int16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m4_t test___riscv_vadc_vxm_i16m4_tu(vint16m4_t maskedoff,vint16m4_t op1,int16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint16m8_t test___riscv_vadc_vxm_i16m8_tu(vint16m8_t maskedoff,vint16m8_t op1,int16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i16m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32mf2_t test___riscv_vadc_vxm_i32mf2_tu(vint32mf2_t maskedoff,vint32mf2_t op1,int32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m1_t test___riscv_vadc_vxm_i32m1_tu(vint32m1_t maskedoff,vint32m1_t op1,int32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m2_t test___riscv_vadc_vxm_i32m2_tu(vint32m2_t maskedoff,vint32m2_t op1,int32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m2_tu(maskedoff,op1,op2,carryin,32); +} + + 
+vint32m4_t test___riscv_vadc_vxm_i32m4_tu(vint32m4_t maskedoff,vint32m4_t op1,int32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint32m8_t test___riscv_vadc_vxm_i32m8_tu(vint32m8_t maskedoff,vint32m8_t op1,int32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i32m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m1_t test___riscv_vadc_vxm_i64m1_tu(vint64m1_t maskedoff,vint64m1_t op1,int64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m2_t test___riscv_vadc_vxm_i64m2_tu(vint64m2_t maskedoff,vint64m2_t op1,int64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m4_t test___riscv_vadc_vxm_i64m4_tu(vint64m4_t maskedoff,vint64m4_t op1,int64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vint64m8_t test___riscv_vadc_vxm_i64m8_tu(vint64m8_t maskedoff,vint64m8_t op1,int64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_i64m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf8_t test___riscv_vadc_vxm_u8mf8_tu(vuint8mf8_t maskedoff,vuint8mf8_t op1,uint8_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf4_t test___riscv_vadc_vxm_u8mf4_tu(vuint8mf4_t maskedoff,vuint8mf4_t op1,uint8_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8mf2_t test___riscv_vadc_vxm_u8mf2_tu(vuint8mf2_t maskedoff,vuint8mf2_t op1,uint8_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m1_t test___riscv_vadc_vxm_u8m1_tu(vuint8m1_t maskedoff,vuint8m1_t op1,uint8_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m2_t test___riscv_vadc_vxm_u8m2_tu(vuint8m2_t maskedoff,vuint8m2_t op1,uint8_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m4_t test___riscv_vadc_vxm_u8m4_tu(vuint8m4_t maskedoff,vuint8m4_t op1,uint8_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint8m8_t test___riscv_vadc_vxm_u8m8_tu(vuint8m8_t maskedoff,vuint8m8_t op1,uint8_t op2,vbool1_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u8m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16mf4_t test___riscv_vadc_vxm_u16mf4_tu(vuint16mf4_t maskedoff,vuint16mf4_t op1,uint16_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16mf2_t test___riscv_vadc_vxm_u16mf2_tu(vuint16mf2_t maskedoff,vuint16mf2_t op1,uint16_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m1_t test___riscv_vadc_vxm_u16m1_tu(vuint16m1_t maskedoff,vuint16m1_t op1,uint16_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m2_t test___riscv_vadc_vxm_u16m2_tu(vuint16m2_t maskedoff,vuint16m2_t op1,uint16_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint16m4_t test___riscv_vadc_vxm_u16m4_tu(vuint16m4_t maskedoff,vuint16m4_t op1,uint16_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m4_tu(maskedoff,op1,op2,carryin,32); +} + + 
+vuint16m8_t test___riscv_vadc_vxm_u16m8_tu(vuint16m8_t maskedoff,vuint16m8_t op1,uint16_t op2,vbool2_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u16m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32mf2_t test___riscv_vadc_vxm_u32mf2_tu(vuint32mf2_t maskedoff,vuint32mf2_t op1,uint32_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32mf2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m1_t test___riscv_vadc_vxm_u32m1_tu(vuint32m1_t maskedoff,vuint32m1_t op1,uint32_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m2_t test___riscv_vadc_vxm_u32m2_tu(vuint32m2_t maskedoff,vuint32m2_t op1,uint32_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m4_t test___riscv_vadc_vxm_u32m4_tu(vuint32m4_t maskedoff,vuint32m4_t op1,uint32_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint32m8_t test___riscv_vadc_vxm_u32m8_tu(vuint32m8_t maskedoff,vuint32m8_t op1,uint32_t op2,vbool4_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u32m8_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m1_t test___riscv_vadc_vxm_u64m1_tu(vuint64m1_t maskedoff,vuint64m1_t op1,uint64_t op2,vbool64_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m1_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m2_t test___riscv_vadc_vxm_u64m2_tu(vuint64m2_t maskedoff,vuint64m2_t op1,uint64_t op2,vbool32_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m2_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m4_t test___riscv_vadc_vxm_u64m4_tu(vuint64m4_t maskedoff,vuint64m4_t op1,uint64_t op2,vbool16_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m4_tu(maskedoff,op1,op2,carryin,32); +} + + +vuint64m8_t test___riscv_vadc_vxm_u64m8_tu(vuint64m8_t maskedoff,vuint64m8_t op1,uint64_t op2,vbool8_t carryin,size_t vl) +{ + return __riscv_vadc_vxm_u64m8_tu(maskedoff,op1,op2,carryin,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vadc\.vxm\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+} 2 } } */
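For reference, the AVL handling that the scan-assembler patterns above verify can be reproduced outside the testsuite. The sketch below is illustrative only and not part of the patch (the file name and function names are made up); it shows why a constant AVL of 31 is expected to select the immediate form vsetivli while 32 falls back to the register form vsetvli: vsetivli only encodes a 5-bit unsigned AVL, so 31 is the largest value it can carry, and anything larger must be materialized in a scalar register first.

/* vadc_vl_demo.c -- illustrative sketch; compile with something like
   riscv64-unknown-elf-gcc -march=rv64gcv -mabi=lp64d -O3 -S vadc_vl_demo.c  */
#include "riscv_vector.h"

/* AVL 31 fits the 5-bit immediate of vsetivli, so the expected
   vector configuration is "vsetivli zero,31,e32,m1,tu,m[au]".  */
vint32m1_t demo_const31 (vint32m1_t maskedoff, vint32m1_t op1, int32_t op2,
                         vbool32_t carryin)
{
  return __riscv_vadc_vxm_i32m1_tu (maskedoff, op1, op2, carryin, 31);
}

/* AVL 32 does not fit the immediate encoding, so the constant is
   loaded into a register and "vsetvli zero,<reg>,e32,m1,tu,m[au]"
   is expected instead -- the same pattern the *-3.c tests check.  */
vint32m1_t demo_const32 (vint32m1_t maskedoff, vint32m1_t op1, int32_t op2,
                         vbool32_t carryin)
{
  return __riscv_vadc_vxm_i32m1_tu (maskedoff, op1, op2, carryin, 32);
}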