From patchwork Thu Nov 30 02:38:42 2023
X-Patchwork-Submitter: "juzhe.zhong@rivai.ai"
X-Patchwork-Id: 171704
From: Juzhe-Zhong
To: gcc-patches@gcc.gnu.org
Cc: Juzhe-Zhong
Subject: [Committed] RISC-V: Support highpart overlap for floating-point widen
 instructions
Date: Thu, 30 Nov 2023 10:38:42 +0800
Message-Id: <20231130023842.2332222-1-juzhe.zhong@rivai.ai>
X-Mailer: git-send-email 2.36.3
MIME-Version: 1.0
This patch reuses the approach of the vwcvt/vext.vf2 patches, which have
already been approved; the handling here is identical.  Tested with no
regressions and committed.

	PR target/112431

gcc/ChangeLog:

	* config/riscv/vector.md: Add widening overlap.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pr112431-10.c: New test.
	* gcc.target/riscv/rvv/base/pr112431-11.c: New test.
	* gcc.target/riscv/rvv/base/pr112431-12.c: New test.
	* gcc.target/riscv/rvv/base/pr112431-13.c: New test.
	* gcc.target/riscv/rvv/base/pr112431-14.c: New test.
	* gcc.target/riscv/rvv/base/pr112431-15.c: New test.
	* gcc.target/riscv/rvv/base/pr112431-7.c: New test.
	* gcc.target/riscv/rvv/base/pr112431-8.c: New test.
	* gcc.target/riscv/rvv/base/pr112431-9.c: New test.
---
 gcc/config/riscv/vector.md                  |  78 ++++----
 .../gcc.target/riscv/rvv/base/pr112431-10.c | 104 ++++++++++
 .../gcc.target/riscv/rvv/base/pr112431-11.c |  68 +++++++
 .../gcc.target/riscv/rvv/base/pr112431-12.c |  51 +++++
 .../gcc.target/riscv/rvv/base/pr112431-13.c | 188 ++++++++++++++++++
 .../gcc.target/riscv/rvv/base/pr112431-14.c | 119 +++++++++++
 .../gcc.target/riscv/rvv/base/pr112431-15.c |  86 ++++++++
 .../gcc.target/riscv/rvv/base/pr112431-7.c  | 106 ++++++++++
 .../gcc.target/riscv/rvv/base/pr112431-8.c  |  68 +++++++
 .../gcc.target/riscv/rvv/base/pr112431-9.c  |  51 +++++
 10 files changed, 882 insertions(+), 37 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-10.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-11.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-12.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-13.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-14.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-15.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-7.c
 create mode
100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-9.c

diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 74716c73e98..6b891c11324 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -7622,84 +7622,88 @@
 ;; -------------------------------------------------------------------------------
 (define_insn "@pred_widen_fcvt_x_f"
-  [(set (match_operand:VWCONVERTI 0 "register_operand" "=&vr, &vr")
+  [(set (match_operand:VWCONVERTI 0 "register_operand" "=vr, vr, vr, vr, vr, vr, ?&vr, ?&vr")
     (if_then_else:VWCONVERTI
       (unspec:
-        [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1")
-         (match_operand 4 "vector_length_operand" " rK, rK")
-         (match_operand 5 "const_int_operand" " i, i")
-         (match_operand 6 "const_int_operand" " i, i")
-         (match_operand 7 "const_int_operand" " i, i")
-         (match_operand 8 "const_int_operand" " i, i")
+        [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+         (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+         (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i")
          (reg:SI VL_REGNUM)
          (reg:SI VTYPE_REGNUM)
          (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
       (unspec:VWCONVERTI
-        [(match_operand: 3 "register_operand" " vr, vr")] VFCVTS)
-      (match_operand:VWCONVERTI 2 "vector_merge_operand" " vu, 0")))]
+        [(match_operand: 3 "register_operand" " W21, W21, W42, W42, W84, W84, vr, vr")] VFCVTS)
+      (match_operand:VWCONVERTI 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0, vu, 0")))]
   "TARGET_VECTOR"
   "vfwcvt.x.f.v\t%0,%3%p1"
   [(set_attr "type" "vfwcvtftoi")
   (set_attr "mode" "")
   (set (attr "frm_mode")
-    (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+    (symbol_ref
"riscv_vector::get_frm_mode (operands[8])"))
+  (set_attr "group_overlap" "W21,W21,W42,W42,W84,W84,none,none")])

 (define_insn "@pred_widen_"
-  [(set (match_operand:VWCONVERTI 0 "register_operand" "=&vr, &vr")
+  [(set (match_operand:VWCONVERTI 0 "register_operand" "=vr, vr, vr, vr, vr, vr, ?&vr, ?&vr")
     (if_then_else:VWCONVERTI
       (unspec:
-        [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1")
-         (match_operand 4 "vector_length_operand" " rK, rK")
-         (match_operand 5 "const_int_operand" " i, i")
-         (match_operand 6 "const_int_operand" " i, i")
-         (match_operand 7 "const_int_operand" " i, i")
+        [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+         (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+         (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
          (reg:SI VL_REGNUM)
          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
       (any_fix:VWCONVERTI
-        (match_operand: 3 "register_operand" " vr, vr"))
-      (match_operand:VWCONVERTI 2 "vector_merge_operand" " vu, 0")))]
+        (match_operand: 3 "register_operand" " W21, W21, W42, W42, W84, W84, vr, vr"))
+      (match_operand:VWCONVERTI 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0, vu, 0")))]
   "TARGET_VECTOR"
   "vfwcvt.rtz.x.f.v\t%0,%3%p1"
   [(set_attr "type" "vfwcvtftoi")
-  (set_attr "mode" "")])
+  (set_attr "mode" "")
+  (set_attr "group_overlap" "W21,W21,W42,W42,W84,W84,none,none")])

 (define_insn "@pred_widen_"
-  [(set (match_operand:V_VLSF 0 "register_operand" "=&vr, &vr")
+  [(set (match_operand:V_VLSF 0 "register_operand" "=vr, vr, vr, vr, vr, vr, ?&vr, ?&vr")
     (if_then_else:V_VLSF
       (unspec:
-        [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1")
-         (match_operand 4 "vector_length_operand" " rK, rK")
-         (match_operand 5 "const_int_operand" " i, i")
-         (match_operand 6 "const_int_operand" " i, i")
-         (match_operand 7 "const_int_operand" " i, i")
+
[(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+         (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+         (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
          (reg:SI VL_REGNUM)
          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
       (any_float:V_VLSF
-        (match_operand: 3 "register_operand" " vr, vr"))
-      (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0")))]
+        (match_operand: 3 "register_operand" " W21, W21, W42, W42, W84, W84, vr, vr"))
+      (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0, vu, 0")))]
   "TARGET_VECTOR"
   "vfwcvt.f.x.v\t%0,%3%p1"
   [(set_attr "type" "vfwcvtitof")
-  (set_attr "mode" "")])
+  (set_attr "mode" "")
+  (set_attr "group_overlap" "W21,W21,W42,W42,W84,W84,none,none")])

 (define_insn "@pred_extend"
-  [(set (match_operand:VWEXTF_ZVFHMIN 0 "register_operand" "=&vr, &vr")
+  [(set (match_operand:VWEXTF_ZVFHMIN 0 "register_operand" "=vr, vr, vr, vr, vr, vr, ?&vr, ?&vr")
     (if_then_else:VWEXTF_ZVFHMIN
      (unspec:
-        [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1")
-         (match_operand 4 "vector_length_operand" " rK, rK")
-         (match_operand 5 "const_int_operand" " i, i")
-         (match_operand 6 "const_int_operand" " i, i")
-         (match_operand 7 "const_int_operand" " i, i")
+        [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+         (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+         (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+         (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
          (reg:SI VL_REGNUM)
          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
       (float_extend:VWEXTF_ZVFHMIN
-        (match_operand: 3 "register_operand" " vr, vr"))
-      (match_operand:VWEXTF_ZVFHMIN 2 "vector_merge_operand" " vu, 0")))]
+
(match_operand: 3 "register_operand" " W21, W21, W42, W42, W84, W84, vr, vr"))
+      (match_operand:VWEXTF_ZVFHMIN 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0, vu, 0")))]
   "TARGET_VECTOR"
   "vfwcvt.f.f.v\t%0,%3%p1"
   [(set_attr "type" "vfwcvtftof")
-  (set_attr "mode" "")])
+  (set_attr "mode" "")
+  (set_attr "group_overlap" "W21,W21,W42,W42,W84,W84,none,none")])

 ;; -------------------------------------------------------------------------------
 ;; ---- Predicated floating-point narrow conversions
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-10.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-10.c
new file mode 100644
index 00000000000..5f161b31fa1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-10.c
@@ -0,0 +1,104 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */
+
+#include "riscv_vector.h"
+
+double __attribute__ ((noinline))
+sumation (double sum0, double sum1, double sum2, double sum3, double sum4,
+          double sum5, double sum6, double sum7, double sum8, double sum9,
+          double sum10, double sum11, double sum12, double sum13, double sum14,
+          double sum15)
+{
+  return sum0 + sum1 + sum2 + sum3 + sum4 + sum5 + sum6 + sum7 + sum8 + sum9
+         + sum10 + sum11 + sum12 + sum13 + sum14 + sum15;
+}
+
+double
+foo (char const *buf, size_t len)
+{
+  double sum = 0;
+  size_t vl = __riscv_vsetvlmax_e8m8 ();
+  size_t step = vl * 4;
+  const char *it = buf, *end = buf + len;
+  for (; it + step <= end;)
+    {
+      vint32m1_t v0 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v1 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v2 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v3 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v4 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v5 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v6 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v7 =
__riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v8 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v9 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v10 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v11 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v12 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v13 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v14 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+      vint32m1_t v15 = __riscv_vle32_v_i32m1 ((void *) it, vl);
+      it += vl;
+
+      asm volatile("nop" ::: "memory");
+      vfloat64m2_t vw0 = __riscv_vfwcvt_f_x_v_f64m2 (v0, vl);
+      vfloat64m2_t vw1 = __riscv_vfwcvt_f_x_v_f64m2 (v1, vl);
+      vfloat64m2_t vw2 = __riscv_vfwcvt_f_x_v_f64m2 (v2, vl);
+      vfloat64m2_t vw3 = __riscv_vfwcvt_f_x_v_f64m2 (v3, vl);
+      vfloat64m2_t vw4 = __riscv_vfwcvt_f_x_v_f64m2 (v4, vl);
+      vfloat64m2_t vw5 = __riscv_vfwcvt_f_x_v_f64m2 (v5, vl);
+      vfloat64m2_t vw6 = __riscv_vfwcvt_f_x_v_f64m2 (v6, vl);
+      vfloat64m2_t vw7 = __riscv_vfwcvt_f_x_v_f64m2 (v7, vl);
+      vfloat64m2_t vw8 = __riscv_vfwcvt_f_x_v_f64m2 (v8, vl);
+      vfloat64m2_t vw9 = __riscv_vfwcvt_f_x_v_f64m2 (v9, vl);
+      vfloat64m2_t vw10 = __riscv_vfwcvt_f_x_v_f64m2 (v10, vl);
+      vfloat64m2_t vw11 = __riscv_vfwcvt_f_x_v_f64m2 (v11, vl);
+      vfloat64m2_t vw12 = __riscv_vfwcvt_f_x_v_f64m2 (v12, vl);
+      vfloat64m2_t vw13 = __riscv_vfwcvt_f_x_v_f64m2 (v13, vl);
+      vfloat64m2_t vw14 = __riscv_vfwcvt_f_x_v_f64m2 (v14, vl);
+      vfloat64m2_t vw15 = __riscv_vfwcvt_f_x_v_f64m2 (v15, vl);
+
+      asm volatile("nop" ::: "memory");
+      double sum0 = __riscv_vfmv_f_s_f64m2_f64 (vw0);
+      double sum1 = __riscv_vfmv_f_s_f64m2_f64 (vw1);
+      double sum2 = __riscv_vfmv_f_s_f64m2_f64 (vw2);
+      double sum3 = __riscv_vfmv_f_s_f64m2_f64 (vw3);
+      double sum4 = __riscv_vfmv_f_s_f64m2_f64 (vw4);
+      double sum5 = __riscv_vfmv_f_s_f64m2_f64 (vw5);
+      double sum6 = __riscv_vfmv_f_s_f64m2_f64 (vw6);
+      double sum7 = __riscv_vfmv_f_s_f64m2_f64 (vw7);
+      double sum8 = __riscv_vfmv_f_s_f64m2_f64 (vw8);
+      double sum9 = __riscv_vfmv_f_s_f64m2_f64 (vw9);
+      double sum10 = __riscv_vfmv_f_s_f64m2_f64 (vw10);
+      double sum11 = __riscv_vfmv_f_s_f64m2_f64 (vw11);
+      double sum12 = __riscv_vfmv_f_s_f64m2_f64 (vw12);
+      double sum13 = __riscv_vfmv_f_s_f64m2_f64 (vw13);
+      double sum14 = __riscv_vfmv_f_s_f64m2_f64 (vw14);
+      double sum15 = __riscv_vfmv_f_s_f64m2_f64 (vw15);
+
+      sum += sumation (sum0, sum1, sum2, sum3, sum4, sum5, sum6, sum7, sum8,
+                       sum9, sum10, sum11, sum12, sum13, sum14, sum15);
+    }
+  return sum;
+}
+
+/* { dg-final { scan-assembler-not {vmv1r} } } */
+/* { dg-final { scan-assembler-not {vmv2r} } } */
+/* { dg-final { scan-assembler-not {vmv4r} } } */
+/* { dg-final { scan-assembler-not {vmv8r} } } */
+/* { dg-final { scan-assembler-not {csrr} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-11.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-11.c
new file mode 100644
index 00000000000..82827d14e34
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-11.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */
+
+#include "riscv_vector.h"
+
+double __attribute__ ((noinline))
+sumation (double sum0, double sum1, double sum2, double sum3, double sum4,
+          double sum5, double sum6, double sum7)
+{
+  return sum0 + sum1 + sum2 + sum3 + sum4 + sum5 + sum6 + sum7;
+}
+
+double
+foo (char const *buf, size_t len)
+{
+  double sum = 0;
+  size_t vl = __riscv_vsetvlmax_e8m8 ();
+  size_t step = vl * 4;
+  const char *it = buf, *end = buf + len;
+  for (; it + step <= end;)
+    {
+      vint32m2_t v0 = __riscv_vle32_v_i32m2 ((void *) it, vl);
+      it += vl;
+      vint32m2_t v1 = __riscv_vle32_v_i32m2 ((void *) it, vl);
+      it += vl;
+      vint32m2_t v2 = __riscv_vle32_v_i32m2 ((void *) it, vl);
+      it += vl;
+      vint32m2_t v3 = __riscv_vle32_v_i32m2 ((void *) it, vl);
+      it += vl;
+      vint32m2_t v4 = __riscv_vle32_v_i32m2 ((void
*) it, vl);
+      it += vl;
+      vint32m2_t v5 = __riscv_vle32_v_i32m2 ((void *) it, vl);
+      it += vl;
+      vint32m2_t v6 = __riscv_vle32_v_i32m2 ((void *) it, vl);
+      it += vl;
+      vint32m2_t v7 = __riscv_vle32_v_i32m2 ((void *) it, vl);
+      it += vl;
+
+      asm volatile("nop" ::: "memory");
+      vfloat64m4_t vw0 = __riscv_vfwcvt_f_x_v_f64m4 (v0, vl);
+      vfloat64m4_t vw1 = __riscv_vfwcvt_f_x_v_f64m4 (v1, vl);
+      vfloat64m4_t vw2 = __riscv_vfwcvt_f_x_v_f64m4 (v2, vl);
+      vfloat64m4_t vw3 = __riscv_vfwcvt_f_x_v_f64m4 (v3, vl);
+      vfloat64m4_t vw4 = __riscv_vfwcvt_f_x_v_f64m4 (v4, vl);
+      vfloat64m4_t vw5 = __riscv_vfwcvt_f_x_v_f64m4 (v5, vl);
+      vfloat64m4_t vw6 = __riscv_vfwcvt_f_x_v_f64m4 (v6, vl);
+      vfloat64m4_t vw7 = __riscv_vfwcvt_f_x_v_f64m4 (v7, vl);
+
+      asm volatile("nop" ::: "memory");
+      double sum0 = __riscv_vfmv_f_s_f64m4_f64 (vw0);
+      double sum1 = __riscv_vfmv_f_s_f64m4_f64 (vw1);
+      double sum2 = __riscv_vfmv_f_s_f64m4_f64 (vw2);
+      double sum3 = __riscv_vfmv_f_s_f64m4_f64 (vw3);
+      double sum4 = __riscv_vfmv_f_s_f64m4_f64 (vw4);
+      double sum5 = __riscv_vfmv_f_s_f64m4_f64 (vw5);
+      double sum6 = __riscv_vfmv_f_s_f64m4_f64 (vw6);
+      double sum7 = __riscv_vfmv_f_s_f64m4_f64 (vw7);
+
+      sum += sumation (sum0, sum1, sum2, sum3, sum4, sum5, sum6, sum7);
+    }
+  return sum;
+}
+
+/* { dg-final { scan-assembler-not {vmv1r} } } */
+/* { dg-final { scan-assembler-not {vmv2r} } } */
+/* { dg-final { scan-assembler-not {vmv4r} } } */
+/* { dg-final { scan-assembler-not {vmv8r} } } */
+/* { dg-final { scan-assembler-not {csrr} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-12.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-12.c
new file mode 100644
index 00000000000..c4ae60755ea
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-12.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */
+
+#include "riscv_vector.h"
+
+double __attribute__ ((noinline))
+sumation (double sum0, double sum1, double sum2, double sum3)
+{
+  return sum0 + sum1 + sum2 + sum3;
+}
+
+double
+foo (char const *buf, size_t len)
+{
+  double sum = 0;
+  size_t vl = __riscv_vsetvlmax_e8m8 ();
+  size_t step = vl * 4;
+  const char *it = buf, *end = buf + len;
+  for (; it + step <= end;)
+    {
+      vint32m4_t v0 = __riscv_vle32_v_i32m4 ((void *) it, vl);
+      it += vl;
+      vint32m4_t v1 = __riscv_vle32_v_i32m4 ((void *) it, vl);
+      it += vl;
+      vint32m4_t v2 = __riscv_vle32_v_i32m4 ((void *) it, vl);
+      it += vl;
+      vint32m4_t v3 = __riscv_vle32_v_i32m4 ((void *) it, vl);
+      it += vl;
+
+      asm volatile("nop" ::: "memory");
+      vfloat64m8_t vw0 = __riscv_vfwcvt_f_x_v_f64m8 (v0, vl);
+      vfloat64m8_t vw1 = __riscv_vfwcvt_f_x_v_f64m8 (v1, vl);
+      vfloat64m8_t vw2 = __riscv_vfwcvt_f_x_v_f64m8 (v2, vl);
+      vfloat64m8_t vw3 = __riscv_vfwcvt_f_x_v_f64m8 (v3, vl);
+
+      asm volatile("nop" ::: "memory");
+      double sum0 = __riscv_vfmv_f_s_f64m8_f64 (vw0);
+      double sum1 = __riscv_vfmv_f_s_f64m8_f64 (vw1);
+      double sum2 = __riscv_vfmv_f_s_f64m8_f64 (vw2);
+      double sum3 = __riscv_vfmv_f_s_f64m8_f64 (vw3);
+
+      sum += sumation (sum0, sum1, sum2, sum3);
+    }
+  return sum;
+}
+
+/* { dg-final { scan-assembler-not {vmv1r} } } */
+/* { dg-final { scan-assembler-not {vmv2r} } } */
+/* { dg-final { scan-assembler-not {vmv4r} } } */
+/* { dg-final { scan-assembler-not {vmv8r} } } */
+/* { dg-final { scan-assembler-not {csrr} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-13.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-13.c
new file mode 100644
index 00000000000..fde7076d34f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-13.c
@@ -0,0 +1,188 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */
+
+#include "riscv_vector.h"
+
+double __attribute__ ((noinline))
+sumation (double sum0, double sum1, double sum2, double sum3, double sum4,
+          double sum5, double sum6, double sum7, double sum8, double sum9,
+          double sum10, double sum11, double sum12, double sum13, double sum14,
+          double
sum15)
+{
+  return sum0 + sum1 + sum2 + sum3 + sum4 + sum5 + sum6 + sum7 + sum8 + sum9
+         + sum10 + sum11 + sum12 + sum13 + sum14 + sum15;
+}
+
+double
+foo (char const *buf, size_t len)
+{
+  double sum = 0;
+  size_t vl = __riscv_vsetvlmax_e8m8 ();
+  size_t step = vl * 4;
+  const char *it = buf, *end = buf + len;
+  for (; it + step <= end;)
+    {
+      vfloat32m1_t v0 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v1 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v2 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v3 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v4 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v5 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v6 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v7 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v8 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v9 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v10 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v11 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v12 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v13 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v14 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v15 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+
+      asm volatile("nop" ::: "memory");
+      vint64m2_t vw0 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v0, vl);
+      vint64m2_t vw1 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v1, vl);
+      vint64m2_t vw2 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v2, vl);
+      vint64m2_t vw3 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v3, vl);
+      vint64m2_t vw4 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v4, vl);
+      vint64m2_t vw5 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v5, vl);
+      vint64m2_t vw6 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v6, vl);
+      vint64m2_t vw7 =
__riscv_vfwcvt_rtz_x_f_v_i64m2 (v7, vl);
+      vint64m2_t vw8 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v8, vl);
+      vint64m2_t vw9 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v9, vl);
+      vint64m2_t vw10 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v10, vl);
+      vint64m2_t vw11 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v11, vl);
+      vint64m2_t vw12 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v12, vl);
+      vint64m2_t vw13 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v13, vl);
+      vint64m2_t vw14 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v14, vl);
+      vint64m2_t vw15 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v15, vl);
+
+      asm volatile("nop" ::: "memory");
+      double sum0 = __riscv_vmv_x_s_i64m2_i64 (vw0);
+      double sum1 = __riscv_vmv_x_s_i64m2_i64 (vw1);
+      double sum2 = __riscv_vmv_x_s_i64m2_i64 (vw2);
+      double sum3 = __riscv_vmv_x_s_i64m2_i64 (vw3);
+      double sum4 = __riscv_vmv_x_s_i64m2_i64 (vw4);
+      double sum5 = __riscv_vmv_x_s_i64m2_i64 (vw5);
+      double sum6 = __riscv_vmv_x_s_i64m2_i64 (vw6);
+      double sum7 = __riscv_vmv_x_s_i64m2_i64 (vw7);
+      double sum8 = __riscv_vmv_x_s_i64m2_i64 (vw8);
+      double sum9 = __riscv_vmv_x_s_i64m2_i64 (vw9);
+      double sum10 = __riscv_vmv_x_s_i64m2_i64 (vw10);
+      double sum11 = __riscv_vmv_x_s_i64m2_i64 (vw11);
+      double sum12 = __riscv_vmv_x_s_i64m2_i64 (vw12);
+      double sum13 = __riscv_vmv_x_s_i64m2_i64 (vw13);
+      double sum14 = __riscv_vmv_x_s_i64m2_i64 (vw14);
+      double sum15 = __riscv_vmv_x_s_i64m2_i64 (vw15);
+
+      sum += sumation (sum0, sum1, sum2, sum3, sum4, sum5, sum6, sum7, sum8,
+                       sum9, sum10, sum11, sum12, sum13, sum14, sum15);
+    }
+  return sum;
+}
+
+double
+foo2 (char const *buf, size_t len)
+{
+  double sum = 0;
+  size_t vl = __riscv_vsetvlmax_e8m8 ();
+  size_t step = vl * 4;
+  const char *it = buf, *end = buf + len;
+  for (; it + step <= end;)
+    {
+      vfloat32m1_t v0 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v1 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v2 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v3 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v4 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v5 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v6 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v7 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v8 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v9 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v10 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v11 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v12 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v13 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v14 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v15 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+
+      asm volatile("nop" ::: "memory");
+      vint64m2_t vw0 = __riscv_vfwcvt_x_f_v_i64m2 (v0, vl);
+      vint64m2_t vw1 = __riscv_vfwcvt_x_f_v_i64m2 (v1, vl);
+      vint64m2_t vw2 = __riscv_vfwcvt_x_f_v_i64m2 (v2, vl);
+      vint64m2_t vw3 = __riscv_vfwcvt_x_f_v_i64m2 (v3, vl);
+      vint64m2_t vw4 = __riscv_vfwcvt_x_f_v_i64m2 (v4, vl);
+      vint64m2_t vw5 = __riscv_vfwcvt_x_f_v_i64m2 (v5, vl);
+      vint64m2_t vw6 = __riscv_vfwcvt_x_f_v_i64m2 (v6, vl);
+      vint64m2_t vw7 = __riscv_vfwcvt_x_f_v_i64m2 (v7, vl);
+      vint64m2_t vw8 = __riscv_vfwcvt_x_f_v_i64m2 (v8, vl);
+      vint64m2_t vw9 = __riscv_vfwcvt_x_f_v_i64m2 (v9, vl);
+      vint64m2_t vw10 = __riscv_vfwcvt_x_f_v_i64m2 (v10, vl);
+      vint64m2_t vw11 = __riscv_vfwcvt_x_f_v_i64m2 (v11, vl);
+      vint64m2_t vw12 = __riscv_vfwcvt_x_f_v_i64m2 (v12, vl);
+      vint64m2_t vw13 = __riscv_vfwcvt_x_f_v_i64m2 (v13, vl);
+      vint64m2_t vw14 = __riscv_vfwcvt_x_f_v_i64m2 (v14, vl);
+      vint64m2_t vw15 = __riscv_vfwcvt_x_f_v_i64m2 (v15, vl);
+
+      asm volatile("nop" ::: "memory");
+      double sum0 = __riscv_vmv_x_s_i64m2_i64 (vw0);
+      double sum1 = __riscv_vmv_x_s_i64m2_i64 (vw1);
+      double sum2 =
__riscv_vmv_x_s_i64m2_i64 (vw2);
+      double sum3 = __riscv_vmv_x_s_i64m2_i64 (vw3);
+      double sum4 = __riscv_vmv_x_s_i64m2_i64 (vw4);
+      double sum5 = __riscv_vmv_x_s_i64m2_i64 (vw5);
+      double sum6 = __riscv_vmv_x_s_i64m2_i64 (vw6);
+      double sum7 = __riscv_vmv_x_s_i64m2_i64 (vw7);
+      double sum8 = __riscv_vmv_x_s_i64m2_i64 (vw8);
+      double sum9 = __riscv_vmv_x_s_i64m2_i64 (vw9);
+      double sum10 = __riscv_vmv_x_s_i64m2_i64 (vw10);
+      double sum11 = __riscv_vmv_x_s_i64m2_i64 (vw11);
+      double sum12 = __riscv_vmv_x_s_i64m2_i64 (vw12);
+      double sum13 = __riscv_vmv_x_s_i64m2_i64 (vw13);
+      double sum14 = __riscv_vmv_x_s_i64m2_i64 (vw14);
+      double sum15 = __riscv_vmv_x_s_i64m2_i64 (vw15);
+
+      sum += sumation (sum0, sum1, sum2, sum3, sum4, sum5, sum6, sum7, sum8,
+                       sum9, sum10, sum11, sum12, sum13, sum14, sum15);
+    }
+  return sum;
+}
+
+/* { dg-final { scan-assembler-not {vmv1r} } } */
+/* { dg-final { scan-assembler-not {vmv2r} } } */
+/* { dg-final { scan-assembler-not {vmv4r} } } */
+/* { dg-final { scan-assembler-not {vmv8r} } } */
+/* { dg-final { scan-assembler-not {csrr} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-14.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-14.c
new file mode 100644
index 00000000000..535ea7ce34b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-14.c
@@ -0,0 +1,119 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */
+
+#include "riscv_vector.h"
+
+double __attribute__ ((noinline))
+sumation (double sum0, double sum1, double sum2, double sum3, double sum4,
+          double sum5, double sum6, double sum7)
+{
+  return sum0 + sum1 + sum2 + sum3 + sum4 + sum5 + sum6 + sum7;
+}
+
+double
+foo (char const *buf, size_t len)
+{
+  double sum = 0;
+  size_t vl = __riscv_vsetvlmax_e8m8 ();
+  size_t step = vl * 4;
+  const char *it = buf, *end = buf + len;
+  for (; it + step <= end;)
+    {
+      vfloat32m1_t v0 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v1 =
__riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v2 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v3 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v4 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v5 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v6 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v7 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+
+      asm volatile("nop" ::: "memory");
+      vint64m2_t vw0 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v0, vl);
+      vint64m2_t vw1 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v1, vl);
+      vint64m2_t vw2 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v2, vl);
+      vint64m2_t vw3 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v3, vl);
+      vint64m2_t vw4 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v4, vl);
+      vint64m2_t vw5 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v5, vl);
+      vint64m2_t vw6 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v6, vl);
+      vint64m2_t vw7 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v7, vl);
+
+      asm volatile("nop" ::: "memory");
+      double sum0 = __riscv_vmv_x_s_i64m2_i64 (vw0);
+      double sum1 = __riscv_vmv_x_s_i64m2_i64 (vw1);
+      double sum2 = __riscv_vmv_x_s_i64m2_i64 (vw2);
+      double sum3 = __riscv_vmv_x_s_i64m2_i64 (vw3);
+      double sum4 = __riscv_vmv_x_s_i64m2_i64 (vw4);
+      double sum5 = __riscv_vmv_x_s_i64m2_i64 (vw5);
+      double sum6 = __riscv_vmv_x_s_i64m2_i64 (vw6);
+      double sum7 = __riscv_vmv_x_s_i64m2_i64 (vw7);
+
+      sum += sumation (sum0, sum1, sum2, sum3, sum4, sum5, sum6, sum7);
+    }
+  return sum;
+}
+
+double
+foo2 (char const *buf, size_t len)
+{
+  double sum = 0;
+  size_t vl = __riscv_vsetvlmax_e8m8 ();
+  size_t step = vl * 4;
+  const char *it = buf, *end = buf + len;
+  for (; it + step <= end;)
+    {
+      vfloat32m1_t v0 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v1 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v2 = __riscv_vle32_v_f32m1 ((void *) it, vl);
+      it += vl;
+      vfloat32m1_t v3 = __riscv_vle32_v_f32m1
((void *) it, vl); + it += vl; + vfloat32m1_t v4 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v5 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v6 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v7 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + + asm volatile("nop" ::: "memory"); + vint64m2_t vw0 = __riscv_vfwcvt_x_f_v_i64m2 (v0, vl); + vint64m2_t vw1 = __riscv_vfwcvt_x_f_v_i64m2 (v1, vl); + vint64m2_t vw2 = __riscv_vfwcvt_x_f_v_i64m2 (v2, vl); + vint64m2_t vw3 = __riscv_vfwcvt_x_f_v_i64m2 (v3, vl); + vint64m2_t vw4 = __riscv_vfwcvt_x_f_v_i64m2 (v4, vl); + vint64m2_t vw5 = __riscv_vfwcvt_x_f_v_i64m2 (v5, vl); + vint64m2_t vw6 = __riscv_vfwcvt_x_f_v_i64m2 (v6, vl); + vint64m2_t vw7 = __riscv_vfwcvt_x_f_v_i64m2 (v7, vl); + + asm volatile("nop" ::: "memory"); + double sum0 = __riscv_vmv_x_s_i64m2_i64 (vw0); + double sum1 = __riscv_vmv_x_s_i64m2_i64 (vw1); + double sum2 = __riscv_vmv_x_s_i64m2_i64 (vw2); + double sum3 = __riscv_vmv_x_s_i64m2_i64 (vw3); + double sum4 = __riscv_vmv_x_s_i64m2_i64 (vw4); + double sum5 = __riscv_vmv_x_s_i64m2_i64 (vw5); + double sum6 = __riscv_vmv_x_s_i64m2_i64 (vw6); + double sum7 = __riscv_vmv_x_s_i64m2_i64 (vw7); + + sum += sumation (sum0, sum1, sum2, sum3, sum4, sum5, sum6, sum7); + } + return sum; +} + +/* { dg-final { scan-assembler-not {vmv1r} } } */ +/* { dg-final { scan-assembler-not {vmv2r} } } */ +/* { dg-final { scan-assembler-not {vmv4r} } } */ +/* { dg-final { scan-assembler-not {vmv8r} } } */ +/* { dg-final { scan-assembler-not {csrr} } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-15.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-15.c new file mode 100644 index 00000000000..3d46e4a829a --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-15.c @@ -0,0 +1,86 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */ + +#include "riscv_vector.h" + +double __attribute__ ((noinline)) 
+sumation (double sum0, double sum1, double sum2, double sum3) +{ + return sum0 + sum1 + sum2 + sum3; +} + +double +foo (char const *buf, size_t len) +{ + double sum = 0; + size_t vl = __riscv_vsetvlmax_e8m8 (); + size_t step = vl * 4; + const char *it = buf, *end = buf + len; + for (; it + step <= end;) + { + vfloat32m1_t v0 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v1 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v2 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v3 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + + asm volatile("nop" ::: "memory"); + vint64m2_t vw0 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v0, vl); + vint64m2_t vw1 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v1, vl); + vint64m2_t vw2 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v2, vl); + vint64m2_t vw3 = __riscv_vfwcvt_rtz_x_f_v_i64m2 (v3, vl); + + asm volatile("nop" ::: "memory"); + double sum0 = __riscv_vmv_x_s_i64m2_i64 (vw0); + double sum1 = __riscv_vmv_x_s_i64m2_i64 (vw1); + double sum2 = __riscv_vmv_x_s_i64m2_i64 (vw2); + double sum3 = __riscv_vmv_x_s_i64m2_i64 (vw3); + + sum += sumation (sum0, sum1, sum2, sum3); + } + return sum; +} + +double +foo2 (char const *buf, size_t len) +{ + double sum = 0; + size_t vl = __riscv_vsetvlmax_e8m8 (); + size_t step = vl * 4; + const char *it = buf, *end = buf + len; + for (; it + step <= end;) + { + vfloat32m1_t v0 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v1 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v2 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v3 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + + asm volatile("nop" ::: "memory"); + vint64m2_t vw0 = __riscv_vfwcvt_x_f_v_i64m2 (v0, vl); + vint64m2_t vw1 = __riscv_vfwcvt_x_f_v_i64m2 (v1, vl); + vint64m2_t vw2 = __riscv_vfwcvt_x_f_v_i64m2 (v2, vl); + vint64m2_t vw3 = __riscv_vfwcvt_x_f_v_i64m2 (v3, vl); + + asm volatile("nop" ::: "memory"); + double sum0 = 
__riscv_vmv_x_s_i64m2_i64 (vw0); + double sum1 = __riscv_vmv_x_s_i64m2_i64 (vw1); + double sum2 = __riscv_vmv_x_s_i64m2_i64 (vw2); + double sum3 = __riscv_vmv_x_s_i64m2_i64 (vw3); + + sum += sumation (sum0, sum1, sum2, sum3); + } + return sum; +} + +/* { dg-final { scan-assembler-not {vmv1r} } } */ +/* { dg-final { scan-assembler-not {vmv2r} } } */ +/* { dg-final { scan-assembler-not {vmv4r} } } */ +/* { dg-final { scan-assembler-not {vmv8r} } } */ +/* { dg-final { scan-assembler-not {csrr} } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-7.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-7.c new file mode 100644 index 00000000000..7064471496c --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-7.c @@ -0,0 +1,106 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */ + +#include "riscv_vector.h" + +double __attribute__ ((noinline)) +sumation (double sum0, double sum1, double sum2, double sum3, double sum4, + double sum5, double sum6, double sum7, double sum8, double sum9, + double sum10, double sum11, double sum12, double sum13, double sum14, + double sum15) +{ + return sum0 + sum1 + sum2 + sum3 + sum4 + sum5 + sum6 + sum7 + sum8 + sum9 + + sum10 + sum11 + sum12 + sum13 + sum14 + sum15; +} + +double +foo (char const *buf, size_t len) +{ + double sum = 0; + size_t vl = __riscv_vsetvlmax_e8m8 (); + size_t step = vl * 4; + const char *it = buf, *end = buf + len; + for (; it + step <= end;) + { + vfloat32m1_t v0 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v1 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v2 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v3 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v4 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v5 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v6 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + 
vfloat32m1_t v7 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v8 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v9 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v10 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v11 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v12 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v13 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v14 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + vfloat32m1_t v15 = __riscv_vle32_v_f32m1 ((void *) it, vl); + it += vl; + + asm volatile("nop" ::: "memory"); + vfloat64m2_t vw0 = __riscv_vfwcvt_f_f_v_f64m2 (v0, vl); + vfloat64m2_t vw1 = __riscv_vfwcvt_f_f_v_f64m2 (v1, vl); + vfloat64m2_t vw2 = __riscv_vfwcvt_f_f_v_f64m2 (v2, vl); + vfloat64m2_t vw3 = __riscv_vfwcvt_f_f_v_f64m2 (v3, vl); + vfloat64m2_t vw4 = __riscv_vfwcvt_f_f_v_f64m2 (v4, vl); + vfloat64m2_t vw5 = __riscv_vfwcvt_f_f_v_f64m2 (v5, vl); + vfloat64m2_t vw6 = __riscv_vfwcvt_f_f_v_f64m2 (v6, vl); + vfloat64m2_t vw7 = __riscv_vfwcvt_f_f_v_f64m2 (v7, vl); + vfloat64m2_t vw8 = __riscv_vfwcvt_f_f_v_f64m2 (v8, vl); + vfloat64m2_t vw9 = __riscv_vfwcvt_f_f_v_f64m2 (v9, vl); + vfloat64m2_t vw10 = __riscv_vfwcvt_f_f_v_f64m2 (v10, vl); + vfloat64m2_t vw11 = __riscv_vfwcvt_f_f_v_f64m2 (v11, vl); + vfloat64m2_t vw12 = __riscv_vfwcvt_f_f_v_f64m2 (v12, vl); + vfloat64m2_t vw13 = __riscv_vfwcvt_f_f_v_f64m2 (v13, vl); + vfloat64m2_t vw14 = __riscv_vfwcvt_f_f_v_f64m2 (v14, vl); + vfloat64m2_t vw15 = __riscv_vfwcvt_f_f_v_f64m2 (v15, vl); + + asm volatile("nop" ::: "memory"); + double sum0 = __riscv_vfmv_f_s_f64m2_f64 (vw0); + double sum1 = __riscv_vfmv_f_s_f64m2_f64 (vw1); + double sum2 = __riscv_vfmv_f_s_f64m2_f64 (vw2); + double sum3 = __riscv_vfmv_f_s_f64m2_f64 (vw3); + double sum4 = __riscv_vfmv_f_s_f64m2_f64 (vw4); + double sum5 = __riscv_vfmv_f_s_f64m2_f64 (vw5); + double sum6 = 
__riscv_vfmv_f_s_f64m2_f64 (vw6); + double sum7 = __riscv_vfmv_f_s_f64m2_f64 (vw7); + double sum8 = __riscv_vfmv_f_s_f64m2_f64 (vw8); + double sum9 = __riscv_vfmv_f_s_f64m2_f64 (vw9); + double sum10 = __riscv_vfmv_f_s_f64m2_f64 (vw10); + double sum11 = __riscv_vfmv_f_s_f64m2_f64 (vw11); + double sum12 = __riscv_vfmv_f_s_f64m2_f64 (vw12); + double sum13 = __riscv_vfmv_f_s_f64m2_f64 (vw13); + double sum14 = __riscv_vfmv_f_s_f64m2_f64 (vw14); + double sum15 = __riscv_vfmv_f_s_f64m2_f64 (vw15); + + sum += sumation (sum0, sum1, sum2, sum3, sum4, sum5, sum6, sum7, sum8, + sum9, sum10, sum11, sum12, sum13, sum14, sum15); + } + return sum; +} + +/* { dg-final { scan-assembler-not {vmv1r} } } */ +/* { dg-final { scan-assembler-not {vmv2r} } } */ +/* { dg-final { scan-assembler-not {vmv4r} } } */ +/* { dg-final { scan-assembler-not {vmv8r} } } */ +/* { dg-final { scan-assembler-not {csrr} } } */ + + diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-8.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-8.c new file mode 100644 index 00000000000..ab56d0d69af --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-8.c @@ -0,0 +1,68 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */ + +#include "riscv_vector.h" + +double __attribute__ ((noinline)) +sumation (double sum0, double sum1, double sum2, double sum3, double sum4, + double sum5, double sum6, double sum7) +{ + return sum0 + sum1 + sum2 + sum3 + sum4 + sum5 + sum6 + sum7; +} + +double +foo (char const *buf, size_t len) +{ + double sum = 0; + size_t vl = __riscv_vsetvlmax_e8m8 (); + size_t step = vl * 4; + const char *it = buf, *end = buf + len; + for (; it + step <= end;) + { + vfloat32m2_t v0 = __riscv_vle32_v_f32m2 ((void *) it, vl); + it += vl; + vfloat32m2_t v1 = __riscv_vle32_v_f32m2 ((void *) it, vl); + it += vl; + vfloat32m2_t v2 = __riscv_vle32_v_f32m2 ((void *) it, vl); + it += vl; + vfloat32m2_t v3 = __riscv_vle32_v_f32m2 ((void *) it, vl); + it += vl; + 
vfloat32m2_t v4 = __riscv_vle32_v_f32m2 ((void *) it, vl); + it += vl; + vfloat32m2_t v5 = __riscv_vle32_v_f32m2 ((void *) it, vl); + it += vl; + vfloat32m2_t v6 = __riscv_vle32_v_f32m2 ((void *) it, vl); + it += vl; + vfloat32m2_t v7 = __riscv_vle32_v_f32m2 ((void *) it, vl); + it += vl; + + asm volatile("nop" ::: "memory"); + vfloat64m4_t vw0 = __riscv_vfwcvt_f_f_v_f64m4 (v0, vl); + vfloat64m4_t vw1 = __riscv_vfwcvt_f_f_v_f64m4 (v1, vl); + vfloat64m4_t vw2 = __riscv_vfwcvt_f_f_v_f64m4 (v2, vl); + vfloat64m4_t vw3 = __riscv_vfwcvt_f_f_v_f64m4 (v3, vl); + vfloat64m4_t vw4 = __riscv_vfwcvt_f_f_v_f64m4 (v4, vl); + vfloat64m4_t vw5 = __riscv_vfwcvt_f_f_v_f64m4 (v5, vl); + vfloat64m4_t vw6 = __riscv_vfwcvt_f_f_v_f64m4 (v6, vl); + vfloat64m4_t vw7 = __riscv_vfwcvt_f_f_v_f64m4 (v7, vl); + + asm volatile("nop" ::: "memory"); + double sum0 = __riscv_vfmv_f_s_f64m4_f64 (vw0); + double sum1 = __riscv_vfmv_f_s_f64m4_f64 (vw1); + double sum2 = __riscv_vfmv_f_s_f64m4_f64 (vw2); + double sum3 = __riscv_vfmv_f_s_f64m4_f64 (vw3); + double sum4 = __riscv_vfmv_f_s_f64m4_f64 (vw4); + double sum5 = __riscv_vfmv_f_s_f64m4_f64 (vw5); + double sum6 = __riscv_vfmv_f_s_f64m4_f64 (vw6); + double sum7 = __riscv_vfmv_f_s_f64m4_f64 (vw7); + + sum += sumation (sum0, sum1, sum2, sum3, sum4, sum5, sum6, sum7); + } + return sum; +} + +/* { dg-final { scan-assembler-not {vmv1r} } } */ +/* { dg-final { scan-assembler-not {vmv2r} } } */ +/* { dg-final { scan-assembler-not {vmv4r} } } */ +/* { dg-final { scan-assembler-not {vmv8r} } } */ +/* { dg-final { scan-assembler-not {csrr} } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-9.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-9.c new file mode 100644 index 00000000000..82f369c0cd9 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr112431-9.c @@ -0,0 +1,51 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3" } */ + +#include "riscv_vector.h" + +double __attribute__ ((noinline)) +sumation 
(double sum0, double sum1, double sum2, double sum3) +{ + return sum0 + sum1 + sum2 + sum3; +} + +double +foo (char const *buf, size_t len) +{ + double sum = 0; + size_t vl = __riscv_vsetvlmax_e8m8 (); + size_t step = vl * 4; + const char *it = buf, *end = buf + len; + for (; it + step <= end;) + { + vfloat32m4_t v0 = __riscv_vle32_v_f32m4 ((void *) it, vl); + it += vl; + vfloat32m4_t v1 = __riscv_vle32_v_f32m4 ((void *) it, vl); + it += vl; + vfloat32m4_t v2 = __riscv_vle32_v_f32m4 ((void *) it, vl); + it += vl; + vfloat32m4_t v3 = __riscv_vle32_v_f32m4 ((void *) it, vl); + it += vl; + + asm volatile("nop" ::: "memory"); + vfloat64m8_t vw0 = __riscv_vfwcvt_f_f_v_f64m8 (v0, vl); + vfloat64m8_t vw1 = __riscv_vfwcvt_f_f_v_f64m8 (v1, vl); + vfloat64m8_t vw2 = __riscv_vfwcvt_f_f_v_f64m8 (v2, vl); + vfloat64m8_t vw3 = __riscv_vfwcvt_f_f_v_f64m8 (v3, vl); + + asm volatile("nop" ::: "memory"); + double sum0 = __riscv_vfmv_f_s_f64m8_f64 (vw0); + double sum1 = __riscv_vfmv_f_s_f64m8_f64 (vw1); + double sum2 = __riscv_vfmv_f_s_f64m8_f64 (vw2); + double sum3 = __riscv_vfmv_f_s_f64m8_f64 (vw3); + + sum += sumation (sum0, sum1, sum2, sum3); + } + return sum; +} + +/* { dg-final { scan-assembler-not {vmv1r} } } */ +/* { dg-final { scan-assembler-not {vmv2r} } } */ +/* { dg-final { scan-assembler-not {vmv4r} } } */ +/* { dg-final { scan-assembler-not {vmv8r} } } */ +/* { dg-final { scan-assembler-not {csrr} } } */