From patchwork Wed Dec  6 02:45:23 2023
X-Patchwork-Submitter: Feng Wang <wangfeng@eswincomputing.com>
X-Patchwork-Id: 174278
From: Feng Wang <wangfeng@eswincomputing.com>
To: gcc-patches@gcc.gnu.org
Cc: kito.cheng@gmail.com, jeffreyalaw@gmail.com, juzhe.zhong@rivai.ai,
    zhusonghe@eswincomputing.com, panciyan@eswincomputing.com,
    Feng Wang <wangfeng@eswincomputing.com>
Subject: [PATCH 3/4] RISC-V: Add crypto vector machine descriptions
Date: Wed,  6 Dec 2023 02:45:23 +0000
Message-Id: <20231206024524.10792-3-wangfeng@eswincomputing.com>
In-Reply-To: <20231206024524.10792-1-wangfeng@eswincomputing.com>
References: <20231206024524.10792-1-wangfeng@eswincomputing.com>

This patch adds the crypto machine descriptions (vector-crypto.md) and
some new iterators which are used by the crypto vector extensions.

Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-authored-by: Ciyan Pan <panciyan@eswincomputing.com>

gcc/ChangeLog:

	* config/riscv/iterators.md: Add rotate insn name.
	* config/riscv/riscv.md: Add new insn names for crypto vector.
	* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
	* config/riscv/vector.md: Add the corresponding attr for crypto vector.
	* config/riscv/vector-crypto.md: New file.  The machine descriptions
	for crypto vector.
---
 gcc/config/riscv/iterators.md        |   4 +-
 gcc/config/riscv/riscv.md            |  33 +-
 gcc/config/riscv/vector-crypto.md    | 500 +++++++++++++++++++++++++++
 gcc/config/riscv/vector-iterators.md |  41 +++
 gcc/config/riscv/vector.md           |  49 ++-
 5 files changed, 607 insertions(+), 20 deletions(-)
 create mode 100755 gcc/config/riscv/vector-crypto.md

diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
index ecf033f2fa7..f332fba7031 100644
--- a/gcc/config/riscv/iterators.md
+++ b/gcc/config/riscv/iterators.md
@@ -304,7 +304,9 @@
 			 (umax "maxu")
 			 (clz "clz")
 			 (ctz "ctz")
-			 (popcount "cpop")])
+			 (popcount "cpop")
+			 (rotate "rol")
+			 (rotatert "ror")])
 
 ;; -------------------------------------------------------------------
 ;; Int Iterators.
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 935eeb7fd8e..a887f3cd412 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -428,6 +428,34 @@
 ;; vcompress    vector compress instruction
 ;; vmov         whole vector register move
 ;; vector       unknown vector instruction
+;; 17. Crypto Vector instructions
+;; vandn        crypto vector bitwise and-not instructions
+;; vbrev        crypto vector reverse bits in elements instructions
+;; vbrev8       crypto vector reverse bits in bytes instructions
+;; vrev8        crypto vector reverse bytes instructions
+;; vclz         crypto vector count leading zeros instructions
+;; vctz         crypto vector count trailing zeros instructions
+;; vrol         crypto vector rotate left instructions
+;; vror         crypto vector rotate right instructions
+;; vwsll        crypto vector widening shift left logical instructions
+;; vclmul       crypto vector carry-less multiply - return low half instructions
+;; vclmulh      crypto vector carry-less multiply - return high half instructions
+;; vghsh        crypto vector add-multiply over GHASH Galois-Field instructions
+;; vgmul        crypto vector multiply over GHASH Galois-Field instructions
+;; vaesef       crypto vector AES final-round encryption instructions
+;; vaesem       crypto vector AES middle-round encryption instructions
+;; vaesdf       crypto vector AES final-round decryption instructions
+;; vaesdm       crypto vector AES middle-round decryption instructions
+;; vaeskf1      crypto vector AES-128 Forward KeySchedule generation instructions
+;; vaeskf2      crypto vector AES-256 Forward KeySchedule generation instructions
+;; vaesz        crypto vector AES round zero encryption/decryption instructions
+;; vsha2ms      crypto vector SHA-2 message schedule instructions
+;; vsha2ch      crypto vector SHA-2 two rounds of compression instructions
+;; vsha2cl      crypto vector SHA-2 two rounds of compression instructions
+;; vsm4k        crypto vector SM4 KeyExpansion instructions
+;; vsm4r        crypto vector SM4 Rounds instructions
+;; vsm3me       crypto vector SM3 Message Expansion instructions
+;; vsm3c        crypto vector SM3 Compression instructions
 
 (define_attr "type"
   "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -447,7 +475,9 @@
    vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
    vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
    vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
-   vgather,vcompress,vmov,vector"
+   vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
+   vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
+   vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
   (cond [(eq_attr "got" "load") (const_string "load")
 
 	 ;; If a doubleword move uses these expensive instructions,
@@ -3747,6 +3777,7 @@
 (include "thead.md")
 (include "generic-ooo.md")
 (include "vector.md")
+(include "vector-crypto.md")
 (include "zicond.md")
 (include "zc.md")
 (include "corev.md")
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100755
index 00000000000..a40ecef4342
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,500 @@
+(define_c_enum "unspec" [
+    ;; Zvbb unspecs
+    UNSPEC_VBREV
+    UNSPEC_VBREV8
+    UNSPEC_VREV8
+    UNSPEC_VCLMUL
+    UNSPEC_VCLMULH
+    UNSPEC_VGHSH
+    UNSPEC_VGMUL
+    UNSPEC_VAESEF
+    UNSPEC_VAESEFVV
+    UNSPEC_VAESEFVS
+    UNSPEC_VAESEM
+    UNSPEC_VAESEMVV
+    UNSPEC_VAESEMVS
+    UNSPEC_VAESDF
+    UNSPEC_VAESDFVV
+    UNSPEC_VAESDFVS
+    UNSPEC_VAESDM
+    UNSPEC_VAESDMVV
+    UNSPEC_VAESDMVS
+    UNSPEC_VAESZ
+    UNSPEC_VAESZVVNULL
+    UNSPEC_VAESZVS
+    UNSPEC_VAESKF1
+    UNSPEC_VAESKF2
+    UNSPEC_VSHA2MS
+    UNSPEC_VSHA2CH
+    UNSPEC_VSHA2CL
+    UNSPEC_VSM4K
+    UNSPEC_VSM4R
+    UNSPEC_VSM4RVV
+    UNSPEC_VSM4RVS
+    UNSPEC_VSM3ME
+    UNSPEC_VSM3C
+])
+
+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+
+(define_int_attr vv_ins_name [(UNSPEC_VGMUL    "gmul" ) (UNSPEC_VAESEFVV "aesef")
+                              (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
+                              (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
+                              (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
+                              (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS  "aesz" )
+                              (UNSPEC_VSM4RVV  "sm4r" ) (UNSPEC_VSM4RVS  "sm4r" )])
+
+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh")     (UNSPEC_VSHA2MS "sha2ms")
+                               (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
+
+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+
+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
+
+(define_int_attr ins_type [(UNSPEC_VGMUL    "vv") (UNSPEC_VAESEFVV "vv")
+                           (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
+                           (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
+                           (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
+                           (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS  "vs")
+                           (UNSPEC_VSM4RVV  "vv") (UNSPEC_VSM4RVS  "vs")])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL    UNSPEC_VAESEFVV UNSPEC_VAESEMVV
+                                       UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
+                                       UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
+                                       UNSPEC_VAESZVS  UNSPEC_VSM4RVV  UNSPEC_VSM4RVS])
+
+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
+
+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+
+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
+
+;; zvbb instructions patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+
+(define_insn "@pred_vandn<mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK,rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (and:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (not:VI (match_operand:VI 4 "register_operand" "vr,vr")))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK,rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (and:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (not:<VEL>
+              (match_operand:<VEL> 4 "register_operand" "r,r")))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_insn><mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK,rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (bitmanip_rotate:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (match_operand:VI 4 "register_operand" "vr,vr"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "v<bitmanip_insn>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_insn><mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd, vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK, rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (bitmanip_rotate:VI
+            (match_operand:VI 3 "register_operand" "vr, vr")
+            (match_operand 4 "pmode_register_operand" "r, r"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "v<bitmanip_insn>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd, vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK, rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (rotatert:VI
+            (match_operand:VI 3 "register_operand" "vr, vr")
+            (match_operand 4 "const_csr_operand"   "K, K"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vror.vi\t%0,%3,%4%p1"
+  [(set_attr "type" "vror")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+        (if_then_else:VWEXTI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+             (match_operand 5 "vector_length_operand"    " rK")
+             (match_operand 6 "const_int_operand"        "  i")
+             (match_operand 7 "const_int_operand"        "  i")
+             (match_operand 8 "const_int_operand"        "  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (ashift:VWEXTI
+            (zero_extend:VWEXTI
+              (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+            (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand"   "vr"))
+          (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+        (if_then_else:VWEXTI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+             (match_operand 5 "vector_length_operand"    " rK")
+             (match_operand 6 "const_int_operand"        "  i")
+             (match_operand 7 "const_int_operand"        "  i")
+             (match_operand 8 "const_int_operand"        "  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (ashift:VWEXTI
+            (zero_extend:VWEXTI
+              (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+            (match_operand:<VEL> 4 "pmode_reg_or_uimm5_operand" "rK"))
+          (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; vbrev.v vbrev8.v vrev8.v
+
+(define_insn "@pred_v<rev><mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 4 "vector_length_operand"    "rK,rK")
+             (match_operand 5 "const_int_operand"        "i, i")
"const_int_operand" "i, i") + (match_operand 6 "const_int_operand" "i, i") + (match_operand 7 "const_int_operand" "i, i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) + (unspec:VI + [(match_operand:VI 3 "register_operand" "vr,vr")]UNSPEC_VRBB8) + (match_operand:VI 2 "vector_merge_operand" "vu, 0")))] + "TARGET_ZVBB || TARGET_ZVKB" + "v.v\t%0,%3%p1" + [(set_attr "type" "v") + (set_attr "mode" "")]) + +;; vclz.v vctz.v + +(define_insn "@pred_v" + [(set (match_operand:VI 0 "register_operand" "=vd") + (clz_ctz_pcnt:VI + (parallel + [(match_operand:VI 2 "register_operand" "vr") + (unspec: + [(match_operand: 1 "vector_mask_operand" "vmWc1") + (match_operand 3 "vector_length_operand" " rK") + (match_operand 4 "const_int_operand" " i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))] + "TARGET_ZVBB" + "v.v\t%0,%2%p1" + [(set_attr "type" "v") + (set_attr "mode" "")]) + +;; zvbc instructions patterns. +;; vclmul.vv vclmul.vx +;; vclmulh.vv vclmulh.vx + +(define_insn "@pred_vclmul" + [(set (match_operand:VDI 0 "register_operand" "=vd,vd") + (if_then_else:VDI + (unspec: + [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1") + (match_operand 5 "vector_length_operand" "rK,rK") + (match_operand 6 "const_int_operand" "i, i") + (match_operand 7 "const_int_operand" "i, i") + (match_operand 8 "const_int_operand" "i, i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) + (unspec:VDI + [(match_operand:VDI 3 "register_operand" "vr,vr") + (match_operand:VDI 4 "register_operand" "vr,vr")]UNSPEC_CLMUL) + (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))] + "TARGET_ZVBC && TARGET_64BIT" + "vclmul.vv\t%0,%3,%4%p1" + [(set_attr "type" "vclmul") + (set_attr "mode" "")]) + +(define_insn "@pred_vclmul_scalar" + [(set (match_operand:VDI 0 "register_operand" "=vd,vd") + (if_then_else:VDI + (unspec: + [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1") + (match_operand 5 "vector_length_operand" "rK,rK") + (match_operand 6 "const_int_operand" "i, i") + (match_operand 7 "const_int_operand" "i, i") + (match_operand 8 "const_int_operand" "i, i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) + (unspec:VDI + [(match_operand:VDI 3 "register_operand" "vr,vr") + (match_operand: 4 "register_operand" "r,r")]UNSPEC_CLMUL) + (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))] + "TARGET_ZVBC && TARGET_64BIT" + "vclmul.vx\t%0,%3,%4%p1" + [(set_attr "type" "vclmul") + (set_attr "mode" "")]) + +;; zvknh[ab] and zvkg instructions patterns. +;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv + +(define_insn "@pred_v" + [(set (match_operand:VQEXTI 0 "register_operand" "=vd") + (if_then_else:VQEXTI + (unspec: + [(match_operand 4 "vector_length_operand" "rK") + (match_operand 5 "const_int_operand" " i") + (match_operand 6 "const_int_operand" " i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) + (unspec:VQEXTI + [(match_operand:VQEXTI 1 "register_operand" " 0") + (match_operand:VQEXTI 2 "register_operand" "vr") + (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB) + (match_dup 1)))] + "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG" + "v.vv\t%0,%2,%3" + [(set_attr "type" "v") + (set_attr "mode" "")]) + +;; zvkned instructions patterns. 
+;; vgmul.vv vaesz.vs
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+  [(set (match_operand:VSI 0 "register_operand" "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand"     " i")
+             (match_operand 5 "const_int_operand"     " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand" " 0")
+             (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKG || TARGET_ZVKNED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand"     " i")
+             (match_operand 5 "const_int_operand"     " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand" " 0")
+             (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+  [(set (match_operand:<VSIX2> 0 "register_operand" "=vd")
+        (if_then_else:<VSIX2>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand"     " i")
+             (match_operand 5 "const_int_operand"     " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX2>
+            [(match_operand:<VSIX2> 1 "register_operand"    " 0")
+             (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+  [(set (match_operand:<VSIX4> 0 "register_operand" "=vd")
+        (if_then_else:<VSIX4>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand"     " i")
+             (match_operand 5 "const_int_operand"     " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX4>
+            [(match_operand:<VSIX4> 1 "register_operand"    " 0")
+             (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+  [(set (match_operand:<VSIX8> 0 "register_operand" "=vd")
+        (if_then_else:<VSIX8>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand"     " i")
+             (match_operand 5 "const_int_operand"     " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX8>
+            [(match_operand:<VSIX8> 1 "register_operand"    " 0")
+             (match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+  [(set (match_operand:<VSIX16> 0 "register_operand" "=vd")
+        (if_then_else:<VSIX16>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand"     " i")
+             (match_operand 5 "const_int_operand"     " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX16>
+            [(match_operand:<VSIX16> 1 "register_operand"    " 0")
+             (match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; vaeskf1.vi vsm4k.vi
+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=vd, vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand" "rK, rK")
+             (match_operand 5 "const_int_operand"     " i,  i")
+             (match_operand 6 "const_int_operand"     " i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 2 "register_operand"    "vr, vr")
+             (match_operand:<VEL> 3 "const_int_operand" " i,  i")] UNSPEC_CRYPTO_VI)
+          (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vi_ins_name>.vi\t%0,%2,%3"
+  [(set_attr "type" "v<vi_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi vsm3c.vi
+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand" "rK")
+             (match_operand 5 "const_int_operand"     " i")
+             (match_operand 6 "const_int_operand"     " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand"    "0")
+             (match_operand:VSI 2 "register_operand"    "vr")
+             (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSH"
+  "v<vi_ins1_name>.vi\t%0,%2,%3"
+  [(set_attr "type" "v<vi_ins1_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; zvksh instructions patterns.
+;; vsm3me.vv
+
+(define_insn "@pred_vsm3me<mode>"
+  [(set (match_operand:VSI 0 "register_operand" "=vd, vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand" "rK, rK")
+             (match_operand 5 "const_int_operand"     " i,  i")
+             (match_operand 6 "const_int_operand"     " i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 2 "register_operand" "vr, vr")
+             (match_operand:VSI 3 "register_operand" "vr, vr")] UNSPEC_VSM3ME)
+          (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVKSH"
+  "vsm3me.vv\t%0,%2,%3"
+  [(set_attr "type" "vsm3me")
+   (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 56080ed1f5f..1b16b476035 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3916,3 +3916,44 @@
   (V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
   (V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
   (V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+
+(define_mode_iterator VSI [
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX2_SI [
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+  RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+  RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_attr VSIX2 [
+  (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+  (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+  (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+  (RVVMF2SI "RVVM8SI")
+])
+
+(define_mode_iterator VDI [
+  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index ba9c9e5a9b6..36cb7510ec6 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,9 @@
 	  vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
 	  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 	  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-	  vssegtux,vssegtox,vlsegdff")
+	  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+	  vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+	  vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -74,7 +76,9 @@
 	  vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
 	  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 	  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-	  vssegtux,vssegtox,vlsegdff")
+	  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+	  vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+	  vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -698,10 +702,12 @@
 	  vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
 	  vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
 	  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-	  vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+	  vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
+	  vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
 	   (const_int 2)
 
-	 (eq_attr "type" "vimerge,vfmerge,vcompress")
+	 (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+	  vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
 	   (const_int 1)
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
@@ -740,7 +746,8 @@
 	  vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
 	  vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
 	  vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
-	  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+	  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
+	  vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
 	   (const_int 4)
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +762,15 @@
 	  vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
 	  vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
 	  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-	  vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+	  vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+	  vror,vwsll,vclmul,vclmulh")
 	   (const_int 5)
 
 	 (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
 	   (const_int 6)
 
-	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+	  vaesz,vsm4r")
 	   (const_int 3)]
 	(const_int INVALID_ATTRIBUTE)))
 
@@ -770,7 +779,8 @@
   (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
 	  vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
 	  vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
-	  vcompress,vldff,vlsegde,vlsegdff")
+	  vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
+	  vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
 	   (symbol_ref "riscv_vector::get_ta(operands[5])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,13 +796,13 @@
 	  vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
 	  vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
 	  vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-	  vlsegds,vlsegdux,vlsegdox")
+	  vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
 	   (symbol_ref "riscv_vector::get_ta(operands[6])")
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
 	   (symbol_ref "riscv_vector::get_ta(operands[7])")
 
-	 (eq_attr "type" "vmidx")
+	 (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
 	   (symbol_ref "riscv_vector::get_ta(operands[4])")]
 	(const_int INVALID_ATTRIBUTE)))
 
@@ -800,7 +810,7 @@
 (define_attr "ma" ""
   (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
 	  vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
-	  vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+	  vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (symbol_ref "riscv_vector::get_ma(operands[6])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +825,8 @@
 	  vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
 	  vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
 	  vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
-	  viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+	  viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+	  vror,vwsll,vclmul,vclmulh")
 	   (symbol_ref "riscv_vector::get_ma(operands[7])")
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
@@ -831,9 +842,10 @@
 	  vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
 	  vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
 	  vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
-	  vimovxv,vfmovfv,vlsegde,vlsegdff")
+	  vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (const_int 7)
 
-	 (eq_attr "type" "vldm,vstm,vmalu,vmalu")
+	 (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
+	  vsm4r")
 	   (const_int 5)
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -848,18 +860,19 @@
 	  vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
 	  vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
 	  vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-	  vlsegds,vlsegdux,vlsegdox")
+	  vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
 	   (const_int 8)
 
-	 (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
+	 (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
 	   (const_int 5)
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
 	   (const_int 9)
 
-	 (eq_attr "type" "vmsfs,vmidx,vcompress")
+	 (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
+	  vsm4k,vsm3me,vsm3c")
 	   (const_int 6)
 
-	 (eq_attr "type" "vmpop,vmffs,vssegte")
+	 (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
 	   (const_int 4)]
 	(const_int INVALID_ATTRIBUTE)))
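
---
For readers working through the patterns, here is a small illustrative C sketch
(not part of the patch; the function names and the exact -march string are
invented for the example). Loops of this shape are the kind of source the new
vandn/vrol/vror define_insns describe. Whether GCC actually selects them from
plain C depends on the intrinsics and expanders added elsewhere in this series,
not on this machine-description patch alone; built with something like
-O3 -march=rv64gcv_zvbb it shows the intended mapping:

    #include <stdint.h>

    /* Shape matched by the bitmanip_rotate patterns (vrol.vv / vror.vi):
       a 32-bit rotate right by a constant amount.  */
    void
    rotr7 (uint32_t *dst, const uint32_t *src, int n)
    {
      for (int i = 0; i < n; i++)
        dst[i] = (src[i] >> 7) | (src[i] << 25);
    }

    /* Shape matched by @pred_vandn (vandn.vv): AND with the bitwise
       complement of the second operand, i.e. the (and ... (not ...)) RTL.  */
    void
    andn (uint32_t *dst, const uint32_t *a, const uint32_t *b, int n)
    {
      for (int i = 0; i < n; i++)
        dst[i] = a[i] & ~b[i];
    }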