From patchwork Fri Apr 14 03:25:11 2023
From: Pan Li <pan2.li@intel.com>
To: gcc-patches@gcc.gnu.org
Cc: juzhe.zhong@rivai.ai, kito.cheng@sifive.com, yanzhang.wang@intel.com,
 pan2.li@intel.com
Subject: [PATCH v3] RISC-V: Add test cases for the RVV mask insn shortcut.
Date: Fri, 14 Apr 2023 11:25:11 +0800
Message-Id: <20230414032511.2958280-1-pan2.li@intel.com>
In-Reply-To: <20230414023238.2921142-1-pan2.li@intel.com>
References: <20230414023238.2921142-1-pan2.li@intel.com>
From: Pan Li <pan2.li@intel.com>

There are several shortcut codegen patterns for the RVV mask
instructions.  For example:

  vmxor.mm vd, va, va => vmclr.m vd

We would like to add more optimizations like this, but first we must
add tests for the existing shortcuts, to ensure that future changes
do not break the optimizations already in place.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/mask_insn_shortcut.c: New test.

Signed-off-by: Pan Li <pan2.li@intel.com>
---
 .../riscv/rvv/base/mask_insn_shortcut.c       | 241 ++++++++++++++++++
 1 file changed, 241 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c

diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c b/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
new file mode 100644
index 00000000000..83cc4a1b5a5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
@@ -0,0 +1,241 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64 -O3" } */
+
+#include "riscv_vector.h"
+
+vbool1_t test_shortcut_for_riscv_vmand_case_0(vbool1_t v1, size_t vl) {
+  return __riscv_vmand_mm_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmand_case_1(vbool2_t v1, size_t vl) {
+  return __riscv_vmand_mm_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmand_case_2(vbool4_t v1, size_t vl) {
+  return __riscv_vmand_mm_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmand_case_3(vbool8_t v1, size_t vl) {
+  return __riscv_vmand_mm_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmand_case_4(vbool16_t v1, size_t vl) {
+  return __riscv_vmand_mm_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmand_case_5(vbool32_t v1, size_t vl) {
+  return __riscv_vmand_mm_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmand_case_6(vbool64_t v1, size_t vl) {
+  return __riscv_vmand_mm_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmnand_case_0(vbool1_t v1, size_t vl) {
+  return __riscv_vmnand_mm_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmnand_case_1(vbool2_t v1, size_t vl) {
+  return __riscv_vmnand_mm_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmnand_case_2(vbool4_t v1, size_t vl) {
+  return __riscv_vmnand_mm_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmnand_case_3(vbool8_t v1, size_t vl) {
+  return __riscv_vmnand_mm_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmnand_case_4(vbool16_t v1, size_t vl) {
+  return __riscv_vmnand_mm_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmnand_case_5(vbool32_t v1, size_t vl) {
+  return __riscv_vmnand_mm_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmnand_case_6(vbool64_t v1, size_t vl) {
+  return __riscv_vmnand_mm_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmandn_case_0(vbool1_t v1, size_t vl) {
+  return __riscv_vmandn_mm_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmandn_case_1(vbool2_t v1, size_t vl) {
+  return __riscv_vmandn_mm_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmandn_case_2(vbool4_t v1, size_t vl) {
+  return __riscv_vmandn_mm_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmandn_case_3(vbool8_t v1, size_t vl) {
+  return __riscv_vmandn_mm_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmandn_case_4(vbool16_t v1, size_t vl) {
+  return __riscv_vmandn_mm_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmandn_case_5(vbool32_t v1, size_t vl) {
+  return __riscv_vmandn_mm_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmandn_case_6(vbool64_t v1, size_t vl) {
+  return __riscv_vmandn_mm_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmxor_case_0(vbool1_t v1, size_t vl) {
+  return __riscv_vmxor_mm_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmxor_case_1(vbool2_t v1, size_t vl) {
+  return __riscv_vmxor_mm_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmxor_case_2(vbool4_t v1, size_t vl) {
+  return __riscv_vmxor_mm_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmxor_case_3(vbool8_t v1, size_t vl) {
+  return __riscv_vmxor_mm_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmxor_case_4(vbool16_t v1, size_t vl) {
+  return __riscv_vmxor_mm_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmxor_case_5(vbool32_t v1, size_t vl) {
+  return __riscv_vmxor_mm_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmxor_case_6(vbool64_t v1, size_t vl) {
+  return __riscv_vmxor_mm_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmor_case_0(vbool1_t v1, size_t vl) {
+  return __riscv_vmor_mm_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmor_case_1(vbool2_t v1, size_t vl) {
+  return __riscv_vmor_mm_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmor_case_2(vbool4_t v1, size_t vl) {
+  return __riscv_vmor_mm_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmor_case_3(vbool8_t v1, size_t vl) {
+  return __riscv_vmor_mm_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmor_case_4(vbool16_t v1, size_t vl) {
+  return __riscv_vmor_mm_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmor_case_5(vbool32_t v1, size_t vl) {
+  return __riscv_vmor_mm_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmor_case_6(vbool64_t v1, size_t vl) {
+  return __riscv_vmor_mm_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmnor_case_0(vbool1_t v1, size_t vl) {
+  return __riscv_vmnor_mm_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmnor_case_1(vbool2_t v1, size_t vl) {
+  return __riscv_vmnor_mm_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmnor_case_2(vbool4_t v1, size_t vl) {
+  return __riscv_vmnor_mm_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmnor_case_3(vbool8_t v1, size_t vl) {
+  return __riscv_vmnor_mm_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmnor_case_4(vbool16_t v1, size_t vl) {
+  return __riscv_vmnor_mm_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmnor_case_5(vbool32_t v1, size_t vl) {
+  return __riscv_vmnor_mm_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmnor_case_6(vbool64_t v1, size_t vl) {
+  return __riscv_vmnor_mm_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmorn_case_0(vbool1_t v1, size_t vl) {
+  return __riscv_vmorn_mm_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmorn_case_1(vbool2_t v1, size_t vl) {
+  return __riscv_vmorn_mm_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmorn_case_2(vbool4_t v1, size_t vl) {
+  return __riscv_vmorn_mm_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmorn_case_3(vbool8_t v1, size_t vl) {
+  return __riscv_vmorn_mm_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmorn_case_4(vbool16_t v1, size_t vl) {
+  return __riscv_vmorn_mm_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmorn_case_5(vbool32_t v1, size_t vl) {
+  return __riscv_vmorn_mm_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmorn_case_6(vbool64_t v1, size_t vl) {
+  return __riscv_vmorn_mm_b64(v1, v1, vl);
+}
+
+vbool1_t test_shortcut_for_riscv_vmxnor_case_0(vbool1_t v1, size_t vl) {
+  return __riscv_vmxnor_mm_b1(v1, v1, vl);
+}
+
+vbool2_t test_shortcut_for_riscv_vmxnor_case_1(vbool2_t v1, size_t vl) {
+  return __riscv_vmxnor_mm_b2(v1, v1, vl);
+}
+
+vbool4_t test_shortcut_for_riscv_vmxnor_case_2(vbool4_t v1, size_t vl) {
+  return __riscv_vmxnor_mm_b4(v1, v1, vl);
+}
+
+vbool8_t test_shortcut_for_riscv_vmxnor_case_3(vbool8_t v1, size_t vl) {
+  return __riscv_vmxnor_mm_b8(v1, v1, vl);
+}
+
+vbool16_t test_shortcut_for_riscv_vmxnor_case_4(vbool16_t v1, size_t vl) {
+  return __riscv_vmxnor_mm_b16(v1, v1, vl);
+}
+
+vbool32_t test_shortcut_for_riscv_vmxnor_case_5(vbool32_t v1, size_t vl) {
+  return __riscv_vmxnor_mm_b32(v1, v1, vl);
+}
+
+vbool64_t test_shortcut_for_riscv_vmxnor_case_6(vbool64_t v1, size_t vl) {
+  return __riscv_vmxnor_mm_b64(v1, v1, vl);
+}
+
+/* { dg-final { scan-assembler-not {vmand\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
+/* { dg-final { scan-assembler-not {vmnand\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
+/* { dg-final { scan-assembler-not {vmandn\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
+/* { dg-final { scan-assembler-not {vmxor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
+/* { dg-final { scan-assembler-not {vmor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
+/* { dg-final { scan-assembler-not {vmnor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
+/* { dg-final { scan-assembler-times {vmorn\.mm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 7 } } */
+/* { dg-final { scan-assembler-not {vmxnor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
+/* { dg-final { scan-assembler-times {vmclr\.m\s+v[0-9]+} 14 } } */
+/* { dg-final { scan-assembler-times {vmset\.m\s+v[0-9]+} 7 } } */
+/* { dg-final { scan-assembler-times {vmmv\.m\s+v[0-9]+,\s*v[0-9]+} 14 } } */
+/* { dg-final { scan-assembler-times {vmnot\.m\s+v[0-9]+,\s*v[0-9]+} 14 } } */