From patchwork Mon Nov 20 00:47:56 2023
X-Patchwork-Submitter: Jeff Law
X-Patchwork-Id: 166915
Message-ID: <6d5f8ba7-0c60-4789-87ae-68617ce6ac2c@ventanamicro.com>
Date: Sun, 19 Nov 2023 17:47:56 -0700
From: Jeff Law
Subject: [RFA] New pass for sign/zero extension elimination
To: gcc-patches@gcc.gnu.org
Cc: Jivan Hakobyan

This is work originally started by Joern @ Embecosm.

There's been a long-standing sense that we're generating too many
sign/zero extensions on the RISC-V port.  REE is useful, but it's really
focused on a relatively narrow part of the extension problem.

What Joern's patch does is introduce a new pass which tracks liveness of
chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31
and 32..63.

If it encounters a sign/zero extend that sets bits that are never read,
then it replaces the sign/zero extension with a narrowing subreg.  The
narrowing subreg usually gets eliminated by subsequent passes (it's just
a copy after all).

Jivan has done some analysis and found that it eliminates roughly 1% of
the dynamic instruction stream for x264 as well as some redundant
extensions in the coremark benchmark (both on rv64).  In my own testing,
as I worked through issues on other architectures, I clearly saw it
helping in various places within GCC itself or in the testsuite.

The basic structure is to first do a fairly standard liveness analysis
on the chunks, seeding the initial state with the liveness data from DF.
Once that's stable, we do a final pass to identify the useless
extensions and transform them into narrowing subregs.

A few key points to remember.

For destination processing it is always safe to ignore a destination.
Ignoring a destination merely means that whatever was live after the
given insn will continue to be live before the insn.  What is not safe
is to clear a bit in the LIVENOW bitmap for a destination chunk that is
not set.  This comes into play with things like STRICT_LOW_PART.

For source processing the safe thing to do is to set all the chunks in
a register as live.  It is never safe to fail to process a source
operand.
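As a minimal sketch of the transform (the pseudo register numbers here
are hypothetical), when the pass proves that bits 32..63 of the
destination below are never read:

   (set (reg:DI 140) (zero_extend:DI (reg:SI 139)))

it rewrites the source into a lowpart SUBREG, i.e. a simple copy that
later passes can usually propagate away:

   (set (reg:DI 140) (subreg:DI (reg:SI 139) 0))

Instances of this can be seen by compiling with -fext-dce and
-fdump-rtl-ext_dce; the dump marks them with "Successfully
transformed", which is what the new tests scan for.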
When a destination object is not fully live, we try to transfer that
limited liveness to the source operands.  So for example if bits 16..63
are dead in a destination of a PLUS, we need not mark bits 16..63 as
live for the source operands.  We have to be careful -- consider a shift
count on a target without SHIFT_COUNT_TRUNCATED set.  So we have both a
list of RTL codes where we can transfer liveness and a few codes where
one of the operands may need to be fully live (e.g., a shift count)
while the other input may not need to be fully live (value left
shifted).

Locally we have had this enabled at -O1 and above to encourage testing,
but I'm thinking that for the trunk enabling at -O2 and above is the
right thing to do.

This has (of course) been tested on rv64.  It's also been bootstrapped
and regression tested on x86.  Bootstrap and regression tested (C only)
for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped
and regression tested on ppc, hppa and s390x (C only for those as
well).  It's also been tested on the various crosses in my tester.  So
we've got reasonable coverage of 16-, 32- and 64-bit targets, big and
little endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of
other oddities.

The included tests are for RISC-V only because not all targets are
going to have extraneous extensions.  There are tests from coremark,
x264 and GCC's BZ database.  It probably wouldn't be hard to add aarch64
testcases.  The BZs listed are improved by this patch for aarch64.

Given the amount of work Jivan and I have done, I'm not comfortable
self-approving at this time.  I'd much rather have another set of eyes
on the code.  Hopefully the code is documented well enough for that to
be a useful exercise.

So, no need to work from Pago Pago for this patch.  I may make another
attempt at the eswin conditional move work while working virtually in
Pago Pago though.

Thoughts, comments, recommendations?

Jeff

	PR target/95650
	PR rtl-optimization/96031
	PR rtl-optimization/104387
	PR rtl-optimization/111384

gcc/
	* Makefile.in (OBJS): Add ext-dce.o.
	* common.opt (ext-dce): Add new option.
	* df-scan.cc (df_get_exit_block_use_set): No longer static.
	* df.h (df_get_exit_block_use_set): Prototype.
	* ext-dce.cc: New file.
	* passes.def: Add ext-dce before combine.
	* tree-pass.h (make_pass_ext_dce): Prototype.

gcc/testsuite/
	* gcc.target/riscv/core_bench_list.c: New test.
	* gcc.target/riscv/core_init_matrix.c: New test.
	* gcc.target/riscv/core_list_init.c: New test.
	* gcc.target/riscv/matrix_add_const.c: New test.
	* gcc.target/riscv/mem-extend.c: New test.
	* gcc.target/riscv/pr111384.c: New test.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 753f2f36618..af6f1415507 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1451,6 +1451,7 @@ OBJS = \
 	explow.o \
 	expmed.o \
 	expr.o \
+	ext-dce.o \
 	fibonacci_heap.o \
 	file-prefix-map.o \
 	final.o \
diff --git a/gcc/common.opt b/gcc/common.opt
index d21db5d4a20..141dfdf14fd 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -3766,4 +3766,8 @@ fipa-ra
 Common Var(flag_ipa_ra) Optimization
 Use caller save register across calls if possible.
 
+fext-dce
+Common Var(flag_ext_dce, 1) Optimization Init(0)
+Perform dead code elimination on zero and sign extensions with special dataflow analysis.
+
 ; This comment is to ensure we retain the blank line above.
diff --git a/gcc/df-scan.cc b/gcc/df-scan.cc
index 9515740728c..87729ab0f44 100644
--- a/gcc/df-scan.cc
+++ b/gcc/df-scan.cc
@@ -78,7 +78,6 @@ static void df_get_eh_block_artificial_uses (bitmap);
 
 static void df_record_entry_block_defs (bitmap);
 static void df_record_exit_block_uses (bitmap);
-static void df_get_exit_block_use_set (bitmap);
 static void df_get_entry_block_def_set (bitmap);
 static void df_grow_ref_info (struct df_ref_info *, unsigned int);
 static void df_ref_chain_delete_du_chain (df_ref);
@@ -3642,7 +3641,7 @@ df_epilogue_uses_p (unsigned int regno)
 
 /* Set the bit for regs that are considered being used at the exit.  */
 
-static void
+void
 df_get_exit_block_use_set (bitmap exit_block_uses)
 {
   unsigned int i;
diff --git a/gcc/df.h b/gcc/df.h
index 402657a7076..abcbb097734 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -1091,6 +1091,7 @@ extern bool df_epilogue_uses_p (unsigned int);
 extern void df_set_regs_ever_live (unsigned int, bool);
 extern void df_compute_regs_ever_live (bool);
 extern void df_scan_verify (void);
+extern void df_get_exit_block_use_set (bitmap);
 
 
 /*----------------------------------------------------------------------------
diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
new file mode 100644
index 00000000000..a6fe2683dcd
--- /dev/null
+++ b/gcc/ext-dce.cc
@@ -0,0 +1,880 @@
+/* RTL dead zero/sign extension (code) elimination.
+   Copyright (C) 2023 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "rtl.h"
+#include "tree.h"
+#include "memmodel.h"
+#include "insn-config.h"
+#include "emit-rtl.h"
+#include "recog.h"
+#include "cfganal.h"
+#include "tree-pass.h"
+#include "cfgrtl.h"
+#include "rtl-iter.h"
+#include "df.h"
+#include "print-rtl.h"
+
+/* We consider four bit groups for liveness:
+   bit 0..7   (least significant byte)
+   bit 8..15  (second least significant byte)
+   bit 16..31
+   bit 32..BITS_PER_WORD-1  */
+
+/* Note this pass could be used to narrow memory loads too.  It's
+   not clear if that's profitable or not in general.  */
+
+#define UNSPEC_P(X) (GET_CODE (X) == UNSPEC || GET_CODE (X) == UNSPEC_VOLATILE)
+
+/* If we know the destination of CODE only uses some low bits
+   (say just the QI bits of an SI operation), then return true
+   if we can propagate the need for just the subset of bits
+   from the destination to the sources.  */
+
+static bool
+safe_for_live_propagation (rtx_code code)
+{
+  /* First handle rtx classes which as a whole are known to
+     be either safe or unsafe.  */
+  switch (GET_RTX_CLASS (code))
+    {
+    case RTX_OBJ:
+      return true;
+
+    case RTX_COMPARE:
+    case RTX_COMM_COMPARE:
+    case RTX_TERNARY:
+      return false;
+
+    default:
+      break;
+    }
+
+  /* What's left are specific codes.  We only need to identify those
+     which are safe.  */
+  switch (code)
+    {
+    /* These are trivially safe.  */
+    case SUBREG:
+    case NOT:
+    case ZERO_EXTEND:
+    case SIGN_EXTEND:
+    case TRUNCATE:
+    case SS_TRUNCATE:
+    case US_TRUNCATE:
+    case PLUS:
+    case MULT:
+    case SS_MULT:
+    case US_MULT:
+    case SMUL_HIGHPART:
+    case UMUL_HIGHPART:
+    case AND:
+    case IOR:
+    case XOR:
+    case SS_PLUS:
+    case US_PLUS:
+      return true;
+
+    /* We can propagate for the shifted operand, but not the shift
+       count.  The count is handled specially.  */
+    case SS_ASHIFT:
+    case US_ASHIFT:
+    case ASHIFT:
+      return true;
+
+    /* There may be other safe codes.  If so they can be added
+       individually when discovered.  */
+    default:
+      return false;
+    }
+}
+
+/* Clear bits in LIVENOW and set bits in LIVE_TMP for objects
+   set/clobbered by INSN.
+
+   Conceptually it is always safe to ignore a particular destination
+   here as that will result in more chunks of data being considered
+   live.  That's what happens when we "continue" the main loop when
+   we see something we don't know how to handle such as a vector
+   mode destination.
+
+   The more accurate we are in identifying what objects (and chunks
+   within an object) are set by INSN, the more aggressive the
+   optimization phase during use handling will be.  */
+
+static void
+ext_dce_process_sets (rtx insn, bitmap livenow, bitmap live_tmp)
+{
+  subrtx_iterator::array_type array;
+  rtx pat = PATTERN (insn);
+  FOR_EACH_SUBRTX (iter, array, pat, NONCONST)
+    {
+      const_rtx x = *iter;
+
+      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
+      if (x == NULL_RTX)
+	continue;
+
+      if (UNSPEC_P (x))
+	continue;
+
+      if (GET_CODE (x) == SET || GET_CODE (x) == CLOBBER)
+	{
+	  unsigned bit = 0;
+	  x = SET_DEST (x);
+
+	  /* We don't support vector destinations or destinations
+	     wider than DImode.  It is safe to continue this loop.
+	     At worst, it will leave things live which could have
+	     been made dead.  */
+	  if (VECTOR_MODE_P (GET_MODE (x)) || GET_MODE (x) > E_DImode)
+	    continue;
+
+	  /* We could have (strict_low_part (subreg ...)).  We can not just
+	     strip the STRICT_LOW_PART as that would result in clearing
+	     some bits in LIVENOW that are still live.  So process the
+	     STRICT_LOW_PART specially.  */
+	  if (GET_CODE (x) == STRICT_LOW_PART)
+	    {
+	      x = XEXP (x, 0);
+
+	      /* The only valid operand of a STRICT_LOW_PART is a non
+		 paradoxical SUBREG.  */
+	      gcc_assert (SUBREG_P (x)
+			  && !paradoxical_subreg_p (x)
+			  && SUBREG_BYTE (x).is_constant ());
+
+	      /* I think we should always see a REG here.  But let's
+		 be sure.  */
+	      gcc_assert (REG_P (SUBREG_REG (x)));
+
+	      /* We don't track values larger than DImode.  */
+	      gcc_assert (GET_MODE (x) <= E_DImode);
+
+	      /* But the inner mode might be larger, just punt for
+		 that case.  Remember, we can not just continue to process
+		 the inner RTXs due to the STRICT_LOW_PART.  */
+	      if (GET_MODE (SUBREG_REG (x)) > E_DImode)
+		{
+		  /* Skip the subrtxs of the STRICT_LOW_PART.  We can't
+		     process them because it'll set objects as no longer
+		     live when they are in fact still live.  */
+		  iter.skip_subrtxes ();
+		  continue;
+		}
+
+	      /* Transfer all the LIVENOW bits for X into LIVE_TMP.  */
+	      HOST_WIDE_INT rn = REGNO (SUBREG_REG (x));
+	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
+		if (bitmap_bit_p (livenow, i))
+		  bitmap_set_bit (live_tmp, i);
+
+	      /* The mode of the SUBREG tells us how many bits we can
+		 clear.  */
+	      machine_mode mode = GET_MODE (x);
+	      HOST_WIDE_INT size = GET_MODE_SIZE (mode).to_constant ();
+	      bitmap_clear_range (livenow, 4 * rn, size);
+
+	      /* We have fully processed this destination.  */
+	      iter.skip_subrtxes ();
+	      continue;
+	    }
+
+	  /* We can safely strip a paradoxical subreg.  The inner mode will
+	     be narrower than the outer mode.  We'll clear fewer bits in
+	     LIVENOW than we'd like, but that's always safe.  */
+	  if (paradoxical_subreg_p (x))
+	    x = XEXP (x, 0);
+
+	  /* If we have a SUBREG that is too wide, just continue the loop
+	     and let the iterator go down into SUBREG_REG.  */
+	  if (SUBREG_P (x) && GET_MODE (SUBREG_REG (x)) > E_DImode)
+	    continue;
+
+	  /* Phase one of destination handling.  First remove any wrapper
+	     such as SUBREG or ZERO_EXTRACT.  */
+	  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (x));
+	  if (SUBREG_P (x)
+	      && !paradoxical_subreg_p (x)
+	      && SUBREG_BYTE (x).is_constant ())
+	    {
+	      bit = SUBREG_BYTE (x).to_constant () * BITS_PER_UNIT;
+	      if (WORDS_BIG_ENDIAN)
+		bit = (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x))).to_constant ()
+		       - GET_MODE_BITSIZE (GET_MODE (x)).to_constant () - bit);
+
+	      /* Catch big endian correctness issues rather than triggering
+		 undefined behavior.  */
+	      gcc_assert (bit < sizeof (HOST_WIDE_INT) * 8);
+
+	      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
+	      if (!mask)
+		mask = -0x100000000ULL;
+	      x = SUBREG_REG (x);
+	    }
+
+	  if (GET_CODE (x) == ZERO_EXTRACT)
+	    {
+	      /* If either the size or the start position is unknown,
+		 then assume we know nothing about what is overwritten.
+		 This is overly conservative, but safe.  */
+	      if (!CONST_INT_P (XEXP (x, 1)) || !CONST_INT_P (XEXP (x, 2)))
+		continue;
+	      mask = (1ULL << INTVAL (XEXP (x, 1))) - 1;
+	      bit = INTVAL (XEXP (x, 2));
+	      if (BITS_BIG_ENDIAN)
+		bit = (GET_MODE_BITSIZE (GET_MODE (x))
+		       - INTVAL (XEXP (x, 1)) - bit).to_constant ();
+	      x = XEXP (x, 0);
+
+	      /* We can certainly get (zero_extract (subreg ...)).  The
+		 mode of the zero_extract and location should be sufficient
+		 and we can just strip the SUBREG.  */
+	      if (GET_CODE (x) == SUBREG)
+		x = SUBREG_REG (x);
+	    }
+
+	  /* BIT >= 64 indicates something went horribly wrong.  */
+	  gcc_assert (bit <= 63);
+
+	  /* Now handle the actual object that was changed.  */
+	  if (REG_P (x))
+	    {
+	      /* Transfer the appropriate bits from LIVENOW into
+		 LIVE_TMP.  */
+	      HOST_WIDE_INT rn = REGNO (x);
+	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
+		if (bitmap_bit_p (livenow, i))
+		  bitmap_set_bit (live_tmp, i);
+
+	      /* Now clear the bits known written by this instruction.
+		 Note that BIT need not be a power of two, consider a
+		 ZERO_EXTRACT destination.  */
+	      int start = (bit < 8 ? 0 : bit < 16 ? 1 : bit < 32 ? 2 : 3);
+	      int end = ((mask & ~0xffffffffULL) ? 4
+			 : (mask & 0xffff0000ULL) ? 3
+			 : (mask & 0xff00) ? 2 : 1);
+	      bitmap_clear_range (livenow, 4 * rn + start, end - start);
+	    }
+	  /* Some ports generate (clobber (const_int)).  */
+	  else if (CONST_INT_P (x))
+	    continue;
+	  else
+	    gcc_assert (CALL_P (insn)
+			|| MEM_P (x)
+			|| x == pc_rtx
+			|| GET_CODE (x) == SCRATCH);
+
+	  iter.skip_subrtxes ();
+	}
+      else if (GET_CODE (x) == COND_EXEC)
+	{
+	  /* This isn't ideal, but may not be so bad in practice.  */
+	  iter.skip_subrtxes ();
+	}
+    }
+}
+
+/* INSN has a sign/zero extended source inside SET that we will
+   try to turn into a SUBREG.  */
+static void
+ext_dce_try_optimize_insn (rtx_insn *insn, rtx set, bitmap changed_pseudos)
+{
+  rtx src = SET_SRC (set);
+  rtx inner = XEXP (src, 0);
+
+  /* Avoid (subreg (mem)) and other constructs which may be valid RTL, but
+     are not useful for this optimization.  */
+  if (!REG_P (inner) && !SUBREG_P (inner))
+    return;
+
+  rtx new_pattern;
+  if (dump_file)
+    {
+      fprintf (dump_file, "Processing insn:\n");
+      dump_insn_slim (dump_file, insn);
+      fprintf (dump_file, "Trying to simplify pattern:\n");
+      print_rtl_single (dump_file, SET_SRC (set));
+    }
+
+  new_pattern = simplify_gen_subreg (GET_MODE (src), inner,
+				     GET_MODE (inner), 0);
+  /* simplify_gen_subreg may fail in which case NEW_PATTERN will be NULL.
+     We must not pass that as a replacement pattern to validate_change.  */
+  if (new_pattern)
+    {
+      int ok = validate_change (insn, &SET_SRC (set), new_pattern, false);
+
+      if (ok)
+	bitmap_set_bit (changed_pseudos, REGNO (SET_DEST (set)));
+
+      if (dump_file)
+	{
+	  if (ok)
+	    fprintf (dump_file, "Successfully transformed to:\n");
+	  else
+	    fprintf (dump_file, "Failed transformation to:\n");
+
+	  print_rtl_single (dump_file, new_pattern);
+	  fprintf (dump_file, "\n");
+	}
+    }
+  else
+    {
+      if (dump_file)
+	fprintf (dump_file, "Unable to generate valid SUBREG expression.\n");
+    }
+}
+
+/* Some operators imply that their second operand is fully live,
+   regardless of how many bits in the output are live.  An example
+   would be the shift count on a target without SHIFT_COUNT_TRUNCATED
+   defined.
+
+   Return TRUE if CODE is such an operator.  FALSE otherwise.  */
+
+static bool
+binop_implies_op2_fully_live (rtx_code code)
+{
+  switch (code)
+    {
+    case ASHIFT:
+    case LSHIFTRT:
+    case ASHIFTRT:
+    case ROTATE:
+    case ROTATERT:
+      return !SHIFT_COUNT_TRUNCATED;
+
+    default:
+      return false;
+    }
+}
+
+/* Process uses in INSN.  Set appropriate bits in LIVENOW for any chunks
+   of pseudos that become live, potentially filtering using bits from
+   LIVE_TMP.
+
+   If MODIFY is true, then optimize sign/zero extensions to SUBREGs when
+   the extended bits are never read and mark pseudos which had extensions
+   eliminated in CHANGED_PSEUDOS.  */
+
+static void
+ext_dce_process_uses (rtx insn, bitmap livenow, bitmap live_tmp,
+		      bool modify, bitmap changed_pseudos)
+{
+  /* A nonlocal goto implicitly uses the frame pointer.  */
+  if (JUMP_P (insn) && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
+    {
+      bitmap_set_range (livenow, FRAME_POINTER_REGNUM * 4, 4);
+      if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
+	bitmap_set_range (livenow, HARD_FRAME_POINTER_REGNUM * 4, 4);
+    }
+
+  subrtx_var_iterator::array_type array_var;
+  rtx pat = PATTERN (insn);
+  FOR_EACH_SUBRTX_VAR (iter, array_var, pat, NONCONST)
+    {
+      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
+      rtx x = *iter;
+      if (x == NULL_RTX)
+	continue;
+
+      /* So the basic idea in this FOR_EACH_SUBRTX_VAR loop is to
+	 handle SETs explicitly, possibly propagating live information
+	 into the uses.
+
+	 We may continue the loop at various points which will cause
+	 iteration into the next level of RTL.  Breaking from the loop
+	 is never safe as it can lead us to fail to process some of the
+	 RTL and thus not make objects live when necessary.  */
+      enum rtx_code xcode = GET_CODE (x);
+      if (xcode == SET)
+	{
+	  const_rtx dst = SET_DEST (x);
+	  rtx src = SET_SRC (x);
+	  const_rtx y;
+	  unsigned HOST_WIDE_INT bit = 0;
+
+	  /* The code of the RHS of a SET.  */
+	  enum rtx_code code = GET_CODE (src);
+
+	  /* ?!? How much of this should mirror SET handling, potentially
+	     being shared?  */
+	  if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())
+	    {
+	      bit = SUBREG_BYTE (dst).to_constant () * BITS_PER_UNIT;
+	      if (WORDS_BIG_ENDIAN)
+		bit = (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (dst))).to_constant ()
+		       - GET_MODE_BITSIZE (GET_MODE (dst)).to_constant () - bit);
+	      if (bit >= HOST_BITS_PER_WIDE_INT)
+		bit = HOST_BITS_PER_WIDE_INT - 1;
+	      dst = SUBREG_REG (dst);
+	    }
+	  else if (GET_CODE (dst) == ZERO_EXTRACT
+		   || GET_CODE (dst) == STRICT_LOW_PART)
+	    dst = XEXP (dst, 0);
+
+	  /* Main processing of the uses.  Two major goals here.
+
+	     First, we want to try and propagate liveness (or the lack
+	     thereof) from the destination register to the source
+	     register(s).
+
+	     Second, if the source is an extension, try to optimize
+	     it into a SUBREG.  The SUBREG form indicates we don't
+	     care about the upper bits and will usually be copy
+	     propagated away.
+
+	     If we fail to handle something in here, the expectation
+	     is the iterator will dive into the sub-components and
+	     mark all the chunks in any found REGs as live.  */
+	  if (REG_P (dst) && safe_for_live_propagation (code))
+	    {
+	      /* Create a mask representing the bits of this output
+		 operand that are live after this insn.  We can use
+		 this information to refine the live in state of
+		 inputs to this insn in many cases.
+
+		 We have to do this on a per SET basis, we might have
+		 an INSN with multiple SETS, some of which can narrow
+		 the source operand liveness, some of which may not.  */
+	      unsigned HOST_WIDE_INT dst_mask = 0;
+	      HOST_WIDE_INT rn = REGNO (dst);
+	      unsigned HOST_WIDE_INT mask_array[]
+		= { 0xff, 0xff00, 0xffff0000ULL, -0x100000000ULL };
+	      for (int i = 0; i < 4; i++)
+		if (bitmap_bit_p (live_tmp, 4 * rn + i))
+		  dst_mask |= mask_array[i];
+	      dst_mask >>= bit;
+
+	      /* ??? Could also handle ZERO_EXTRACT / SIGN_EXTRACT
+		 of the source specially to improve optimization.  */
+	      if (code == SIGN_EXTEND || code == ZERO_EXTEND)
+		{
+		  rtx inner = XEXP (src, 0);
+		  unsigned HOST_WIDE_INT src_mask
+		    = GET_MODE_MASK (GET_MODE (inner));
+
+		  /* DST_MASK could be zero if we had something in the SET
+		     that we couldn't handle.  */
+		  if (modify && dst_mask && (dst_mask & ~src_mask) == 0)
+		    ext_dce_try_optimize_insn (as_a <rtx_insn *> (insn),
+					       x, changed_pseudos);
+
+		  dst_mask &= src_mask;
+		  src = XEXP (src, 0);
+		  code = GET_CODE (src);
+		}
+
+	      /* Optimization is done at this point.  We just want to make
+		 sure everything that should get marked as live is marked
+		 from here onward.  */
+
+	      /* ?!? What is the point of this adjustment to DST_MASK?  */
+	      if (code == PLUS || code == MINUS
+		  || code == MULT || code == ASHIFT)
+		dst_mask
+		  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
+
+	      /* We will handle the other operand of a binary operator
+		 at the bottom of the loop by resetting Y.  */
+	      if (BINARY_P (src))
+		y = XEXP (src, 0);
+	      else
+		y = src;
+
+	      /* We're inside a SET and want to process the source operands
+		 making things live.  Breaking from this loop will cause
+		 the iterator to work on sub-rtxs, so it is safe to break
+		 if we see something we don't know how to handle.  */
+	      for (;;)
+		{
+		  /* Strip an outer STRICT_LOW_PART or paradoxical subreg.
+		     That has the effect of making the whole referenced
+		     register live.  We might be able to avoid that for
+		     STRICT_LOW_PART at some point.  */
+		  if (GET_CODE (y) == STRICT_LOW_PART
+		      || paradoxical_subreg_p (y))
+		    y = XEXP (y, 0);
+		  else if (SUBREG_P (y) && SUBREG_BYTE (y).is_constant ())
+		    {
+		      /* For anything but (subreg (reg)), break the inner loop
+			 and process normally (conservatively).  */
+		      if (!REG_P (SUBREG_REG (y)))
+			break;
+		      bit = (SUBREG_BYTE (y).to_constant () * BITS_PER_UNIT);
+		      if (WORDS_BIG_ENDIAN)
+			bit = (GET_MODE_BITSIZE
+				 (GET_MODE (SUBREG_REG (y))).to_constant ()
+			       - GET_MODE_BITSIZE (GET_MODE (y)).to_constant ()
+			       - bit);
+		      if (dst_mask)
+			{
+			  dst_mask <<= bit;
+			  if (!dst_mask)
+			    dst_mask = -0x100000000ULL;
+			}
+		      y = SUBREG_REG (y);
+		    }
+
+		  if (REG_P (y))
+		    {
+		      /* We have found the use of a register.  We need to mark
+			 the appropriate chunks of the register live.  The mode
+			 of the REG is a starting point.  We may refine that
+			 based on what chunks in the output were live.  */
+		      rn = 4 * REGNO (y);
+		      unsigned HOST_WIDE_INT tmp_mask = dst_mask;
+
+		      /* If the RTX code for the SET_SRC is not one we can
+			 propagate destination liveness through, then just
+			 set the mask to the mode's mask.  */
+		      if (!safe_for_live_propagation (code))
+			tmp_mask = GET_MODE_MASK (GET_MODE (y));
+
+		      if (tmp_mask & 0xff)
+			bitmap_set_bit (livenow, rn);
+		      if (tmp_mask & 0xff00)
+			bitmap_set_bit (livenow, rn + 1);
+		      if (tmp_mask & 0xffff0000ULL)
+			bitmap_set_bit (livenow, rn + 2);
+		      if (tmp_mask & -0x100000000ULL)
+			bitmap_set_bit (livenow, rn + 3);
+
+		      /* Some operators imply their second operand
+			 is fully live, break this inner loop which
+			 will cause the iterator to descend into the
+			 sub-rtxs outside the SET processing.  */
+		      if (binop_implies_op2_fully_live (code))
+			break;
+		    }
+		  else if (!CONSTANT_P (y))
+		    break;
+		  /* We might have (ashift (const_int 1) (reg...))  */
+		  else if (CONSTANT_P (y)
+			   && binop_implies_op2_fully_live (GET_CODE (src)))
+		    break;
+
+		  /* If this was anything but a binary operand, break the inner
+		     loop.  This is conservatively correct as it will cause the
+		     iterator to look at the sub-rtxs outside the SET context.  */
+		  if (!BINARY_P (src))
+		    break;
+
+		  /* We processed the first operand of a binary operator.  Now
+		     handle the second.  */
+		  y = XEXP (src, 1), src = pc_rtx;
+		}
+
+	      /* These are leaf nodes, no need to iterate down into them.  */
+	      if (REG_P (y) || CONSTANT_P (y))
+		iter.skip_subrtxes ();
+	    }
+	}
+      /* If we are reading the low part of a SUBREG, then we can
+	 refine liveness of the input register, otherwise let the
+	 iterator continue into SUBREG_REG.  */
+      else if (xcode == SUBREG
+	       && REG_P (SUBREG_REG (x))
+	       && subreg_lowpart_p (x)
+	       && GET_MODE_BITSIZE (GET_MODE (x)).is_constant ()
+	       && GET_MODE_BITSIZE (GET_MODE (x)).to_constant () <= 32)
+	{
+	  HOST_WIDE_INT size = GET_MODE_BITSIZE (GET_MODE (x)).to_constant ();
+	  HOST_WIDE_INT rn = 4 * REGNO (SUBREG_REG (x));
+
+	  bitmap_set_bit (livenow, rn);
+	  if (size > 8)
+	    bitmap_set_bit (livenow, rn + 1);
+	  if (size > 16)
+	    bitmap_set_bit (livenow, rn + 2);
+	  if (size > 32)
+	    bitmap_set_bit (livenow, rn + 3);
+	  iter.skip_subrtxes ();
+	}
+      /* If we have a register reference that is not otherwise handled,
+	 just assume all the chunks are live.  */
+      else if (REG_P (x))
+	bitmap_set_range (livenow, REGNO (x) * 4, 4);
+    }
+}
+
+/* Process a single basic block BB with current liveness information
+   in LIVENOW, returning updated liveness information.
+
+   If MODIFY is true, then this is the last pass and unnecessary
+   extensions should be eliminated when possible.  If an extension
+   is removed, the source pseudo is marked in CHANGED_PSEUDOS.  */
+
+static bitmap
+ext_dce_process_bb (basic_block bb, bitmap livenow,
+		    bool modify, bitmap changed_pseudos)
+{
+  rtx_insn *insn;
+
+  FOR_BB_INSNS_REVERSE (bb, insn)
+    {
+      if (!NONDEBUG_INSN_P (insn))
+	continue;
+
+      /* Live-out state of the destination of this insn.  We can
+	 use this to refine the live-in state of the sources of
+	 this insn in many cases.  */
+      bitmap live_tmp = BITMAP_ALLOC (NULL);
+
+      /* First process any sets/clobbers in INSN.  */
+      ext_dce_process_sets (insn, livenow, live_tmp);
+
+      /* CALL_INSNs need processing their fusage data.  */
+      if (GET_CODE (insn) == CALL_INSN)
+	ext_dce_process_sets (CALL_INSN_FUNCTION_USAGE (insn),
+			      livenow, live_tmp);
+
+      /* And now uses, optimizing away SIGN/ZERO extensions as we go.  */
+      ext_dce_process_uses (insn, livenow, live_tmp, modify, changed_pseudos);
+
+      /* And process fusage data for the use as well.  */
+      if (GET_CODE (insn) == CALL_INSN)
+	{
+	  if (!FAKE_CALL_P (insn))
+	    bitmap_set_range (livenow, STACK_POINTER_REGNUM * 4, 4);
+
+	  /* If this is not a call to a const function, then assume it
+	     can read any global register.  */
+	  if (!RTL_CONST_CALL_P (insn))
+	    for (unsigned i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+	      if (global_regs[i])
+		bitmap_set_range (livenow, i * 4, 4);
+
+	  ext_dce_process_uses (CALL_INSN_FUNCTION_USAGE (insn),
+				livenow, live_tmp, modify, changed_pseudos);
+	}
+
+      BITMAP_FREE (live_tmp);
+    }
+  return livenow;
+}
+
+/* We optimize away sign/zero extensions in this pass and replace
+   them with SUBREGs indicating certain bits are don't cares.
+
+   This changes the SUBREG_PROMOTED_VAR_P state of the object.
+   It is fairly painful to fix this on the fly, so we have
+   recorded which pseudos are affected and we look for SUBREGs
+   of those pseudos and fix them up.  */
+
+static void
+reset_subreg_promoted_p (bitmap changed_pseudos)
+{
+  /* If we removed an extension, that changed the promoted state
+     of the destination of that extension.  Thus we need to go
+     find any SUBREGs that reference that pseudo and adjust their
+     SUBREG_PROMOTED_P state.  */
+  for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
+    {
+      if (!NONDEBUG_INSN_P (insn))
+	continue;
+
+      rtx pat = PATTERN (insn);
+      subrtx_var_iterator::array_type array;
+      FOR_EACH_SUBRTX_VAR (iter, array, pat, NONCONST)
+	{
+	  rtx sub = *iter;
+
+	  /* We only care about SUBREGs.  */
+	  if (GET_CODE (sub) != SUBREG)
+	    continue;
+
+	  const_rtx x = SUBREG_REG (sub);
+
+	  /* We only care if the inner object is a REG.  */
+	  if (!REG_P (x))
+	    continue;
+
+	  /* And only if the SUBREG is a promoted var.  */
+	  if (!SUBREG_PROMOTED_VAR_P (sub))
+	    continue;
+
+	  if (bitmap_bit_p (changed_pseudos, REGNO (x)))
+	    SUBREG_PROMOTED_VAR_P (sub) = 0;
+	}
+    }
+}
+
+/* Use lifetime analysis to identify extensions that set bits that
+   are never read.  Turn such extensions into SUBREGs instead which
+   can often be propagated away.  */
+
+static void
+ext_dce (void)
+{
+  basic_block bb, *worklist, *qin, *qout, *qend;
+  unsigned int qlen;
+  vec<bitmap_head> livein;
+  bitmap livenow;
+  bitmap changed_pseudos;
+
+  livein.create (last_basic_block_for_fn (cfun));
+  livein.quick_grow_cleared (last_basic_block_for_fn (cfun));
+  for (int i = 0; i < last_basic_block_for_fn (cfun); i++)
+    bitmap_initialize (&livein[i], &bitmap_default_obstack);
+
+  auto_bitmap refs (&bitmap_default_obstack);
+  df_get_exit_block_use_set (refs);
+
+  unsigned i;
+  bitmap_iterator bi;
+  EXECUTE_IF_SET_IN_BITMAP (refs, 0, i, bi)
+    {
+      for (int j = 0; j < 4; j++)
+	bitmap_set_bit (&livein[EXIT_BLOCK], i * 4 + j);
+    }
+
+  livenow = BITMAP_ALLOC (NULL);
+  changed_pseudos = BITMAP_ALLOC (NULL);
+
+  worklist
+    = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
+
+  int modify = 0;
+
+  do
+    {
+      qin = qout = worklist;
+
+      /* Put every block on the worklist.  */
+      int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+      int n = inverted_rev_post_order_compute (cfun, rpo);
+      for (int i = 0; i < n; ++i)
+	{
+	  bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
+	  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
+	      || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
+	    continue;
+	  *qin++ = bb;
+	  bb->aux = bb;
+	}
+      free (rpo);
+
+      qin = worklist;
+      qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+      qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
+
+      /* Iterate until the worklist is empty.  */
+      while (qlen)
+	{
+	  /* Take the first entry off the worklist.  */
+	  bb = *qout++;
+	  qlen--;
+
+	  if (qout >= qend)
+	    qout = worklist;
+
+	  /* Clear the aux field of this block so that it can be added to
+	     the worklist again if necessary.  */
+	  bb->aux = NULL;
+
+	  bitmap_clear (livenow);
+	  /* Make everything live that's live in the successors.  */
+	  edge_iterator ei;
+	  edge e;
+
+	  FOR_EACH_EDGE (e, ei, bb->succs)
+	    bitmap_ior_into (livenow, &livein[e->dest->index]);
+
+	  livenow = ext_dce_process_bb (bb, livenow,
+					modify > 0, changed_pseudos);
+
+	  if (!bitmap_equal_p (&livein[bb->index], livenow))
+	    {
+	      gcc_assert (!modify);
+	      bitmap tmp = BITMAP_ALLOC (NULL);
+	      gcc_assert (!bitmap_and_compl (tmp, &livein[bb->index], livenow));
+
+	      bitmap_copy (&livein[bb->index], livenow);
+
+	      edge_iterator ei;
+	      edge e;
+
+	      FOR_EACH_EDGE (e, ei, bb->preds)
+		if (!e->src->aux && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
+		  {
+		    *qin++ = e->src;
+		    e->src->aux = e;
+		    qlen++;
+		    if (qin >= qend)
+		      qin = worklist;
+		  }
+	    }
+	}
+    }
+  while (!modify++);
+
+  reset_subreg_promoted_p (changed_pseudos);
+
+  /* Clean up.  */
+  BITMAP_FREE (changed_pseudos);
+  BITMAP_FREE (livenow);
+  unsigned len = livein.length ();
+  for (unsigned i = 0; i < len; i++)
+    bitmap_clear (&livein[i]);
+  livein.release ();
+  clear_aux_for_blocks ();
+  free (worklist);
+}
+
+namespace {
+
+const pass_data pass_data_ext_dce =
+{
+  RTL_PASS, /* type */
+  "ext_dce", /* name */
+  OPTGROUP_NONE, /* optinfo_flags */
+  TV_NONE, /* tv_id */
+  PROP_cfglayout, /* properties_required */
+  0, /* properties_provided */
+  0, /* properties_destroyed */
+  0, /* todo_flags_start */
+  TODO_df_finish, /* todo_flags_finish */
+};
+
+class pass_ext_dce : public rtl_opt_pass
+{
+public:
+  pass_ext_dce (gcc::context *ctxt)
+    : rtl_opt_pass (pass_data_ext_dce, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  virtual bool gate (function *) { return optimize > 0; }
+  virtual unsigned int execute (function *)
+    {
+      ext_dce ();
+      return 0;
+    }
+
+}; // class pass_ext_dce
+
+} // anon namespace
+
+rtl_opt_pass *
+make_pass_ext_dce (gcc::context *ctxt)
+{
+  return new pass_ext_dce (ctxt);
+}
diff --git a/gcc/passes.def b/gcc/passes.def
index 1e1950bdb39..c075c70d42c 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -487,6 +487,7 @@ along with GCC; see the file COPYING3.  If not see
       NEXT_PASS (pass_inc_dec);
       NEXT_PASS (pass_initialize_regs);
       NEXT_PASS (pass_ud_rtl_dce);
+      NEXT_PASS (pass_ext_dce);
       NEXT_PASS (pass_combine);
       NEXT_PASS (pass_if_after_combine);
       NEXT_PASS (pass_jump_after_combine);
diff --git a/gcc/testsuite/gcc.target/riscv/core_bench_list.c b/gcc/testsuite/gcc.target/riscv/core_bench_list.c
new file mode 100644
index 00000000000..957e9c841ed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_bench_list.c
@@ -0,0 +1,15 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+short
+core_bench_list (int N) {
+
+  short a = 0;
+  for (int i = 0; i < 4; i++) {
+    if (i > N) {
+      a++;
+    }
+  }
+  return a * 4;
+}
diff --git a/gcc/testsuite/gcc.target/riscv/core_init_matrix.c b/gcc/testsuite/gcc.target/riscv/core_init_matrix.c
new file mode 100644
index 00000000000..9289244c71f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_init_matrix.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+core_init_matrix(short* A, short* B, int seed) {
+  int order = 1;
+
+  for (int i = 0; i < seed; i++) {
+    for (int j = 0; j < seed; j++) {
+      short val = seed + order;
+      B[i] = val;
+      A[i] = val;
+      order++;
+    }
+  }
+}
diff --git a/gcc/testsuite/gcc.target/riscv/core_list_init.c b/gcc/testsuite/gcc.target/riscv/core_list_init.c
new file mode 100644
index 00000000000..2f36dae85aa
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_list_init.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+unsigned short
+core_list_init (int size, short seed) {
+
+  for (int i = 0; i < size; i++) {
+    unsigned short datpat = ((unsigned short)(seed ^ i) & 0xf);
+    unsigned short dat = (datpat << 3) | (i & 0x7);
+    if (i > seed) {
+      return dat;
+    }
+  }
+
+  return 0;
+
+}
diff --git a/gcc/testsuite/gcc.target/riscv/matrix_add_const.c b/gcc/testsuite/gcc.target/riscv/matrix_add_const.c
new file mode 100644
index 00000000000..9a2dd53b17a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/matrix_add_const.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+matrix_add_const(int N, short *A, short val)
+{
+  for (int j = 0; j < N; j++) {
+    A[j] += val;
+  }
+}
diff --git a/gcc/testsuite/gcc.target/riscv/mem-extend.c b/gcc/testsuite/gcc.target/riscv/mem-extend.c
new file mode 100644
index 00000000000..c67f12dfc35
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/mem-extend.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zbb" } */
+/* { dg-skip-if "" { *-*-* } { "-O0" } } */
+
+void
+foo(short *d, short *tmp) {
+  int x = d[0] + d[1];
+  int y = d[2] + d[3];
+  tmp[0] = x + y;
+  tmp[1] = x - y;
+}
+
+/* { dg-final { scan-assembler-not {\mzext\.h\M} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/pr111384.c b/gcc/testsuite/gcc.target/riscv/pr111384.c
new file mode 100644
index 00000000000..a4e77d4aeb6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/pr111384.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+foo(unsigned int src, unsigned short *dst1, unsigned short *dst2)
+{
+  *dst1 = src;
+  *dst2 = src;
+}
+
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index 09e6ada5b2f..773301d731f 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -591,6 +591,7 @@ extern rtl_opt_pass *make_pass_reginfo_init (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_inc_dec (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_stack_ptr_mod (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_initialize_regs (gcc::context *ctxt);
+extern rtl_opt_pass *make_pass_ext_dce (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_combine (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_if_after_combine (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_jump_after_combine (gcc::context *ctxt);