From patchwork Thu Nov 30 21:31:49 2023
X-Patchwork-Submitter: Manos Anagnostakis
X-Patchwork-Id: 172160
From: Manos Anagnostakis
To: gcc-patches@gcc.gnu.org
Cc: Philipp Tomsich, Richard Sandiford, Manos Anagnostakis, Manolis Tsamis
Subject: [PATCH v3] aarch64: New RTL optimization pass avoid-store-forwarding.
Date: Thu, 30 Nov 2023 23:31:49 +0200
Message-Id: <20231130213149.25254-1-manos.anagnostakis@vrull.eu>

This is an RTL pass that detects store forwarding from stores to larger
loads (load pairs).

This optimization is SPEC2017-driven and was found, through testing on
ampere1/ampere1a machines, to be beneficial for some benchmarks.

For example, it can transform cases like

str  d5, [sp, #320]
fmul d5, d31, d29
ldp  d31, d17, [sp, #312] # Large load from small store

to

str  d5, [sp, #320]
fmul d5, d31, d29
ldr  d31, [sp, #312]
ldr  d17, [sp, #320]

Currently, the pass is disabled by default on all architectures and is
enabled only by a target-specific option.  If it proves beneficial enough,
it can later be enabled by default on ampere1/ampere1a, and possibly on
other architectures, without having to be turned on through this option.

Bootstrapped and regtested on aarch64-linux.

gcc/ChangeLog:

	* config.gcc: Add aarch64-store-forwarding.o to extra_objs.
	* config/aarch64/aarch64-passes.def (INSERT_PASS_AFTER): New pass.
	* config/aarch64/aarch64-protos.h
	(make_pass_avoid_store_forwarding): Declare.
	* config/aarch64/aarch64.opt (mavoid-store-forwarding): New option.
	(aarch64-store-forwarding-threshold): New param.
	* config/aarch64/t-aarch64: Add aarch64-store-forwarding.o.
	* doc/invoke.texi: Document the new option and the new param.
	* config/aarch64/aarch64-store-forwarding.cc: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/ldp_ssll_no_overlap_address.c: New test.
	* gcc.target/aarch64/ldp_ssll_no_overlap_offset.c: New test.
	* gcc.target/aarch64/ldp_ssll_overlap.c: New test.

Signed-off-by: Manos Anagnostakis
Co-Authored-By: Manolis Tsamis
Co-Authored-By: Philipp Tomsich
---
Changes in v3:
	- Removed obvious zext/sext check in SET_DEST.
	- Replaced map/vector with list/struct.
	- Replaced the current threshold check algorithm with the suggested,
	  more efficient one (actually an inline check).
	- Added usage of cselib_subst_to_values, which works well with
	  cselib_process_insn.
	- Removed canon_rtx, but cannot remove init/end_alias_analysis, as
	  they are required for cselib to work properly (see cselib_init in
	  gcc/cselib.cc).
	- Adjusted comments according to the new changes.
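For illustration only (this snippet is not part of the patch; the file and
function names are made up), the kind of source the pass targets mirrors the
included ldp_ssll_overlap.c testcase: a small store followed shortly by two
adjacent loads that would normally be combined into a load pair reading the
stored location.

/* example.c -- illustrative sketch.  Something like
     gcc -O2 -mavoid-store-forwarding -S example.c
   would be expected to keep the two loads below as separate ldr
   instructions instead of an ldp that the preceding str forwards to.  */
#include <stdint.h>

uint64_t
small_store_large_load (uint64_t *ld_arr, uint64_t *st_arr,
			uint64_t i, uint64_t dummy)
{
  uint64_t r, y;
  st_arr[0] = i;        /* Small store.  */
  ld_arr[0] = dummy;    /* Unrelated work between the store and the loads.  */
  r = st_arr[0];        /* Two adjacent loads from the stored location,  */
  y = st_arr[1];        /* normally fused into one ldp.  */
  return r + y;
}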
 gcc/config.gcc                                     |   1 +
 gcc/config/aarch64/aarch64-passes.def              |   1 +
 gcc/config/aarch64/aarch64-protos.h                |   1 +
 .../aarch64/aarch64-store-forwarding.cc            | 319 ++++++++++++++++++
 gcc/config/aarch64/aarch64.opt                     |   9 +
 gcc/config/aarch64/t-aarch64                       |  10 +
 gcc/doc/invoke.texi                                |  12 +-
 .../aarch64/ldp_ssll_no_overlap_address.c          |  33 ++
 .../aarch64/ldp_ssll_no_overlap_offset.c           |  33 ++
 .../gcc.target/aarch64/ldp_ssll_overlap.c          |  33 ++
 10 files changed, 451 insertions(+), 1 deletion(-)
 create mode 100644 gcc/config/aarch64/aarch64-store-forwarding.cc
 create mode 100644 gcc/testsuite/gcc.target/aarch64/ldp_ssll_no_overlap_address.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/ldp_ssll_no_overlap_offset.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/ldp_ssll_overlap.c

-- 
2.41.0

diff --git a/gcc/config.gcc b/gcc/config.gcc
index 748430194f3..2ee3b61c4fa 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -350,6 +350,7 @@ aarch64*-*-*)
 	cxx_target_objs="aarch64-c.o"
 	d_target_objs="aarch64-d.o"
 	extra_objs="aarch64-builtins.o aarch-common.o aarch64-sve-builtins.o aarch64-sve-builtins-shapes.o aarch64-sve-builtins-base.o aarch64-sve-builtins-sve2.o cortex-a57-fma-steering.o aarch64-speculation.o falkor-tag-collision-avoidance.o aarch-bti-insert.o aarch64-cc-fusion.o"
+	extra_objs="${extra_objs} aarch64-store-forwarding.o"
 	target_gtfiles="\$(srcdir)/config/aarch64/aarch64-builtins.cc \$(srcdir)/config/aarch64/aarch64-sve-builtins.h \$(srcdir)/config/aarch64/aarch64-sve-builtins.cc"
 	target_has_targetm_common=yes
 	;;
diff --git a/gcc/config/aarch64/aarch64-passes.def b/gcc/config/aarch64/aarch64-passes.def
index 6ace797b738..fa79e8adca8 100644
--- a/gcc/config/aarch64/aarch64-passes.def
+++ b/gcc/config/aarch64/aarch64-passes.def
@@ -23,3 +23,4 @@ INSERT_PASS_BEFORE (pass_reorder_blocks, 1, pass_track_speculation);
 INSERT_PASS_AFTER (pass_machine_reorg, 1, pass_tag_collision_avoidance);
 INSERT_PASS_BEFORE (pass_shorten_branches, 1, pass_insert_bti);
 INSERT_PASS_AFTER (pass_if_after_combine, 1, pass_cc_fusion);
+INSERT_PASS_AFTER (pass_peephole2, 1, pass_avoid_store_forwarding);
diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
index d2718cc87b3..7d9dfa06af9 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -1050,6 +1050,7 @@ rtl_opt_pass *make_pass_track_speculation (gcc::context *);
 rtl_opt_pass *make_pass_tag_collision_avoidance (gcc::context *);
 rtl_opt_pass *make_pass_insert_bti (gcc::context *ctxt);
 rtl_opt_pass *make_pass_cc_fusion (gcc::context *ctxt);
+rtl_opt_pass *make_pass_avoid_store_forwarding (gcc::context *ctxt);
 
 poly_uint64 aarch64_regmode_natural_size (machine_mode);
 
diff --git a/gcc/config/aarch64/aarch64-store-forwarding.cc b/gcc/config/aarch64/aarch64-store-forwarding.cc
new file mode 100644
index 00000000000..77c376c1b8d
--- /dev/null
+++ b/gcc/config/aarch64/aarch64-store-forwarding.cc
@@ -0,0 +1,319 @@
+/* Avoid store forwarding optimization pass.
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   Contributed by VRULL GmbH.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#define IN_TARGET_CODE 1
+
+#include "config.h"
+#define INCLUDE_LIST
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "rtl.h"
+#include "alias.h"
+#include "rtlanal.h"
+#include "tree-pass.h"
+#include "cselib.h"
+
+/* This is an RTL pass that detects store forwarding from stores to larger
+   loads (load pairs).  For example, it can transform cases like
+
+   str  d5, [sp, #320]
+   fmul d5, d31, d29
+   ldp  d31, d17, [sp, #312] # Large load from small store
+
+   to
+
+   str  d5, [sp, #320]
+   fmul d5, d31, d29
+   ldr  d31, [sp, #312]
+   ldr  d17, [sp, #320]
+
+   Design: The pass follows a straightforward design.  It starts by
+   initializing alias analysis and cselib.  Both of these are used to find
+   stores and larger loads with overlapping addresses, which are candidates
+   for the store forwarding optimization.  It then scans at the basic-block
+   level to find stores that forward to larger loads and handles them
+   accordingly, as described in the above example.  Finally, alias analysis
+   and cselib are shut down.  */
+
+typedef struct
+{
+  rtx_insn *store_insn;
+  unsigned int insn_cnt;
+} store_info;
+
+typedef std::list<store_info> list_store_info;
+
+/* Statistics counters.  */
+static unsigned int stats_store_count = 0;
+static unsigned int stats_ldp_count = 0;
+static unsigned int stats_ssll_count = 0;
+static unsigned int stats_transformed_count = 0;
+
+/* Default argument for is_load, used when the caller does not need the MEM
+   operand back.  */
+static rtx dummy;
+static bool is_load (rtx expr, rtx &op_1 = dummy);
+
+/* Return true if SET expression EXPR is a store; otherwise false.  */
+
+static bool
+is_store (rtx expr)
+{
+  return MEM_P (SET_DEST (expr));
+}
+
+/* Return true if SET expression EXPR is a load; otherwise false.  OP_1 will
+   contain the MEM operand of the load.  */
+
+static bool
+is_load (rtx expr, rtx &op_1)
+{
+  op_1 = SET_SRC (expr);
+
+  if (GET_CODE (op_1) == ZERO_EXTEND
+      || GET_CODE (op_1) == SIGN_EXTEND)
+    op_1 = XEXP (op_1, 0);
+
+  return MEM_P (op_1);
+}
+
+/* Return true if STORE_MEM is forwarding to LOAD_MEM; otherwise false.  */
+
+static bool
+is_forwarding (rtx store_mem, rtx load_mem)
+{
+  gcc_checking_assert (MEM_P (store_mem) && MEM_P (load_mem));
+
+  rtx store_mem_addr = get_addr (XEXP (store_mem, 0));
+  machine_mode store_mem_mode = GET_MODE (store_mem);
+  store_mem_addr = cselib_lookup (store_mem_addr, store_mem_mode, 1,
+				  store_mem_mode)
+		   ? cselib_subst_to_values (store_mem_addr, store_mem_mode)
+		   : NULL_RTX;
+
+  rtx load_mem_addr = get_addr (XEXP (load_mem, 0));
+  machine_mode load_mem_mode = GET_MODE (load_mem);
+  load_mem_addr = cselib_lookup (load_mem_addr, load_mem_mode, 1,
+				 load_mem_mode)
+		  ? cselib_subst_to_values (load_mem_addr, load_mem_mode)
+		  : NULL_RTX;
+
+  if (!store_mem_addr || !load_mem_addr)
+    return false;
+
+  return rtx_equal_for_cselib_1 (store_mem_addr, load_mem_addr,
+				 store_mem_mode, 0);
+}
+
+/* Return true if INSN is a load pair, preceded by a store forwarding to it;
+   otherwise false.  STORE_EXPRS contains the stores.  */
+
+static bool
+is_small_store_to_large_load (list_store_info store_exprs, rtx_insn *insn)
+{
+  unsigned int load_count = 0;
+  bool forwarding = false;
+  rtx expr = PATTERN (insn);
+
+  if (GET_CODE (expr) != PARALLEL
+      || XVECLEN (expr, 0) != 2)
+    return false;
+
+  for (int i = 0; i < XVECLEN (expr, 0); i++)
+    {
+      rtx op_1;
+      rtx out_exp = XVECEXP (expr, 0, i);
+
+      if (GET_CODE (out_exp) != SET)
+	continue;
+
+      if (!is_load (out_exp, op_1))
+	continue;
+
+      load_count++;
+
+      for (store_info str : store_exprs)
+	{
+	  if (!is_forwarding (SET_DEST (PATTERN (str.store_insn)), op_1))
+	    continue;
+
+	  if (dump_file)
+	    {
+	      fprintf (dump_file,
+		       "Store forwarding to PARALLEL with loads:\n");
+	      fprintf (dump_file, " From: ");
+	      print_rtl_single (dump_file, str.store_insn);
+	      fprintf (dump_file, " To: ");
+	      print_rtl_single (dump_file, insn);
+	    }
+
+	  forwarding = true;
+	}
+    }
+
+  if (load_count == 2)
+    stats_ldp_count++;
+
+  return load_count == 2 && forwarding;
+}
+
+/* Break a load pair into its two distinct loads, except if the base source
+   address to load from is overwritten in the first load.  INSN should be the
+   PARALLEL of the load pair.  */
+
+static void
+break_ldp (rtx_insn *insn)
+{
+  rtx expr = PATTERN (insn);
+
+  gcc_checking_assert (GET_CODE (expr) == PARALLEL && XVECLEN (expr, 0) == 2);
+
+  rtx load_0 = XVECEXP (expr, 0, 0);
+  rtx load_1 = XVECEXP (expr, 0, 1);
+
+  gcc_checking_assert (is_load (load_0) && is_load (load_1));
+
+  /* The base address was overwritten in the first load.  */
+  if (reg_mentioned_p (SET_DEST (load_0), SET_SRC (load_1)))
+    return;
+
+  emit_insn_before (load_0, insn);
+  emit_insn_before (load_1, insn);
+  remove_insn (insn);
+
+  stats_transformed_count++;
+}
+
+/* Scan each basic block, tracking recently seen stores and breaking up any
+   load pair that one of the tracked stores forwards to.  */
+
+static void
+scan_and_transform_bb_level ()
+{
+  rtx_insn *insn, *next;
+  basic_block bb;
+  FOR_EACH_BB_FN (bb, cfun)
+    {
+      list_store_info store_exprs;
+      unsigned int insn_cnt = 0;
+      for (insn = BB_HEAD (bb); insn != NEXT_INSN (BB_END (bb)); insn = next)
+	{
+	  next = NEXT_INSN (insn);
+
+	  /* If we cross a CALL_P insn, clear the list, because the
+	     small-store-to-large-load is unlikely to cause a performance
+	     difference.  */
+	  if (CALL_P (insn))
+	    store_exprs.clear ();
+
+	  if (!NONJUMP_INSN_P (insn))
+	    continue;
+
+	  cselib_process_insn (insn);
+
+	  rtx expr = single_set (insn);
+
+	  /* If a store is encountered, append it to the store_exprs list to
+	     check it later.  */
+	  if (expr && is_store (expr))
+	    {
+	      store_exprs.push_back ({ insn, insn_cnt });
+	      stats_store_count++;
+	    }
+
+	  /* Check for small-store-to-large-load.  */
+	  if (is_small_store_to_large_load (store_exprs, insn))
+	    {
+	      stats_ssll_count++;
+	      break_ldp (insn);
+	    }
+
+	  insn_cnt++;
+
+	  /* Pop the first store from the list if its distance crosses the
+	     maximum accepted threshold.  The list contains unique values
+	     sorted in ascending order, meaning that only one distance can be
+	     off at a time.  */
+	  if (!store_exprs.empty ()
+	      && (insn_cnt - store_exprs.front ().insn_cnt
+		  > (unsigned int) aarch64_store_forwarding_threshold_param))
+	    store_exprs.pop_front ();
+	}
+    }
+}
+
+/* Set up alias analysis and cselib, run the basic-block scan and report the
+   collected statistics.  */
+
+static void
+execute_avoid_store_forwarding ()
+{
+  init_alias_analysis ();
+  cselib_init (0);
+  scan_and_transform_bb_level ();
+  end_alias_analysis ();
+  cselib_finish ();
+  statistics_counter_event (cfun, "Number of stores identified: ",
+			    stats_store_count);
+  statistics_counter_event (cfun, "Number of load pairs identified: ",
+			    stats_ldp_count);
+  statistics_counter_event (cfun,
+			    "Number of forwarding cases identified: ",
+			    stats_ssll_count);
+  statistics_counter_event (cfun, "Number of transformed cases: ",
+			    stats_transformed_count);
+}
+
+const pass_data pass_data_avoid_store_forwarding =
+{
+  RTL_PASS, /* type.  */
+  "avoid_store_forwarding", /* name.  */
+  OPTGROUP_NONE, /* optinfo_flags.  */
+  TV_NONE, /* tv_id.  */
+  0, /* properties_required.  */
+  0, /* properties_provided.  */
+  0, /* properties_destroyed.  */
+  0, /* todo_flags_start.  */
+  0 /* todo_flags_finish.  */
+};
+
+class pass_avoid_store_forwarding : public rtl_opt_pass
+{
+public:
+  pass_avoid_store_forwarding (gcc::context *ctxt)
+    : rtl_opt_pass (pass_data_avoid_store_forwarding, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  virtual bool gate (function *)
+    {
+      return aarch64_flag_avoid_store_forwarding && optimize >= 2;
+    }
+
+  virtual unsigned int execute (function *)
+    {
+      execute_avoid_store_forwarding ();
+      return 0;
+    }
+
+}; // class pass_avoid_store_forwarding
+
+/* Create a new avoid store forwarding pass instance.  */
+
+rtl_opt_pass *
+make_pass_avoid_store_forwarding (gcc::context *ctxt)
+{
+  return new pass_avoid_store_forwarding (ctxt);
+}
diff --git a/gcc/config/aarch64/aarch64.opt b/gcc/config/aarch64/aarch64.opt
index f5a518202a1..a9e944a272f 100644
--- a/gcc/config/aarch64/aarch64.opt
+++ b/gcc/config/aarch64/aarch64.opt
@@ -304,6 +304,10 @@ moutline-atomics
 Target Var(aarch64_flag_outline_atomics) Init(2) Save
 Generate local calls to out-of-line atomic operations.
 
+mavoid-store-forwarding
+Target Bool Var(aarch64_flag_avoid_store_forwarding) Init(0) Optimization
+Avoid store forwarding to load pairs.
+
 -param=aarch64-sve-compare-costs=
 Target Joined UInteger Var(aarch64_sve_compare_costs) Init(1) IntegerRange(0, 1) Param
 When vectorizing for SVE, consider using unpacked vectors for smaller elements and use the cost model to pick the cheapest approach.  Also use the cost model to choose between SVE and Advanced SIMD vectorization.
@@ -360,3 +364,8 @@ Enum(aarch64_ldp_stp_policy) String(never) Value(AARCH64_LDP_STP_POLICY_NEVER)
 
 EnumValue
 Enum(aarch64_ldp_stp_policy) String(aligned) Value(AARCH64_LDP_STP_POLICY_ALIGNED)
+
+-param=aarch64-store-forwarding-threshold=
+Target Joined UInteger Var(aarch64_store_forwarding_threshold_param) Init(20) Param
+Maximum instruction distance allowed between a store and a load pair for this
+to be considered a candidate to avoid when using -mavoid-store-forwarding.
diff --git a/gcc/config/aarch64/t-aarch64 b/gcc/config/aarch64/t-aarch64
index a9a244ab6d6..7639b50358d 100644
--- a/gcc/config/aarch64/t-aarch64
+++ b/gcc/config/aarch64/t-aarch64
@@ -176,6 +176,16 @@ aarch64-cc-fusion.o: $(srcdir)/config/aarch64/aarch64-cc-fusion.cc \
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/aarch64/aarch64-cc-fusion.cc
 
+aarch64-store-forwarding.o: \
+    $(srcdir)/config/aarch64/aarch64-store-forwarding.cc \
+    $(CONFIG_H) $(SYSTEM_H) $(TM_H) $(REGS_H) insn-config.h $(RTL_BASE_H) \
+    dominance.h cfg.h cfganal.h $(BASIC_BLOCK_H) $(INSN_ATTR_H) $(RECOG_H) \
+    output.h hash-map.h $(DF_H) $(OBSTACK_H) $(TARGET_H) $(RTL_H) \
+    $(CONTEXT_H) $(TREE_PASS_H) regrename.h \
+    $(srcdir)/config/aarch64/aarch64-protos.h
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/aarch64/aarch64-store-forwarding.cc
+
 comma=,
 MULTILIB_OPTIONS    = $(subst $(comma),/, $(patsubst %, mabi=%, $(subst $(comma),$(comma)mabi=,$(TM_MULTILIB_CONFIG))))
 MULTILIB_DIRNAMES   = $(subst $(comma), ,$(TM_MULTILIB_CONFIG))
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 4e3d89caab1..cc7229b8567 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -797,7 +797,7 @@ Objective-C and Objective-C++ Dialects}.
 -moverride=@var{string}  -mverbose-cost-dump
 -mstack-protector-guard=@var{guard} -mstack-protector-guard-reg=@var{sysreg}
 -mstack-protector-guard-offset=@var{offset} -mtrack-speculation
--moutline-atomics }
+-moutline-atomics -mavoid-store-forwarding}
 
 @emph{Adapteva Epiphany Options}
 @gccoptlist{-mhalf-reg-file -mprefer-short-insn-regs
@@ -16699,6 +16699,11 @@ With @option{--param=aarch64-stp-policy=never}, do not emit stp.
 With @option{--param=aarch64-stp-policy=aligned}, emit stp only if the source
 pointer is aligned to at least double the alignment of the type.
 
+@item aarch64-store-forwarding-threshold
+Maximum allowed instruction distance between a store and a load pair for
+this to be considered a candidate to avoid when using
+@option{-mavoid-store-forwarding}.
+
 @item aarch64-loop-vect-issue-rate-niters
 The tuning for some AArch64 CPUs tries to take both latencies and issue
 rates into account when deciding whether a loop should be vectorized
@@ -20724,6 +20729,11 @@ Generate code which uses only the general-purpose registers.
 This will prevent the compiler from using floating-point and Advanced SIMD
 registers but will not impose any restrictions on the assembler.
 
+@item -mavoid-store-forwarding
+@itemx -mno-avoid-store-forwarding
+Avoid store forwarding to load pairs.  @option{-mavoid-store-forwarding} is
+only effective at optimization levels @option{-O2} and @option{-O3}.
+
 @opindex mlittle-endian
 @item -mlittle-endian
 Generate little-endian code.  This is the default when GCC is configured for an
diff --git a/gcc/testsuite/gcc.target/aarch64/ldp_ssll_no_overlap_address.c b/gcc/testsuite/gcc.target/aarch64/ldp_ssll_no_overlap_address.c
new file mode 100644
index 00000000000..b77de6c64b6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/ldp_ssll_no_overlap_address.c
@@ -0,0 +1,33 @@
+/* { dg-options "-O2 -mcpu=generic -mavoid-store-forwarding" } */
+
+#include <stdint.h>
+
+typedef int v4si __attribute__ ((vector_size (16)));
+
+/* Different address, same offset, no overlap */
+
+#define LDP_SSLL_NO_OVERLAP_ADDRESS(TYPE) \
+TYPE ldp_ssll_no_overlap_address_##TYPE(TYPE *ld_arr, TYPE *st_arr, TYPE *st_arr_2, TYPE i, TYPE dummy){ \
+  TYPE r, y; \
+  st_arr[0] = i; \
+  ld_arr[0] = dummy; \
+  r = st_arr_2[0]; \
+  y = st_arr_2[1]; \
+  return r + y; \
+}
+
+LDP_SSLL_NO_OVERLAP_ADDRESS(uint32_t)
+LDP_SSLL_NO_OVERLAP_ADDRESS(uint64_t)
+LDP_SSLL_NO_OVERLAP_ADDRESS(int32_t)
+LDP_SSLL_NO_OVERLAP_ADDRESS(int64_t)
+LDP_SSLL_NO_OVERLAP_ADDRESS(int)
+LDP_SSLL_NO_OVERLAP_ADDRESS(long)
+LDP_SSLL_NO_OVERLAP_ADDRESS(float)
+LDP_SSLL_NO_OVERLAP_ADDRESS(double)
+LDP_SSLL_NO_OVERLAP_ADDRESS(v4si)
+
+/* { dg-final { scan-assembler-times "ldp\tw\[0-9\]+, w\[0-9\]" 3 } } */
+/* { dg-final { scan-assembler-times "ldp\tx\[0-9\]+, x\[0-9\]" 3 } } */
+/* { dg-final { scan-assembler-times "ldp\ts\[0-9\]+, s\[0-9\]" 1 } } */
+/* { dg-final { scan-assembler-times "ldp\td\[0-9\]+, d\[0-9\]" 1 } } */
+/* { dg-final { scan-assembler-times "ldp\tq\[0-9\]+, q\[0-9\]" 1 } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/ldp_ssll_no_overlap_offset.c b/gcc/testsuite/gcc.target/aarch64/ldp_ssll_no_overlap_offset.c
new file mode 100644
index 00000000000..f1b3a66abfd
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/ldp_ssll_no_overlap_offset.c
@@ -0,0 +1,33 @@
+/* { dg-options "-O2 -mcpu=generic -mavoid-store-forwarding" } */
+
+#include <stdint.h>
+
+typedef int v4si __attribute__ ((vector_size (16)));
+
+/* Same address, different offset, no overlap */
+
+#define LDP_SSLL_NO_OVERLAP_OFFSET(TYPE) \
+TYPE ldp_ssll_no_overlap_offset_##TYPE(TYPE *ld_arr, TYPE *st_arr, TYPE i, TYPE dummy){ \
+  TYPE r, y; \
+  st_arr[0] = i; \
+  ld_arr[0] = dummy; \
+  r = st_arr[10]; \
+  y = st_arr[11]; \
+  return r + y; \
+}
+
+LDP_SSLL_NO_OVERLAP_OFFSET(uint32_t)
+LDP_SSLL_NO_OVERLAP_OFFSET(uint64_t)
+LDP_SSLL_NO_OVERLAP_OFFSET(int32_t)
+LDP_SSLL_NO_OVERLAP_OFFSET(int64_t)
+LDP_SSLL_NO_OVERLAP_OFFSET(int)
+LDP_SSLL_NO_OVERLAP_OFFSET(long)
+LDP_SSLL_NO_OVERLAP_OFFSET(float)
+LDP_SSLL_NO_OVERLAP_OFFSET(double)
+LDP_SSLL_NO_OVERLAP_OFFSET(v4si)
+
+/* { dg-final { scan-assembler-times "ldp\tw\[0-9\]+, w\[0-9\]" 3 } } */
+/* { dg-final { scan-assembler-times "ldp\tx\[0-9\]+, x\[0-9\]" 3 } } */
+/* { dg-final { scan-assembler-times "ldp\ts\[0-9\]+, s\[0-9\]" 1 } } */
+/* { dg-final { scan-assembler-times "ldp\td\[0-9\]+, d\[0-9\]" 1 } } */
+/* { dg-final { scan-assembler-times "ldp\tq\[0-9\]+, q\[0-9\]" 1 } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/ldp_ssll_overlap.c b/gcc/testsuite/gcc.target/aarch64/ldp_ssll_overlap.c
new file mode 100644
index 00000000000..8d5ce5cc87e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/ldp_ssll_overlap.c
@@ -0,0 +1,33 @@
+/* { dg-options "-O2 -mcpu=generic -mavoid-store-forwarding" } */
+
+#include <stdint.h>
+
+typedef int v4si __attribute__ ((vector_size (16)));
+
+/* Same address, same offset, overlap */
+
+#define LDP_SSLL_OVERLAP(TYPE) \
+TYPE ldp_ssll_overlap_##TYPE(TYPE *ld_arr, TYPE *st_arr, TYPE i, TYPE dummy){ \
+  TYPE r, y; \
+  st_arr[0] = i; \
+  ld_arr[0] = dummy; \
+  r = st_arr[0]; \
+  y = st_arr[1]; \
+  return r + y; \
+}
+
+LDP_SSLL_OVERLAP(uint32_t)
+LDP_SSLL_OVERLAP(uint64_t)
+LDP_SSLL_OVERLAP(int32_t)
+LDP_SSLL_OVERLAP(int64_t)
+LDP_SSLL_OVERLAP(int)
+LDP_SSLL_OVERLAP(long)
+LDP_SSLL_OVERLAP(float)
+LDP_SSLL_OVERLAP(double)
+LDP_SSLL_OVERLAP(v4si)
+
+/* { dg-final { scan-assembler-times "ldp\tw\[0-9\]+, w\[0-9\]" 0 } } */
+/* { dg-final { scan-assembler-times "ldp\tx\[0-9\]+, x\[0-9\]" 0 } } */
+/* { dg-final { scan-assembler-times "ldp\ts\[0-9\]+, s\[0-9\]" 0 } } */
+/* { dg-final { scan-assembler-times "ldp\td\[0-9\]+, d\[0-9\]" 0 } } */
+/* { dg-final { scan-assembler-times "ldp\tq\[0-9\]+, q\[0-9\]" 0 } } */
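A usage sketch to close with (illustrative only, not part of the patch; the
file and function names are invented): the new
--param=aarch64-store-forwarding-threshold= bounds how many instructions may
separate the store from the load pair before the store is dropped from the
pass's tracking list; its default is 20.

/* far.c -- illustrative sketch.  When many instructions sit between the
   store and the loads, the store may already have fallen outside the
   tracking window, so a resulting ldp would be left alone.  Raising the
   threshold, e.g.
     gcc -O2 -mavoid-store-forwarding \
         --param=aarch64-store-forwarding-threshold=64 -S far.c
   widens that window.  Whether the distance is actually exceeded here
   depends on the code GCC generates.  */
#include <stdint.h>

uint64_t
far_apart (uint64_t *st_arr, uint64_t a, uint64_t b)
{
  st_arr[0] = a;                       /* Small store.  */
  uint64_t t1 = a * b + 3;             /* Intervening arithmetic.  */
  uint64_t t2 = t1 ^ (a >> 7);
  uint64_t t3 = t2 * 0x9e3779b97f4a7c15ull;
  uint64_t t4 = t3 + (b << 5);
  uint64_t t5 = t4 ^ (t3 >> 11);
  uint64_t t6 = t5 * 0xc2b2ae3d27d4eb4full;
  uint64_t r = st_arr[0];              /* Adjacent loads that may be       */
  uint64_t y = st_arr[1];              /* combined into a load pair.       */
  return r + y + t6;
}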