From patchwork Mon Apr 24 21:30:07 2023
X-Patchwork-Submitter: Andrew Pinski
X-Patchwork-Id: 87167
From: Andrew Pinski
To: gcc-patches@gcc.gnu.org
CC: Andrew Pinski
Subject: [PATCH 3/7] PHIOPT: Move store_elim_worker into pass_cselim::execute
Date: Mon, 24 Apr 2023 14:30:07 -0700
Message-ID: <20230424213011.528181-4-apinski@marvell.com>
In-Reply-To: <20230424213011.528181-1-apinski@marvell.com>
References: <20230424213011.528181-1-apinski@marvell.com>
X-Mailer: git-send-email 2.31.1

This simple patch moves the body of store_elim_worker directly into
pass_cselim::execute.  It also removes the prototypes that are no longer
needed.

OK?  Bootstrapped and tested on x86_64-linux-gnu with no regressions.

gcc/ChangeLog:

	* tree-ssa-phiopt.cc (cond_store_replacement): Remove prototype.
	(cond_if_else_store_replacement): Likewise.
	(get_non_trapping): Likewise.
	(store_elim_worker): Move into ...
	(pass_cselim::execute): This.
---
 gcc/tree-ssa-phiopt.cc | 250 ++++++++++++++++++++---------------------
 1 file changed, 119 insertions(+), 131 deletions(-)

diff --git a/gcc/tree-ssa-phiopt.cc b/gcc/tree-ssa-phiopt.cc
index d232fd9b551..fb2d2c9fc1a 100644
--- a/gcc/tree-ssa-phiopt.cc
+++ b/gcc/tree-ssa-phiopt.cc
@@ -55,11 +55,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-propagate.h"
 #include "tree-ssa-dce.h"
 
-static bool cond_store_replacement (basic_block, basic_block, edge, edge,
-				    hash_set<tree> *);
-static bool cond_if_else_store_replacement (basic_block, basic_block, basic_block);
-static hash_set<tree> * get_non_trapping ();
-
 /* Return the singleton PHI in the SEQ of PHIs for edges E0 and E1.  */
 
 static gphi *
@@ -87,130 +82,6 @@ single_non_singleton_phi_for_edges (gimple_seq seq, edge e0, edge e1)
   return phi;
 }
 
-/* The core routine of conditional store replacement.  */
-static unsigned int
-store_elim_worker (void)
-{
-  basic_block bb;
-  basic_block *bb_order;
-  unsigned n, i;
-  bool cfgchanged = false;
-  hash_set<tree> *nontrap = 0;
-
-  calculate_dominance_info (CDI_DOMINATORS);
-
-  /* Calculate the set of non-trapping memory accesses.  */
-  nontrap = get_non_trapping ();
-
-  /* Search every basic block for COND_EXPR we may be able to optimize.
-
-     We walk the blocks in order that guarantees that a block with
-     a single predecessor is processed before the predecessor.
-     This ensures that we collapse inner ifs before visiting the
-     outer ones, and also that we do not try to visit a removed
-     block.  */
-  bb_order = single_pred_before_succ_order ();
-  n = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
-
-  for (i = 0; i < n; i++)
-    {
-      basic_block bb1, bb2;
-      edge e1, e2;
-      bool diamond_p = false;
-
-      bb = bb_order[i];
-
-      /* Check to see if the last statement is a GIMPLE_COND.  */
-      gcond *cond_stmt = safe_dyn_cast <gcond *> (*gsi_last_bb (bb));
-      if (!cond_stmt)
-	continue;
-
-      e1 = EDGE_SUCC (bb, 0);
-      bb1 = e1->dest;
-      e2 = EDGE_SUCC (bb, 1);
-      bb2 = e2->dest;
-
-      /* We cannot do the optimization on abnormal edges.  */
-      if ((e1->flags & EDGE_ABNORMAL) != 0
-	  || (e2->flags & EDGE_ABNORMAL) != 0)
-	continue;
-
-      /* If either bb1's succ or bb2 or bb2's succ is non NULL.  */
-      if (EDGE_COUNT (bb1->succs) == 0
-	  || EDGE_COUNT (bb2->succs) == 0)
-	continue;
-
-      /* Find the bb which is the fall through to the other.  */
-      if (EDGE_SUCC (bb1, 0)->dest == bb2)
-	;
-      else if (EDGE_SUCC (bb2, 0)->dest == bb1)
-	{
-	  std::swap (bb1, bb2);
-	  std::swap (e1, e2);
-	}
-      else if (EDGE_SUCC (bb1, 0)->dest == EDGE_SUCC (bb2, 0)->dest
-	       && single_succ_p (bb2))
-	{
-	  diamond_p = true;
-	  e2 = EDGE_SUCC (bb2, 0);
-	  /* Make sure bb2 is just a fall through.  */
-	  if ((e2->flags & EDGE_FALLTHRU) == 0)
-	    continue;
-	}
-      else
-	continue;
-
-      e1 = EDGE_SUCC (bb1, 0);
-
-      /* Make sure that bb1 is just a fall through.  */
-      if (!single_succ_p (bb1)
-	  || (e1->flags & EDGE_FALLTHRU) == 0)
-	continue;
-
-      if (diamond_p)
-	{
-	  basic_block bb3 = e1->dest;
-
-	  /* Only handle sinking of store from 2 bbs only,
-	     The middle bbs don't need to come from the
-	     if always since we are sinking rather than
-	     hoisting. */
-	  if (EDGE_COUNT (bb3->preds) != 2)
-	    continue;
-	  if (cond_if_else_store_replacement (bb1, bb2, bb3))
-	    cfgchanged = true;
-	  continue;
-	}
-
-      /* Also make sure that bb1 only have one predecessor and that it
-	 is bb.  */
-      if (!single_pred_p (bb1)
-	  || single_pred (bb1) != bb)
-	continue;
-
-      /* bb1 is the middle block, bb2 the join block, bb the split block,
-	 e1 the fallthrough edge from bb1 to bb2.  We can't do the
-	 optimization if the join block has more than two predecessors.  */
-      if (EDGE_COUNT (bb2->preds) > 2)
-	continue;
-      if (cond_store_replacement (bb1, bb2, e1, e2, nontrap))
-	cfgchanged = true;
-    }
-
-  free (bb_order);
-
-  delete nontrap;
-  /* If the CFG has changed, we should cleanup the CFG.  */
-  if (cfgchanged)
-    {
-      /* In cond-store replacement we have added some loads on edges
-	 and new VOPS (as we moved the store, and created a load).  */
-      gsi_commit_edge_inserts ();
-      return TODO_cleanup_cfg | TODO_update_ssa_only_virtuals;
-    }
-  return 0;
-}
-
 /* Replace PHI node element whose edge is E in block BB with variable NEW.
    Remove the edge from COND_BLOCK which does not lead to BB (COND_BLOCK
    is known to have two edges, one of which must reach BB).  */
@@ -4403,13 +4274,130 @@ make_pass_cselim (gcc::context *ctxt)
 unsigned int
 pass_cselim::execute (function *)
 {
-  unsigned todo;
+  basic_block bb;
+  basic_block *bb_order;
+  unsigned n, i;
+  bool cfgchanged = false;
+  hash_set<tree> *nontrap = 0;
+  unsigned todo = 0;
+
   /* ???  We are not interested in loop related info, but the following
      will create it, ICEing as we didn't init loops with pre-headers.
      An interfacing issue of find_data_references_in_bb.  */
   loop_optimizer_init (LOOPS_NORMAL);
   scev_initialize ();
-  todo = store_elim_worker ();
+
+  calculate_dominance_info (CDI_DOMINATORS);
+
+  /* Calculate the set of non-trapping memory accesses.  */
+  nontrap = get_non_trapping ();
+
+  /* Search every basic block for COND_EXPR we may be able to optimize.
+
+     We walk the blocks in order that guarantees that a block with
+     a single predecessor is processed before the predecessor.
+     This ensures that we collapse inner ifs before visiting the
+     outer ones, and also that we do not try to visit a removed
+     block.  */
+  bb_order = single_pred_before_succ_order ();
+  n = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
+
+  for (i = 0; i < n; i++)
+    {
+      basic_block bb1, bb2;
+      edge e1, e2;
+      bool diamond_p = false;
+
+      bb = bb_order[i];
+
+      /* Check to see if the last statement is a GIMPLE_COND.  */
+      gcond *cond_stmt = safe_dyn_cast <gcond *> (*gsi_last_bb (bb));
+      if (!cond_stmt)
+	continue;
+
+      e1 = EDGE_SUCC (bb, 0);
+      bb1 = e1->dest;
+      e2 = EDGE_SUCC (bb, 1);
+      bb2 = e2->dest;
+
+      /* We cannot do the optimization on abnormal edges.  */
+      if ((e1->flags & EDGE_ABNORMAL) != 0
+	  || (e2->flags & EDGE_ABNORMAL) != 0)
+	continue;
+
+      /* If either bb1's succ or bb2 or bb2's succ is non NULL.  */
+      if (EDGE_COUNT (bb1->succs) == 0
+	  || EDGE_COUNT (bb2->succs) == 0)
+	continue;
+
+      /* Find the bb which is the fall through to the other.  */
+      if (EDGE_SUCC (bb1, 0)->dest == bb2)
+	;
+      else if (EDGE_SUCC (bb2, 0)->dest == bb1)
+	{
+	  std::swap (bb1, bb2);
+	  std::swap (e1, e2);
+	}
+      else if (EDGE_SUCC (bb1, 0)->dest == EDGE_SUCC (bb2, 0)->dest
+	       && single_succ_p (bb2))
+	{
+	  diamond_p = true;
+	  e2 = EDGE_SUCC (bb2, 0);
+	  /* Make sure bb2 is just a fall through.  */
+	  if ((e2->flags & EDGE_FALLTHRU) == 0)
+	    continue;
+	}
+      else
+	continue;
+
+      e1 = EDGE_SUCC (bb1, 0);
+
+      /* Make sure that bb1 is just a fall through.  */
+      if (!single_succ_p (bb1)
+	  || (e1->flags & EDGE_FALLTHRU) == 0)
+	continue;
+
+      if (diamond_p)
+	{
+	  basic_block bb3 = e1->dest;
+
+	  /* Only handle sinking of store from 2 bbs only,
+	     The middle bbs don't need to come from the
+	     if always since we are sinking rather than
+	     hoisting. */
+	  if (EDGE_COUNT (bb3->preds) != 2)
+	    continue;
+	  if (cond_if_else_store_replacement (bb1, bb2, bb3))
+	    cfgchanged = true;
+	  continue;
+	}
+
+      /* Also make sure that bb1 only have one predecessor and that it
+	 is bb.  */
+      if (!single_pred_p (bb1)
+	  || single_pred (bb1) != bb)
+	continue;
+
+      /* bb1 is the middle block, bb2 the join block, bb the split block,
+	 e1 the fallthrough edge from bb1 to bb2.  We can't do the
+	 optimization if the join block has more than two predecessors.  */
+      if (EDGE_COUNT (bb2->preds) > 2)
+	continue;
+      if (cond_store_replacement (bb1, bb2, e1, e2, nontrap))
+	cfgchanged = true;
+    }
+
+  free (bb_order);
+
+  delete nontrap;
+  /* If the CFG has changed, we should cleanup the CFG.  */
+  if (cfgchanged)
+    {
+      /* In cond-store replacement we have added some loads on edges
+	 and new VOPS (as we moved the store, and created a load).  */
+      gsi_commit_edge_inserts ();
+      todo = TODO_cleanup_cfg | TODO_update_ssa_only_virtuals;
+    }
+
   scev_finalize ();
   loop_optimizer_finalize ();
   return todo;
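
As a rough, self-contained sketch of the refactoring pattern applied here
(the names my_pass and worker below are made up for illustration and are
not the GCC identifiers), the change amounts to folding a file-static
worker function, whose only caller is the pass's execute method, into
that method, which also makes its forward declaration unnecessary:

// Illustrative only -- not GCC code.
#include <cstdio>

namespace before
{
  // Forward declaration needed because execute () below calls worker ()
  // ahead of its definition -- the analogue of the removed prototypes.
  static unsigned worker ();

  struct my_pass
  {
    unsigned execute () { return worker (); }  // thin wrapper
  };

  static unsigned
  worker ()
  {
    unsigned todo = 0;
    /* ... the actual work would happen here ... */
    return todo;
  }
}

namespace after
{
  struct my_pass
  {
    // The worker's body now lives directly in execute (); no separate
    // static function, so no forward declaration either.
    unsigned execute ()
    {
      unsigned todo = 0;
      /* ... the same work, now inlined into the pass ... */
      return todo;
    }
  };
}

int
main ()
{
  before::my_pass a;
  after::my_pass b;
  std::printf ("before: %u  after: %u\n", a.execute (), b.execute ());
  return 0;
}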
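
For readers who have not looked at cselim itself, what the pass does can
be sketched at the source level roughly as follows (a simplified,
hypothetical example: the real pass works on GIMPLE, uses the nontrap set
from get_non_trapping () to make sure the access cannot trap, and inserts
the load on an edge, which is why the comment in the code above mentions
new loads and VOPs):

// Illustrative only -- the effect of conditional store replacement.
#include <cstdio>

// Before: the store to *p happens only when cond is true.
static void
conditional_store (int *p, int v, bool cond)
{
  if (cond)
    *p = v;
}

// After: the shape the transformation produces, schematically.  Loading
// the old value lets the store become unconditional.
static void
unconditional_store (int *p, int v, bool cond)
{
  int old = *p;              // load of the previous value
  int tmp = cond ? v : old;  // select between new and old value (a PHI)
  *p = tmp;                  // store is now executed unconditionally
}

int
main ()
{
  int a = 1, b = 1;
  conditional_store (&a, 5, true);
  unconditional_store (&b, 5, true);
  std::printf ("%d %d\n", a, b);  // both print 5
  return 0;
}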