From patchwork Tue Mar 28 12:43:34 2023
X-Patchwork-Submitter: Richard Biener
X-Patchwork-Id: 76055
Date: Tue, 28 Mar 2023 12:43:34 +0000 (UTC)
To: gcc-patches@gcc.gnu.org
cc: Jakub Jelinek
Subject: [PATCH] tree-optimization/107087 - missed CCP after forwprop
From: Richard Biener
Message-Id: <20230328124419.6EDD5385840E@sourceware.org>

When forwprop simplifies the CFG, the second-order opportunities exposed
by degenerate PHIs are not realized.  The following improves this by
properly tracking executable edges, which handles the situation at least
for non-cyclic CFGs.  This avoids the bogus diagnostic reported for the
testcase in this PR.

Bootstrapped and tested on x86_64-unknown-linux-gnu.

OK?

Thanks,
Richard.
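To illustrate the idea for readers following along, here is a small self-contained sketch (toy types and names, not GCC internals) of the degenerate-PHI folding that executable-edge tracking enables: once an incoming edge is known not executable, a PHI whose remaining executable arguments all agree collapses to a single value.

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <set>
#include <utility>
#include <vector>

// Toy model of a CFG block with a single PHI: each predecessor block
// index maps to the value flowing in along that edge.  (Hypothetical
// names; GCC uses basic_block, gphi and the operand iterators instead.)
struct ToyBlock {
  std::vector<int> preds;       // indices of predecessor blocks
  std::map<int, int> phi_args;  // pred block index -> incoming value
};

// Fold the block's PHI to a single value when all arguments on
// *executable* incoming edges agree; arguments on not-executable edges
// are ignored, which is the second-order opportunity the patch realizes.
// exec_edges holds (pred, succ) pairs known to be executable.  This toy
// omits the backedge conservatism the real patch needs.
std::optional<int> fold_phi(const ToyBlock &bb,
                            const std::set<std::pair<int, int>> &exec_edges,
                            int bb_index) {
  std::optional<int> first;
  for (auto [pred, val] : bb.phi_args) {
    if (!exec_edges.count({pred, bb_index}))
      continue;                  // ignore not-executable incoming edges
    if (!first)
      first = val;               // first executable argument seen
    else if (*first != val)
      return std::nullopt;       // executable arguments disagree: keep PHI
  }
  return first;
}
```

With predecessors 1 and 2 feeding values 7 and 9 into block 3, the PHI folds to 7 as soon as the edge from block 2 is known not executable, and stays a PHI while both edges are executable.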
	PR tree-optimization/107087
	* tree-ssa-forwprop.cc (pass_forwprop::execute): Track
	executable regions to avoid useless work and to better
	propagate degenerate PHIs.
	* testsuite/g++.dg/pr107087.C: New testcase.
---
 gcc/testsuite/g++.dg/pr107087.C | 16 ++++++++
 gcc/tree-ssa-forwprop.cc        | 67 ++++++++++++++++++++++++++++++---
 2 files changed, 77 insertions(+), 6 deletions(-)
 create mode 100644 gcc/testsuite/g++.dg/pr107087.C

diff --git a/gcc/testsuite/g++.dg/pr107087.C b/gcc/testsuite/g++.dg/pr107087.C
new file mode 100644
index 00000000000..3ca76b4153c
--- /dev/null
+++ b/gcc/testsuite/g++.dg/pr107087.C
@@ -0,0 +1,16 @@
+// { dg-do compile }
+// { dg-require-effective-target c++11 }
+// { dg-options "-O2" }
+
+#include <vector>
+
+void test01()
+{
+  std::vector<int> v1, v2{5, 6};
+  int n = 0;
+  std::vector<int>::iterator it = v1.insert(v1.cbegin(), n);
+  it = v1.insert(v1.cbegin(), 1);
+  it = v1.insert(v1.cbegin(), {2, 3});
+  it = v1.insert(v1.cbegin(), 1, 4);
+  it = v1.insert(v1.cbegin(), v2.begin(), v2.end());
+}
diff --git a/gcc/tree-ssa-forwprop.cc b/gcc/tree-ssa-forwprop.cc
index 6df0b8f2215..5eccc7a89b5 100644
--- a/gcc/tree-ssa-forwprop.cc
+++ b/gcc/tree-ssa-forwprop.cc
@@ -3462,6 +3462,17 @@ pass_forwprop::execute (function *fun)
   int *postorder = XNEWVEC (int, n_basic_blocks_for_fn (fun));
   int postorder_num = pre_and_rev_post_order_compute_fn (fun, NULL,
							 postorder, false);
+  int *bb_to_rpo = XNEWVEC (int, last_basic_block_for_fn (fun));
+  for (int i = 0; i < postorder_num; ++i)
+    {
+      bb_to_rpo[postorder[i]] = i;
+      edge_iterator ei;
+      edge e;
+      FOR_EACH_EDGE (e, ei, BASIC_BLOCK_FOR_FN (fun, postorder[i])->succs)
+	e->flags &= ~EDGE_EXECUTABLE;
+    }
+  single_succ_edge (BASIC_BLOCK_FOR_FN (fun, ENTRY_BLOCK))->flags
+    |= EDGE_EXECUTABLE;
   auto_vec<gimple *> to_fixup;
   auto_vec<tree> to_remove;
   to_purge = BITMAP_ALLOC (NULL);
@@ -3470,6 +3481,30 @@ pass_forwprop::execute (function *fun)
     {
       gimple_stmt_iterator gsi;
       basic_block bb = BASIC_BLOCK_FOR_FN (fun, postorder[i]);
+      edge_iterator ei;
+      edge e;
+
+      /* Skip processing not executable blocks.  We could improve
+	 single_use tracking by at least unlinking uses from unreachable
+	 blocks but since blocks with uses are not processed in a
+	 meaningful order this is probably not worth it.  */
+      bool any = false;
+      FOR_EACH_EDGE (e, ei, bb->preds)
+	{
+	  if ((e->flags & EDGE_EXECUTABLE)
+	      /* With dominators we could improve backedge handling
+		 when e->src is dominated by bb.  But for irreducible
+		 regions we have to take all backedges conservatively.
+		 We can handle single-block cycles as we know the
+		 dominator relationship here.  */
+	      || bb_to_rpo[e->src->index] > i)
+	    {
+	      any = true;
+	      break;
+	    }
+	}
+      if (!any)
+	continue;

       /* Record degenerate PHIs in the lattice.  */
       for (gphi_iterator si = gsi_start_phis (bb); !gsi_end_p (si);
@@ -3480,13 +3515,25 @@ pass_forwprop::execute (function *fun)
	  if (virtual_operand_p (res))
	    continue;

-	  use_operand_p use_p;
-	  ssa_op_iter it;
	  tree first = NULL_TREE;
	  bool all_same = true;
-	  FOR_EACH_PHI_ARG (use_p, phi, it, SSA_OP_USE)
+	  edge_iterator ei;
+	  edge e;
+	  FOR_EACH_EDGE (e, ei, bb->preds)
	    {
-	      tree use = USE_FROM_PTR (use_p);
+	      /* Ignore not executable forward edges.  */
+	      if (!(e->flags & EDGE_EXECUTABLE))
+		{
+		  if (bb_to_rpo[e->src->index] < i)
+		    continue;
+		  /* Avoid equivalences from backedges - while we might
+		     be able to make irreducible regions reducible and
+		     thus turning a back into a forward edge we do not
+		     want to deal with the intermediate SSA issues that
+		     exposes.  */
+		  all_same = false;
+		}
+	      tree use = PHI_ARG_DEF_FROM_EDGE (phi, e);
	      if (use == res)
		/* The PHI result can also appear on a backedge, if so
		   we can ignore this case for the purpose of determining
@@ -3981,8 +4028,6 @@ pass_forwprop::execute (function *fun)
	}

       /* Substitute in destination PHI arguments.  */
-      edge_iterator ei;
-      edge e;
       FOR_EACH_EDGE (e, ei, bb->succs)
	for (gphi_iterator gsi = gsi_start_phis (e->dest);
	     !gsi_end_p (gsi); gsi_next (&gsi))
@@ -3998,8 +4043,18 @@ pass_forwprop::execute (function *fun)
		&& may_propagate_copy (arg, val))
	      propagate_value (use_p, val);
	  }
+
+      /* Mark outgoing executable edges.  */
+      if (edge e = find_taken_edge (bb, NULL))
+	e->flags |= EDGE_EXECUTABLE;
+      else
+	{
+	  FOR_EACH_EDGE (e, ei, bb->succs)
+	    e->flags |= EDGE_EXECUTABLE;
+	}
     }
   free (postorder);
+  free (bb_to_rpo);
   lattice.release ();

   /* Remove stmts in reverse order to make debug stmt creation possible.  */
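For reference, the bb_to_rpo mapping the patch introduces is just the inverse of the reverse-post-order numbering, and the test `bb_to_rpo[e->src->index] > i` treats any edge from a not-yet-visited block as a (possibly irreducible) backedge.  A stand-alone sketch of that numbering (toy code; GCC itself uses pre_and_rev_post_order_compute_fn):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Compute, for each block, its position in a reverse post-order walk of
// a CFG given as adjacency lists (block 0 is the entry).  Hypothetical
// helper for illustration only.
std::vector<int> compute_bb_to_rpo(const std::vector<std::vector<int>> &succs) {
  int n = succs.size();
  std::vector<int> post, visited(n, 0);
  std::function<void(int)> dfs = [&](int b) {
    visited[b] = 1;
    for (int s : succs[b])
      if (!visited[s])
        dfs(s);
    post.push_back(b);        // post-order: block pushed after its succs
  };
  dfs(0);
  // Invert the reversed post-order: bb_to_rpo[block] = RPO position.
  std::vector<int> bb_to_rpo(n, -1);
  int m = post.size();
  for (int i = 0; i < m; ++i)
    bb_to_rpo[post[m - 1 - i]] = i;
  return bb_to_rpo;
}

// While visiting the block at RPO position i, an incoming edge whose
// source has a larger RPO number comes from a not-yet-visited block,
// i.e. it is a backedge and must be handled conservatively.
bool is_backedge(const std::vector<int> &bb_to_rpo, int src, int i) {
  return bb_to_rpo[src] > i;
}
```

For the loop CFG 0 -> 1 -> 2 -> {1, 3}, the numbering is 0, 1, 2, 3 and the edge 2 -> 1 is recognized as a backedge when block 1 (RPO position 1) is visited, while 0 -> 1 is an ordinary forward edge.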