From patchwork Wed May  3 12:58:47 2023
X-Patchwork-Submitter: Richard Biener
X-Patchwork-Id: 89741
Date: Wed, 3 May 2023 14:58:47 +0200 (CEST)
From: Richard Biener
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] Rename last_stmt to last_nondebug_stmt
Message-Id: <20230503125847.D146213584@imap2.suse-dmz.suse.de>
List-Id: Gcc-patches mailing list

The following renames last_stmt to
last_nondebug_stmt which is what it really does.

Bootstrapped and tested on x86_64-unknown-linux-gnu.

I'm going to push this tomorrow if there are no comments.

Richard.

	* tree-cfg.h (last_stmt): Rename to ...
	(last_nondebug_stmt): ... this.
	* tree-cfg.cc (last_stmt): Rename to ...
	(last_nondebug_stmt): ... this.
	(assign_discriminators): Adjust.
	(group_case_labels_stmt): Likewise.
	(gimple_can_duplicate_bb_p): Likewise.
	(execute_fixup_cfg): Likewise.
	* auto-profile.cc (afdo_propagate_circuit): Likewise.
	* gimple-range.cc (gimple_ranger::range_on_exit): Likewise.
	* omp-expand.cc (workshare_safe_to_combine_p): Likewise.
	(determine_parallel_type): Likewise.
	(adjust_context_and_scope): Likewise.
	(expand_task_call): Likewise.
	(remove_exit_barrier): Likewise.
	(expand_omp_taskreg): Likewise.
	(expand_omp_for_init_counts): Likewise.
	(expand_omp_for_init_vars): Likewise.
	(expand_omp_for_static_chunk): Likewise.
	(expand_omp_simd): Likewise.
	(expand_oacc_for): Likewise.
	(expand_omp_for): Likewise.
	(expand_omp_sections): Likewise.
	(expand_omp_atomic_fetch_op): Likewise.
	(expand_omp_atomic_cas): Likewise.
	(expand_omp_atomic): Likewise.
	(expand_omp_target): Likewise.
	(expand_omp): Likewise.
	(omp_make_gimple_edges): Likewise.
	* trans-mem.cc (tm_region_init): Likewise.
	* tree-inline.cc (redirect_all_calls): Likewise.
	* tree-parloops.cc (gen_parallel_loop): Likewise.
	* tree-ssa-loop-ch.cc (do_while_loop_p): Likewise.
	* tree-ssa-loop-ivcanon.cc (canonicalize_loop_induction_variables):
	Likewise.
	* tree-ssa-loop-ivopts.cc (stmt_after_ip_normal_pos): Likewise.
	(may_eliminate_iv): Likewise.
	* tree-ssa-loop-manip.cc (standard_iv_increment_position): Likewise.
	* tree-ssa-loop-niter.cc (do_warn_aggressive_loop_optimizations):
	Likewise.
	(estimate_numbers_of_iterations): Likewise.
	* tree-ssa-loop-split.cc (compute_added_num_insns): Likewise.
	* tree-ssa-loop-unswitch.cc (get_predicates_for_bb): Likewise.
	(set_predicates_for_bb): Likewise.
	(init_loop_unswitch_info): Likewise.
	(hoist_guard): Likewise.
	* tree-ssa-phiopt.cc (match_simplify_replacement): Likewise.
	(minmax_replacement): Likewise.
	* tree-ssa-reassoc.cc (update_range_test): Likewise.
	(optimize_range_tests_to_bit_test): Likewise.
	(optimize_range_tests_var_bound): Likewise.
	(optimize_range_tests): Likewise.
	(no_side_effect_bb): Likewise.
	(suitable_cond_bb): Likewise.
	(maybe_optimize_range_tests): Likewise.
	(reassociate_bb): Likewise.
	* tree-vrp.cc (rvrp_folder::pre_fold_bb): Likewise.
---
 gcc/auto-profile.cc           |  2 +-
 gcc/gimple-range.cc           |  2 +-
 gcc/omp-expand.cc             | 72 ++++++++++++++++++----------------
 gcc/trans-mem.cc              |  2 +-
 gcc/tree-cfg.cc               | 12 +++---
 gcc/tree-cfg.h                |  2 +-
 gcc/tree-inline.cc            |  2 +-
 gcc/tree-parloops.cc          |  2 +-
 gcc/tree-ssa-loop-ch.cc       |  2 +-
 gcc/tree-ssa-loop-ivcanon.cc  |  4 +-
 gcc/tree-ssa-loop-ivopts.cc   |  4 +-
 gcc/tree-ssa-loop-manip.cc    |  2 +-
 gcc/tree-ssa-loop-niter.cc    |  4 +-
 gcc/tree-ssa-loop-split.cc    |  2 +-
 gcc/tree-ssa-loop-unswitch.cc | 10 ++---
 gcc/tree-ssa-phiopt.cc        |  4 +-
 gcc/tree-ssa-reassoc.cc       | 23 +++++------
 gcc/tree-vrp.cc               |  2 +-
 18 files changed, 79 insertions(+), 74 deletions(-)

diff --git a/gcc/auto-profile.cc b/gcc/auto-profile.cc
index f88d00934e1..360c42c4b89 100644
--- a/gcc/auto-profile.cc
+++ b/gcc/auto-profile.cc
@@ -1303,7 +1303,7 @@ afdo_propagate_circuit (const bb_set &annotated_bb)
     {
       gimple *def_stmt;
       tree cmp_rhs, cmp_lhs;
-      gimple *cmp_stmt = last_stmt (bb);
+      gimple *cmp_stmt = last_nondebug_stmt (bb);
       edge e;
       edge_iterator ei;
 
diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index 49e9d6b4de6..a275c090e4b 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -181,7 +181,7 @@ gimple_ranger::range_on_exit (vrange &r, basic_block bb, tree name)
   // If this is not the definition block, get the range on the last stmt in
   // the block... if there is one.
   if (def_bb != bb)
-    s = last_stmt (bb);
+    s = last_nondebug_stmt (bb);
   // If there is no statement provided, get the range_on_entry for this block.
   if (s)
     range_of_expr (r, name, s);
diff --git a/gcc/omp-expand.cc b/gcc/omp-expand.cc
index 1ccee29c52a..db58b3cb49b 100644
--- a/gcc/omp-expand.cc
+++ b/gcc/omp-expand.cc
@@ -172,7 +172,7 @@ static bool
 workshare_safe_to_combine_p (basic_block ws_entry_bb)
 {
   struct omp_for_data fd;
-  gimple *ws_stmt = last_stmt (ws_entry_bb);
+  gimple *ws_stmt = last_nondebug_stmt (ws_entry_bb);
 
   if (gimple_code (ws_stmt) == GIMPLE_OMP_SECTIONS)
     return true;
@@ -319,19 +319,20 @@ determine_parallel_type (struct omp_region *region)
       /* Give up for task reductions on the parallel, while it is implementable,
	 adding another big set of APIs or slowing down the normal paths is
	 not acceptable.  */
-      tree pclauses = gimple_omp_parallel_clauses (last_stmt (par_entry_bb));
+      tree pclauses
+	= gimple_omp_parallel_clauses (last_nondebug_stmt (par_entry_bb));
       if (omp_find_clause (pclauses, OMP_CLAUSE__REDUCTEMP_))
	return;
       if (single_succ (par_entry_bb) == ws_entry_bb
	  && single_succ (ws_exit_bb) == par_exit_bb
	  && workshare_safe_to_combine_p (ws_entry_bb)
-	  && (gimple_omp_parallel_combined_p (last_stmt (par_entry_bb))
+	  && (gimple_omp_parallel_combined_p (last_nondebug_stmt (par_entry_bb))
	      || (last_and_only_stmt (ws_entry_bb)
		  && last_and_only_stmt (par_exit_bb))))
	{
-	  gimple *par_stmt = last_stmt (par_entry_bb);
-	  gimple *ws_stmt = last_stmt (ws_entry_bb);
+	  gimple *par_stmt = last_nondebug_stmt (par_entry_bb);
+	  gimple *ws_stmt = last_nondebug_stmt (ws_entry_bb);
 
	  if (region->inner->type == GIMPLE_OMP_FOR)
	    {
@@ -511,11 +512,11 @@ adjust_context_and_scope (struct omp_region *region, tree entry_block,
     case GIMPLE_OMP_PARALLEL:
     case GIMPLE_OMP_TASK:
     case GIMPLE_OMP_TEAMS:
-      entry_stmt = last_stmt (region->entry);
+      entry_stmt = last_nondebug_stmt (region->entry);
       parent_fndecl = gimple_omp_taskreg_child_fn (entry_stmt);
       break;
     case GIMPLE_OMP_TARGET:
-      entry_stmt = last_stmt (region->entry);
+      entry_stmt = last_nondebug_stmt (region->entry);
       parent_fndecl
	= gimple_omp_target_child_fn (as_a <gomp_target *> (entry_stmt));
       break;
@@ -776,7 +777,7 @@ expand_task_call (struct omp_region *region, basic_block bb,
   bool ull = false;
   if (taskloop_p)
     {
-      gimple *g = last_stmt (region->outer->entry);
+      gimple *g = last_nondebug_stmt (region->outer->entry);
       gcc_assert (gimple_code (g) == GIMPLE_OMP_FOR
		  && gimple_omp_for_kind (g) == GF_OMP_FOR_KIND_TASKLOOP);
       struct omp_for_data fd;
@@ -1049,7 +1050,7 @@ remove_exit_barrier (struct omp_region *region)
       if (any_addressable_vars < 0)
	{
	  gomp_parallel *parallel_stmt
-	    = as_a <gomp_parallel *> (last_stmt (region->entry));
+	    = as_a <gomp_parallel *> (last_nondebug_stmt (region->entry));
	  tree child_fun = gimple_omp_parallel_child_fn (parallel_stmt);
	  tree local_decls, block, decl;
	  unsigned ix;
@@ -1245,7 +1246,7 @@ expand_omp_taskreg (struct omp_region *region)
   edge e;
   vec<tree, va_gc> *ws_args;
 
-  entry_stmt = last_stmt (region->entry);
+  entry_stmt = last_nondebug_stmt (region->entry);
   if (gimple_code (entry_stmt) == GIMPLE_OMP_TASK
       && gimple_omp_task_taskwait_p (entry_stmt))
     {
@@ -2340,7 +2341,7 @@ expand_omp_for_init_counts (struct omp_for_data *fd, gimple_stmt_iterator *gsi,
	      set_immediate_dominator (CDI_DOMINATORS, next_bb, cur_bb);
	      break;
	    }
-	  e = split_block (cur_bb, last_stmt (cur_bb));
+	  e = split_block (cur_bb, last_nondebug_stmt (cur_bb));
 
	  basic_block new_cur_bb = create_empty_bb (cur_bb);
	  add_bb_to_loop (new_cur_bb, cur_bb->loop_father);
@@ -2356,7 +2357,7 @@ expand_omp_for_init_counts (struct omp_for_data *fd, gimple_stmt_iterator *gsi,
					 true, GSI_SAME_STMT);
	  expand_omp_build_assign (&gsi2, vs[i], t);
 
-	  ne = split_block (e->dest, last_stmt (e->dest));
+	  ne = split_block (e->dest, last_nondebug_stmt (e->dest));
	  gsi2 = gsi_after_labels (ne->dest);
 
	  expand_omp_build_cond (&gsi2, fd->loops[i].cond_code, vs[i], n2);
@@ -2874,7 +2875,7 @@ expand_omp_for_init_vars (struct omp_for_data *fd, gimple_stmt_iterator *gsi,
	      set_immediate_dominator (CDI_DOMINATORS, entry_bb, dom_bb);
	      break;
	    }
-	  e = split_block (cur_bb, last_stmt (cur_bb));
+	  e = split_block (cur_bb, last_nondebug_stmt (cur_bb));
 
	  basic_block new_cur_bb = create_empty_bb (cur_bb);
	  add_bb_to_loop (new_cur_bb, cur_bb->loop_father);
@@ -2896,7 +2897,7 @@ expand_omp_for_init_vars (struct omp_for_data *fd, gimple_stmt_iterator *gsi,
					 true, GSI_SAME_STMT);
	  expand_omp_build_assign (&gsi2, vs[j], t);
 
-	  edge ne = split_block (e->dest, last_stmt (e->dest));
+	  edge ne = split_block (e->dest, last_nondebug_stmt (e->dest));
	  gsi2 = gsi_after_labels (ne->dest);
 
	  gcond *cond_stmt;
@@ -5753,7 +5754,7 @@ expand_omp_for_static_chunk (struct omp_region *region,
   itype = signed_type_for (type);
 
   entry_bb = region->entry;
-  se = split_block (entry_bb, last_stmt (entry_bb));
+  se = split_block (entry_bb, last_nondebug_stmt (entry_bb));
   entry_bb = se->src;
   iter_part_bb = se->dest;
   cont_bb = region->cont;
@@ -6483,7 +6484,7 @@ expand_omp_simd (struct omp_region *region, struct omp_for_data *fd)
     {
       gcc_assert (BRANCH_EDGE (cont_bb)->dest == l0_bb);
       gcc_assert (EDGE_COUNT (cont_bb->succs) == 2);
-      l1_bb = split_block (cont_bb, last_stmt (cont_bb))->dest;
+      l1_bb = split_block (cont_bb, last_nondebug_stmt (cont_bb))->dest;
       l2_bb = BRANCH_EDGE (entry_bb)->dest;
     }
   else
@@ -6931,7 +6932,7 @@ expand_omp_simd (struct omp_region *region, struct omp_for_data *fd)
	      gsi = gsi_after_labels (bb);
	      expand_omp_build_assign (&gsi, fd->loops[i].v, t);
 
-	      bb = split_block (bb, last_stmt (bb))->dest;
+	      bb = split_block (bb, last_nondebug_stmt (bb))->dest;
	      gsi = gsi_start_bb (bb);
	      tree itype = TREE_TYPE (fd->loops[i].v);
	      if (fd->loops[i].m2)
@@ -7057,7 +7058,7 @@ expand_omp_simd (struct omp_region *region, struct omp_for_data *fd)
	    t = counts[i + 1];
	  expand_omp_build_assign (&gsi, min_arg1, t2);
	  expand_omp_build_assign (&gsi, min_arg2, t);
-	  e = split_block (init_bb, last_stmt (init_bb));
+	  e = split_block (init_bb, last_nondebug_stmt (init_bb));
	  gsi = gsi_after_labels (e->dest);
	  init_bb = e->dest;
	  remove_edge (FALLTHRU_EDGE (entry_bb));
@@ -7713,7 +7714,7 @@ expand_oacc_for (struct omp_region *region, struct omp_for_data *fd)
   edge split, be, fte;
 
   /* Split the end of entry_bb to create head_bb.  */
-  split = split_block (entry_bb, last_stmt (entry_bb));
+  split = split_block (entry_bb, last_nondebug_stmt (entry_bb));
   basic_block head_bb = split->dest;
   entry_bb = split->src;
 
@@ -8140,8 +8141,9 @@ expand_omp_for (struct omp_region *region, gimple *inner_stmt)
   struct omp_for_data_loop *loops;
 
   loops = XALLOCAVEC (struct omp_for_data_loop,
-		      gimple_omp_for_collapse (last_stmt (region->entry)));
-  omp_extract_for_data (as_a <gomp_for *> (last_stmt (region->entry)),
+		      gimple_omp_for_collapse
+			(last_nondebug_stmt (region->entry)));
+  omp_extract_for_data (as_a <gomp_for *> (last_nondebug_stmt (region->entry)),
			&fd, loops);
   region->sched_kind = fd.sched_kind;
   region->sched_modifiers = fd.sched_modifiers;
@@ -8490,7 +8492,7 @@ expand_omp_sections (struct omp_region *region)
   gcc_assert (gimple_code (gsi_stmt (switch_si)) == GIMPLE_OMP_SECTIONS_SWITCH);
   if (exit_reachable)
     {
-      cont = as_a <gomp_continue *> (last_stmt (l1_bb));
+      cont = as_a <gomp_continue *> (last_nondebug_stmt (l1_bb));
       gcc_assert (gimple_code (cont) == GIMPLE_OMP_CONTINUE);
       vmain = gimple_omp_continue_control_use (cont);
       vnext = gimple_omp_continue_control_def (cont);
@@ -8924,9 +8926,9 @@ expand_omp_atomic_fetch_op (basic_block load_bb,
   if (gimple_code (gsi_stmt (gsi)) != GIMPLE_OMP_ATOMIC_STORE)
     return false;
   need_new = gimple_omp_atomic_need_value_p (gsi_stmt (gsi));
-  need_old = gimple_omp_atomic_need_value_p (last_stmt (load_bb));
+  need_old = gimple_omp_atomic_need_value_p (last_nondebug_stmt (load_bb));
   enum omp_memory_order omo
-    = gimple_omp_atomic_memory_order (last_stmt (load_bb));
+    = gimple_omp_atomic_memory_order (last_nondebug_stmt (load_bb));
   enum memmodel mo = omp_memory_order_to_memmodel (omo);
   gcc_checking_assert (!need_old || !need_new);
@@ -9140,7 +9142,7 @@ expand_omp_atomic_cas (basic_block load_bb, tree addr,
     return false;
 
   location_t loc = gimple_location (store_stmt);
-  gimple *load_stmt = last_stmt (load_bb);
+  gimple *load_stmt = last_nondebug_stmt (load_bb);
   bool need_new = gimple_omp_atomic_need_value_p (store_stmt);
   bool need_old = gimple_omp_atomic_need_value_p (load_stmt);
   bool weak = gimple_omp_atomic_weak_p (load_stmt);
@@ -9559,8 +9561,10 @@ static void
 expand_omp_atomic (struct omp_region *region)
 {
   basic_block load_bb = region->entry, store_bb = region->exit;
-  gomp_atomic_load *load = as_a <gomp_atomic_load *> (last_stmt (load_bb));
-  gomp_atomic_store *store = as_a <gomp_atomic_store *> (last_stmt (store_bb));
+  gomp_atomic_load *load
+    = as_a <gomp_atomic_load *> (last_nondebug_stmt (load_bb));
+  gomp_atomic_store *store
+    = as_a <gomp_atomic_store *> (last_nondebug_stmt (store_bb));
   tree loaded_val = gimple_omp_atomic_load_lhs (load);
   tree addr = gimple_omp_atomic_load_rhs (load);
   tree stored_val = gimple_omp_atomic_store_val (store);
@@ -9791,7 +9795,7 @@ expand_omp_target (struct omp_region *region)
   bool offloaded;
   int target_kind;
 
-  entry_stmt = as_a <gomp_target *> (last_stmt (region->entry));
+  entry_stmt = as_a <gomp_target *> (last_nondebug_stmt (region->entry));
   target_kind = gimple_omp_target_kind (entry_stmt);
   new_bb = region->entry;
 
@@ -10558,15 +10562,15 @@ expand_omp (struct omp_region *region)
	determine_parallel_type (region);
 
       if (region->type == GIMPLE_OMP_FOR
-	  && gimple_omp_for_combined_p (last_stmt (region->entry)))
-	inner_stmt = last_stmt (region->inner->entry);
+	  && gimple_omp_for_combined_p (last_nondebug_stmt (region->entry)))
+	inner_stmt = last_nondebug_stmt (region->inner->entry);
 
       if (region->inner)
	expand_omp (region->inner);
 
       saved_location = input_location;
-      if (gimple_has_location (last_stmt (region->entry)))
-	input_location = gimple_location (last_stmt (region->entry));
+      if (gimple_has_location (last_nondebug_stmt (region->entry)))
+	input_location = gimple_location (last_nondebug_stmt (region->entry));
 
       switch (region->type)
	{
@@ -10596,7 +10600,7 @@ expand_omp (struct omp_region *region)
	case GIMPLE_OMP_ORDERED:
	  {
	    gomp_ordered *ord_stmt
-	      = as_a <gomp_ordered *> (last_stmt (region->entry));
+	      = as_a <gomp_ordered *> (last_nondebug_stmt (region->entry));
	    if (gimple_omp_ordered_standalone_p (ord_stmt))
	      {
		/* We'll expand these when expanding corresponding
@@ -10926,7 +10930,7 @@ bool
 omp_make_gimple_edges (basic_block bb, struct omp_region **region,
		       int *region_idx)
 {
-  gimple *last = last_stmt (bb);
+  gimple *last = last_nondebug_stmt (bb);
   enum gimple_code code = gimple_code (last);
   struct omp_region *cur_region = *region;
   bool fallthru = false;
diff --git a/gcc/trans-mem.cc b/gcc/trans-mem.cc
index 080b20d7eb6..4b129663e0d 100644
--- a/gcc/trans-mem.cc
+++ b/gcc/trans-mem.cc
@@ -2057,7 +2057,7 @@ tm_region_init (struct tm_region *region)
       region = tm_region_init_1 (region, bb);
 
       /* Check for the last statement in the block beginning a new region.  */
-      g = last_stmt (bb);
+      g = last_nondebug_stmt (bb);
       old_region = region;
       if (g)
	if (gtransaction *trans_stmt = dyn_cast <gtransaction *> (g))
diff --git a/gcc/tree-cfg.cc b/gcc/tree-cfg.cc
index 4927fc0a8d9..21cf6fca259 100644
--- a/gcc/tree-cfg.cc
+++ b/gcc/tree-cfg.cc
@@ -1236,7 +1236,7 @@ assign_discriminators (void)
	  curr_discr = next_discriminator_for_locus (curr_locus);
	}
 
-      gimple *last = last_stmt (bb);
+      gimple *last = last_nondebug_stmt (bb);
       location_t locus = last ? gimple_location (last) : UNKNOWN_LOCATION;
       if (locus == UNKNOWN_LOCATION)
	continue;
@@ -1246,7 +1246,7 @@ assign_discriminators (void)
       FOR_EACH_EDGE (e, ei, bb->succs)
	{
	  gimple *first = first_non_label_stmt (e->dest);
-	  gimple *last = last_stmt (e->dest);
+	  gimple *last = last_nondebug_stmt (e->dest);
 
	  gimple *stmt_on_same_line = NULL;
	  if (first && same_line_p (locus, &locus_e,
@@ -1860,7 +1860,7 @@ group_case_labels_stmt (gswitch *stmt)
	     -Wreturn-type can be diagnosed.  We'll optimize it later
	     during switchconv pass or any other cfg cleanup.  */
	  && (gimple_in_ssa_p (cfun)
-	      || (LOCATION_LOCUS (gimple_location (last_stmt (base_bb)))
+	      || (LOCATION_LOCUS (gimple_location (last_nondebug_stmt (base_bb)))
		  != BUILTINS_LOCATION)))
	{
	  edge base_edge = find_edge (gimple_bb (stmt), base_bb);
@@ -2941,7 +2941,7 @@ first_non_label_stmt (basic_block bb)
 /* Return the last statement in basic block BB.  */
 
 gimple *
-last_stmt (basic_block bb)
+last_nondebug_stmt (basic_block bb)
 {
   gimple_stmt_iterator i = gsi_last_bb (bb);
   gimple *stmt = NULL;
@@ -6409,7 +6409,7 @@ gimple_split_block_before_cond_jump (basic_block bb)
 static bool
 gimple_can_duplicate_bb_p (const_basic_block bb)
 {
-  gimple *last = last_stmt (CONST_CAST_BB (bb));
+  gimple *last = last_nondebug_stmt (CONST_CAST_BB (bb));
 
   /* Do checks that can only fail for the last stmt, to minimize the work in the
      stmt loop.  */
@@ -9954,7 +9954,7 @@ execute_fixup_cfg (void)
	 when inlining a noreturn call that does in fact return.  */
       if (EDGE_COUNT (bb->succs) == 0)
	{
-	  gimple *stmt = last_stmt (bb);
+	  gimple *stmt = last_nondebug_stmt (bb);
	  if (!stmt
	      || (!is_ctrl_stmt (stmt)
		  && (!is_gimple_call (stmt)
diff --git a/gcc/tree-cfg.h b/gcc/tree-cfg.h
index 9b56a68fe9d..f5f0ea0449a 100644
--- a/gcc/tree-cfg.h
+++ b/gcc/tree-cfg.h
@@ -61,7 +61,7 @@ extern bool assert_unreachable_fallthru_edge_p (edge);
 extern void delete_tree_cfg_annotations (function *);
 extern gphi *get_virtual_phi (basic_block);
 extern gimple *first_stmt (basic_block);
-extern gimple *last_stmt (basic_block);
+extern gimple *last_nondebug_stmt (basic_block);
 extern gimple *last_and_only_stmt (basic_block);
 extern bool verify_gimple_in_seq (gimple_seq, bool = true);
 extern bool verify_gimple_in_cfg (struct function *, bool, bool = true);
diff --git a/gcc/tree-inline.cc b/gcc/tree-inline.cc
index c702f0032a1..63a19f8d1d8 100644
--- a/gcc/tree-inline.cc
+++ b/gcc/tree-inline.cc
@@ -2972,7 +2972,7 @@ void
 redirect_all_calls (copy_body_data * id, basic_block bb)
 {
   gimple_stmt_iterator si;
-  gimple *last = last_stmt (bb);
+  gimple *last = last_nondebug_stmt (bb);
   for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
     {
       gimple *stmt = gsi_stmt (si);
diff --git a/gcc/tree-parloops.cc b/gcc/tree-parloops.cc
index eae240b9ffd..0abec54905d 100644
--- a/gcc/tree-parloops.cc
+++ b/gcc/tree-parloops.cc
@@ -3168,7 +3168,7 @@ gen_parallel_loop (class loop *loop,
 
   /* Create the parallel constructs.  */
   loc = UNKNOWN_LOCATION;
-  cond_stmt = last_stmt (loop->header);
+  cond_stmt = last_nondebug_stmt (loop->header);
   if (cond_stmt)
     loc = gimple_location (cond_stmt);
   create_parallel_loop (loop, create_loop_fn (loc), arg_struct, new_arg_struct,
diff --git a/gcc/tree-ssa-loop-ch.cc b/gcc/tree-ssa-loop-ch.cc
index 692e8ce7c38..7fdef3bb11a 100644
--- a/gcc/tree-ssa-loop-ch.cc
+++ b/gcc/tree-ssa-loop-ch.cc
@@ -244,7 +244,7 @@ should_duplicate_loop_header_p (basic_block header, class loop *loop,
 static bool
 do_while_loop_p (class loop *loop)
 {
-  gimple *stmt = last_stmt (loop->latch);
+  gimple *stmt = last_nondebug_stmt (loop->latch);
 
   /* If the latch of the loop is not empty, it is not a do-while loop.  */
   if (stmt
diff --git a/gcc/tree-ssa-loop-ivcanon.cc b/gcc/tree-ssa-loop-ivcanon.cc
index e41ec73a52a..f678de41cb0 100644
--- a/gcc/tree-ssa-loop-ivcanon.cc
+++ b/gcc/tree-ssa-loop-ivcanon.cc
@@ -1207,7 +1207,7 @@ canonicalize_loop_induction_variables (class loop *loop,
	= niter_desc.may_be_zero && !integer_zerop (niter_desc.may_be_zero);
     }
   if (TREE_CODE (niter) == INTEGER_CST)
-    locus = last_stmt (exit->src);
+    locus = last_nondebug_stmt (exit->src);
   else
     {
       /* For non-constant niter fold may_be_zero into niter again.  */
@@ -1234,7 +1234,7 @@ canonicalize_loop_induction_variables (class loop *loop,
	niter = find_loop_niter_by_eval (loop, &exit);
 
       if (exit)
-	locus = last_stmt (exit->src);
+	locus = last_nondebug_stmt (exit->src);
 
       if (TREE_CODE (niter) != INTEGER_CST)
	exit = NULL;
diff --git a/gcc/tree-ssa-loop-ivopts.cc b/gcc/tree-ssa-loop-ivopts.cc
index 78e8cbc75b5..324703054b5 100644
--- a/gcc/tree-ssa-loop-ivopts.cc
+++ b/gcc/tree-ssa-loop-ivopts.cc
@@ -937,7 +937,7 @@ stmt_after_ip_normal_pos (class loop *loop, gimple *stmt)
   if (sbb != bb)
     return false;
 
-  return stmt == last_stmt (bb);
+  return stmt == last_nondebug_stmt (bb);
 }
 
 /* Returns true if STMT if after the place where the original induction
@@ -5397,7 +5397,7 @@ may_eliminate_iv (struct ivopts_data *data,
   /* For now works only for exits that dominate the loop latch.
      TODO: extend to other conditions inside loop body.  */
   ex_bb = gimple_bb (use->stmt);
-  if (use->stmt != last_stmt (ex_bb)
+  if (use->stmt != last_nondebug_stmt (ex_bb)
       || gimple_code (use->stmt) != GIMPLE_COND
       || !dominated_by_p (CDI_DOMINATORS, loop->latch, ex_bb))
     return false;
diff --git a/gcc/tree-ssa-loop-manip.cc b/gcc/tree-ssa-loop-manip.cc
index 909b705d00d..598e2189f6c 100644
--- a/gcc/tree-ssa-loop-manip.cc
+++ b/gcc/tree-ssa-loop-manip.cc
@@ -798,7 +798,7 @@ standard_iv_increment_position (class loop *loop, gimple_stmt_iterator *bsi,
				bool *insert_after)
 {
   basic_block bb = ip_normal_pos (loop), latch = ip_end_pos (loop);
-  gimple *last = last_stmt (latch);
+  gimple *last = last_nondebug_stmt (latch);
 
   if (!bb
       || (last && gimple_code (last) != GIMPLE_LABEL))
diff --git a/gcc/tree-ssa-loop-niter.cc b/gcc/tree-ssa-loop-niter.cc
index c0ed6573409..5d398b67e68 100644
--- a/gcc/tree-ssa-loop-niter.cc
+++ b/gcc/tree-ssa-loop-niter.cc
@@ -3864,7 +3864,7 @@ do_warn_aggressive_loop_optimizations (class loop *loop,
   if (e == NULL)
     return;
 
-  gimple *estmt = last_stmt (e->src);
+  gimple *estmt = last_nondebug_stmt (e->src);
   char buf[WIDE_INT_PRINT_BUFFER_SIZE];
   print_dec (i_bound, buf, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
	     ? UNSIGNED : SIGNED);
@@ -4832,7 +4832,7 @@ estimate_numbers_of_iterations (class loop *loop)
			     build_int_cst (type, 0),
			     niter);
	record_estimate (loop, niter, niter_desc.max,
-			 last_stmt (ex->src),
+			 last_nondebug_stmt (ex->src),
			 true, ex == likely_exit, true);
	record_control_iv (loop, &niter_desc);
       }
diff --git a/gcc/tree-ssa-loop-split.cc b/gcc/tree-ssa-loop-split.cc
index e5e6aa8eede..b41b5e614c2 100644
--- a/gcc/tree-ssa-loop-split.cc
+++ b/gcc/tree-ssa-loop-split.cc
@@ -1390,7 +1390,7 @@ compute_added_num_insns (struct loop *loop, const_edge branch_edge)
 
   auto_vec<gimple *> worklist;
   hash_set<gimple *> removed;
-  gimple *stmt = last_stmt (cond_bb);
+  gimple *stmt = last_nondebug_stmt (cond_bb);
   worklist.safe_push (stmt);
   removed.add (stmt);
 
diff --git a/gcc/tree-ssa-loop-unswitch.cc b/gcc/tree-ssa-loop-unswitch.cc
index 95580768804..47255a4125d 100644
--- a/gcc/tree-ssa-loop-unswitch.cc
+++ b/gcc/tree-ssa-loop-unswitch.cc
@@ -236,7 +236,7 @@ static void clean_up_after_unswitching (int);
 static vec<unswitch_predicate *> &
 get_predicates_for_bb (basic_block bb)
 {
-  gimple *last = last_stmt (bb);
+  gimple *last = last_nondebug_stmt (bb);
   return (*bb_predicates)[last == NULL ? 0 : gimple_uid (last)];
 }
 
@@ -245,7 +245,7 @@ get_predicates_for_bb (basic_block bb)
 static void
 set_predicates_for_bb (basic_block bb, vec<unswitch_predicate *> predicates)
 {
-  gimple_set_uid (last_stmt (bb), bb_predicates->length ());
+  gimple_set_uid (last_nondebug_stmt (bb), bb_predicates->length ());
   bb_predicates->safe_push (predicates);
 }
 
@@ -283,7 +283,7 @@ init_loop_unswitch_info (class loop *&loop, unswitch_predicate *&hottest,
       else
	{
	  candidates.release ();
-	  gimple *last = last_stmt (bbs[i]);
+	  gimple *last = last_nondebug_stmt (bbs[i]);
	  if (last != NULL)
	    gimple_set_uid (last, 0);
	}
@@ -305,7 +305,7 @@ init_loop_unswitch_info (class loop *&loop, unswitch_predicate *&hottest,
       /* No predicates to unswitch on in the outer loops.  */
       if (!flow_bb_inside_loop_p (loop, bbs[i]))
	{
-	  gimple *last = last_stmt (bbs[i]);
+	  gimple *last = last_nondebug_stmt (bbs[i]);
	  if (last != NULL)
	    gimple_set_uid (last, 0);
	}
@@ -1472,7 +1472,7 @@ hoist_guard (class loop *loop, edge guard)
   gimple_cond_make_true (cond_stmt);
   update_stmt (cond_stmt);
   /* Create new loop pre-header.  */
-  e = split_block (pre_header, last_stmt (pre_header));
+  e = split_block (pre_header, last_nondebug_stmt (pre_header));
 
   dump_user_location_t loc = find_loop_location (loop);
diff --git a/gcc/tree-ssa-phiopt.cc b/gcc/tree-ssa-phiopt.cc
index 37b98ef3c52..51f33e1e81a 100644
--- a/gcc/tree-ssa-phiopt.cc
+++ b/gcc/tree-ssa-phiopt.cc
@@ -711,7 +711,7 @@ match_simplify_replacement (basic_block cond_bb, basic_block middle_bb,
      So, given the condition COND, and the two PHI arguments, match and simplify
      can happen on (COND) ? arg0 : arg1.  */
 
-  stmt = last_stmt (cond_bb);
+  stmt = last_nondebug_stmt (cond_bb);
 
   /* We need to know which is the true edge and which is the false
      edge so that we know when to invert the condition below.  */
@@ -1832,7 +1832,7 @@ minmax_replacement (basic_block cond_bb, basic_block middle_bb, basic_block alt_middle_bb,
     return false;
 
   /* Emit the statement to compute min/max.  */
-  location_t locus = gimple_location (last_stmt (cond_bb));
+  location_t locus = gimple_location (last_nondebug_stmt (cond_bb));
   gimple_seq stmts = NULL;
   tree phi_result = PHI_RESULT (phi);
   result = gimple_build (&stmts, locus, minmax, TREE_TYPE (phi_result),
diff --git a/gcc/tree-ssa-reassoc.cc b/gcc/tree-ssa-reassoc.cc
index aeaca2f76cc..6956a3dadb5 100644
--- a/gcc/tree-ssa-reassoc.cc
+++ b/gcc/tree-ssa-reassoc.cc
@@ -2835,7 +2835,7 @@ update_range_test (struct range_entry *range, struct range_entry *otherrange,
   operand_entry *oe = (*ops)[idx];
   tree op = oe->op;
   gimple *stmt = op ? SSA_NAME_DEF_STMT (op)
-		    : last_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id));
+		    : last_nondebug_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id));
   location_t loc = gimple_location (stmt);
   tree optype = op ? TREE_TYPE (op) : boolean_type_node;
   tree tem = build_range_check (loc, optype, unshare_expr (exp),
@@ -3400,7 +3400,8 @@ optimize_range_tests_to_bit_test (enum tree_code opcode, int first, int length,
       operand_entry *oe = (*ops)[ranges[i].idx];
       tree op = oe->op;
       gimple *stmt = op ? SSA_NAME_DEF_STMT (op)
-			: last_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id));
+			: last_nondebug_stmt (BASIC_BLOCK_FOR_FN
						(cfun, oe->id));
       location_t loc = gimple_location (stmt);
       tree optype = op ? TREE_TYPE (op) : boolean_type_node;
 
@@ -3831,7 +3832,7 @@ optimize_range_tests_var_bound (enum tree_code opcode, int first, int length,
       else
	{
	  operand_entry *oe = (*ops)[ranges[i].idx];
-	  stmt = last_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id));
+	  stmt = last_nondebug_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id));
	  if (gimple_code (stmt) != GIMPLE_COND)
	    continue;
	  ccode = gimple_cond_code (stmt);
@@ -4096,7 +4097,7 @@ optimize_range_tests (enum tree_code opcode,
     init_range_entry (ranges + i, oe->op,
		      oe->op
		      ? NULL
-		      : last_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id)));
+		      : last_nondebug_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id)));
     /* For | invert it now, we will invert it again before emitting
	the optimized expression.  */
     if (opcode == BIT_IOR_EXPR
@@ -4443,7 +4444,7 @@ suitable_cond_bb (basic_block bb, basic_block test_bb, basic_block *other_bb,
   if (test_bb == bb)
     return false;
   /* Check last stmt first.  */
-  stmt = last_stmt (bb);
+  stmt = last_nondebug_stmt (bb);
   if (stmt == NULL
       || (gimple_code (stmt) != GIMPLE_COND
	  && (backward || !final_range_test_p (stmt)))
@@ -4521,7 +4522,7 @@ suitable_cond_bb (basic_block bb, basic_block test_bb, basic_block *other_bb,
     }
   else
     {
-      gimple *test_last = last_stmt (test_bb);
+      gimple *test_last = last_nondebug_stmt (test_bb);
       if (gimple_code (test_last) == GIMPLE_COND)
	{
	  if (backward ? e2->src != test_bb : e->src != bb)
@@ -4589,7 +4590,7 @@ no_side_effect_bb (basic_block bb)
   if (!gimple_seq_empty_p (phi_nodes (bb)))
     return false;
 
-  last = last_stmt (bb);
+  last = last_nondebug_stmt (bb);
   for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
     {
       gimple *stmt = gsi_stmt (gsi);
@@ -4784,7 +4785,7 @@ maybe_optimize_range_tests (gimple *stmt)
     return cfg_cleanup_needed;
   else if (single_pred_p (e->dest))
     {
-      stmt = last_stmt (e->dest);
+      stmt = last_nondebug_stmt (e->dest);
       if (stmt
	  && gimple_code (stmt) == GIMPLE_COND
	  && EDGE_COUNT (e->dest->succs) == 2)
@@ -4842,7 +4843,7 @@ maybe_optimize_range_tests (gimple *stmt)
       bb_ent.first_idx = ops.length ();
       bb_ent.last_idx = bb_ent.first_idx;
       e = find_edge (bb, other_bb);
-      stmt = last_stmt (bb);
+      stmt = last_nondebug_stmt (bb);
       gimple_set_visited (stmt, true);
       if (gimple_code (stmt) != GIMPLE_COND)
	{
@@ -5018,7 +5019,7 @@ maybe_optimize_range_tests (gimple *stmt)
	    tree new_op;
 
	    max_idx = idx;
-	    stmt = last_stmt (bb);
+	    stmt = last_nondebug_stmt (bb);
	    new_op = update_ops (bbinfo[idx].op,
				 (enum tree_code)
				 ops[bbinfo[idx].first_idx]->rank,
@@ -6660,7 +6661,7 @@ reassociate_bb (basic_block bb)
 {
   gimple_stmt_iterator gsi;
   basic_block son;
-  gimple *stmt = last_stmt (bb);
+  gimple *stmt = last_nondebug_stmt (bb);
   bool cfg_cleanup_needed = false;
 
   if (stmt && !gimple_visited_p (stmt))
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index 89707a56b21..7f03f54cdd7 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -942,7 +942,7 @@ public:
     for (gphi_iterator gsi = gsi_start_phis (bb);
	 !gsi_end_p (gsi); gsi_next (&gsi))
       m_ranger->register_inferred_ranges (gsi.phi ());
-    m_last_bb_stmt = last_stmt (bb);
+    m_last_bb_stmt = last_nondebug_stmt (bb);
   }
 
   void post_fold_bb (basic_block bb) override