From patchwork Fri Apr 21 11:24:45 2023
From: Richard Biener
Date: Fri, 21 Apr 2023 13:24:45 +0200 (CEST)
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 1/3] change DF to use the proper CFG order for DF_FORWARD problems
Message-Id: <20230421112446.2DD2B13456@imap2.suse-dmz.suse.de>
This changes DF to use RPO on the forward graph for DF_FORWARD
problems.  While that naturally maps to pre_and_rev_post_order_compute,
we use the existing (wrong) CFG order for DF_BACKWARD problems computed
by post_order_compute, since that provides the required side effect of
deleting unreachable blocks.  The change requires turning the
inconsistent mix of vec<int> and int * back to a consistent int *.

A followup patch will change the inverted_post_order_compute API and
change the DF_BACKWARD problem to use the correct RPO on the backward
graph, together with statistics I produced last year for the combined
effect.

Bootstrapped and tested on x86_64-unknown-linux-gnu.

	* df.h (df_d::postorder_inverted): Change back to int *,
	clarify comments.
	* df-core.cc (rest_of_handle_df_finish): Adjust.
	(df_analyze_1): Likewise.
	(df_analyze): For DF_FORWARD problems use RPO on the forward
	graph.  Adjust.
	(loop_inverted_post_order_compute): Adjust API.
	(df_analyze_loop): Adjust.
	(df_get_n_blocks): Likewise.
	(df_get_postorder): Likewise.
---
 gcc/df-core.cc | 58 ++++++++++++++++++++++++++------------------------
 gcc/df.h       |  8 +++----
 2 files changed, 34 insertions(+), 32 deletions(-)

diff --git a/gcc/df-core.cc b/gcc/df-core.cc
index de5cbd0c622..27123645da5 100644
--- a/gcc/df-core.cc
+++ b/gcc/df-core.cc
@@ -810,7 +810,7 @@ rest_of_handle_df_finish (void)
     }
 
   free (df->postorder);
-  df->postorder_inverted.release ();
+  free (df->postorder_inverted);
   free (df->hard_regs_live_count);
   free (df);
   df = NULL;
@@ -1207,9 +1207,6 @@ df_analyze_1 (void)
 {
   int i;
 
-  /* These should be the same.  */
-  gcc_assert ((unsigned) df->n_blocks == df->postorder_inverted.length ());
-
   /* We need to do this before the df_verify_all because this is
      not kept incrementally up to date.  */
   df_compute_regs_ever_live (false);
@@ -1232,8 +1229,8 @@ df_analyze_1 (void)
       if (dflow->problem->dir == DF_FORWARD)
         df_analyze_problem (dflow,
                             df->blocks_to_analyze,
-                            df->postorder_inverted.address (),
-                            df->postorder_inverted.length ());
+                            df->postorder_inverted,
+                            df->n_blocks);
       else
         df_analyze_problem (dflow,
                             df->blocks_to_analyze,
@@ -1261,10 +1258,15 @@ df_analyze (void)
   bitmap current_all_blocks = BITMAP_ALLOC (&df_bitmap_obstack);
 
   free (df->postorder);
+  free (df->postorder_inverted);
   df->postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
   df->n_blocks = post_order_compute (df->postorder, true, true);
-  df->postorder_inverted.truncate (0);
-  inverted_post_order_compute (&df->postorder_inverted);
+  /* For DF_FORWARD use a RPO on the forward graph.  Since we want to
+     have unreachable blocks deleted use post_order_compute and reverse
+     the order.  */
+  df->postorder_inverted = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+  for (int i = 0; i < df->n_blocks; ++i)
+    df->postorder_inverted[i] = df->postorder[df->n_blocks - 1 - i];
 
   for (int i = 0; i < df->n_blocks; i++)
     bitmap_set_bit (current_all_blocks, df->postorder[i]);
@@ -1273,7 +1275,7 @@ df_analyze (void)
     {
       /* Verify that POSTORDER_INVERTED only contains blocks reachable from
         the ENTRY block.  */
-      for (unsigned int i = 0; i < df->postorder_inverted.length (); i++)
+      for (int i = 0; i < df->n_blocks; i++)
        gcc_assert (bitmap_bit_p (current_all_blocks,
                                  df->postorder_inverted[i]));
     }
@@ -1283,12 +1285,11 @@ df_analyze (void)
   if (df->analyze_subset)
     {
       bitmap_and_into (df->blocks_to_analyze, current_all_blocks);
-      df->n_blocks = df_prune_to_subcfg (df->postorder,
-                                         df->n_blocks, df->blocks_to_analyze);
-      unsigned int newlen = df_prune_to_subcfg (df->postorder_inverted.address (),
-                                                df->postorder_inverted.length (),
-                                                df->blocks_to_analyze);
-      df->postorder_inverted.truncate (newlen);
+      unsigned int newlen = df_prune_to_subcfg (df->postorder, df->n_blocks,
+                                                df->blocks_to_analyze);
+      df_prune_to_subcfg (df->postorder_inverted, df->n_blocks,
+                          df->blocks_to_analyze);
+      df->n_blocks = newlen;
       BITMAP_FREE (current_all_blocks);
     }
   else
@@ -1364,14 +1365,13 @@ loop_post_order_compute (int *post_order, class loop *loop)
 /* Compute the reverse top sort order of the inverted sub-CFG specified
    by LOOP.  Returns the number of blocks which is always loop->num_nodes.  */
 
-static void
-loop_inverted_post_order_compute (vec<int> *post_order, class loop *loop)
+static int
+loop_inverted_post_order_compute (int *post_order, class loop *loop)
 {
   basic_block bb;
   edge_iterator *stack;
   int sp;
-
-  post_order->reserve_exact (loop->num_nodes);
+  int post_order_num = 0;
 
   /* Allocate stack for back-tracking up CFG.  */
   stack = XNEWVEC (edge_iterator, loop->num_nodes + 1);
@@ -1408,13 +1408,13 @@ loop_inverted_post_order_compute (vec<int> *post_order, class loop *loop)
             time, check its predecessors.  */
            stack[sp++] = ei_start (pred->preds);
          else
-           post_order->quick_push (pred->index);
+           post_order[post_order_num++] = pred->index;
        }
       else
        {
          if (flow_bb_inside_loop_p (loop, bb)
             && ei_one_before_end_p (ei))
-           post_order->quick_push (bb->index);
+           post_order[post_order_num++] = bb->index;
 
          if (!ei_one_before_end_p (ei))
           ei_next (&stack[sp - 1]);
@@ -1424,6 +1424,7 @@ loop_inverted_post_order_compute (vec<int> *post_order, class loop *loop)
     }
 
   free (stack);
+  return post_order_num;
 }
 
 
@@ -1433,13 +1434,14 @@ void
 df_analyze_loop (class loop *loop)
 {
   free (df->postorder);
+  free (df->postorder_inverted);
 
   df->postorder = XNEWVEC (int, loop->num_nodes);
-  df->postorder_inverted.truncate (0);
+  df->postorder_inverted = XNEWVEC (int, loop->num_nodes);
   df->n_blocks = loop_post_order_compute (df->postorder, loop);
-  loop_inverted_post_order_compute (&df->postorder_inverted, loop);
+  int n = loop_inverted_post_order_compute (df->postorder_inverted, loop);
   gcc_assert ((unsigned) df->n_blocks == loop->num_nodes);
-  gcc_assert (df->postorder_inverted.length () == loop->num_nodes);
+  gcc_assert ((unsigned) n == loop->num_nodes);
 
   bitmap blocks = BITMAP_ALLOC (&df_bitmap_obstack);
   for (int i = 0; i < df->n_blocks; ++i)
@@ -1460,8 +1462,8 @@ df_get_n_blocks (enum df_flow_dir dir)
 
   if (dir == DF_FORWARD)
     {
-      gcc_assert (df->postorder_inverted.length ());
-      return df->postorder_inverted.length ();
+      gcc_assert (df->postorder_inverted);
+      return df->n_blocks;
     }
 
   gcc_assert (df->postorder);
@@ -1480,8 +1482,8 @@ df_get_postorder (enum df_flow_dir dir)
 
   if (dir == DF_FORWARD)
     {
-      gcc_assert (df->postorder_inverted.length ());
-      return df->postorder_inverted.address ();
+      gcc_assert (df->postorder_inverted);
+      return df->postorder_inverted;
     }
   gcc_assert (df->postorder);
   return df->postorder;
diff --git a/gcc/df.h b/gcc/df.h
index aec2223591a..402657a7076 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -581,10 +581,10 @@ public:
   bitmap_head insns_to_delete;
   bitmap_head insns_to_rescan;
   bitmap_head insns_to_notes_rescan;
-  int *postorder;               /* The current set of basic blocks
                                   in reverse postorder.  */
-  vec<int> postorder_inverted;  /* The current set of basic blocks
                                   in reverse postorder of inverted CFG.  */
+  int *postorder;               /* The current set of basic blocks in reverse
+                                   postorder for DF_BACKWARD problems.  */
+  int *postorder_inverted;      /* The current set of basic blocks in reverse
+                                   postorder for DF_FORWARD problems.  */
   int n_blocks;                 /* The number of blocks in reverse postorder.  */
 
   /* An array [FIRST_PSEUDO_REGISTER], indexed by regno, of the number
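
To make the ordering trick concrete: an RPO of the forward CFG is just a
post order walked backwards, which is why df_analyze can keep calling
post_order_compute (wanted for its unreachable-block-deletion side
effect) and simply reverse the result into df->postorder_inverted.  A
self-contained toy sketch of that relationship, using standard C++ and a
hand-rolled DFS rather than GCC's basic_block/post_order_compute
interfaces (the graph and function names here are illustrative only):

#include <cstdio>
#include <vector>

// Toy CFG as an adjacency list of successor indices; block 0 is the entry.
typedef std::vector<std::vector<int> > graph;

// Plain recursive DFS emitting blocks in post order.  Unreachable blocks
// are simply never visited (GCC's post_order_compute additionally
// deletes them, the side effect the patch wants to keep).
static void
dfs (const graph &g, int bb, std::vector<bool> &visited,
     std::vector<int> &post_order)
{
  visited[bb] = true;
  for (size_t i = 0; i < g[bb].size (); ++i)
    if (!visited[g[bb][i]])
      dfs (g, g[bb][i], visited, post_order);
  post_order.push_back (bb);
}

int
main ()
{
  // Diamond CFG: 0 -> {1, 2}, 1 -> 3, 2 -> 3.
  graph g (4);
  g[0].push_back (1); g[0].push_back (2);
  g[1].push_back (3);
  g[2].push_back (3);

  std::vector<bool> visited (g.size (), false);
  std::vector<int> post_order;
  dfs (g, 0, visited, post_order);

  // RPO on the forward graph is the post order reversed -- exactly the
  // loop the patch adds for df->postorder_inverted.
  std::vector<int> rpo (post_order.rbegin (), post_order.rend ());

  // A DF_FORWARD problem then visits each block only after all of its
  // predecessors (ignoring back edges): entry first, the join block last.
  for (size_t i = 0; i < rpo.size (); ++i)
    printf ("visit block %d\n", rpo[i]);
  return 0;
}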
From patchwork Fri Apr 21 11:25:05 2023
From: Richard Biener
Date: Fri, 21 Apr 2023 13:25:05 +0200 (CEST)
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 2/3] change inverted_post_order_compute to inverted_rev_post_order_compute
Message-Id: <20230421112505.9E91D13456@imap2.suse-dmz.suse.de>
The following changes the inverted_post_order_compute API back to a
plain C array interface and makes it compute a reverse post order,
since that is what is always required.  This will make massaging DF to
use the correct iteration orders easier.  Elsewhere it requires
replacing backward iteration over the computed order with forward
iteration.

Bootstrapped and tested on x86_64-unknown-linux-gnu.

	* cfganal.h (inverted_rev_post_order_compute): Rename from ...
	(inverted_post_order_compute): ... this.  Add struct function
	argument, change allocation to a C array.
	* cfganal.cc (inverted_rev_post_order_compute): Likewise.
	* lcm.cc (compute_antinout_edge): Adjust.
	* lra-lives.cc (lra_create_live_ranges_1): Likewise.
	* tree-ssa-dce.cc (remove_dead_stmt): Likewise.
	* tree-ssa-pre.cc (compute_antic): Likewise.
---
 gcc/cfganal.cc      | 41 ++++++++++++++++++++++-------------------
 gcc/cfganal.h       |  3 ++-
 gcc/lcm.cc          |  9 +++++----
 gcc/lra-lives.cc    | 11 ++++++-----
 gcc/tree-ssa-dce.cc | 15 ++++++++-------
 gcc/tree-ssa-pre.cc | 18 ++++++++++--------
 6 files changed, 53 insertions(+), 44 deletions(-)

diff --git a/gcc/cfganal.cc b/gcc/cfganal.cc
index ef24c5e4d15..cc858b99e64 100644
--- a/gcc/cfganal.cc
+++ b/gcc/cfganal.cc
@@ -740,7 +740,7 @@ post_order_compute (int *post_order, bool include_entry_exit,
 }
 
 
-/* Helper routine for inverted_post_order_compute
+/* Helper routine for inverted_rev_post_order_compute
    flow_dfs_compute_reverse_execute, and the reverse-CFG
    deapth first search in dominance.cc.
 
    BB has to belong to a region of CFG
@@ -820,12 +820,14 @@ dfs_find_deadend (basic_block bb)
    and start looking for a "dead end" from that block
    and do another inverted traversal from that block.  */
 
-void
-inverted_post_order_compute (vec<int> *post_order,
-                             sbitmap *start_points)
+int
+inverted_rev_post_order_compute (struct function *fn,
+                                 int *rev_post_order,
+                                 sbitmap *start_points)
 {
   basic_block bb;
-  post_order->reserve_exact (n_basic_blocks_for_fn (cfun));
+
+  int rev_post_order_num = n_basic_blocks_for_fn (fn) - 1;
 
   if (flag_checking)
     verify_no_unreachable_blocks ();
@@ -855,17 +857,17 @@ inverted_post_order_compute (vec<int> *post_order,
        }
     }
   else
-   /* Put all blocks that have no successor into the initial work list.  */
-   FOR_ALL_BB_FN (bb, cfun)
-     if (EDGE_COUNT (bb->succs) == 0)
-       {
-         /* Push the initial edge on to the stack.  */
-         if (EDGE_COUNT (bb->preds) > 0)
-           {
-             stack.quick_push (ei_start (bb->preds));
-             bitmap_set_bit (visited, bb->index);
-           }
-       }
+    /* Put all blocks that have no successor into the initial work list.  */
+    FOR_ALL_BB_FN (bb, cfun)
+      if (EDGE_COUNT (bb->succs) == 0)
+        {
+          /* Push the initial edge on to the stack.  */
+          if (EDGE_COUNT (bb->preds) > 0)
+            {
+              stack.quick_push (ei_start (bb->preds));
+              bitmap_set_bit (visited, bb->index);
+            }
+        }
 
   do
     {
@@ -893,13 +895,13 @@ inverted_post_order_compute (vec<int> *post_order,
             time, check its predecessors.  */
            stack.quick_push (ei_start (pred->preds));
          else
-           post_order->quick_push (pred->index);
+           rev_post_order[rev_post_order_num--] = pred->index;
        }
       else
        {
         if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
             && ei_one_before_end_p (ei))
-           post_order->quick_push (bb->index);
+           rev_post_order[rev_post_order_num--] = bb->index;
 
         if (!ei_one_before_end_p (ei))
           ei_next (&stack.last ());
@@ -957,7 +959,8 @@ inverted_post_order_compute (vec<int> *post_order,
   while (!stack.is_empty ());
 
   /* EXIT_BLOCK is always included.  */
-  post_order->quick_push (EXIT_BLOCK);
+  rev_post_order[rev_post_order_num--] = EXIT_BLOCK;
+  return n_basic_blocks_for_fn (fn);
 }
 
 /* Compute the depth first search order of FN and store in the array
diff --git a/gcc/cfganal.h b/gcc/cfganal.h
index 0b6c67dfab5..5af917c27dd 100644
--- a/gcc/cfganal.h
+++ b/gcc/cfganal.h
@@ -66,7 +66,8 @@ extern void add_noreturn_fake_exit_edges (void);
 extern void connect_infinite_loops_to_exit (void);
 extern int post_order_compute (int *, bool, bool);
 extern basic_block dfs_find_deadend (basic_block);
-extern void inverted_post_order_compute (vec<int> *postorder, sbitmap *start_points = 0);
+extern int inverted_rev_post_order_compute (struct function *,
+                                            int *, sbitmap *start_points = 0);
 extern int pre_and_rev_post_order_compute_fn (struct function *,
                                               int *, int *, bool);
 extern int pre_and_rev_post_order_compute (int *, int *, bool);
diff --git a/gcc/lcm.cc b/gcc/lcm.cc
index 5adb4eb1a11..94a3ed43aea 100644
--- a/gcc/lcm.cc
+++ b/gcc/lcm.cc
@@ -102,17 +102,18 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
      optimistic initialization of ANTIN above.  Use reverse postorder
      on the inverted graph to make the backward dataflow problem require
      less iterations.  */
-  auto_vec<int> postorder;
-  inverted_post_order_compute (&postorder);
-  for (int i = postorder.length () - 1; i >= 0; --i)
+  int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+  int n = inverted_rev_post_order_compute (cfun, rpo);
+  for (int i = 0; i < n; ++i)
     {
-      bb = BASIC_BLOCK_FOR_FN (cfun, postorder[i]);
+      bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
       if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
          || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
        continue;
       *qin++ = bb;
       bb->aux = bb;
     }
+  free (rpo);
 
   qin = worklist;
   qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
diff --git a/gcc/lra-lives.cc b/gcc/lra-lives.cc
index f7a7066055a..f7a3ba8d76a 100644
--- a/gcc/lra-lives.cc
+++ b/gcc/lra-lives.cc
@@ -1405,19 +1405,20 @@ lra_create_live_ranges_1 (bool all_p, bool dead_insn_p)
   point_freq_vec.truncate (0);
   point_freq_vec.reserve_exact (new_length);
   lra_point_freq = point_freq_vec.address ();
-  auto_vec<int> post_order_rev_cfg;
-  inverted_post_order_compute (&post_order_rev_cfg);
-  lra_assert (post_order_rev_cfg.length () == (unsigned) n_basic_blocks_for_fn (cfun));
+  int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+  int n = inverted_rev_post_order_compute (cfun, rpo);
+  lra_assert (n == n_basic_blocks_for_fn (cfun));
   bb_live_change_p = false;
-  for (i = post_order_rev_cfg.length () - 1; i >= 0; --i)
+  for (i = 0; i < n; ++i)
     {
-      bb = BASIC_BLOCK_FOR_FN (cfun, post_order_rev_cfg[i]);
+      bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
       if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
          || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
        continue;
       if (process_bb_lives (bb, curr_point, dead_insn_p))
       bb_live_change_p = true;
     }
+  free (rpo);
   if (bb_live_change_p)
     {
       /* We need to clear pseudo live info as some pseudos can
diff --git a/gcc/tree-ssa-dce.cc b/gcc/tree-ssa-dce.cc
index bda780876f3..08876bfc1c7 100644
--- a/gcc/tree-ssa-dce.cc
+++ b/gcc/tree-ssa-dce.cc
@@ -1095,7 +1095,7 @@ remove_dead_stmt (gimple_stmt_iterator *i, basic_block bb,
      nothing to the program, then we not only remove it, but we need to update
      the CFG.  We can chose any of edges out of BB as long as we are sure to not
      close infinite loops.  This is done by always choosing the edge closer to
-     exit in inverted_post_order_compute order.  */
+     exit in inverted_rev_post_order_compute order.  */
   if (is_ctrl_stmt (stmt))
     {
       edge_iterator ei;
@@ -1111,17 +1111,18 @@ remove_dead_stmt (gimple_stmt_iterator *i, basic_block bb,
        {
         if (!bb_postorder)
           {
-             auto_vec<int> postorder;
-             inverted_post_order_compute (&postorder,
-                                          &bb_contains_live_stmts);
+             int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+             int n = inverted_rev_post_order_compute (cfun, rpo,
+                                                      &bb_contains_live_stmts);
             bb_postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
-             for (unsigned int i = 0; i < postorder.length (); ++i)
-              bb_postorder[postorder[i]] = i;
+             for (int i = 0; i < n; ++i)
+              bb_postorder[rpo[i]] = i;
+             free (rpo);
           }
         FOR_EACH_EDGE (e2, ei, bb->succs)
           if (!e || e2->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
               || bb_postorder [e->dest->index]
-                 < bb_postorder [e2->dest->index])
+                 >= bb_postorder [e2->dest->index])
             e = e2;
        }
       gcc_assert (e);
diff --git a/gcc/tree-ssa-pre.cc b/gcc/tree-ssa-pre.cc
index 37cad36f2de..943936df808 100644
--- a/gcc/tree-ssa-pre.cc
+++ b/gcc/tree-ssa-pre.cc
@@ -2464,8 +2464,8 @@ compute_antic (void)
   /* For ANTIC computation we need a postorder that also guarantees that
      a block with a single successor is visited after its successor.
      RPO on the inverted CFG has this property.  */
-  auto_vec<int> postorder;
-  inverted_post_order_compute (&postorder);
+  int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+  int n = inverted_rev_post_order_compute (cfun, rpo);
 
   auto_sbitmap worklist (last_basic_block_for_fn (cfun) + 1);
   bitmap_clear (worklist);
@@ -2481,11 +2481,11 @@ compute_antic (void)
         for PA ANTIC computation.  */
       num_iterations++;
       changed = false;
-      for (i = postorder.length () - 1; i >= 0; i--)
+      for (i = 0; i < n; ++i)
        {
-         if (bitmap_bit_p (worklist, postorder[i]))
+         if (bitmap_bit_p (worklist, rpo[i]))
           {
-             basic_block block = BASIC_BLOCK_FOR_FN (cfun, postorder[i]);
+             basic_block block = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
             bitmap_clear_bit (worklist, block->index);
             if (compute_antic_aux (block,
                                    bitmap_bit_p (has_abnormal_preds,
@@ -2513,15 +2513,17 @@ compute_antic (void)
   if (do_partial_partial)
     {
       /* For partial antic we ignore backedges and thus we do not need
-        to perform any iteration when we process blocks in postorder.  */
-      for (i = postorder.length () - 1; i >= 0; i--)
+        to perform any iteration when we process blocks in rpo.  */
+      for (i = 0; i < n; ++i)
        {
-         basic_block block = BASIC_BLOCK_FOR_FN (cfun, postorder[i]);
+         basic_block block = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
         compute_partial_antic_aux (block,
                                    bitmap_bit_p (has_abnormal_preds,
                                                  block->index));
        }
     }
+
+  free (rpo);
 }
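
The caller-side conversion is mechanical: the old API produced a post
order that consumers walked backwards, while the new one fills the
array from the back (rev_post_order[rev_post_order_num--] = ...), so
consumers walk the resulting RPO forwards.  A small self-contained
check of that equivalence, with illustrative names rather than the real
GCC interfaces:

#include <cassert>
#include <vector>

int
main ()
{
  // Pretend a post-order walk of some inverted CFG emits these blocks.
  int blocks[] = { 5, 3, 4, 1, 2, 0 };
  std::vector<int> emitted (blocks, blocks + 6);
  int n = (int) emitted.size ();

  // Old style: push blocks as emitted; consumers iterate backwards.
  std::vector<int> post_order (emitted);

  // New style: store into a fixed-size array from the back while
  // emitting, producing a reverse post order consumers iterate forwards.
  std::vector<int> rev_post_order (n);
  int rev_post_order_num = n - 1;
  for (int i = 0; i < n; ++i)
    rev_post_order[rev_post_order_num--] = emitted[i];

  // Both schemes visit blocks in exactly the same sequence.
  for (int i = 0; i < n; ++i)
    assert (rev_post_order[i] == post_order[n - 1 - i]);
  return 0;
}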
*/ + for (i = 0; i < n; ++i) { - basic_block block = BASIC_BLOCK_FOR_FN (cfun, postorder[i]); + basic_block block = BASIC_BLOCK_FOR_FN (cfun, rpo[i]); compute_partial_antic_aux (block, bitmap_bit_p (has_abnormal_preds, block->index)); } } + + free (rpo); } From patchwork Fri Apr 21 11:25:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Biener X-Patchwork-Id: 86251 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp993050vqo; Fri, 21 Apr 2023 04:27:04 -0700 (PDT) X-Google-Smtp-Source: AKy350bFwpy9ASXIhOtCx9OtEldybykQpb69LejN2k0OZklgIxAG/FAcoQqhmASS9ukmn6zrk7SS X-Received: by 2002:aa7:db92:0:b0:506:be3f:ebab with SMTP id u18-20020aa7db92000000b00506be3febabmr4369861edt.22.1682076424281; Fri, 21 Apr 2023 04:27:04 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682076424; cv=none; d=google.com; s=arc-20160816; b=LUqqqH9Nly5T/jOdfALGW/5W32E9LlnVpvuI8+GF7fh3SQCqUEMFsfXuuPAigLMvUJ 6kAkIPFBdfVPkllM6gcuFaKfZQO2hptEsB4PtAdZAElHIy0+1E7HlklF7vevhcHY1R3E ujGFePvuLwY9Yq+IaSmB4KU1Kj5Lm44qpLSMmV8SM3Q+bXnugCoEF6uR07gX/+HS1uOb cdTSD/W1PNGcJ+n8+7BJSkGBOstu0MhS4eFNMj7yzv93uCgk9jkR8nAHX+mo/LjNUVEF ecY4U0B2Suii2vbcbZXV10Jr3GZ93f6yHZqRfpeMMuU9J1yAEE6Qj/SGlTXj+NF79OoA vTLQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:reply-to:from:list-subscribe:list-help:list-post :list-archive:list-unsubscribe:list-id:precedence:message-id :mime-version:subject:to:date:dmarc-filter:delivered-to :dkim-signature:dkim-filter; bh=/4VfgTAxCZzWhotHdU/J39uLy/VfR3XAudXw60Qvv4k=; b=r+seRqcco21ulcjNatCbv+laaUQF1xacogPpt2MmzARXA1NLyM7dO/T/+MaljXxO3l WLa6bjrPV5CEW+ALv3HjhfZixgtbEtcqXz2QsGsot3wMk0P/N7g/SqKzpLbON8tyCob8 CSwzwgDuf29m/4OqcENe1nbPancZDfA+tCdppdeM9NCSizO5+g8dhouDNohTsGKYejs9 Bw3rCIStGxXKiBYgLPlmG1gczVqa2V9g7GU+366YnBLiKqigDx9lKdFzqOM3gv4E2Tl8 ncSt1iBNKt2eR4tPLuRVE394z6b8nprOs5GMoF9C2cgP0vCRQhgzegKd7jATaUJjRuwA Jk9A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@gcc.gnu.org header.s=default header.b=SHB0Uj7W; spf=pass (google.com: domain of gcc-patches-bounces+ouuuleilei=gmail.com@gcc.gnu.org designates 8.43.85.97 as permitted sender) smtp.mailfrom="gcc-patches-bounces+ouuuleilei=gmail.com@gcc.gnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=gnu.org Received: from sourceware.org (ip-8-43-85-97.sourceware.org. 
[8.43.85.97]) by mx.google.com with ESMTPS id f15-20020a05640214cf00b00508da09b7f5si2359874edx.624.2023.04.21.04.27.04 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Apr 2023 04:27:04 -0700 (PDT) Received-SPF: pass (google.com: domain of gcc-patches-bounces+ouuuleilei=gmail.com@gcc.gnu.org designates 8.43.85.97 as permitted sender) client-ip=8.43.85.97; Authentication-Results: mx.google.com; dkim=pass header.i=@gcc.gnu.org header.s=default header.b=SHB0Uj7W; spf=pass (google.com: domain of gcc-patches-bounces+ouuuleilei=gmail.com@gcc.gnu.org designates 8.43.85.97 as permitted sender) smtp.mailfrom="gcc-patches-bounces+ouuuleilei=gmail.com@gcc.gnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=gnu.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id AA8F83857010 for ; Fri, 21 Apr 2023 11:26:41 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org AA8F83857010 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default; t=1682076401; bh=/4VfgTAxCZzWhotHdU/J39uLy/VfR3XAudXw60Qvv4k=; h=Date:To:Subject:List-Id:List-Unsubscribe:List-Archive:List-Post: List-Help:List-Subscribe:From:Reply-To:From; b=SHB0Uj7W25biDucpWFf+mjJpj5Tm3a7hVszZxt0oPjhv28VdeaAOANQpZHje1N8Ym XWTi2NtDnbZ84vkICzQulCLFcXgWuvQS3Gx3CvRpcWMOFT2iG364f6ZVuYKdI+o43i 4y5HrGXJBxyhLVPRzBc6RUYSQkBxIdxsE23Q+DNI= X-Original-To: gcc-patches@gcc.gnu.org Delivered-To: gcc-patches@gcc.gnu.org Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28]) by sourceware.org (Postfix) with ESMTPS id 94EA53857722 for ; Fri, 21 Apr 2023 11:25:28 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org 94EA53857722 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id CF60A21976 for ; Fri, 21 Apr 2023 11:25:27 +0000 (UTC) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BB76813456 for ; Fri, 21 Apr 2023 11:25:27 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id KaKNLKdyQmQ+EgAAMHmgww (envelope-from ) for ; Fri, 21 Apr 2023 11:25:27 +0000 Date: Fri, 21 Apr 2023 13:25:27 +0200 (CEST) To: gcc-patches@gcc.gnu.org Subject: [PATCH 3/3] Use correct CFG orders for DF worklist processing MIME-Version: 1.0 Message-Id: <20230421112527.BB76813456@imap2.suse-dmz.suse.de> X-Spam-Status: No, score=-11.8 required=5.0 tests=BAYES_00, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, DKIM_VALID_EF, GIT_PATCH_0, SPF_HELO_NONE, SPF_PASS, TXREP, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: gcc-patches@gcc.gnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Gcc-patches mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-Patchwork-Original-From: Richard Biener via Gcc-patches From: Richard Biener Reply-To: Richard Biener Errors-To: gcc-patches-bounces+ouuuleilei=gmail.com@gcc.gnu.org Sender: "Gcc-patches" X-getmail-retrieved-from-mailbox: 
This adjusts the remaining three RPO computes in DF.  The DF_FORWARD
problems should use a RPO on the forward graph, the DF_BACKWARD
problems should use a RPO on the inverted graph.  Conveniently,
inverted_rev_post_order_compute now computes a RPO.  We still use
post_order_compute, reversing its order, for its side effect of
deleting unreachable blocks.

This results in an overall 5.2% reduction of visited blocks on
cc1files.  Because most regions are irreducible on the reverse CFG,
there are a few cases where the number of visited blocks increases;
for the set of cc1files I have this is the case for et-forest.i,
graph.i, hwint.i, tree-ssa-dom.i, tree-ssa-loop-ch.i and
tree-ssa-threadedge.i.  For tree-ssa-dse.i the increase is above the
noise; I investigated more closely and it is really bad luck due to the
irreducibility.

Bootstrapped and tested on x86_64-unknown-linux-gnu and the series
pushed.

	* df-core.cc (df_analyze): Compute RPO on the reverse
	graph for DF_BACKWARD problems.
	(loop_post_order_compute): Rename to ...
	(loop_rev_post_order_compute): ... this, compute a RPO.
	(loop_inverted_post_order_compute): Rename to ...
	(loop_inverted_rev_post_order_compute): ... this, compute a RPO.
	(df_analyze_loop): Use RPO on the forward graph for DF_FORWARD
	problems, RPO on the inverted graph for DF_BACKWARD.
---
 gcc/df-core.cc | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/gcc/df-core.cc b/gcc/df-core.cc
index 27123645da5..d4812b04a7c 100644
--- a/gcc/df-core.cc
+++ b/gcc/df-core.cc
@@ -1259,14 +1259,18 @@ df_analyze (void)
 
   free (df->postorder);
   free (df->postorder_inverted);
-  df->postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
-  df->n_blocks = post_order_compute (df->postorder, true, true);
   /* For DF_FORWARD use a RPO on the forward graph.  Since we want to
      have unreachable blocks deleted use post_order_compute and reverse
      the order.  */
   df->postorder_inverted = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
-  for (int i = 0; i < df->n_blocks; ++i)
-    df->postorder_inverted[i] = df->postorder[df->n_blocks - 1 - i];
+  df->n_blocks = post_order_compute (df->postorder_inverted, true, true);
+  for (int i = 0; i < df->n_blocks / 2; ++i)
+    std::swap (df->postorder_inverted[i],
+               df->postorder_inverted[df->n_blocks - 1 - i]);
+  /* For DF_BACKWARD use a RPO on the reverse graph.  */
+  df->postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+  int n = inverted_rev_post_order_compute (cfun, df->postorder);
+  gcc_assert (n == df->n_blocks);
 
   for (int i = 0; i < df->n_blocks; i++)
     bitmap_set_bit (current_all_blocks, df->postorder[i]);
@@ -1305,11 +1309,11 @@ df_analyze (void)
    Returns the number of blocks which is always loop->num_nodes.  */
 
 static int
-loop_post_order_compute (int *post_order, class loop *loop)
+loop_rev_post_order_compute (int *post_order, class loop *loop)
 {
   edge_iterator *stack;
   int sp;
-  int post_order_num = 0;
+  int post_order_num = loop->num_nodes - 1;
 
   /* Allocate stack for back-tracking up CFG.  */
   stack = XNEWVEC (edge_iterator, loop->num_nodes + 1);
@@ -1342,13 +1346,13 @@ loop_rev_post_order_compute (int *post_order, class loop *loop)
             time, check its successors.  */
            stack[sp++] = ei_start (dest->succs);
          else
-           post_order[post_order_num++] = dest->index;
+           post_order[post_order_num--] = dest->index;
        }
       else
        {
         if (ei_one_before_end_p (ei)
             && src != loop_preheader_edge (loop)->src)
-           post_order[post_order_num++] = src->index;
+           post_order[post_order_num--] = src->index;
 
         if (!ei_one_before_end_p (ei))
           ei_next (&stack[sp - 1]);
@@ -1359,19 +1363,19 @@ loop_rev_post_order_compute (int *post_order, class loop *loop)
 
   free (stack);
 
-  return post_order_num;
+  return loop->num_nodes;
 }
 
 /* Compute the reverse top sort order of the inverted sub-CFG specified
    by LOOP.  Returns the number of blocks which is always loop->num_nodes.  */
 
 static int
-loop_inverted_post_order_compute (int *post_order, class loop *loop)
+loop_inverted_rev_post_order_compute (int *post_order, class loop *loop)
 {
   basic_block bb;
   edge_iterator *stack;
   int sp;
-  int post_order_num = 0;
+  int post_order_num = loop->num_nodes - 1;
 
   /* Allocate stack for back-tracking up CFG.  */
   stack = XNEWVEC (edge_iterator, loop->num_nodes + 1);
@@ -1408,13 +1412,13 @@ loop_inverted_rev_post_order_compute (int *post_order, class loop *loop)
             time, check its predecessors.  */
            stack[sp++] = ei_start (pred->preds);
          else
-           post_order[post_order_num++] = pred->index;
+           post_order[post_order_num--] = pred->index;
        }
      else
        {
         if (flow_bb_inside_loop_p (loop, bb)
             && ei_one_before_end_p (ei))
-           post_order[post_order_num++] = bb->index;
+           post_order[post_order_num--] = bb->index;
 
         if (!ei_one_before_end_p (ei))
           ei_next (&stack[sp - 1]);
@@ -1424,7 +1428,7 @@ loop_inverted_rev_post_order_compute (int *post_order, class loop *loop)
     }
 
   free (stack);
-  return post_order_num;
+  return loop->num_nodes;
 }
 
 
@@ -1438,8 +1442,8 @@ df_analyze_loop (class loop *loop)
   df->postorder = XNEWVEC (int, loop->num_nodes);
   df->postorder_inverted = XNEWVEC (int, loop->num_nodes);
 
-  df->n_blocks = loop_post_order_compute (df->postorder, loop);
-  int n = loop_inverted_post_order_compute (df->postorder_inverted, loop);
+  df->n_blocks = loop_rev_post_order_compute (df->postorder_inverted, loop);
+  int n = loop_inverted_rev_post_order_compute (df->postorder, loop);
   gcc_assert ((unsigned) df->n_blocks == loop->num_nodes);
   gcc_assert ((unsigned) n == loop->num_nodes);
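
The visited-block reduction comes from matching the iteration order to
the problem's direction: with a good order each sweep propagates facts
all the way through the graph, so the fixed point needs fewer sweeps.
A toy demonstration of that effect, using a round-robin sweep solver
over a made-up chain CFG with a trivial reachability transfer function;
DF's actual solver is worklist-driven, so treat this only as intuition
for the statistics above:

#include <cstdio>
#include <vector>

// Sweep a trivial forward problem ("is this block reachable from the
// entry?") until nothing changes, counting how many block visits the
// given iteration order costs.
static int
solve (const std::vector<std::vector<int> > &preds,
       const std::vector<int> &order)
{
  std::vector<bool> reach (preds.size (), false);
  reach[0] = true;  // seed the entry block
  int visits = 0;
  bool changed = true;
  while (changed)
    {
      changed = false;
      for (size_t k = 0; k < order.size (); ++k)
        {
          int bb = order[k];
          ++visits;
          if (reach[bb])
            continue;
          for (size_t p = 0; p < preds[bb].size (); ++p)
            if (reach[preds[bb][p]])
              {
                reach[bb] = true;
                changed = true;
                break;
              }
        }
    }
  return visits;
}

int
main ()
{
  // Chain CFG 0 -> 1 -> 2 -> 3, given as predecessor lists.
  std::vector<std::vector<int> > preds (4);
  preds[1].push_back (0);
  preds[2].push_back (1);
  preds[3].push_back (2);

  int rpo_a[] = { 0, 1, 2, 3 };  // order matching the problem direction
  int po_a[] = { 3, 2, 1, 0 };   // order against the problem direction
  std::vector<int> rpo (rpo_a, rpo_a + 4), po (po_a, po_a + 4);
  printf ("RPO: %d visits\n", solve (preds, rpo));        // 8 visits
  printf ("post order: %d visits\n", solve (preds, po));  // 16 visits
  return 0;
}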