From patchwork Mon Oct 2 07:41:14 2023
X-Patchwork-Submitter: Tamar Christina
X-Patchwork-Id: 147202
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Date: Mon, 2 Oct 2023 08:41:14 +0100
From: Tamar Christina
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, rguenther@suse.de, jlaw@ventanamicro.com
Subject: [PATCH 1/3]middle-end: Refactor vectorizer loop conditionals and
 separate out IV to new variables
Content-Disposition: inline
Hi All,

This is extracted out of the patch series to support early break vectorization
in order to simplify the review of that patch series.  The goal of this one is
to separate out the refactoring from the new functionality.

This first patch separates out the vectorizer's definition of the loop exits
into their own values inside loop_vinfo.  During vectorization we can have
three separate copies of each loop: scalar, vectorized and epilogue.  The
scalar loop can also be the versioned loop before peeling.  Because of this we
track three different exits inside loop_vinfo, one corresponding to each of
these loops.

Additionally, each function that uses an exit, when it is not obvious which
exit is needed, now takes the exit explicitly as an argument.  This is because
the callers often switch which loop is being passed around: while the caller
knows which loop it is, the callee does not.

For now the loop exits are simply initialized to the same value as before,
determined by single_exit ().  No change in functionality is expected
throughout this patch series.

Bootstrapped and regtested on aarch64-none-linux-gnu and x86_64-linux-gnu with
no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

        * tree-loop-distribution.cc (copy_loop_before): Pass exit explicitly.
        (loop_distribution::distribute_loop): Bail out if not single exit.
        * tree-scalar-evolution.cc (get_loop_exit_condition): New.
        * tree-scalar-evolution.h (get_loop_exit_condition): New.
        * tree-vect-data-refs.cc (vect_enhance_data_refs_alignment): Pass exit
        explicitly.
        * tree-vect-loop-manip.cc (vect_set_loop_condition_partial_vectors,
        vect_set_loop_condition_partial_vectors_avx512,
        vect_set_loop_condition_normal, vect_set_loop_condition): Explicitly
        take exit.
        (slpeel_tree_duplicate_loop_to_edge_cfg): Explicitly take exit and
        return the new peeled exit.
        (slpeel_can_duplicate_loop_p): Explicitly take exit.
        (find_loop_location): Handle not knowing an explicit exit.
        (vect_update_ivs_after_vectorizer, vect_gen_vector_loop_niters_mult_vf,
        find_guard_arg, slpeel_update_phi_nodes_for_loops,
        slpeel_update_phi_nodes_for_guard2): Use new exits.
        (vect_do_peeling): Update bookkeeping to keep track of exits.
        * tree-vect-loop.cc (vect_get_loop_niters): Explicitly take exit to
        analyze.
        (vec_init_loop_exit_info): New.
        (_loop_vec_info::_loop_vec_info): Initialize vec_loop_iv,
        vec_epilogue_loop_iv, scalar_loop_iv.
        (vect_analyze_loop_form): Initialize exits.
        (vect_create_loop_vinfo): Set main exit.
        (vect_create_epilog_for_reduction, vectorizable_live_operation,
        vect_transform_loop): Use it.
        (scale_profile_for_vect_loop): Explicitly take exit to scale.
        * tree-vectorizer.cc (set_uid_loop_bbs): Initialize loop exit.
        * tree-vectorizer.h (LOOP_VINFO_IV_EXIT, LOOP_VINFO_EPILOGUE_IV_EXIT,
        LOOP_VINFO_SCALAR_IV_EXIT): New.
        (struct loop_vec_info): Add vec_loop_iv, vec_epilogue_loop_iv,
        scalar_loop_iv.
        (vect_set_loop_condition, slpeel_can_duplicate_loop_p,
        slpeel_tree_duplicate_loop_to_edge_cfg): Take explicit exits.
        (vec_init_loop_exit_info): New.
        (struct vect_loop_form_info): Add loop_exit.

--- inline copy of patch --
diff --git a/gcc/tree-loop-distribution.cc b/gcc/tree-loop-distribution.cc index a28470b66ea935741a61fb73961ed7c927543a3d..902edc49ab588152a5b845f2c8a42a7e2a1d6080 100644 --- a/gcc/tree-loop-distribution.cc +++ b/gcc/tree-loop-distribution.cc @@ -949,7 +949,8 @@ copy_loop_before (class loop *loop, bool redirect_lc_phi_defs) edge preheader = loop_preheader_edge (loop); initialize_original_copy_tables (); - res = slpeel_tree_duplicate_loop_to_edge_cfg (loop, NULL, preheader); + res = slpeel_tree_duplicate_loop_to_edge_cfg (loop, single_exit (loop), NULL, + NULL, preheader, NULL); gcc_assert (res != NULL); /* When a not last partition is supposed to keep the LC PHIs computed @@ -3043,6 +3044,24 @@ loop_distribution::distribute_loop (class loop *loop, return 0; } + /* Loop distribution only does prologue peeling but we still need to + initialize loop exit information. However we only support single exits at + the moment. As such, should exit information not have been provided and we + have more than one exit, bail out. */ + if (!single_exit (loop)) + { + if (dump_file && (dump_flags & TDF_DETAILS)) + fprintf (dump_file, + "Loop %d not distributed: too many exits.\n", + loop->num); + + free_rdg (rdg); + loop_nest.release (); + free_data_refs (datarefs_vec); + delete ddrs_table; + return 0; + } + data_reference_p dref; for (i = 0; datarefs_vec.iterate (i, &dref); ++i) dref->aux = (void *) (uintptr_t) i; diff --git a/gcc/tree-scalar-evolution.h b/gcc/tree-scalar-evolution.h index c58a8a16e81573aada38e912b7c58b3e1b23b66d..f35ca1bded0b841179e4958645d264ad23684019 100644 --- a/gcc/tree-scalar-evolution.h +++ b/gcc/tree-scalar-evolution.h @@ -23,6 +23,7 @@ along with GCC; see the file COPYING3.
If not see extern tree number_of_latch_executions (class loop *); extern gcond *get_loop_exit_condition (const class loop *); +extern gcond *get_loop_exit_condition (const_edge); extern void scev_initialize (void); extern bool scev_initialized_p (void); diff --git a/gcc/tree-scalar-evolution.cc b/gcc/tree-scalar-evolution.cc index 3fb6951e6085352c027d32c3548246042b98b64b..7cafe5ce576079921e380aaab5c5c4aa84cea372 100644 --- a/gcc/tree-scalar-evolution.cc +++ b/gcc/tree-scalar-evolution.cc @@ -1292,9 +1292,17 @@ scev_dfs::follow_ssa_edge_expr (gimple *at_stmt, tree expr, gcond * get_loop_exit_condition (const class loop *loop) +{ + return get_loop_exit_condition (single_exit (loop)); +} + +/* If the statement just before the EXIT_EDGE contains a condition then + return the condition, otherwise NULL. */ + +gcond * +get_loop_exit_condition (const_edge exit_edge) { gcond *res = NULL; - edge exit_edge = single_exit (loop); if (dump_file && (dump_flags & TDF_SCEV)) fprintf (dump_file, "(get_loop_exit_condition \n "); diff --git a/gcc/tree-vect-data-refs.cc b/gcc/tree-vect-data-refs.cc index 40ab568fe355964b878d770010aa9eeaef63eeac..9607a9fb25da26591ffd8071a02495f2042e0579 100644 --- a/gcc/tree-vect-data-refs.cc +++ b/gcc/tree-vect-data-refs.cc @@ -2078,7 +2078,8 @@ vect_enhance_data_refs_alignment (loop_vec_info loop_vinfo) /* Check if we can possibly peel the loop. */ if (!vect_can_advance_ivs_p (loop_vinfo) - || !slpeel_can_duplicate_loop_p (loop, single_exit (loop)) + || !slpeel_can_duplicate_loop_p (loop, LOOP_VINFO_IV_EXIT (loop_vinfo), + LOOP_VINFO_IV_EXIT (loop_vinfo)) || loop->inner) do_peeling = false; diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc index 09641901ff1e5c03dd07ab6f85dd67288f940ea2..e06717272aafc6d31cbdcb94840ac25de616da6d 100644 --- a/gcc/tree-vect-loop-manip.cc +++ b/gcc/tree-vect-loop-manip.cc @@ -803,7 +803,7 @@ vect_set_loop_controls_directly (class loop *loop, loop_vec_info loop_vinfo, final gcond. */ static gcond * -vect_set_loop_condition_partial_vectors (class loop *loop, +vect_set_loop_condition_partial_vectors (class loop *loop, edge exit_edge, loop_vec_info loop_vinfo, tree niters, tree final_iv, bool niters_maybe_zero, gimple_stmt_iterator loop_cond_gsi) @@ -904,7 +904,6 @@ vect_set_loop_condition_partial_vectors (class loop *loop, add_header_seq (loop, header_seq); /* Get a boolean result that tells us whether to iterate. */ - edge exit_edge = single_exit (loop); gcond *cond_stmt; if (LOOP_VINFO_USING_DECREMENTING_IV_P (loop_vinfo) && !LOOP_VINFO_USING_SELECT_VL_P (loop_vinfo)) @@ -935,7 +934,7 @@ vect_set_loop_condition_partial_vectors (class loop *loop, if (final_iv) { gassign *assign = gimple_build_assign (final_iv, orig_niters); - gsi_insert_on_edge_immediate (single_exit (loop), assign); + gsi_insert_on_edge_immediate (exit_edge, assign); } return cond_stmt; @@ -953,6 +952,7 @@ vect_set_loop_condition_partial_vectors (class loop *loop, static gcond * vect_set_loop_condition_partial_vectors_avx512 (class loop *loop, + edge exit_edge, loop_vec_info loop_vinfo, tree niters, tree final_iv, bool niters_maybe_zero, @@ -1144,7 +1144,6 @@ vect_set_loop_condition_partial_vectors_avx512 (class loop *loop, add_preheader_seq (loop, preheader_seq); /* Adjust the exit test using the decrementing IV. */ - edge exit_edge = single_exit (loop); tree_code code = (exit_edge->flags & EDGE_TRUE_VALUE) ? 
LE_EXPR : GT_EXPR; /* When we peel for alignment with niter_skip != 0 this can cause niter + niter_skip to wrap and since we are comparing the @@ -1183,7 +1182,8 @@ vect_set_loop_condition_partial_vectors_avx512 (class loop *loop, loop handles exactly VF scalars per iteration. */ static gcond * -vect_set_loop_condition_normal (class loop *loop, tree niters, tree step, +vect_set_loop_condition_normal (loop_vec_info /* loop_vinfo */, edge exit_edge, + class loop *loop, tree niters, tree step, tree final_iv, bool niters_maybe_zero, gimple_stmt_iterator loop_cond_gsi) { @@ -1191,13 +1191,12 @@ vect_set_loop_condition_normal (class loop *loop, tree niters, tree step, gcond *cond_stmt; gcond *orig_cond; edge pe = loop_preheader_edge (loop); - edge exit_edge = single_exit (loop); gimple_stmt_iterator incr_gsi; bool insert_after; enum tree_code code; tree niters_type = TREE_TYPE (niters); - orig_cond = get_loop_exit_condition (loop); + orig_cond = get_loop_exit_condition (exit_edge); gcc_assert (orig_cond); loop_cond_gsi = gsi_for_stmt (orig_cond); @@ -1305,19 +1304,18 @@ vect_set_loop_condition_normal (class loop *loop, tree niters, tree step, if (final_iv) { gassign *assign; - edge exit = single_exit (loop); - gcc_assert (single_pred_p (exit->dest)); + gcc_assert (single_pred_p (exit_edge->dest)); tree phi_dest = integer_zerop (init) ? final_iv : copy_ssa_name (indx_after_incr); /* Make sure to maintain LC SSA form here and elide the subtraction if the value is zero. */ - gphi *phi = create_phi_node (phi_dest, exit->dest); - add_phi_arg (phi, indx_after_incr, exit, UNKNOWN_LOCATION); + gphi *phi = create_phi_node (phi_dest, exit_edge->dest); + add_phi_arg (phi, indx_after_incr, exit_edge, UNKNOWN_LOCATION); if (!integer_zerop (init)) { assign = gimple_build_assign (final_iv, MINUS_EXPR, phi_dest, init); - gimple_stmt_iterator gsi = gsi_after_labels (exit->dest); + gimple_stmt_iterator gsi = gsi_after_labels (exit_edge->dest); gsi_insert_before (&gsi, assign, GSI_SAME_STMT); } } @@ -1348,29 +1346,33 @@ vect_set_loop_condition_normal (class loop *loop, tree niters, tree step, Assumption: the exit-condition of LOOP is the last stmt in the loop. 
*/ void -vect_set_loop_condition (class loop *loop, loop_vec_info loop_vinfo, +vect_set_loop_condition (class loop *loop, edge loop_e, loop_vec_info loop_vinfo, tree niters, tree step, tree final_iv, bool niters_maybe_zero) { gcond *cond_stmt; - gcond *orig_cond = get_loop_exit_condition (loop); + gcond *orig_cond = get_loop_exit_condition (loop_e); gimple_stmt_iterator loop_cond_gsi = gsi_for_stmt (orig_cond); if (loop_vinfo && LOOP_VINFO_USING_PARTIAL_VECTORS_P (loop_vinfo)) { if (LOOP_VINFO_PARTIAL_VECTORS_STYLE (loop_vinfo) == vect_partial_vectors_avx512) - cond_stmt = vect_set_loop_condition_partial_vectors_avx512 (loop, loop_vinfo, + cond_stmt = vect_set_loop_condition_partial_vectors_avx512 (loop, loop_e, + loop_vinfo, niters, final_iv, niters_maybe_zero, loop_cond_gsi); else - cond_stmt = vect_set_loop_condition_partial_vectors (loop, loop_vinfo, + cond_stmt = vect_set_loop_condition_partial_vectors (loop, loop_e, + loop_vinfo, niters, final_iv, niters_maybe_zero, loop_cond_gsi); } else - cond_stmt = vect_set_loop_condition_normal (loop, niters, step, final_iv, + cond_stmt = vect_set_loop_condition_normal (loop_vinfo, loop_e, loop, + niters, + step, final_iv, niters_maybe_zero, loop_cond_gsi); @@ -1439,7 +1441,6 @@ slpeel_duplicate_current_defs_from_edges (edge from, edge to) get_current_def (PHI_ARG_DEF_FROM_EDGE (from_phi, from))); } - /* Given LOOP this function generates a new copy of it and puts it on E which is either the entry or exit of LOOP. If SCALAR_LOOP is non-NULL, assume LOOP and SCALAR_LOOP are equivalent and copy the @@ -1447,8 +1448,9 @@ slpeel_duplicate_current_defs_from_edges (edge from, edge to) entry or exit of LOOP. */ class loop * -slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, - class loop *scalar_loop, edge e) +slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, + class loop *scalar_loop, + edge scalar_exit, edge e, edge *new_e) { class loop *new_loop; basic_block *new_bbs, *bbs, *pbbs; @@ -1458,13 +1460,16 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge exit, new_exit; bool duplicate_outer_loop = false; - exit = single_exit (loop); + exit = loop_exit; at_exit = (e == exit); if (!at_exit && e != loop_preheader_edge (loop)) return NULL; if (scalar_loop == NULL) - scalar_loop = loop; + { + scalar_loop = loop; + scalar_exit = loop_exit; + } bbs = XNEWVEC (basic_block, scalar_loop->num_nodes + 1); pbbs = bbs + 1; @@ -1490,13 +1495,15 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, bbs[0] = preheader; new_bbs = XNEWVEC (basic_block, scalar_loop->num_nodes + 1); - exit = single_exit (scalar_loop); copy_bbs (bbs, scalar_loop->num_nodes + 1, new_bbs, - &exit, 1, &new_exit, NULL, + &scalar_exit, 1, &new_exit, NULL, at_exit ? loop->latch : e->src, true); - exit = single_exit (loop); + exit = loop_exit; basic_block new_preheader = new_bbs[0]; + if (new_e) + *new_e = new_exit; + /* Before installing PHI arguments make sure that the edges into them match that of the scalar loop we analyzed. This makes sure the SLP tree matches up between the main vectorized @@ -1537,8 +1544,7 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, but LOOP will not. slpeel_update_phi_nodes_for_guard{1,2} expects the LOOP SSA_NAMEs (on the exit edge and edge from latch to header) to have current_def set, so copy them over. 
*/ - slpeel_duplicate_current_defs_from_edges (single_exit (scalar_loop), - exit); + slpeel_duplicate_current_defs_from_edges (scalar_exit, exit); slpeel_duplicate_current_defs_from_edges (EDGE_SUCC (scalar_loop->latch, 0), EDGE_SUCC (loop->latch, 0)); @@ -1696,11 +1702,11 @@ slpeel_add_loop_guard (basic_block guard_bb, tree cond, */ bool -slpeel_can_duplicate_loop_p (const class loop *loop, const_edge e) +slpeel_can_duplicate_loop_p (const class loop *loop, const_edge exit_e, + const_edge e) { - edge exit_e = single_exit (loop); edge entry_e = loop_preheader_edge (loop); - gcond *orig_cond = get_loop_exit_condition (loop); + gcond *orig_cond = get_loop_exit_condition (exit_e); gimple_stmt_iterator loop_exit_gsi = gsi_last_bb (exit_e->src); unsigned int num_bb = loop->inner? 5 : 2; @@ -1709,7 +1715,7 @@ slpeel_can_duplicate_loop_p (const class loop *loop, const_edge e) if (!loop_outer (loop) || loop->num_nodes != num_bb || !empty_block_p (loop->latch) - || !single_exit (loop) + || !exit_e /* Verify that new loop exit condition can be trivially modified. */ || (!orig_cond || orig_cond != gsi_stmt (loop_exit_gsi)) || (e != exit_e && e != entry_e)) @@ -1722,7 +1728,7 @@ slpeel_can_duplicate_loop_p (const class loop *loop, const_edge e) return ret; } -/* Function vect_get_loop_location. +/* Function find_loop_location. Extract the location of the loop in the source code. If the loop is not well formed for vectorization, an estimated @@ -1739,11 +1745,19 @@ find_loop_location (class loop *loop) if (!loop) return dump_user_location_t (); - stmt = get_loop_exit_condition (loop); + if (loops_state_satisfies_p (LOOPS_HAVE_RECORDED_EXITS)) + { + /* We only care about the loop location, so use any exit with location + information. */ + for (edge e : get_loop_exit_edges (loop)) + { + stmt = get_loop_exit_condition (e); - if (stmt - && LOCATION_LOCUS (gimple_location (stmt)) > BUILTINS_LOCATION) - return stmt; + if (stmt + && LOCATION_LOCUS (gimple_location (stmt)) > BUILTINS_LOCATION) + return stmt; + } + } /* If we got here the loop is probably not "well formed", try to estimate the loop location */ @@ -1962,7 +1976,8 @@ vect_update_ivs_after_vectorizer (loop_vec_info loop_vinfo, gphi_iterator gsi, gsi1; class loop *loop = LOOP_VINFO_LOOP (loop_vinfo); basic_block update_bb = update_e->dest; - basic_block exit_bb = single_exit (loop)->dest; + + basic_block exit_bb = LOOP_VINFO_IV_EXIT (loop_vinfo)->dest; /* Make sure there exists a single-predecessor exit bb: */ gcc_assert (single_pred_p (exit_bb)); @@ -2529,10 +2544,9 @@ vect_gen_vector_loop_niters_mult_vf (loop_vec_info loop_vinfo, { /* We should be using a step_vector of VF if VF is variable. */ int vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo).to_constant (); - class loop *loop = LOOP_VINFO_LOOP (loop_vinfo); tree type = TREE_TYPE (niters_vector); tree log_vf = build_int_cst (type, exact_log2 (vf)); - basic_block exit_bb = single_exit (loop)->dest; + basic_block exit_bb = LOOP_VINFO_IV_EXIT (loop_vinfo)->dest; gcc_assert (niters_vector_mult_vf_ptr != NULL); tree niters_vector_mult_vf = fold_build2 (LSHIFT_EXPR, type, @@ -2555,11 +2569,11 @@ vect_gen_vector_loop_niters_mult_vf (loop_vec_info loop_vinfo, NULL. 
*/ static tree -find_guard_arg (class loop *loop, class loop *epilog ATTRIBUTE_UNUSED, - gphi *lcssa_phi) +find_guard_arg (class loop *loop ATTRIBUTE_UNUSED, + class loop *epilog ATTRIBUTE_UNUSED, + const_edge e, gphi *lcssa_phi) { gphi_iterator gsi; - edge e = single_exit (loop); gcc_assert (single_pred_p (e->dest)); for (gsi = gsi_start_phis (e->dest); !gsi_end_p (gsi); gsi_next (&gsi)) @@ -2620,7 +2634,8 @@ find_guard_arg (class loop *loop, class loop *epilog ATTRIBUTE_UNUSED, static void slpeel_update_phi_nodes_for_loops (loop_vec_info loop_vinfo, - class loop *first, class loop *second, + class loop *first, edge first_loop_e, + class loop *second, edge second_loop_e, bool create_lcssa_for_iv_phis) { gphi_iterator gsi_update, gsi_orig; @@ -2628,7 +2643,7 @@ slpeel_update_phi_nodes_for_loops (loop_vec_info loop_vinfo, edge first_latch_e = EDGE_SUCC (first->latch, 0); edge second_preheader_e = loop_preheader_edge (second); - basic_block between_bb = single_exit (first)->dest; + basic_block between_bb = first_loop_e->dest; gcc_assert (between_bb == second_preheader_e->src); gcc_assert (single_pred_p (between_bb) && single_succ_p (between_bb)); @@ -2651,7 +2666,7 @@ slpeel_update_phi_nodes_for_loops (loop_vec_info loop_vinfo, { tree new_res = copy_ssa_name (PHI_RESULT (orig_phi)); gphi *lcssa_phi = create_phi_node (new_res, between_bb); - add_phi_arg (lcssa_phi, arg, single_exit (first), UNKNOWN_LOCATION); + add_phi_arg (lcssa_phi, arg, first_loop_e, UNKNOWN_LOCATION); arg = new_res; } @@ -2664,7 +2679,7 @@ slpeel_update_phi_nodes_for_loops (loop_vec_info loop_vinfo, for correct vectorization of live stmts. */ if (loop == first) { - basic_block orig_exit = single_exit (second)->dest; + basic_block orig_exit = second_loop_e->dest; for (gsi_orig = gsi_start_phis (orig_exit); !gsi_end_p (gsi_orig); gsi_next (&gsi_orig)) { @@ -2673,13 +2688,14 @@ slpeel_update_phi_nodes_for_loops (loop_vec_info loop_vinfo, if (TREE_CODE (orig_arg) != SSA_NAME || virtual_operand_p (orig_arg)) continue; + const_edge exit_e = LOOP_VINFO_IV_EXIT (loop_vinfo); /* Already created in the above loop. */ - if (find_guard_arg (first, second, orig_phi)) + if (find_guard_arg (first, second, exit_e, orig_phi)) continue; tree new_res = copy_ssa_name (orig_arg); gphi *lcphi = create_phi_node (new_res, between_bb); - add_phi_arg (lcphi, orig_arg, single_exit (first), UNKNOWN_LOCATION); + add_phi_arg (lcphi, orig_arg, first_loop_e, UNKNOWN_LOCATION); } } } @@ -2847,7 +2863,8 @@ slpeel_update_phi_nodes_for_guard2 (class loop *loop, class loop *epilog, if (!merge_arg) merge_arg = old_arg; - tree guard_arg = find_guard_arg (loop, epilog, update_phi); + tree guard_arg + = find_guard_arg (loop, epilog, single_exit (loop), update_phi); /* If the var is live after loop but not a reduction, we simply use the old arg. */ if (!guard_arg) @@ -3201,27 +3218,37 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, } if (vect_epilogues) - /* Make sure to set the epilogue's epilogue scalar loop, such that we can - use the original scalar loop as remaining epilogue if necessary. */ - LOOP_VINFO_SCALAR_LOOP (epilogue_vinfo) - = LOOP_VINFO_SCALAR_LOOP (loop_vinfo); + { + /* Make sure to set the epilogue's epilogue scalar loop, such that we can + use the original scalar loop as remaining epilogue if necessary. 
*/ + LOOP_VINFO_SCALAR_LOOP (epilogue_vinfo) + = LOOP_VINFO_SCALAR_LOOP (loop_vinfo); + LOOP_VINFO_SCALAR_IV_EXIT (epilogue_vinfo) + = LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo); + } if (prolog_peeling) { e = loop_preheader_edge (loop); - gcc_checking_assert (slpeel_can_duplicate_loop_p (loop, e)); + edge exit_e = LOOP_VINFO_IV_EXIT (loop_vinfo); + gcc_checking_assert (slpeel_can_duplicate_loop_p (loop, exit_e, e)); /* Peel prolog and put it on preheader edge of loop. */ - prolog = slpeel_tree_duplicate_loop_to_edge_cfg (loop, scalar_loop, e); + edge scalar_e = LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo); + edge prolog_e = NULL; + prolog = slpeel_tree_duplicate_loop_to_edge_cfg (loop, exit_e, + scalar_loop, scalar_e, + e, &prolog_e); gcc_assert (prolog); prolog->force_vectorize = false; - slpeel_update_phi_nodes_for_loops (loop_vinfo, prolog, loop, true); + slpeel_update_phi_nodes_for_loops (loop_vinfo, prolog, prolog_e, loop, + exit_e, true); first_loop = prolog; reset_original_copy_tables (); /* Update the number of iterations for prolog loop. */ tree step_prolog = build_one_cst (TREE_TYPE (niters_prolog)); - vect_set_loop_condition (prolog, NULL, niters_prolog, + vect_set_loop_condition (prolog, prolog_e, loop_vinfo, niters_prolog, step_prolog, NULL_TREE, false); /* Skip the prolog loop. */ @@ -3275,8 +3302,8 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, if (epilog_peeling) { - e = single_exit (loop); - gcc_checking_assert (slpeel_can_duplicate_loop_p (loop, e)); + e = LOOP_VINFO_IV_EXIT (loop_vinfo); + gcc_checking_assert (slpeel_can_duplicate_loop_p (loop, e, e)); /* Peel epilog and put it on exit edge of loop. If we are vectorizing said epilog then we should use a copy of the main loop as a starting @@ -3285,12 +3312,18 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, If we are not vectorizing the epilog then we should use the scalar loop as the transformations mentioned above make less or no sense when not vectorizing. */ + edge scalar_e = LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo); epilog = vect_epilogues ? get_loop_copy (loop) : scalar_loop; - epilog = slpeel_tree_duplicate_loop_to_edge_cfg (loop, epilog, e); + edge epilog_e = vect_epilogues ? e : scalar_e; + edge new_epilog_e = NULL; + epilog = slpeel_tree_duplicate_loop_to_edge_cfg (loop, e, epilog, + epilog_e, e, + &new_epilog_e); + LOOP_VINFO_EPILOGUE_IV_EXIT (loop_vinfo) = new_epilog_e; gcc_assert (epilog); - epilog->force_vectorize = false; - slpeel_update_phi_nodes_for_loops (loop_vinfo, loop, epilog, false); + slpeel_update_phi_nodes_for_loops (loop_vinfo, loop, e, epilog, + new_epilog_e, false); bb_before_epilog = loop_preheader_edge (epilog)->src; /* Scalar version loop may be preferred. In this case, add guard @@ -3374,16 +3407,16 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, { guard_cond = fold_build2 (EQ_EXPR, boolean_type_node, niters, niters_vector_mult_vf); - guard_bb = single_exit (loop)->dest; - guard_to = split_edge (single_exit (epilog)); + guard_bb = LOOP_VINFO_IV_EXIT (loop_vinfo)->dest; + edge epilog_e = LOOP_VINFO_EPILOGUE_IV_EXIT (loop_vinfo); + guard_to = split_edge (epilog_e); guard_e = slpeel_add_loop_guard (guard_bb, guard_cond, guard_to, skip_vector ? 
anchor : guard_bb, prob_epilog.invert (), irred_flag); if (vect_epilogues) epilogue_vinfo->skip_this_loop_edge = guard_e; - slpeel_update_phi_nodes_for_guard2 (loop, epilog, guard_e, - single_exit (epilog)); + slpeel_update_phi_nodes_for_guard2 (loop, epilog, guard_e, epilog_e); /* Only need to handle basic block before epilog loop if it's not the guard_bb, which is the case when skip_vector is true. */ if (guard_bb != bb_before_epilog) @@ -3416,6 +3449,8 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, { epilog->aux = epilogue_vinfo; LOOP_VINFO_LOOP (epilogue_vinfo) = epilog; + LOOP_VINFO_IV_EXIT (epilogue_vinfo) + = LOOP_VINFO_EPILOGUE_IV_EXIT (loop_vinfo); loop_constraint_clear (epilog, LOOP_C_INFINITE); diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc index 23c6e8259e7b133cd7acc6bcf0bad26423e9993a..6e60d84143626a8e1d801bb580f4dcebc73c7ba7 100644 --- a/gcc/tree-vect-loop.cc +++ b/gcc/tree-vect-loop.cc @@ -855,10 +855,9 @@ vect_fixup_scalar_cycles_with_patterns (loop_vec_info loop_vinfo) static gcond * -vect_get_loop_niters (class loop *loop, tree *assumptions, +vect_get_loop_niters (class loop *loop, edge exit, tree *assumptions, tree *number_of_iterations, tree *number_of_iterationsm1) { - edge exit = single_exit (loop); class tree_niter_desc niter_desc; tree niter_assumptions, niter, may_be_zero; gcond *cond = get_loop_exit_condition (loop); @@ -927,6 +926,20 @@ vect_get_loop_niters (class loop *loop, tree *assumptions, return cond; } +/* Determine the main loop exit for the vectorizer. */ + +edge +vec_init_loop_exit_info (class loop *loop) +{ + /* Before we begin we must first determine which exit is the main one and + which are auxilary exits. */ + auto_vec exits = get_loop_exit_edges (loop); + if (exits.length () == 1) + return exits[0]; + else + return NULL; +} + /* Function bb_in_loop_p Used as predicate for dfs order traversal of the loop bbs. */ @@ -987,7 +1000,10 @@ _loop_vec_info::_loop_vec_info (class loop *loop_in, vec_info_shared *shared) has_mask_store (false), scalar_loop_scaling (profile_probability::uninitialized ()), scalar_loop (NULL), - orig_loop_info (NULL) + orig_loop_info (NULL), + vec_loop_iv (NULL), + vec_epilogue_loop_iv (NULL), + scalar_loop_iv (NULL) { /* CHECKME: We want to visit all BBs before their successors (except for latch blocks, for which this assertion wouldn't hold). In the simple @@ -1646,6 +1662,18 @@ vect_analyze_loop_form (class loop *loop, vect_loop_form_info *info) { DUMP_VECT_SCOPE ("vect_analyze_loop_form"); + edge exit_e = vec_init_loop_exit_info (loop); + if (!exit_e) + return opt_result::failure_at (vect_location, + "not vectorized:" + " could not determine main exit from" + " loop with multiple exits.\n"); + info->loop_exit = exit_e; + if (dump_enabled_p ()) + dump_printf_loc (MSG_NOTE, vect_location, + "using as main loop exit: %d -> %d [AUX: %p]\n", + exit_e->src->index, exit_e->dest->index, exit_e->aux); + /* Different restrictions apply when we are considering an inner-most loop, vs. an outer (nested) loop. (FORNOW. May want to relax some of these restrictions in the future). 
*/ @@ -1767,7 +1795,7 @@ vect_analyze_loop_form (class loop *loop, vect_loop_form_info *info) " abnormal loop exit edge.\n"); info->loop_cond - = vect_get_loop_niters (loop, &info->assumptions, + = vect_get_loop_niters (loop, e, &info->assumptions, &info->number_of_iterations, &info->number_of_iterationsm1); if (!info->loop_cond) @@ -1821,6 +1849,9 @@ vect_create_loop_vinfo (class loop *loop, vec_info_shared *shared, stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (info->loop_cond); STMT_VINFO_TYPE (loop_cond_info) = loop_exit_ctrl_vec_info_type; + + LOOP_VINFO_IV_EXIT (loop_vinfo) = info->loop_exit; + if (info->inner_loop_cond) { stmt_vec_info inner_loop_cond_info @@ -3063,9 +3094,9 @@ start_over: if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "epilog loop required\n"); if (!vect_can_advance_ivs_p (loop_vinfo) - || !slpeel_can_duplicate_loop_p (LOOP_VINFO_LOOP (loop_vinfo), - single_exit (LOOP_VINFO_LOOP - (loop_vinfo)))) + || !slpeel_can_duplicate_loop_p (loop, + LOOP_VINFO_IV_EXIT (loop_vinfo), + LOOP_VINFO_IV_EXIT (loop_vinfo))) { ok = opt_result::failure_at (vect_location, "not vectorized: can't create required " @@ -6002,7 +6033,7 @@ vect_create_epilog_for_reduction (loop_vec_info loop_vinfo, Store them in NEW_PHIS. */ if (double_reduc) loop = outer_loop; - exit_bb = single_exit (loop)->dest; + exit_bb = LOOP_VINFO_IV_EXIT (loop_vinfo)->dest; exit_gsi = gsi_after_labels (exit_bb); reduc_inputs.create (slp_node ? vec_num : ncopies); for (unsigned i = 0; i < vec_num; i++) @@ -6018,7 +6049,7 @@ vect_create_epilog_for_reduction (loop_vec_info loop_vinfo, phi = create_phi_node (new_def, exit_bb); if (j) def = gimple_get_lhs (STMT_VINFO_VEC_STMTS (rdef_info)[j]); - SET_PHI_ARG_DEF (phi, single_exit (loop)->dest_idx, def); + SET_PHI_ARG_DEF (phi, LOOP_VINFO_IV_EXIT (loop_vinfo)->dest_idx, def); new_def = gimple_convert (&stmts, vectype, new_def); reduc_inputs.quick_push (new_def); } @@ -10416,12 +10447,12 @@ vectorizable_live_operation (vec_info *vinfo, stmt_vec_info stmt_info, lhs' = new_tree; */ class loop *loop = LOOP_VINFO_LOOP (loop_vinfo); - basic_block exit_bb = single_exit (loop)->dest; + basic_block exit_bb = LOOP_VINFO_IV_EXIT (loop_vinfo)->dest; gcc_assert (single_pred_p (exit_bb)); tree vec_lhs_phi = copy_ssa_name (vec_lhs); gimple *phi = create_phi_node (vec_lhs_phi, exit_bb); - SET_PHI_ARG_DEF (phi, single_exit (loop)->dest_idx, vec_lhs); + SET_PHI_ARG_DEF (phi, LOOP_VINFO_IV_EXIT (loop_vinfo)->dest_idx, vec_lhs); gimple_seq stmts = NULL; tree new_tree; @@ -10965,7 +10996,7 @@ vect_get_loop_len (loop_vec_info loop_vinfo, gimple_stmt_iterator *gsi, profile. */ static void -scale_profile_for_vect_loop (class loop *loop, unsigned vf, bool flat) +scale_profile_for_vect_loop (class loop *loop, edge exit_e, unsigned vf, bool flat) { /* For flat profiles do not scale down proportionally by VF and only cap by known iteration count bounds. */ @@ -10980,7 +11011,6 @@ scale_profile_for_vect_loop (class loop *loop, unsigned vf, bool flat) return; } /* Loop body executes VF fewer times and exit increases VF times. */ - edge exit_e = single_exit (loop); profile_count entry_count = loop_preheader_edge (loop)->count (); /* If we have unreliable loop profile avoid dropping entry @@ -11350,7 +11380,7 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call) /* Make sure there exists a single-predecessor exit bb. Do this before versioning. */ - edge e = single_exit (loop); + edge e = LOOP_VINFO_IV_EXIT (loop_vinfo); if (! 
single_pred_p (e->dest)) { split_loop_exit_edge (e, true); @@ -11376,7 +11406,7 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call) loop closed PHI nodes on the exit. */ if (LOOP_VINFO_SCALAR_LOOP (loop_vinfo)) { - e = single_exit (LOOP_VINFO_SCALAR_LOOP (loop_vinfo)); + e = LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo); if (! single_pred_p (e->dest)) { split_loop_exit_edge (e, true); @@ -11625,8 +11655,9 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call) a zero NITERS becomes a nonzero NITERS_VECTOR. */ if (integer_onep (step_vector)) niters_no_overflow = true; - vect_set_loop_condition (loop, loop_vinfo, niters_vector, step_vector, - niters_vector_mult_vf, !niters_no_overflow); + vect_set_loop_condition (loop, LOOP_VINFO_IV_EXIT (loop_vinfo), loop_vinfo, + niters_vector, step_vector, niters_vector_mult_vf, + !niters_no_overflow); unsigned int assumed_vf = vect_vf_for_cost (loop_vinfo); @@ -11699,7 +11730,8 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call) assumed_vf) - 1 : wi::udiv_floor (loop->nb_iterations_estimate + bias_for_assumed, assumed_vf) - 1); - scale_profile_for_vect_loop (loop, assumed_vf, flat); + scale_profile_for_vect_loop (loop, LOOP_VINFO_IV_EXIT (loop_vinfo), + assumed_vf, flat); if (dump_enabled_p ()) { diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h index f1d0cd79961abb095bc79d3b59a81930f0337e59..afa7a8e30891c782a0e5e3740ecc4377f5a31e54 100644 --- a/gcc/tree-vectorizer.h +++ b/gcc/tree-vectorizer.h @@ -919,10 +919,24 @@ public: analysis. */ vec<_loop_vec_info *> epilogue_vinfos; + /* The controlling loop IV for the current loop when vectorizing. This IV + controls the natural exits of the loop. */ + edge vec_loop_iv; + + /* The controlling loop IV for the epilogue loop when vectorizing. This IV + controls the natural exits of the loop. */ + edge vec_epilogue_loop_iv; + + /* The controlling loop IV for the scalar loop being vectorized. This IV + controls the natural exits of the loop. */ + edge scalar_loop_iv; } *loop_vec_info; /* Access Functions. */ #define LOOP_VINFO_LOOP(L) (L)->loop +#define LOOP_VINFO_IV_EXIT(L) (L)->vec_loop_iv +#define LOOP_VINFO_EPILOGUE_IV_EXIT(L) (L)->vec_epilogue_loop_iv +#define LOOP_VINFO_SCALAR_IV_EXIT(L) (L)->scalar_loop_iv #define LOOP_VINFO_BBS(L) (L)->bbs #define LOOP_VINFO_NITERSM1(L) (L)->num_itersm1 #define LOOP_VINFO_NITERS(L) (L)->num_iters @@ -2155,11 +2169,13 @@ class auto_purge_vect_location /* Simple loop peeling and versioning utilities for vectorizer's purposes - in tree-vect-loop-manip.cc. 
*/ -extern void vect_set_loop_condition (class loop *, loop_vec_info, +extern void vect_set_loop_condition (class loop *, edge, loop_vec_info, tree, tree, tree, bool); -extern bool slpeel_can_duplicate_loop_p (const class loop *, const_edge); -class loop *slpeel_tree_duplicate_loop_to_edge_cfg (class loop *, - class loop *, edge); +extern bool slpeel_can_duplicate_loop_p (const class loop *, const_edge, + const_edge); +class loop *slpeel_tree_duplicate_loop_to_edge_cfg (class loop *, edge, + class loop *, edge, + edge, edge *); class loop *vect_loop_versioning (loop_vec_info, gimple *); extern class loop *vect_do_peeling (loop_vec_info, tree, tree, tree *, tree *, tree *, int, bool, bool, @@ -2169,6 +2185,7 @@ extern void vect_prepare_for_masked_peels (loop_vec_info); extern dump_user_location_t find_loop_location (class loop *); extern bool vect_can_advance_ivs_p (loop_vec_info); extern void vect_update_inits_of_drs (loop_vec_info, tree, tree_code); +extern edge vec_init_loop_exit_info (class loop *); /* In tree-vect-stmts.cc. */ extern tree get_related_vectype_for_scalar_type (machine_mode, tree, @@ -2358,6 +2375,7 @@ struct vect_loop_form_info tree assumptions; gcond *loop_cond; gcond *inner_loop_cond; + edge loop_exit; }; extern opt_result vect_analyze_loop_form (class loop *, vect_loop_form_info *); extern loop_vec_info vect_create_loop_vinfo (class loop *, vec_info_shared *, diff --git a/gcc/tree-vectorizer.cc b/gcc/tree-vectorizer.cc index a048e9d89178a37455bd7b83ab0f2a238a4ce69e..d97e2b54c25ac60378935392aa7b73476efed74b 100644 --- a/gcc/tree-vectorizer.cc +++ b/gcc/tree-vectorizer.cc @@ -943,6 +943,8 @@ set_uid_loop_bbs (loop_vec_info loop_vinfo, gimple *loop_vectorized_call, class loop *scalar_loop = get_loop (fun, tree_to_shwi (arg)); LOOP_VINFO_SCALAR_LOOP (loop_vinfo) = scalar_loop; + LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo) + = vec_init_loop_exit_info (scalar_loop); gcc_checking_assert (vect_loop_vectorized_call (scalar_loop) == loop_vectorized_call); /* If we are going to vectorize outer loop, prevent vectorization
*/ + LOOP_VINFO_SCALAR_LOOP (epilogue_vinfo) + = LOOP_VINFO_SCALAR_LOOP (loop_vinfo); + LOOP_VINFO_SCALAR_IV_EXIT (epilogue_vinfo) + = LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo); + } if (prolog_peeling) { e = loop_preheader_edge (loop); - gcc_checking_assert (slpeel_can_duplicate_loop_p (loop, e)); + edge exit_e = LOOP_VINFO_IV_EXIT (loop_vinfo); + gcc_checking_assert (slpeel_can_duplicate_loop_p (loop, exit_e, e)); /* Peel prolog and put it on preheader edge of loop. */ - prolog = slpeel_tree_duplicate_loop_to_edge_cfg (loop, scalar_loop, e); + edge scalar_e = LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo); + edge prolog_e = NULL; + prolog = slpeel_tree_duplicate_loop_to_edge_cfg (loop, exit_e, + scalar_loop, scalar_e, + e, &prolog_e); gcc_assert (prolog); prolog->force_vectorize = false; - slpeel_update_phi_nodes_for_loops (loop_vinfo, prolog, loop, true); + slpeel_update_phi_nodes_for_loops (loop_vinfo, prolog, prolog_e, loop, + exit_e, true); first_loop = prolog; reset_original_copy_tables (); /* Update the number of iterations for prolog loop. */ tree step_prolog = build_one_cst (TREE_TYPE (niters_prolog)); - vect_set_loop_condition (prolog, NULL, niters_prolog, + vect_set_loop_condition (prolog, prolog_e, loop_vinfo, niters_prolog, step_prolog, NULL_TREE, false); /* Skip the prolog loop. */ @@ -3275,8 +3302,8 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, if (epilog_peeling) { - e = single_exit (loop); - gcc_checking_assert (slpeel_can_duplicate_loop_p (loop, e)); + e = LOOP_VINFO_IV_EXIT (loop_vinfo); + gcc_checking_assert (slpeel_can_duplicate_loop_p (loop, e, e)); /* Peel epilog and put it on exit edge of loop. If we are vectorizing said epilog then we should use a copy of the main loop as a starting @@ -3285,12 +3312,18 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, If we are not vectorizing the epilog then we should use the scalar loop as the transformations mentioned above make less or no sense when not vectorizing. */ + edge scalar_e = LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo); epilog = vect_epilogues ? get_loop_copy (loop) : scalar_loop; - epilog = slpeel_tree_duplicate_loop_to_edge_cfg (loop, epilog, e); + edge epilog_e = vect_epilogues ? e : scalar_e; + edge new_epilog_e = NULL; + epilog = slpeel_tree_duplicate_loop_to_edge_cfg (loop, e, epilog, + epilog_e, e, + &new_epilog_e); + LOOP_VINFO_EPILOGUE_IV_EXIT (loop_vinfo) = new_epilog_e; gcc_assert (epilog); - epilog->force_vectorize = false; - slpeel_update_phi_nodes_for_loops (loop_vinfo, loop, epilog, false); + slpeel_update_phi_nodes_for_loops (loop_vinfo, loop, e, epilog, + new_epilog_e, false); bb_before_epilog = loop_preheader_edge (epilog)->src; /* Scalar version loop may be preferred. In this case, add guard @@ -3374,16 +3407,16 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, { guard_cond = fold_build2 (EQ_EXPR, boolean_type_node, niters, niters_vector_mult_vf); - guard_bb = single_exit (loop)->dest; - guard_to = split_edge (single_exit (epilog)); + guard_bb = LOOP_VINFO_IV_EXIT (loop_vinfo)->dest; + edge epilog_e = LOOP_VINFO_EPILOGUE_IV_EXIT (loop_vinfo); + guard_to = split_edge (epilog_e); guard_e = slpeel_add_loop_guard (guard_bb, guard_cond, guard_to, skip_vector ? 
anchor : guard_bb, prob_epilog.invert (), irred_flag); if (vect_epilogues) epilogue_vinfo->skip_this_loop_edge = guard_e; - slpeel_update_phi_nodes_for_guard2 (loop, epilog, guard_e, - single_exit (epilog)); + slpeel_update_phi_nodes_for_guard2 (loop, epilog, guard_e, epilog_e); /* Only need to handle basic block before epilog loop if it's not the guard_bb, which is the case when skip_vector is true. */ if (guard_bb != bb_before_epilog) @@ -3416,6 +3449,8 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, { epilog->aux = epilogue_vinfo; LOOP_VINFO_LOOP (epilogue_vinfo) = epilog; + LOOP_VINFO_IV_EXIT (epilogue_vinfo) + = LOOP_VINFO_EPILOGUE_IV_EXIT (loop_vinfo); loop_constraint_clear (epilog, LOOP_C_INFINITE); diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc index 23c6e8259e7b133cd7acc6bcf0bad26423e9993a..6e60d84143626a8e1d801bb580f4dcebc73c7ba7 100644 --- a/gcc/tree-vect-loop.cc +++ b/gcc/tree-vect-loop.cc @@ -855,10 +855,9 @@ vect_fixup_scalar_cycles_with_patterns (loop_vec_info loop_vinfo) static gcond * -vect_get_loop_niters (class loop *loop, tree *assumptions, +vect_get_loop_niters (class loop *loop, edge exit, tree *assumptions, tree *number_of_iterations, tree *number_of_iterationsm1) { - edge exit = single_exit (loop); class tree_niter_desc niter_desc; tree niter_assumptions, niter, may_be_zero; gcond *cond = get_loop_exit_condition (loop); @@ -927,6 +926,20 @@ vect_get_loop_niters (class loop *loop, tree *assumptions, return cond; } +/* Determine the main loop exit for the vectorizer. */ + +edge +vec_init_loop_exit_info (class loop *loop) +{ + /* Before we begin we must first determine which exit is the main one and + which are auxilary exits. */ + auto_vec exits = get_loop_exit_edges (loop); + if (exits.length () == 1) + return exits[0]; + else + return NULL; +} + /* Function bb_in_loop_p Used as predicate for dfs order traversal of the loop bbs. */ @@ -987,7 +1000,10 @@ _loop_vec_info::_loop_vec_info (class loop *loop_in, vec_info_shared *shared) has_mask_store (false), scalar_loop_scaling (profile_probability::uninitialized ()), scalar_loop (NULL), - orig_loop_info (NULL) + orig_loop_info (NULL), + vec_loop_iv (NULL), + vec_epilogue_loop_iv (NULL), + scalar_loop_iv (NULL) { /* CHECKME: We want to visit all BBs before their successors (except for latch blocks, for which this assertion wouldn't hold). In the simple @@ -1646,6 +1662,18 @@ vect_analyze_loop_form (class loop *loop, vect_loop_form_info *info) { DUMP_VECT_SCOPE ("vect_analyze_loop_form"); + edge exit_e = vec_init_loop_exit_info (loop); + if (!exit_e) + return opt_result::failure_at (vect_location, + "not vectorized:" + " could not determine main exit from" + " loop with multiple exits.\n"); + info->loop_exit = exit_e; + if (dump_enabled_p ()) + dump_printf_loc (MSG_NOTE, vect_location, + "using as main loop exit: %d -> %d [AUX: %p]\n", + exit_e->src->index, exit_e->dest->index, exit_e->aux); + /* Different restrictions apply when we are considering an inner-most loop, vs. an outer (nested) loop. (FORNOW. May want to relax some of these restrictions in the future). 
*/ @@ -1767,7 +1795,7 @@ vect_analyze_loop_form (class loop *loop, vect_loop_form_info *info) " abnormal loop exit edge.\n"); info->loop_cond - = vect_get_loop_niters (loop, &info->assumptions, + = vect_get_loop_niters (loop, e, &info->assumptions, &info->number_of_iterations, &info->number_of_iterationsm1); if (!info->loop_cond) @@ -1821,6 +1849,9 @@ vect_create_loop_vinfo (class loop *loop, vec_info_shared *shared, stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (info->loop_cond); STMT_VINFO_TYPE (loop_cond_info) = loop_exit_ctrl_vec_info_type; + + LOOP_VINFO_IV_EXIT (loop_vinfo) = info->loop_exit; + if (info->inner_loop_cond) { stmt_vec_info inner_loop_cond_info @@ -3063,9 +3094,9 @@ start_over: if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "epilog loop required\n"); if (!vect_can_advance_ivs_p (loop_vinfo) - || !slpeel_can_duplicate_loop_p (LOOP_VINFO_LOOP (loop_vinfo), - single_exit (LOOP_VINFO_LOOP - (loop_vinfo)))) + || !slpeel_can_duplicate_loop_p (loop, + LOOP_VINFO_IV_EXIT (loop_vinfo), + LOOP_VINFO_IV_EXIT (loop_vinfo))) { ok = opt_result::failure_at (vect_location, "not vectorized: can't create required " @@ -6002,7 +6033,7 @@ vect_create_epilog_for_reduction (loop_vec_info loop_vinfo, Store them in NEW_PHIS. */ if (double_reduc) loop = outer_loop; - exit_bb = single_exit (loop)->dest; + exit_bb = LOOP_VINFO_IV_EXIT (loop_vinfo)->dest; exit_gsi = gsi_after_labels (exit_bb); reduc_inputs.create (slp_node ? vec_num : ncopies); for (unsigned i = 0; i < vec_num; i++) @@ -6018,7 +6049,7 @@ vect_create_epilog_for_reduction (loop_vec_info loop_vinfo, phi = create_phi_node (new_def, exit_bb); if (j) def = gimple_get_lhs (STMT_VINFO_VEC_STMTS (rdef_info)[j]); - SET_PHI_ARG_DEF (phi, single_exit (loop)->dest_idx, def); + SET_PHI_ARG_DEF (phi, LOOP_VINFO_IV_EXIT (loop_vinfo)->dest_idx, def); new_def = gimple_convert (&stmts, vectype, new_def); reduc_inputs.quick_push (new_def); } @@ -10416,12 +10447,12 @@ vectorizable_live_operation (vec_info *vinfo, stmt_vec_info stmt_info, lhs' = new_tree; */ class loop *loop = LOOP_VINFO_LOOP (loop_vinfo); - basic_block exit_bb = single_exit (loop)->dest; + basic_block exit_bb = LOOP_VINFO_IV_EXIT (loop_vinfo)->dest; gcc_assert (single_pred_p (exit_bb)); tree vec_lhs_phi = copy_ssa_name (vec_lhs); gimple *phi = create_phi_node (vec_lhs_phi, exit_bb); - SET_PHI_ARG_DEF (phi, single_exit (loop)->dest_idx, vec_lhs); + SET_PHI_ARG_DEF (phi, LOOP_VINFO_IV_EXIT (loop_vinfo)->dest_idx, vec_lhs); gimple_seq stmts = NULL; tree new_tree; @@ -10965,7 +10996,7 @@ vect_get_loop_len (loop_vec_info loop_vinfo, gimple_stmt_iterator *gsi, profile. */ static void -scale_profile_for_vect_loop (class loop *loop, unsigned vf, bool flat) +scale_profile_for_vect_loop (class loop *loop, edge exit_e, unsigned vf, bool flat) { /* For flat profiles do not scale down proportionally by VF and only cap by known iteration count bounds. */ @@ -10980,7 +11011,6 @@ scale_profile_for_vect_loop (class loop *loop, unsigned vf, bool flat) return; } /* Loop body executes VF fewer times and exit increases VF times. */ - edge exit_e = single_exit (loop); profile_count entry_count = loop_preheader_edge (loop)->count (); /* If we have unreliable loop profile avoid dropping entry @@ -11350,7 +11380,7 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call) /* Make sure there exists a single-predecessor exit bb. Do this before versioning. */ - edge e = single_exit (loop); + edge e = LOOP_VINFO_IV_EXIT (loop_vinfo); if (! 
single_pred_p (e->dest)) { split_loop_exit_edge (e, true); @@ -11376,7 +11406,7 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call) loop closed PHI nodes on the exit. */ if (LOOP_VINFO_SCALAR_LOOP (loop_vinfo)) { - e = single_exit (LOOP_VINFO_SCALAR_LOOP (loop_vinfo)); + e = LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo); if (! single_pred_p (e->dest)) { split_loop_exit_edge (e, true); @@ -11625,8 +11655,9 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call) a zero NITERS becomes a nonzero NITERS_VECTOR. */ if (integer_onep (step_vector)) niters_no_overflow = true; - vect_set_loop_condition (loop, loop_vinfo, niters_vector, step_vector, - niters_vector_mult_vf, !niters_no_overflow); + vect_set_loop_condition (loop, LOOP_VINFO_IV_EXIT (loop_vinfo), loop_vinfo, + niters_vector, step_vector, niters_vector_mult_vf, + !niters_no_overflow); unsigned int assumed_vf = vect_vf_for_cost (loop_vinfo); @@ -11699,7 +11730,8 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call) assumed_vf) - 1 : wi::udiv_floor (loop->nb_iterations_estimate + bias_for_assumed, assumed_vf) - 1); - scale_profile_for_vect_loop (loop, assumed_vf, flat); + scale_profile_for_vect_loop (loop, LOOP_VINFO_IV_EXIT (loop_vinfo), + assumed_vf, flat); if (dump_enabled_p ()) { diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h index f1d0cd79961abb095bc79d3b59a81930f0337e59..afa7a8e30891c782a0e5e3740ecc4377f5a31e54 100644 --- a/gcc/tree-vectorizer.h +++ b/gcc/tree-vectorizer.h @@ -919,10 +919,24 @@ public: analysis. */ vec<_loop_vec_info *> epilogue_vinfos; + /* The controlling loop IV for the current loop when vectorizing. This IV + controls the natural exits of the loop. */ + edge vec_loop_iv; + + /* The controlling loop IV for the epilogue loop when vectorizing. This IV + controls the natural exits of the loop. */ + edge vec_epilogue_loop_iv; + + /* The controlling loop IV for the scalar loop being vectorized. This IV + controls the natural exits of the loop. */ + edge scalar_loop_iv; } *loop_vec_info; /* Access Functions. */ #define LOOP_VINFO_LOOP(L) (L)->loop +#define LOOP_VINFO_IV_EXIT(L) (L)->vec_loop_iv +#define LOOP_VINFO_EPILOGUE_IV_EXIT(L) (L)->vec_epilogue_loop_iv +#define LOOP_VINFO_SCALAR_IV_EXIT(L) (L)->scalar_loop_iv #define LOOP_VINFO_BBS(L) (L)->bbs #define LOOP_VINFO_NITERSM1(L) (L)->num_itersm1 #define LOOP_VINFO_NITERS(L) (L)->num_iters @@ -2155,11 +2169,13 @@ class auto_purge_vect_location /* Simple loop peeling and versioning utilities for vectorizer's purposes - in tree-vect-loop-manip.cc. 
*/ -extern void vect_set_loop_condition (class loop *, loop_vec_info, +extern void vect_set_loop_condition (class loop *, edge, loop_vec_info, tree, tree, tree, bool); -extern bool slpeel_can_duplicate_loop_p (const class loop *, const_edge); -class loop *slpeel_tree_duplicate_loop_to_edge_cfg (class loop *, - class loop *, edge); +extern bool slpeel_can_duplicate_loop_p (const class loop *, const_edge, + const_edge); +class loop *slpeel_tree_duplicate_loop_to_edge_cfg (class loop *, edge, + class loop *, edge, + edge, edge *); class loop *vect_loop_versioning (loop_vec_info, gimple *); extern class loop *vect_do_peeling (loop_vec_info, tree, tree, tree *, tree *, tree *, int, bool, bool, @@ -2169,6 +2185,7 @@ extern void vect_prepare_for_masked_peels (loop_vec_info); extern dump_user_location_t find_loop_location (class loop *); extern bool vect_can_advance_ivs_p (loop_vec_info); extern void vect_update_inits_of_drs (loop_vec_info, tree, tree_code); +extern edge vec_init_loop_exit_info (class loop *); /* In tree-vect-stmts.cc. */ extern tree get_related_vectype_for_scalar_type (machine_mode, tree, @@ -2358,6 +2375,7 @@ struct vect_loop_form_info tree assumptions; gcond *loop_cond; gcond *inner_loop_cond; + edge loop_exit; }; extern opt_result vect_analyze_loop_form (class loop *, vect_loop_form_info *); extern loop_vec_info vect_create_loop_vinfo (class loop *, vec_info_shared *, diff --git a/gcc/tree-vectorizer.cc b/gcc/tree-vectorizer.cc index a048e9d89178a37455bd7b83ab0f2a238a4ce69e..d97e2b54c25ac60378935392aa7b73476efed74b 100644 --- a/gcc/tree-vectorizer.cc +++ b/gcc/tree-vectorizer.cc @@ -943,6 +943,8 @@ set_uid_loop_bbs (loop_vec_info loop_vinfo, gimple *loop_vectorized_call, class loop *scalar_loop = get_loop (fun, tree_to_shwi (arg)); LOOP_VINFO_SCALAR_LOOP (loop_vinfo) = scalar_loop; + LOOP_VINFO_SCALAR_IV_EXIT (loop_vinfo) + = vec_init_loop_exit_info (scalar_loop); gcc_checking_assert (vect_loop_vectorized_call (scalar_loop) == loop_vectorized_call); /* If we are going to vectorize outer loop, prevent vectorization From patchwork Mon Oct 2 07:41:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tamar Christina X-Patchwork-Id: 147204 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1256182vqb; Mon, 2 Oct 2023 00:43:25 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGU5oBrbAG4Iy32nvRy4XY7aqEhgFcFxNBhh2iBfnW+1SPGtdz+PNfD3f7KY6Ng/fc2C1kf X-Received: by 2002:a17:906:295:b0:9ae:4054:5d2a with SMTP id 21-20020a170906029500b009ae40545d2amr9208483ejf.16.1696232605323; Mon, 02 Oct 2023 00:43:25 -0700 (PDT) Received: from server2.sourceware.org (server2.sourceware.org. 
Date: Mon, 2 Oct 2023 08:41:55 +0100
From: Tamar Christina
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, rguenther@suse.de, jlaw@ventanamicro.com
Subject: [PATCH 2/3]middle-end: updated niters analysis to handle multiple exits.
Hi All,

This second part updates niters analysis to be able to analyze any number of exits. If we have multiple exits we determine the main exit by finding the first counting IV.

The change allows the vectorizer to pass analysis for loops with multiple exits, but we still gracefully reject them later. It does however allow us to test that the exit handling is using the right exit everywhere.

Additionally, since we analyze all exits, we now return all of their conditions and determine which condition belongs to the main exit. The main condition is needed because the vectorizer must ignore the main IV condition during vectorization, as it will replace it during codegen.

To track versioned loops we extend the contract between ifcvt and the vectorizer to store the exit number in aux so that we can match it up again during peeling.

Bootstrapped and regtested on aarch64-none-linux-gnu and x86_64-linux-gnu with no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

	* tree-if-conv.cc (tree_if_conversion): Record exits in aux.
	* tree-vect-loop-manip.cc (slpeel_tree_duplicate_loop_to_edge_cfg):
	Use it.
	* tree-vect-loop.cc (vect_get_loop_niters): Determine main exit.
	(vec_init_loop_exit_info): Extend analysis when multiple exits.
	(vect_analyze_loop_form): Record conds and determine main cond.
	(vect_create_loop_vinfo): Extend bookkeeping of conds.
	(vect_analyze_loop): Release conds.
	* tree-vectorizer.h (LOOP_VINFO_LOOP_CONDS,
	LOOP_VINFO_LOOP_IV_COND): New.
	(struct vect_loop_form_info): Add conds, alt_loop_conds.
	(struct loop_vec_info): Add conds, loop_iv_cond.
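For illustration, a minimal sketch of the kind of two-exit loop this analysis is aimed at (illustrative only; the function below is not from the patch or its testsuite). The "i < n" exit is governed by a counting IV, so it is the exit vec_init_loop_exit_info would select as the main exit, while the early break supplies an alternate exit condition of the sort now recorded via LOOP_VINFO_LOOP_CONDS:

    /* Illustrative example: a loop with two exits.  The latch test
       "i < n" uses a simple counting IV and its niter analysis
       succeeds, so it acts as the main exit.  The data-dependent
       "a[i] == key" break cannot be counted and becomes an
       alternate exit condition.  */
    int
    first_match (int *a, int n, int key)
    {
      int i;
      for (i = 0; i < n; i++)
        if (a[i] == key)
          break;
      return i;
    }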
--- inline copy of patch -- diff --git a/gcc/tree-if-conv.cc b/gcc/tree-if-conv.cc index 799f071965e5c41eb352b5530cf1d9c7ecf7bf25..3dc2290467797ebbfcef55903531b22829f4fdbd 100644 --- a/gcc/tree-if-conv.cc +++ b/gcc/tree-if-conv.cc @@ -3795,6 +3795,13 @@ tree_if_conversion (class loop *loop, vec *preds) } if (need_to_ifcvt) { + /* Before we rewrite edges we'll record their original position in the + edge map such that we can map the edges between the ifcvt and the + non-ifcvt loop during peeling.
*/ + uintptr_t idx = 0; + for (edge exit : get_loop_exit_edges (loop)) + exit->aux = (void*)idx++; + /* Now all statements are if-convertible. Combine all the basic blocks into one huge basic block doing the if-conversion on-the-fly. */ diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc index e06717272aafc6d31cbdcb94840ac25de616da6d..77f8e668bcc8beca99ba4052e1b12e0d17300262 100644 --- a/gcc/tree-vect-loop-manip.cc +++ b/gcc/tree-vect-loop-manip.cc @@ -1470,6 +1470,18 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, scalar_loop = loop; scalar_exit = loop_exit; } + else if (scalar_loop == loop) + scalar_exit = loop_exit; + else + { + /* Loop has been version, match exits up using the aux index. */ + for (edge exit : get_loop_exit_edges (scalar_loop)) + if (exit->aux == loop_exit->aux) + { + scalar_exit = exit; + break; + } + } bbs = XNEWVEC (basic_block, scalar_loop->num_nodes + 1); pbbs = bbs + 1; @@ -1501,6 +1513,8 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, exit = loop_exit; basic_block new_preheader = new_bbs[0]; + /* Record the new loop exit information. new_loop doesn't have SCEV data and + so we must initialize the exit information. */ if (new_e) *new_e = new_exit; diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc index 6e60d84143626a8e1d801bb580f4dcebc73c7ba7..f1caa5f207d3b13da58c3a313b11d1ef98374349 100644 --- a/gcc/tree-vect-loop.cc +++ b/gcc/tree-vect-loop.cc @@ -851,79 +851,106 @@ vect_fixup_scalar_cycles_with_patterns (loop_vec_info loop_vinfo) in NUMBER_OF_ITERATIONSM1. Place the condition under which the niter information holds in ASSUMPTIONS. - Return the loop exit condition. */ + Return the loop exit conditions. */ -static gcond * -vect_get_loop_niters (class loop *loop, edge exit, tree *assumptions, +static vec +vect_get_loop_niters (class loop *loop, tree *assumptions, const_edge main_exit, tree *number_of_iterations, tree *number_of_iterationsm1) { + auto_vec exits = get_loop_exit_edges (loop); + vec conds; + conds.create (exits.length ()); class tree_niter_desc niter_desc; tree niter_assumptions, niter, may_be_zero; - gcond *cond = get_loop_exit_condition (loop); *assumptions = boolean_true_node; *number_of_iterationsm1 = chrec_dont_know; *number_of_iterations = chrec_dont_know; + DUMP_VECT_SCOPE ("get_loop_niters"); - if (!exit) - return cond; + if (exits.is_empty ()) + return conds; + + if (dump_enabled_p ()) + dump_printf_loc (MSG_NOTE, vect_location, "Loop has %d exits.\n", + exits.length ()); + + edge exit; + unsigned int i; + FOR_EACH_VEC_ELT (exits, i, exit) + { + gcond *cond = get_loop_exit_condition (exit); + if (cond) + conds.safe_push (cond); + + if (dump_enabled_p ()) + dump_printf_loc (MSG_NOTE, vect_location, "Analyzing exit %d...\n", i); - may_be_zero = NULL_TREE; - if (!number_of_iterations_exit_assumptions (loop, exit, &niter_desc, NULL) - || chrec_contains_undetermined (niter_desc.niter)) - return cond; + may_be_zero = NULL_TREE; + if (!number_of_iterations_exit_assumptions (loop, exit, &niter_desc, NULL) + || chrec_contains_undetermined (niter_desc.niter)) + continue; - niter_assumptions = niter_desc.assumptions; - may_be_zero = niter_desc.may_be_zero; - niter = niter_desc.niter; + niter_assumptions = niter_desc.assumptions; + may_be_zero = niter_desc.may_be_zero; + niter = niter_desc.niter; - if (may_be_zero && integer_zerop (may_be_zero)) - may_be_zero = NULL_TREE; + if (may_be_zero && integer_zerop (may_be_zero)) + may_be_zero = NULL_TREE; - if (may_be_zero) - 
{ - if (COMPARISON_CLASS_P (may_be_zero)) + if (may_be_zero) { - /* Try to combine may_be_zero with assumptions, this can simplify - computation of niter expression. */ - if (niter_assumptions && !integer_nonzerop (niter_assumptions)) - niter_assumptions = fold_build2 (TRUTH_AND_EXPR, boolean_type_node, - niter_assumptions, - fold_build1 (TRUTH_NOT_EXPR, - boolean_type_node, - may_be_zero)); + if (COMPARISON_CLASS_P (may_be_zero)) + { + /* Try to combine may_be_zero with assumptions, this can simplify + computation of niter expression. */ + if (niter_assumptions && !integer_nonzerop (niter_assumptions)) + niter_assumptions = fold_build2 (TRUTH_AND_EXPR, boolean_type_node, + niter_assumptions, + fold_build1 (TRUTH_NOT_EXPR, + boolean_type_node, + may_be_zero)); + else + niter = fold_build3 (COND_EXPR, TREE_TYPE (niter), may_be_zero, + build_int_cst (TREE_TYPE (niter), 0), + rewrite_to_non_trapping_overflow (niter)); + + may_be_zero = NULL_TREE; + } + else if (integer_nonzerop (may_be_zero) && exit == main_exit) + { + *number_of_iterationsm1 = build_int_cst (TREE_TYPE (niter), 0); + *number_of_iterations = build_int_cst (TREE_TYPE (niter), 1); + continue; + } else - niter = fold_build3 (COND_EXPR, TREE_TYPE (niter), may_be_zero, - build_int_cst (TREE_TYPE (niter), 0), - rewrite_to_non_trapping_overflow (niter)); + continue; + } - may_be_zero = NULL_TREE; - } - else if (integer_nonzerop (may_be_zero)) + /* Loop assumptions are based off the normal exit. */ + if (exit == main_exit) { - *number_of_iterationsm1 = build_int_cst (TREE_TYPE (niter), 0); - *number_of_iterations = build_int_cst (TREE_TYPE (niter), 1); - return cond; + *assumptions = niter_assumptions; + *number_of_iterationsm1 = niter; + + /* We want the number of loop header executions which is the number + of latch executions plus one. + ??? For UINT_MAX latch executions this number overflows to zero + for loops like do { n++; } while (n != 0); */ + if (niter && !chrec_contains_undetermined (niter)) + niter = fold_build2 (PLUS_EXPR, TREE_TYPE (niter), + unshare_expr (niter), + build_int_cst (TREE_TYPE (niter), 1)); + *number_of_iterations = niter; } - else - return cond; } - *assumptions = niter_assumptions; - *number_of_iterationsm1 = niter; - - /* We want the number of loop header executions which is the number - of latch executions plus one. - ??? For UINT_MAX latch executions this number overflows to zero - for loops like do { n++; } while (n != 0); */ - if (niter && !chrec_contains_undetermined (niter)) - niter = fold_build2 (PLUS_EXPR, TREE_TYPE (niter), unshare_expr (niter), - build_int_cst (TREE_TYPE (niter), 1)); - *number_of_iterations = niter; + if (dump_enabled_p ()) + dump_printf_loc (MSG_NOTE, vect_location, "All loop exits successfully analyzed.\n"); - return cond; + return conds; } /* Determine the main loop exit for the vectorizer. */ @@ -936,8 +963,25 @@ vec_init_loop_exit_info (class loop *loop) auto_vec exits = get_loop_exit_edges (loop); if (exits.length () == 1) return exits[0]; - else - return NULL; + + /* If we have multiple exits we only support counting IV at the moment. 
Analyze + all exits and return one */ + class tree_niter_desc niter_desc; + edge candidate = NULL; + for (edge exit : exits) + { + if (!get_loop_exit_condition (exit)) + continue; + + if (number_of_iterations_exit_assumptions (loop, exit, &niter_desc, NULL) + && !chrec_contains_undetermined (niter_desc.niter)) + { + if (!niter_desc.may_be_zero || !candidate) + candidate = exit; + } + } + + return candidate; } /* Function bb_in_loop_p @@ -1788,21 +1832,31 @@ vect_analyze_loop_form (class loop *loop, vect_loop_form_info *info) "not vectorized: latch block not empty.\n"); /* Make sure the exit is not abnormal. */ - edge e = single_exit (loop); - if (e->flags & EDGE_ABNORMAL) + if (exit_e->flags & EDGE_ABNORMAL) return opt_result::failure_at (vect_location, "not vectorized:" " abnormal loop exit edge.\n"); - info->loop_cond - = vect_get_loop_niters (loop, e, &info->assumptions, + info->conds + = vect_get_loop_niters (loop, &info->assumptions, exit_e, &info->number_of_iterations, &info->number_of_iterationsm1); - if (!info->loop_cond) + + if (info->conds.is_empty ()) return opt_result::failure_at (vect_location, "not vectorized: complicated exit condition.\n"); + /* Determine what the primary and alternate exit conds are. */ + info->alt_loop_conds.create (info->conds.length () - 1); + for (gcond *cond : info->conds) + { + if (exit_e->src != gimple_bb (cond)) + info->alt_loop_conds.quick_push (cond); + else + info->loop_cond = cond; + } + if (integer_zerop (info->assumptions) || !info->number_of_iterations || chrec_contains_undetermined (info->number_of_iterations)) @@ -1847,8 +1901,13 @@ vect_create_loop_vinfo (class loop *loop, vec_info_shared *shared, if (!integer_onep (info->assumptions) && !main_loop_info) LOOP_VINFO_NITERS_ASSUMPTIONS (loop_vinfo) = info->assumptions; - stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (info->loop_cond); - STMT_VINFO_TYPE (loop_cond_info) = loop_exit_ctrl_vec_info_type; + for (gcond *cond : info->conds) + { + stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (cond); + STMT_VINFO_TYPE (loop_cond_info) = loop_exit_ctrl_vec_info_type; + } + LOOP_VINFO_LOOP_CONDS (loop_vinfo).safe_splice (info->alt_loop_conds); + LOOP_VINFO_LOOP_IV_COND (loop_vinfo) = info->loop_cond; LOOP_VINFO_IV_EXIT (loop_vinfo) = info->loop_exit; @@ -3594,7 +3653,11 @@ vect_analyze_loop (class loop *loop, vec_info_shared *shared) && LOOP_VINFO_PEELING_FOR_NITER (first_loop_vinfo) && !loop->simduid); if (!vect_epilogues) - return first_loop_vinfo; + { + loop_form_info.conds.release (); + loop_form_info.alt_loop_conds.release (); + return first_loop_vinfo; + } /* Now analyze first_loop_vinfo for epilogue vectorization. */ poly_uint64 lowest_th = LOOP_VINFO_VERSIONING_THRESHOLD (first_loop_vinfo); @@ -3694,6 +3757,9 @@ vect_analyze_loop (class loop *loop, vec_info_shared *shared) (first_loop_vinfo->epilogue_vinfos[0]->vector_mode)); } + loop_form_info.conds.release (); + loop_form_info.alt_loop_conds.release (); + return first_loop_vinfo; } diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h index afa7a8e30891c782a0e5e3740ecc4377f5a31e54..55b6771b271d5072fa1327d595e1dddb112cfdf6 100644 --- a/gcc/tree-vectorizer.h +++ b/gcc/tree-vectorizer.h @@ -882,6 +882,12 @@ public: we need to peel off iterations at the end to form an epilogue loop. */ bool peeling_for_niter; + /* List of loop additional IV conditionals found in the loop. */ + auto_vec conds; + + /* Main loop IV cond. */ + gcond* loop_iv_cond; + /* True if there are no loop carried data dependencies in the loop. 
If loop->safelen <= 1, then this is always true, either the loop didn't have any loop carried data dependencies, or the loop is being @@ -984,6 +990,8 @@ public: #define LOOP_VINFO_REDUCTION_CHAINS(L) (L)->reduction_chains #define LOOP_VINFO_PEELING_FOR_GAPS(L) (L)->peeling_for_gaps #define LOOP_VINFO_PEELING_FOR_NITER(L) (L)->peeling_for_niter +#define LOOP_VINFO_LOOP_CONDS(L) (L)->conds +#define LOOP_VINFO_LOOP_IV_COND(L) (L)->loop_iv_cond #define LOOP_VINFO_NO_DATA_DEPENDENCIES(L) (L)->no_data_dependencies #define LOOP_VINFO_SCALAR_LOOP(L) (L)->scalar_loop #define LOOP_VINFO_SCALAR_LOOP_SCALING(L) (L)->scalar_loop_scaling @@ -2373,7 +2381,9 @@ struct vect_loop_form_info tree number_of_iterations; tree number_of_iterationsm1; tree assumptions; + vec conds; gcond *loop_cond; + vec alt_loop_conds; gcond *inner_loop_cond; edge loop_exit; };
From patchwork Mon Oct 2 07:42:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tamar Christina X-Patchwork-Id: 147205
Date: Mon, 2 Oct 2023 08:42:12 +0100
From: Tamar Christina
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, rguenther@suse.de, jlaw@ventanamicro.com
Subject: [PATCH 3/3]middle-end: maintain LCSSA throughout loop peeling
MIME-Version: 1.0
Hi All,

This final patch updates peeling to maintain LCSSA all the way through. It's significantly easier to maintain it during peeling while we still know where all new edges connect rather than touching it up later as is currently being done. This allows us to remove many of the helper functions that touch up the loops at various parts.
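(As an illustration only, not part of the patch: a minimal, hypothetical C example of the kind of loop this is about. A value defined inside a loop and used after it reaches the post-loop code through a loop-closed SSA PHI, and once the loop is peeled every new exit edge needs a matching PHI argument, which this patch now creates during peeling itself rather than fixing up afterwards.)

  /* Hypothetical example, not taken from the testsuite: 'last' is live after
     the loop, so on GIMPLE the block joining the early exit and the normal
     exit carries something like
       last_lc = PHI <last_7 (early exit), last_9 (normal exit)>
     which peeling has to recreate on the new exit edges.  */
  int
  last_before_key (int *a, int n, int key)
  {
    int last = -1;
    for (int i = 0; i < n; i++)
      {
        if (a[i] == key)
          break;            /* early exit */
        last = a[i];
      }
    return last;            /* use of 'last' outside the loop */
  }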
The only complication is loop distribution, where we should be able to use the same approach; however, depending on whether redirect_lc_phi_defs is true or not, ldist will either try to maintain a limited LCSSA form itself or remove all non-virtual PHIs. The problem is that if we maintain LCSSA then in some cases the blocks connecting the two loops get PHIs to keep the loop IV up to date. However there is no loop: the guard condition is rewritten as 0 != 0, so the "loop" always exits. Due to the PHI nodes the probabilities nevertheless end up completely wrong, treating the impossible exit as the likely edge. This causes incorrect warnings, and the presence of the PHIs prevents the blocks from being simplified. While it may be possible to make ldist work with LCSSA form, doing so seems more work than not. For that reason the peeling code now takes an additional parameter, used only by ldist, to not connect the two loops during peeling. This preserves the current behaviour of ldist until I can dive into the implementation more. Hopefully that's ok for now.

Bootstrapped and regtested on aarch64-none-linux-gnu and x86_64-linux-gnu with no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

* tree-loop-distribution.cc (copy_loop_before): Request no LCSSA.
* tree-vect-loop-manip.cc (adjust_phi_and_debug_stmts): Add additional asserts.
(slpeel_tree_duplicate_loop_to_edge_cfg): Keep LCSSA during peeling.
(find_guard_arg): Look value up through explicit edge and original defs.
(vect_do_peeling): Use it.
(slpeel_update_phi_nodes_for_guard2): Take explicit exit edge.
(slpeel_update_phi_nodes_for_lcssa, slpeel_update_phi_nodes_for_loops): Remove.
* tree-vect-loop.cc (vect_create_epilog_for_reduction): Initialize phi.
* tree-vectorizer.h (slpeel_tree_duplicate_loop_to_edge_cfg): Add optional param to turn off LCSSA mode.

--- inline copy of patch --
diff --git a/gcc/tree-loop-distribution.cc b/gcc/tree-loop-distribution.cc index 902edc49ab588152a5b845f2c8a42a7e2a1d6080..14fb884d3e91d79785867debaee4956a2d5b0bb1 100644 --- a/gcc/tree-loop-distribution.cc +++ b/gcc/tree-loop-distribution.cc @@ -950,7 +950,7 @@ copy_loop_before (class loop *loop, bool redirect_lc_phi_defs) initialize_original_copy_tables (); res = slpeel_tree_duplicate_loop_to_edge_cfg (loop, single_exit (loop), NULL, - NULL, preheader, NULL); + NULL, preheader, NULL, false); gcc_assert (res != NULL); /* When a not last partition is supposed to keep the LC PHIs computed diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc index 77f8e668bcc8beca99ba4052e1b12e0d17300262..0e8c0be5384aab2399ed93966e7bf4918f6c87a5 100644 --- a/gcc/tree-vect-loop-manip.cc +++ b/gcc/tree-vect-loop-manip.cc @@ -252,6 +252,9 @@ adjust_phi_and_debug_stmts (gimple *update_phi, edge e, tree new_def) { tree orig_def = PHI_ARG_DEF_FROM_EDGE (update_phi, e); + gcc_assert (TREE_CODE (orig_def) != SSA_NAME + || orig_def != new_def); + SET_PHI_ARG_DEF (update_phi, e->dest_idx, new_def); if (MAY_HAVE_DEBUG_BIND_STMTS) @@ -1445,12 +1448,19 @@ slpeel_duplicate_current_defs_from_edges (edge from, edge to) on E which is either the entry or exit of LOOP. If SCALAR_LOOP is non-NULL, assume LOOP and SCALAR_LOOP are equivalent and copy the basic blocks from SCALAR_LOOP instead of LOOP, but to either the - entry or exit of LOOP.
If FLOW_LOOPS then connect LOOP to SCALAR_LOOP as a + continuation. This is correct for cases where one loop continues from the + other like in the vectorizer, but not true for uses in e.g. loop distribution + where the loop is duplicated and then modified. + + If UPDATED_DOMS is not NULL it is update with the list of basic blocks whoms + dominators were updated during the peeling. */ class loop * slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, class loop *scalar_loop, - edge scalar_exit, edge e, edge *new_e) + edge scalar_exit, edge e, edge *new_e, + bool flow_loops) { class loop *new_loop; basic_block *new_bbs, *bbs, *pbbs; @@ -1481,6 +1491,8 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, scalar_exit = exit; break; } + + gcc_assert (scalar_exit); } bbs = XNEWVEC (basic_block, scalar_loop->num_nodes + 1); @@ -1513,6 +1525,8 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, exit = loop_exit; basic_block new_preheader = new_bbs[0]; + gcc_assert (new_exit); + /* Record the new loop exit information. new_loop doesn't have SCEV data and so we must initialize the exit information. */ if (new_e) @@ -1551,6 +1565,19 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, for (unsigned i = (at_exit ? 0 : 1); i < scalar_loop->num_nodes + 1; i++) rename_variables_in_bb (new_bbs[i], duplicate_outer_loop); + /* Rename the exit uses. */ + for (edge exit : get_loop_exit_edges (new_loop)) + for (auto gsi = gsi_start_phis (exit->dest); + !gsi_end_p (gsi); gsi_next (&gsi)) + { + tree orig_def = PHI_ARG_DEF_FROM_EDGE (gsi.phi (), exit); + rename_use_op (PHI_ARG_DEF_PTR_FROM_EDGE (gsi.phi (), exit)); + if (MAY_HAVE_DEBUG_BIND_STMTS) + adjust_debug_stmts (orig_def, PHI_RESULT (gsi.phi ()), exit->dest); + } + + /* This condition happens when the loop has been versioned. e.g. due to ifcvt + versioning the loop. */ if (scalar_loop != loop) { /* If we copied from SCALAR_LOOP rather than LOOP, SSA_NAMEs from @@ -1564,28 +1591,83 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, EDGE_SUCC (loop->latch, 0)); } + auto loop_exits = get_loop_exit_edges (loop); + auto_vec doms; + if (at_exit) /* Add the loop copy at exit. */ { - if (scalar_loop != loop) + if (scalar_loop != loop && new_exit->dest != exit_dest) { - gphi_iterator gsi; new_exit = redirect_edge_and_branch (new_exit, exit_dest); + flush_pending_stmts (new_exit); + } - for (gsi = gsi_start_phis (exit_dest); !gsi_end_p (gsi); - gsi_next (&gsi)) - { - gphi *phi = gsi.phi (); - tree orig_arg = PHI_ARG_DEF_FROM_EDGE (phi, e); - location_t orig_locus - = gimple_phi_arg_location_from_edge (phi, e); + auto_vec new_phis; + hash_map new_phi_args; + /* First create the empty phi nodes so that when we flush the + statements they can be filled in. However because there is no order + between the PHI nodes in the exits and the loop headers we need to + order them base on the order of the two headers. First record the new + phi nodes. */ + for (auto gsi_from = gsi_start_phis (scalar_exit->dest); + !gsi_end_p (gsi_from); gsi_next (&gsi_from)) + { + gimple *from_phi = gsi_stmt (gsi_from); + tree new_res = copy_ssa_name (gimple_phi_result (from_phi)); + gphi *res = create_phi_node (new_res, new_preheader); + new_phis.safe_push (res); + } - add_phi_arg (phi, orig_arg, new_exit, orig_locus); + /* Then redirect the edges and flush the changes. This writes out the new + SSA names. 
*/ + for (edge exit : loop_exits) + { + edge e = redirect_edge_and_branch (exit, new_preheader); + flush_pending_stmts (e); + } + + /* Record the new SSA names in the cache so that we can skip materializing + them again when we fill in the rest of the LCSSA variables. */ + for (auto phi : new_phis) + { + tree new_arg = gimple_phi_arg (phi, 0)->def; + new_phi_args.put (new_arg, gimple_phi_result (phi)); + } + + /* Copy the current loop LC PHI nodes between the original loop exit + block and the new loop header. This allows us to later split the + preheader block and still find the right LC nodes. */ + edge latch_new = single_succ_edge (new_preheader); + for (auto gsi_from = gsi_start_phis (loop->header), + gsi_to = gsi_start_phis (new_loop->header); + flow_loops && !gsi_end_p (gsi_from) && !gsi_end_p (gsi_to); + gsi_next (&gsi_from), gsi_next (&gsi_to)) + { + gimple *from_phi = gsi_stmt (gsi_from); + gimple *to_phi = gsi_stmt (gsi_to); + tree new_arg = PHI_ARG_DEF_FROM_EDGE (from_phi, + loop_latch_edge (loop)); + + /* Check if we've already created a new phi node during edge + redirection. If we have, only propagate the value downwards. */ + if (tree *res = new_phi_args.get (new_arg)) + { + adjust_phi_and_debug_stmts (to_phi, latch_new, *res); + continue; } + + tree new_res = copy_ssa_name (gimple_phi_result (from_phi)); + gphi *lcssa_phi = create_phi_node (new_res, e->dest); + + /* Main loop exit should use the final iter value. */ + add_phi_arg (lcssa_phi, new_arg, loop_exit, UNKNOWN_LOCATION); + + adjust_phi_and_debug_stmts (to_phi, latch_new, new_res); } - redirect_edge_and_branch_force (e, new_preheader); - flush_pending_stmts (e); + set_immediate_dominator (CDI_DOMINATORS, new_preheader, e->src); - if (was_imm_dom || duplicate_outer_loop) + + if ((was_imm_dom || duplicate_outer_loop)) set_immediate_dominator (CDI_DOMINATORS, exit_dest, new_exit->src); /* And remove the non-necessary forwarder again. Keep the other @@ -1598,6 +1680,22 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, } else /* Add the copy at entry. */ { + /* Copy the current loop LC PHI nodes between the original loop exit + block and the new loop header. This allows us to later split the + preheader block and still find the right LC nodes. */ + for (auto gsi_from = gsi_start_phis (new_loop->header), + gsi_to = gsi_start_phis (loop->header); + flow_loops && !gsi_end_p (gsi_from) && !gsi_end_p (gsi_to); + gsi_next (&gsi_from), gsi_next (&gsi_to)) + { + gimple *from_phi = gsi_stmt (gsi_from); + gimple *to_phi = gsi_stmt (gsi_to); + tree new_arg = PHI_ARG_DEF_FROM_EDGE (from_phi, + loop_latch_edge (new_loop)); + adjust_phi_and_debug_stmts (to_phi, loop_preheader_edge (loop), + new_arg); + } + if (scalar_loop != loop) { /* Remove the non-necessary forwarder of scalar_loop again. */ @@ -1627,29 +1725,6 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit, loop_preheader_edge (new_loop)->src); } - if (scalar_loop != loop) - { - /* Update new_loop->header PHIs, so that on the preheader - edge they are the ones from loop rather than scalar_loop. 
*/ - gphi_iterator gsi_orig, gsi_new; - edge orig_e = loop_preheader_edge (loop); - edge new_e = loop_preheader_edge (new_loop); - - for (gsi_orig = gsi_start_phis (loop->header), - gsi_new = gsi_start_phis (new_loop->header); - !gsi_end_p (gsi_orig) && !gsi_end_p (gsi_new); - gsi_next (&gsi_orig), gsi_next (&gsi_new)) - { - gphi *orig_phi = gsi_orig.phi (); - gphi *new_phi = gsi_new.phi (); - tree orig_arg = PHI_ARG_DEF_FROM_EDGE (orig_phi, orig_e); - location_t orig_locus - = gimple_phi_arg_location_from_edge (orig_phi, orig_e); - - add_phi_arg (new_phi, orig_arg, new_e, orig_locus); - } - } - free (new_bbs); free (bbs); @@ -2579,139 +2654,36 @@ vect_gen_vector_loop_niters_mult_vf (loop_vec_info loop_vinfo, /* LCSSA_PHI is a lcssa phi of EPILOG loop which is copied from LOOP, this function searches for the corresponding lcssa phi node in exit - bb of LOOP. If it is found, return the phi result; otherwise return - NULL. */ + bb of LOOP following the LCSSA_EDGE to the exit node. If it is found, + return the phi result; otherwise return NULL. */ static tree find_guard_arg (class loop *loop ATTRIBUTE_UNUSED, class loop *epilog ATTRIBUTE_UNUSED, - const_edge e, gphi *lcssa_phi) + const_edge e, gphi *lcssa_phi, int lcssa_edge = 0) { gphi_iterator gsi; - gcc_assert (single_pred_p (e->dest)); for (gsi = gsi_start_phis (e->dest); !gsi_end_p (gsi); gsi_next (&gsi)) { gphi *phi = gsi.phi (); - if (operand_equal_p (PHI_ARG_DEF (phi, 0), - PHI_ARG_DEF (lcssa_phi, 0), 0)) - return PHI_RESULT (phi); - } - return NULL_TREE; -} - -/* Function slpeel_tree_duplicate_loop_to_edge_cfg duplciates FIRST/SECOND - from SECOND/FIRST and puts it at the original loop's preheader/exit - edge, the two loops are arranged as below: - - preheader_a: - first_loop: - header_a: - i_1 = PHI; - ... - i_2 = i_1 + 1; - if (cond_a) - goto latch_a; - else - goto between_bb; - latch_a: - goto header_a; - - between_bb: - ;; i_x = PHI; ;; LCSSA phi node to be created for FIRST, - - second_loop: - header_b: - i_3 = PHI; ;; Use of i_0 to be replaced with i_x, - or with i_2 if no LCSSA phi is created - under condition of CREATE_LCSSA_FOR_IV_PHIS. - ... - i_4 = i_3 + 1; - if (cond_b) - goto latch_b; - else - goto exit_bb; - latch_b: - goto header_b; - - exit_bb: - - This function creates loop closed SSA for the first loop; update the - second loop's PHI nodes by replacing argument on incoming edge with the - result of newly created lcssa PHI nodes. IF CREATE_LCSSA_FOR_IV_PHIS - is false, Loop closed ssa phis will only be created for non-iv phis for - the first loop. - - This function assumes exit bb of the first loop is preheader bb of the - second loop, i.e, between_bb in the example code. With PHIs updated, - the second loop will execute rest iterations of the first. */ - -static void -slpeel_update_phi_nodes_for_loops (loop_vec_info loop_vinfo, - class loop *first, edge first_loop_e, - class loop *second, edge second_loop_e, - bool create_lcssa_for_iv_phis) -{ - gphi_iterator gsi_update, gsi_orig; - class loop *loop = LOOP_VINFO_LOOP (loop_vinfo); - - edge first_latch_e = EDGE_SUCC (first->latch, 0); - edge second_preheader_e = loop_preheader_edge (second); - basic_block between_bb = first_loop_e->dest; - - gcc_assert (between_bb == second_preheader_e->src); - gcc_assert (single_pred_p (between_bb) && single_succ_p (between_bb)); - /* Either the first loop or the second is the loop to be vectorized. 
*/ - gcc_assert (loop == first || loop == second); - - for (gsi_orig = gsi_start_phis (first->header), - gsi_update = gsi_start_phis (second->header); - !gsi_end_p (gsi_orig) && !gsi_end_p (gsi_update); - gsi_next (&gsi_orig), gsi_next (&gsi_update)) - { - gphi *orig_phi = gsi_orig.phi (); - gphi *update_phi = gsi_update.phi (); - - tree arg = PHI_ARG_DEF_FROM_EDGE (orig_phi, first_latch_e); - /* Generate lcssa PHI node for the first loop. */ - gphi *vect_phi = (loop == first) ? orig_phi : update_phi; - stmt_vec_info vect_phi_info = loop_vinfo->lookup_stmt (vect_phi); - if (create_lcssa_for_iv_phis || !iv_phi_p (vect_phi_info)) + /* Nested loops with multiple exits can have different no# phi node + arguments between the main loop and epilog as epilog falls to the + second loop. */ + if (gimple_phi_num_args (phi) > e->dest_idx) { - tree new_res = copy_ssa_name (PHI_RESULT (orig_phi)); - gphi *lcssa_phi = create_phi_node (new_res, between_bb); - add_phi_arg (lcssa_phi, arg, first_loop_e, UNKNOWN_LOCATION); - arg = new_res; - } - - /* Update PHI node in the second loop by replacing arg on the loop's - incoming edge. */ - adjust_phi_and_debug_stmts (update_phi, second_preheader_e, arg); - } - - /* For epilogue peeling we have to make sure to copy all LC PHIs - for correct vectorization of live stmts. */ - if (loop == first) - { - basic_block orig_exit = second_loop_e->dest; - for (gsi_orig = gsi_start_phis (orig_exit); - !gsi_end_p (gsi_orig); gsi_next (&gsi_orig)) - { - gphi *orig_phi = gsi_orig.phi (); - tree orig_arg = PHI_ARG_DEF (orig_phi, 0); - if (TREE_CODE (orig_arg) != SSA_NAME || virtual_operand_p (orig_arg)) - continue; - - const_edge exit_e = LOOP_VINFO_IV_EXIT (loop_vinfo); - /* Already created in the above loop. */ - if (find_guard_arg (first, second, exit_e, orig_phi)) + tree var = PHI_ARG_DEF (phi, e->dest_idx); + if (TREE_CODE (var) != SSA_NAME) continue; - - tree new_res = copy_ssa_name (orig_arg); - gphi *lcphi = create_phi_node (new_res, between_bb); - add_phi_arg (lcphi, orig_arg, first_loop_e, UNKNOWN_LOCATION); + tree def = get_current_def (var); + if (!def) + continue; + if (operand_equal_p (def, + PHI_ARG_DEF (lcssa_phi, lcssa_edge), 0)) + return PHI_RESULT (phi); } } + return NULL_TREE; } /* Function slpeel_add_loop_guard adds guard skipping from the beginning @@ -2796,11 +2768,11 @@ slpeel_update_phi_nodes_for_guard1 (class loop *skip_loop, } } -/* LOOP and EPILOG are two consecutive loops in CFG and EPILOG is copied - from LOOP. Function slpeel_add_loop_guard adds guard skipping from a - point between the two loops to the end of EPILOG. Edges GUARD_EDGE - and MERGE_EDGE are the two pred edges of merge_bb at the end of EPILOG. - The CFG looks like: +/* LOOP and EPILOG are two consecutive loops in CFG connected by LOOP_EXIT edge + and EPILOG is copied from LOOP. Function slpeel_add_loop_guard adds guard + skipping from a point between the two loops to the end of EPILOG. Edges + GUARD_EDGE and MERGE_EDGE are the two pred edges of merge_bb at the end of + EPILOG. 
The CFG looks like: loop: header_a: @@ -2851,6 +2823,7 @@ slpeel_update_phi_nodes_for_guard1 (class loop *skip_loop, static void slpeel_update_phi_nodes_for_guard2 (class loop *loop, class loop *epilog, + const_edge loop_exit, edge guard_edge, edge merge_edge) { gphi_iterator gsi; @@ -2859,13 +2832,11 @@ slpeel_update_phi_nodes_for_guard2 (class loop *loop, class loop *epilog, gcc_assert (single_succ_p (merge_bb)); edge e = single_succ_edge (merge_bb); basic_block exit_bb = e->dest; - gcc_assert (single_pred_p (exit_bb)); - gcc_assert (single_pred (exit_bb) == single_exit (epilog)->dest); for (gsi = gsi_start_phis (exit_bb); !gsi_end_p (gsi); gsi_next (&gsi)) { gphi *update_phi = gsi.phi (); - tree old_arg = PHI_ARG_DEF (update_phi, 0); + tree old_arg = PHI_ARG_DEF (update_phi, e->dest_idx); tree merge_arg = NULL_TREE; @@ -2877,8 +2848,8 @@ slpeel_update_phi_nodes_for_guard2 (class loop *loop, class loop *epilog, if (!merge_arg) merge_arg = old_arg; - tree guard_arg - = find_guard_arg (loop, epilog, single_exit (loop), update_phi); + tree guard_arg = find_guard_arg (loop, epilog, loop_exit, + update_phi, e->dest_idx); /* If the var is live after loop but not a reduction, we simply use the old arg. */ if (!guard_arg) @@ -2898,21 +2869,6 @@ slpeel_update_phi_nodes_for_guard2 (class loop *loop, class loop *epilog, } } -/* EPILOG loop is duplicated from the original loop for vectorizing, - the arg of its loop closed ssa PHI needs to be updated. */ - -static void -slpeel_update_phi_nodes_for_lcssa (class loop *epilog) -{ - gphi_iterator gsi; - basic_block exit_bb = single_exit (epilog)->dest; - - gcc_assert (single_pred_p (exit_bb)); - edge e = EDGE_PRED (exit_bb, 0); - for (gsi = gsi_start_phis (exit_bb); !gsi_end_p (gsi); gsi_next (&gsi)) - rename_use_op (PHI_ARG_DEF_PTR_FROM_EDGE (gsi.phi (), e)); -} - /* LOOP_VINFO is an epilogue loop whose corresponding main loop can be skipped. Return a value that equals: @@ -3255,8 +3211,7 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, e, &prolog_e); gcc_assert (prolog); prolog->force_vectorize = false; - slpeel_update_phi_nodes_for_loops (loop_vinfo, prolog, prolog_e, loop, - exit_e, true); + first_loop = prolog; reset_original_copy_tables (); @@ -3336,8 +3291,6 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, LOOP_VINFO_EPILOGUE_IV_EXIT (loop_vinfo) = new_epilog_e; gcc_assert (epilog); epilog->force_vectorize = false; - slpeel_update_phi_nodes_for_loops (loop_vinfo, loop, e, epilog, - new_epilog_e, false); bb_before_epilog = loop_preheader_edge (epilog)->src; /* Scalar version loop may be preferred. In this case, add guard @@ -3430,7 +3383,9 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, irred_flag); if (vect_epilogues) epilogue_vinfo->skip_this_loop_edge = guard_e; - slpeel_update_phi_nodes_for_guard2 (loop, epilog, guard_e, epilog_e); + edge main_iv = LOOP_VINFO_IV_EXIT (loop_vinfo); + slpeel_update_phi_nodes_for_guard2 (loop, epilog, main_iv, guard_e, + epilog_e); /* Only need to handle basic block before epilog loop if it's not the guard_bb, which is the case when skip_vector is true. 
*/ if (guard_bb != bb_before_epilog) @@ -3441,8 +3396,6 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1, } scale_loop_profile (epilog, prob_epilog, -1); } - else - slpeel_update_phi_nodes_for_lcssa (epilog); unsigned HOST_WIDE_INT bound; if (bound_scalar.is_constant (&bound)) diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc index f1caa5f207d3b13da58c3a313b11d1ef98374349..327cab0f736da7f1bd3e024d666df46ef9208107 100644 --- a/gcc/tree-vect-loop.cc +++ b/gcc/tree-vect-loop.cc @@ -5877,7 +5877,7 @@ vect_create_epilog_for_reduction (loop_vec_info loop_vinfo, basic_block exit_bb; tree scalar_dest; tree scalar_type; - gimple *new_phi = NULL, *phi; + gimple *new_phi = NULL, *phi = NULL; gimple_stmt_iterator exit_gsi; tree new_temp = NULL_TREE, new_name, new_scalar_dest; gimple *epilog_stmt = NULL; diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h index 55b6771b271d5072fa1327d595e1dddb112cfdf6..25ceb6600673d71fd6012443403997e921066483 100644 --- a/gcc/tree-vectorizer.h +++ b/gcc/tree-vectorizer.h @@ -2183,7 +2183,7 @@ extern bool slpeel_can_duplicate_loop_p (const class loop *, const_edge, const_edge); class loop *slpeel_tree_duplicate_loop_to_edge_cfg (class loop *, edge, class loop *, edge, - edge, edge *); + edge, edge *, bool = true); class loop *vect_loop_versioning (loop_vec_info, gimple *); extern class loop *vect_do_peeling (loop_vec_info, tree, tree, tree *, tree *, tree *, int, bool, bool,