From patchwork Mon Nov 6 07:37:55 2023
X-Patchwork-Submitter: Tamar Christina
X-Patchwork-Id: 161848
Date: Mon, 6 Nov 2023 07:37:55 +0000
From: Tamar Christina
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, rguenther@suse.de, jlaw@ventanamicro.com
Subject: [PATCH 3/21]middle-end: Implement code motion and dependency analysis for early breaks

Hi All,

When performing early break vectorization we need to be sure that the vector
operations are safe to perform.  A simple example is e.g.

 for (int i = 0; i < N; i++)
 {
   vect_b[i] = x + i;
   if (vect_a[i]*2 != x)
     break;
   vect_a[i] = x;
 }

where the store to vect_b is not allowed to be executed unconditionally, since
if we exit through the early break it wouldn't have been done for the full VF
iteration.

Effectively the code motion determines:
  - whether it is safe/possible to vectorize the function,
  - what updates to the VUSES should be performed if we do,
  - which statements need to be moved,
  - which statements can't be moved:
    * values that are live must be reachable through all exits,
    * values that aren't single use and are shared by the use/def chain
      of the cond,
  - the final insertion point of the instructions.
In the case of multiple early exit statements the insertion point should be
the one closest to the loop latch itself.  After motion the loop above becomes:

 for (int i = 0; i < N; i++)
 {
   ... y = x + i;
   if (vect_a[i]*2 != x)
     break;
   vect_b[i] = y;
   vect_a[i] = x;
 }

The operation is split into two phases: during data reference analysis we
determine the validity of the operation and generate a worklist of actions to
perform if we vectorize.  After peeling and just before statement
transformation we replay this worklist, which moves the statements and updates
the bookkeeping only in the main loop that is to be vectorized.  This includes
updating of USES in exit blocks.  At the moment we don't support this for
unmasked vectorized epilogues since the additional vectorized epilogue's stmt
UIDs are not found.

Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

	* tree-vect-data-refs.cc (validate_early_exit_stmts): New.
	(vect_analyze_early_break_dependences): New.
	(vect_analyze_data_ref_dependences): Use them.
	* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialize
	early_breaks.
	(move_early_exit_stmts): New.
	(vect_transform_loop): Use it.
	* tree-vect-stmts.cc (vect_is_simple_use): Use vect_early_exit_def.
	* tree-vectorizer.h (enum vect_def_type): Add vect_early_exit_def.
	(class _loop_vec_info): Add early_breaks, early_break_conflict,
	early_break_vuses.
	(LOOP_VINFO_EARLY_BREAKS): New.
	(LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS): New.
	(LOOP_VINFO_EARLY_BRK_DEST_BB): New.
	(LOOP_VINFO_EARLY_BRK_VUSES): New.
--- inline copy of patch --
diff --git a/gcc/tree-vect-data-refs.cc b/gcc/tree-vect-data-refs.cc
index d5c9c4a11c2e5d8fd287f412bfa86d081c2f8325..0fc4f325980be0474f628c32b9ce7be77f3e1d60 100644
--- a/gcc/tree-vect-data-refs.cc
+++ b/gcc/tree-vect-data-refs.cc
@@ -613,6 +613,332 @@ vect_analyze_data_ref_dependence (struct data_dependence_relation *ddr,
   return opt_result::success ();
 }
 
+/* This function tries to validate whether an early break vectorization
+   is possible for the current instruction sequence.  Returns True if
+   possible, otherwise False.
+
+   Requirements:
+     - Any memory access must be to a fixed size buffer.
+     - There must not be any loads and stores to the same object.
+     - Multiple loads are allowed as long as they don't alias.
+
+   NOTE:
+     This implementation is very conservative.  Any overlapping loads/stores
+     that take place before the early break statement get rejected aside from
+     WAR dependencies.
+
+     i.e.:
+
+	a[i] = 8
+	c = a[i]
+	if (b[i])
+	  ...
+
+     is not allowed, but
+
+	c = a[i]
+	a[i] = 8
+	if (b[i])
+	  ...
+
+     is, which is the common case.
+
+   Arguments:
+     - LOOP_VINFO: loop information for the current loop.
+     - CHAIN: Currently detected sequence of instructions that need to be
+       moved if we are to vectorize this early break.
+     - FIXED: Sequences of SSA_NAMEs that must not be moved, they are
+       reachable from one or more cond conditions.  If this set overlaps with
+       CHAIN then FIXED takes precedence.  This deals with non-single use
+       cases.
+     - LOADS: List of all loads found during traversal.
+     - BASES: List of all load data references found during traversal.
+     - GSTMT: Current position to inspect for validity.  The sequence
+       will be moved upwards from this point.
+     - REACHING_VUSE: The dominating VUSE found so far.
+   */
+
+static bool
+validate_early_exit_stmts (loop_vec_info loop_vinfo, hash_set<tree> *chain,
+			   hash_set<tree> *fixed, vec<tree> *loads,
+			   vec<data_reference *> *bases, tree *reaching_vuse,
+			   gimple_stmt_iterator *gstmt)
+{
+  if (gsi_end_p (*gstmt))
+    return true;
+
+  gimple *stmt = gsi_stmt (*gstmt);
+  /* ?? Do I need to move debug statements? not quite sure..  */
+  if (gimple_has_ops (stmt)
+      && !is_gimple_debug (stmt))
+    {
+      tree dest = NULL_TREE;
+      /* Try to find the SSA_NAME being defined.  For statements with an LHS
+	 use the LHS, if not, assume that the first argument of a call is the
+	 value being defined.  e.g. MASKED_LOAD etc.  */
+      if (gimple_has_lhs (stmt))
+	dest = gimple_get_lhs (stmt);
+      else if (const gcall *call = dyn_cast <gcall *> (stmt))
+	dest = gimple_arg (call, 0);
+      else if (const gcond *cond = dyn_cast <gcond *> (stmt))
+	{
+	  /* Operands of conds are ones we can't move.  */
+	  fixed->add (gimple_cond_lhs (cond));
+	  fixed->add (gimple_cond_rhs (cond));
+	}
+
+      bool move = false;
+
+      stmt_vec_info stmt_vinfo = loop_vinfo->lookup_stmt (stmt);
+      if (!stmt_vinfo)
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "early breaks not supported. Unknown"
+			     " statement: %G", stmt);
+	  return false;
+	}
+
+      auto dr_ref = STMT_VINFO_DATA_REF (stmt_vinfo);
+      if (dr_ref)
+	{
+	  /* We currently only support statically allocated objects due to
+	     not having first-faulting loads support or peeling for alignment
+	     support.  Compute the size of the referenced object (it could be
+	     dynamically allocated).
+	     */
+	  tree obj = DR_BASE_ADDRESS (dr_ref);
+	  if (!obj || TREE_CODE (obj) != ADDR_EXPR)
+	    {
+	      if (dump_enabled_p ())
+		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+				 "early breaks only supported on statically"
+				 " allocated objects.\n");
+	      return false;
+	    }
+
+	  tree refop = TREE_OPERAND (obj, 0);
+	  tree refbase = get_base_address (refop);
+	  if (!refbase || !DECL_P (refbase) || !DECL_SIZE (refbase)
+	      || TREE_CODE (DECL_SIZE (refbase)) != INTEGER_CST)
+	    {
+	      if (dump_enabled_p ())
+		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+				 "early breaks only supported on statically"
+				 " allocated objects.\n");
+	      return false;
+	    }
+
+	  if (DR_IS_READ (dr_ref))
+	    {
+	      loads->safe_push (dest);
+	      bases->safe_push (dr_ref);
+	    }
+	  else if (DR_IS_WRITE (dr_ref))
+	    {
+	      for (auto dr : *bases)
+		if (same_data_refs_base_objects (dr, dr_ref))
+		  {
+		    if (dump_enabled_p ())
+		      dump_printf_loc (MSG_MISSED_OPTIMIZATION,
+				       vect_location,
+				       "early breaks only supported,"
+				       " overlapping loads and stores found"
+				       " before the break statement.\n");
+		    return false;
+		  }
+	      /* Any write starts a new chain.  */
+	      move = true;
+	    }
+	}
+
+      /* If a statement is live and escapes the loop through usage in the loop
+	 epilogue then we can't move it since we need to maintain its
+	 reachability through all exits.  */
+      bool skip = false;
+      if (STMT_VINFO_LIVE_P (stmt_vinfo)
+	  && !(dr_ref && DR_IS_WRITE (dr_ref)))
+	{
+	  imm_use_iterator imm_iter;
+	  use_operand_p use_p;
+	  FOR_EACH_IMM_USE_FAST (use_p, imm_iter, dest)
+	    {
+	      basic_block bb = gimple_bb (USE_STMT (use_p));
+	      skip = bb == LOOP_VINFO_IV_EXIT (loop_vinfo)->dest;
+	      if (skip)
+		break;
+	    }
+	}
+
+      /* If we found the defining statement of something that's part of the
+	 chain then expand the chain with the new SSA_VARs being used.
+	 */
+      if (!skip && (chain->contains (dest) || move))
+	{
+	  move = true;
+	  for (unsigned x = 0; x < gimple_num_args (stmt); x++)
+	    {
+	      tree var = gimple_arg (stmt, x);
+	      if (TREE_CODE (var) == SSA_NAME)
+		{
+		  if (fixed->contains (dest))
+		    {
+		      move = false;
+		      fixed->add (var);
+		    }
+		  else
+		    chain->add (var);
+		}
+	      else
+		{
+		  use_operand_p use_p;
+		  ssa_op_iter iter;
+		  FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE)
+		    {
+		      tree op = USE_FROM_PTR (use_p);
+		      gcc_assert (TREE_CODE (op) == SSA_NAME);
+		      if (fixed->contains (dest))
+			{
+			  move = false;
+			  fixed->add (op);
+			}
+		      else
+			chain->add (op);
+		    }
+		}
+	    }
+
+	  if (dump_enabled_p ())
+	    {
+	      if (move)
+		dump_printf_loc (MSG_NOTE, vect_location,
+				 "found chain %G", stmt);
+	      else
+		dump_printf_loc (MSG_NOTE, vect_location,
+				 "ignored chain %G, not single use", stmt);
+	    }
+	}
+
+      if (move)
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_NOTE, vect_location,
+			     "==> recording stmt %G", stmt);
+
+	  for (tree ref : *loads)
+	    if (stmt_may_clobber_ref_p (stmt, ref, true))
+	      {
+		if (dump_enabled_p ())
+		  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+				   "early breaks not supported as memory used"
+				   " may alias.\n");
+		return false;
+	      }
+
+	  /* If we've moved a VDEF, extract the defining MEM and update
+	     usages of it.  */
+	  tree vdef;
+	  if ((vdef = gimple_vdef (stmt)))
+	    {
+	      /* This statement is to be moved.  */
+	      LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS (loop_vinfo).safe_push (stmt);
+	      *reaching_vuse = gimple_vuse (stmt);
+	    }
+	}
+    }
+
+  gsi_prev (gstmt);
+
+  if (!validate_early_exit_stmts (loop_vinfo, chain, fixed, loads, bases,
+				  reaching_vuse, gstmt))
+    return false;
+
+  if (gimple_vuse (stmt) && !gimple_vdef (stmt))
+    {
+      LOOP_VINFO_EARLY_BRK_VUSES (loop_vinfo).safe_push (stmt);
+      if (dump_enabled_p ())
+	dump_printf_loc (MSG_NOTE, vect_location,
+			 "marked statement for vUSE update: %G", stmt);
+    }
+
+  return true;
+}
+
+/* Function vect_analyze_early_break_dependences.
+
+   Examine all the data references in the loop and make sure that if we have
+   multiple exits that we are able to safely move stores such that they
+   become safe for vectorization.  The function also calculates the place
+   where to move the instructions to and computes what the new vUSE chain
+   should be.
+
+   This works in tandem with the CFG that will be produced by
+   slpeel_tree_duplicate_loop_to_edge_cfg later on.  */
+
+static opt_result
+vect_analyze_early_break_dependences (loop_vec_info loop_vinfo)
+{
+  DUMP_VECT_SCOPE ("vect_analyze_early_break_dependences");
+
+  hash_set<tree> chain, fixed;
+  auto_vec<tree> loads;
+  auto_vec<data_reference *> bases;
+  basic_block dest_bb = NULL;
+  tree vuse = NULL;
+
+  if (dump_enabled_p ())
+    dump_printf_loc (MSG_NOTE, vect_location,
+		     "loop contains multiple exits, analyzing"
+		     " statement dependencies.\n");
+
+  for (gcond *c : LOOP_VINFO_LOOP_CONDS (loop_vinfo))
+    {
+      stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (c);
+      if (STMT_VINFO_TYPE (loop_cond_info) != loop_exit_ctrl_vec_info_type)
+	continue;
+
+      gimple *stmt = STMT_VINFO_STMT (loop_cond_info);
+      gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+
+      /* Initialize the vuse chain with the one at the early break.  */
+      if (!vuse)
+	vuse = gimple_vuse (c);
+
+      if (!validate_early_exit_stmts (loop_vinfo, &chain, &fixed, &loads,
+				      &bases, &vuse, &gsi))
+	return opt_result::failure_at (stmt,
+				       "can't safely apply code motion to "
+				       "dependencies of %G to vectorize "
+				       "the early exit.\n", stmt);
+
+      /* Save the destination as we go, BBs are visited in order and the last
+	 one is where statements should be moved to.  */
+      if (!dest_bb)
+	dest_bb = gimple_bb (c);
+      else
+	{
+	  basic_block curr_bb = gimple_bb (c);
+	  if (dominated_by_p (CDI_DOMINATORS, curr_bb, dest_bb))
+	    dest_bb = curr_bb;
+	}
+    }
+
+  dest_bb = FALLTHRU_EDGE (dest_bb)->dest;
+  gcc_assert (dest_bb);
+  LOOP_VINFO_EARLY_BRK_DEST_BB (loop_vinfo) = dest_bb;
+
+  /* TODO: Remove?  It's a useful debug statement but may be too much.
+   */
+  for (auto g : LOOP_VINFO_EARLY_BRK_VUSES (loop_vinfo))
+    {
+      if (dump_enabled_p ())
+	dump_printf_loc (MSG_NOTE, vect_location,
+			 "updated use: %T, mem_ref: %G",
+			 vuse, g);
+    }
+
+  if (dump_enabled_p ())
+    dump_printf_loc (MSG_NOTE, vect_location,
+		     "recorded statements to be moved to BB %d\n",
+		     LOOP_VINFO_EARLY_BRK_DEST_BB (loop_vinfo)->index);
+
+  return opt_result::success ();
+}
+
 /* Function vect_analyze_data_ref_dependences.
 
    Examine all the data references in the loop, and make sure there do not
@@ -657,6 +983,11 @@ vect_analyze_data_ref_dependences (loop_vec_info loop_vinfo,
       return res;
     }
 
+  /* If we have early break statements in the loop, check to see if they
+     are of a form we can vectorize.  */
+  if (LOOP_VINFO_EARLY_BREAKS (loop_vinfo))
+    return vect_analyze_early_break_dependences (loop_vinfo);
+
   return opt_result::success ();
 }
 
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 40f167d279589a5b97f618720cfbc0d41b7f2342..c123398aad207082384a2079c5234033c3d825ea 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -1040,6 +1040,7 @@ _loop_vec_info::_loop_vec_info (class loop *loop_in, vec_info_shared *shared)
     partial_load_store_bias (0),
     peeling_for_gaps (false),
     peeling_for_niter (false),
+    early_breaks (false),
     no_data_dependencies (false),
     has_mask_store (false),
     scalar_loop_scaling (profile_probability::uninitialized ()),
@@ -11392,6 +11393,55 @@ update_epilogue_loop_vinfo (class loop *epilogue, tree advance)
   epilogue_vinfo->shared->save_datarefs ();
 }
 
+/* When vectorizing early break statements, instructions that happen before
+   the early break in the current BB need to be moved to after the early
+   break.  This function deals with that and assumes that any validity
+   checks have already been performed.
+
+   While moving the instructions, if it encounters a VUSE or VDEF it then
+   corrects the VUSEs as it moves the statements along.  GDEST is the
+   location in which to insert the new statements.
+   */
+
+static void
+move_early_exit_stmts (loop_vec_info loop_vinfo)
+{
+  if (LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS (loop_vinfo).is_empty ())
+    return;
+
+  /* Move all stmts that need moving.  */
+  basic_block dest_bb = LOOP_VINFO_EARLY_BRK_DEST_BB (loop_vinfo);
+  gimple_stmt_iterator dest_gsi = gsi_start_bb (dest_bb);
+
+  for (gimple *stmt : LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS (loop_vinfo))
+    {
+      /* Check to see if the statement is still required for vect or has
+	 been elided.  */
+      auto stmt_info = loop_vinfo->lookup_stmt (stmt);
+      if (!stmt_info)
+	continue;
+
+      if (dump_enabled_p ())
+	dump_printf_loc (MSG_NOTE, vect_location, "moving stmt %G", stmt);
+
+      gimple_stmt_iterator stmt_gsi = gsi_for_stmt (stmt);
+      gsi_move_before (&stmt_gsi, &dest_gsi);
+      gsi_prev (&dest_gsi);
+      update_stmt (stmt);
+    }
+
+  /* Update all the stmts with their new reaching VUSES.  */
+  tree vuse
+    = gimple_vuse (LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS (loop_vinfo).last ());
+  for (auto p : LOOP_VINFO_EARLY_BRK_VUSES (loop_vinfo))
+    {
+      if (dump_enabled_p ())
+	dump_printf_loc (MSG_NOTE, vect_location,
+			 "updating vuse to %T for stmt %G", vuse, p);
+      unlink_stmt_vdef (p);
+      gimple_set_vuse (p, vuse);
+      update_stmt (p);
+    }
+}
+
 /* Function vect_transform_loop.
 
    The analysis phase has determined that the loop is vectorizable.
@@ -11541,6 +11591,11 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call)
       vect_schedule_slp (loop_vinfo, LOOP_VINFO_SLP_INSTANCES (loop_vinfo));
     }
 
+  /* Handle any code motion that we need to for early-break vectorization
+     after we've done peeling but just before we start vectorizing.  */
+  if (LOOP_VINFO_EARLY_BREAKS (loop_vinfo))
+    move_early_exit_stmts (loop_vinfo);
+
   /* FORNOW: the vectorizer supports only loops which body consist
      of one basic block (header + empty latch).
     When the vectorizer will support more involved loop forms, the order
     by which the BBs are
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 99ba75e98c0d185edd78c7b8b9947618d18576cc..42cebb92789247434a91cb8e74c0557e75d1ea2c 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -13511,6 +13511,9 @@ vect_is_simple_use (tree operand, vec_info *vinfo, enum vect_def_type *dt,
 	case vect_first_order_recurrence:
 	  dump_printf (MSG_NOTE, "first order recurrence\n");
 	  break;
+	case vect_early_exit_def:
+	  dump_printf (MSG_NOTE, "early exit\n");
+	  break;
 	case vect_unknown_def_type:
 	  dump_printf (MSG_NOTE, "unknown\n");
 	  break;
diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h
index a4043e4a6568a9e8cfaf9298fe940289e165f9e2..1418913d2c308b0cf78352e29dc9958746fb9c94 100644
--- a/gcc/tree-vectorizer.h
+++ b/gcc/tree-vectorizer.h
@@ -66,6 +66,7 @@ enum vect_def_type {
   vect_double_reduction_def,
   vect_nested_cycle,
   vect_first_order_recurrence,
+  vect_early_exit_def,
   vect_unknown_def_type
 };
 
@@ -888,6 +889,10 @@ public:
      we need to peel off iterations at the end to form an epilogue loop.  */
   bool peeling_for_niter;
 
+  /* When the loop has early breaks that we can vectorize we need to peel
+     the loop for the break finding loop.  */
+  bool early_breaks;
+
   /* List of loop additional IV conditionals found in the loop.  */
   auto_vec<gcond *> conds;
 
@@ -942,6 +947,20 @@ public:
   /* The controlling loop IV for the scalar loop being vectorized.  This IV
      controls the natural exits of the loop.  */
   edge scalar_loop_iv_exit;
+
+  /* Used to store the list of statements needing to be moved if doing early
+     break vectorization as they would violate the scalar loop semantics if
+     vectorized in their current location.  These are stored in the order in
+     which they need to be moved.  */
+  auto_vec<gimple *> early_break_conflict;
+
+  /* The final basic block where to move statements to.  In the case of
+     multiple exits this could be pretty far away.
*/ + basic_block early_break_dest_bb; + + /* Statements whose VUSES need updating if early break vectorization is to + happen. */ + auto_vec early_break_vuses; } *loop_vec_info; /* Access Functions. */ @@ -996,6 +1015,10 @@ public: #define LOOP_VINFO_REDUCTION_CHAINS(L) (L)->reduction_chains #define LOOP_VINFO_PEELING_FOR_GAPS(L) (L)->peeling_for_gaps #define LOOP_VINFO_PEELING_FOR_NITER(L) (L)->peeling_for_niter +#define LOOP_VINFO_EARLY_BREAKS(L) (L)->early_breaks +#define LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS(L) (L)->early_break_conflict +#define LOOP_VINFO_EARLY_BRK_DEST_BB(L) (L)->early_break_dest_bb +#define LOOP_VINFO_EARLY_BRK_VUSES(L) (L)->early_break_vuses #define LOOP_VINFO_LOOP_CONDS(L) (L)->conds #define LOOP_VINFO_LOOP_IV_COND(L) (L)->loop_iv_cond #define LOOP_VINFO_NO_DATA_DEPENDENCIES(L) (L)->no_data_dependencies --- a/gcc/tree-vect-data-refs.cc +++ b/gcc/tree-vect-data-refs.cc @@ -613,6 +613,332 @@ vect_analyze_data_ref_dependence (struct data_dependence_relation *ddr, return opt_result::success (); } +/* This function tries to validate whether an early break vectorization + is possible for the current instruction sequence. Returns True i + possible, otherwise False. + + Requirements: + - Any memory access must be to a fixed size buffer. + - There must not be any loads and stores to the same object. + - Multiple loads are allowed as long as they don't alias. + + NOTE: + This implemementation is very conservative. Any overlappig loads/stores + that take place before the early break statement gets rejected aside from + WAR dependencies. + + i.e.: + + a[i] = 8 + c = a[i] + if (b[i]) + ... + + is not allowed, but + + c = a[i] + a[i] = 8 + if (b[i]) + ... + + is which is the common case. + + Arguments: + - LOOP_VINFO: loop information for the current loop. + - CHAIN: Currently detected sequence of instructions that need to be moved + if we are to vectorize this early break. 
+     - FIXED: Sequences of SSA_NAMEs that must not be moved; they are
+	      reachable from one or more cond conditions.  If this set
+	      overlaps with CHAIN then FIXED takes precedence.  This deals
+	      with non-single use cases.
+     - LOADS: List of all loads found during traversal.
+     - BASES: List of all load data references found during traversal.
+     - GSTMT: Current position to inspect for validity.  The sequence
+	      will be moved upwards from this point.
+     - REACHING_VUSE: The dominating VUSE found so far.  */
+
+static bool
+validate_early_exit_stmts (loop_vec_info loop_vinfo, hash_set<tree> *chain,
+			   hash_set<tree> *fixed, vec<tree> *loads,
+			   vec<data_reference *> *bases, tree *reaching_vuse,
+			   gimple_stmt_iterator *gstmt)
+{
+  if (gsi_end_p (*gstmt))
+    return true;
+
+  gimple *stmt = gsi_stmt (*gstmt);
+  /* ??? Do I need to move debug statements?  Not quite sure.  */
+  if (gimple_has_ops (stmt)
+      && !is_gimple_debug (stmt))
+    {
+      tree dest = NULL_TREE;
+      /* Try to find the SSA_NAME being defined.  For statements with an LHS
+	 use the LHS; if not, assume that the first argument of a call is the
+	 value being defined, e.g. MASKED_LOAD etc.  */
+      if (gimple_has_lhs (stmt))
+	dest = gimple_get_lhs (stmt);
+      else if (const gcall *call = dyn_cast <gcall *> (stmt))
+	dest = gimple_arg (call, 0);
+      else if (const gcond *cond = dyn_cast <gcond *> (stmt))
+	{
+	  /* Operands of conds are ones we can't move.  */
+	  fixed->add (gimple_cond_lhs (cond));
+	  fixed->add (gimple_cond_rhs (cond));
+	}
+
+      bool move = false;
+
+      stmt_vec_info stmt_vinfo = loop_vinfo->lookup_stmt (stmt);
+      if (!stmt_vinfo)
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "early breaks not supported.  Unknown"
+			     " statement: %G", stmt);
+	  return false;
+	}
+
+      auto dr_ref = STMT_VINFO_DATA_REF (stmt_vinfo);
+      if (dr_ref)
+	{
+	  /* We currently only support statically allocated objects due to
+	     not having first-faulting loads support or peeling for alignment
+	     support.
+	     Compute the size of the referenced object (it could be
+	     dynamically allocated).  */
+	  tree obj = DR_BASE_ADDRESS (dr_ref);
+	  if (!obj || TREE_CODE (obj) != ADDR_EXPR)
+	    {
+	      if (dump_enabled_p ())
+		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+				 "early breaks only supported on statically"
+				 " allocated objects.\n");
+	      return false;
+	    }
+
+	  tree refop = TREE_OPERAND (obj, 0);
+	  tree refbase = get_base_address (refop);
+	  if (!refbase || !DECL_P (refbase) || !DECL_SIZE (refbase)
+	      || TREE_CODE (DECL_SIZE (refbase)) != INTEGER_CST)
+	    {
+	      if (dump_enabled_p ())
+		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+				 "early breaks only supported on statically"
+				 " allocated objects.\n");
+	      return false;
+	    }
+
+	  if (DR_IS_READ (dr_ref))
+	    {
+	      loads->safe_push (dest);
+	      bases->safe_push (dr_ref);
+	    }
+	  else if (DR_IS_WRITE (dr_ref))
+	    {
+	      for (auto dr : *bases)
+		if (same_data_refs_base_objects (dr, dr_ref))
+		  {
+		    if (dump_enabled_p ())
+		      dump_printf_loc (MSG_MISSED_OPTIMIZATION,
+				       vect_location,
+				       "early breaks only supported,"
+				       " overlapping loads and stores found"
+				       " before the break statement.\n");
+		    return false;
+		  }
+	      /* Any write starts a new chain.  */
+	      move = true;
+	    }
+	}
+
+      /* If a statement is live and escapes the loop through usage in the loop
+	 epilogue then we can't move it since we need to maintain its
+	 reachability through all exits.  */
+      bool skip = false;
+      if (STMT_VINFO_LIVE_P (stmt_vinfo)
+	  && !(dr_ref && DR_IS_WRITE (dr_ref)))
+	{
+	  imm_use_iterator imm_iter;
+	  use_operand_p use_p;
+	  FOR_EACH_IMM_USE_FAST (use_p, imm_iter, dest)
+	    {
+	      basic_block bb = gimple_bb (USE_STMT (use_p));
+	      skip = bb == LOOP_VINFO_IV_EXIT (loop_vinfo)->dest;
+	      if (skip)
+		break;
+	    }
+	}
+
+      /* If we found the defining statement of something that's part of the
+	 chain then expand the chain with the new SSA_VARs being used.
+	 */
+      if (!skip && (chain->contains (dest) || move))
+	{
+	  move = true;
+	  for (unsigned x = 0; x < gimple_num_args (stmt); x++)
+	    {
+	      tree var = gimple_arg (stmt, x);
+	      if (TREE_CODE (var) == SSA_NAME)
+		{
+		  if (fixed->contains (dest))
+		    {
+		      move = false;
+		      fixed->add (var);
+		    }
+		  else
+		    chain->add (var);
+		}
+	      else
+		{
+		  use_operand_p use_p;
+		  ssa_op_iter iter;
+		  FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE)
+		    {
+		      tree op = USE_FROM_PTR (use_p);
+		      gcc_assert (TREE_CODE (op) == SSA_NAME);
+		      if (fixed->contains (dest))
+			{
+			  move = false;
+			  fixed->add (op);
+			}
+		      else
+			chain->add (op);
+		    }
+		}
+	    }
+
+	  if (dump_enabled_p ())
+	    {
+	      if (move)
+		dump_printf_loc (MSG_NOTE, vect_location,
+				 "found chain %G", stmt);
+	      else
+		dump_printf_loc (MSG_NOTE, vect_location,
+				 "ignored chain %G, not single use", stmt);
+	    }
+	}
+
+      if (move)
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_NOTE, vect_location,
+			     "==> recording stmt %G", stmt);
+
+	  for (tree ref : *loads)
+	    if (stmt_may_clobber_ref_p (stmt, ref, true))
+	      {
+		if (dump_enabled_p ())
+		  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+				   "early breaks not supported as memory used"
+				   " may alias.\n");
+		return false;
+	      }
+
+	  /* If we've moved a VDEF, extract the defining MEM and update
+	     usages of it.  */
+	  tree vdef;
+	  if ((vdef = gimple_vdef (stmt)))
+	    {
+	      /* This statement is to be moved.  */
+	      LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS (loop_vinfo).safe_push (stmt);
+	      *reaching_vuse = gimple_vuse (stmt);
+	    }
+	}
+    }
+
+  gsi_prev (gstmt);
+
+  if (!validate_early_exit_stmts (loop_vinfo, chain, fixed, loads, bases,
+				  reaching_vuse, gstmt))
+    return false;
+
+  if (gimple_vuse (stmt) && !gimple_vdef (stmt))
+    {
+      LOOP_VINFO_EARLY_BRK_VUSES (loop_vinfo).safe_push (stmt);
+      if (dump_enabled_p ())
+	dump_printf_loc (MSG_NOTE, vect_location,
+			 "marked statement for vUSE update: %G", stmt);
+    }
+
+  return true;
+}
+
+/* Function vect_analyze_early_break_dependences.
+
+   Examine all the data references in the loop and make sure that if we have
+   multiple exits that we are able to safely move stores such that they become
+   safe for vectorization.  The function also calculates the place where to
+   move the instructions to and computes what the new vUSE chain should be.
+
+   This works in tandem with the CFG that will be produced by
+   slpeel_tree_duplicate_loop_to_edge_cfg later on.  */
+
+static opt_result
+vect_analyze_early_break_dependences (loop_vec_info loop_vinfo)
+{
+  DUMP_VECT_SCOPE ("vect_analyze_early_break_dependences");
+
+  hash_set<tree> chain, fixed;
+  auto_vec<tree> loads;
+  auto_vec<data_reference *> bases;
+  basic_block dest_bb = NULL;
+  tree vuse = NULL;
+
+  if (dump_enabled_p ())
+    dump_printf_loc (MSG_NOTE, vect_location,
+		     "loop contains multiple exits, analyzing"
+		     " statement dependencies.\n");
+
+  for (gcond *c : LOOP_VINFO_LOOP_CONDS (loop_vinfo))
+    {
+      stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (c);
+      if (STMT_VINFO_TYPE (loop_cond_info) != loop_exit_ctrl_vec_info_type)
+	continue;
+
+      gimple *stmt = STMT_VINFO_STMT (loop_cond_info);
+      gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+
+      /* Initialize the vuse chain with the one at the early break.  */
+      if (!vuse)
+	vuse = gimple_vuse (c);
+
+      if (!validate_early_exit_stmts (loop_vinfo, &chain, &fixed, &loads,
+				      &bases, &vuse, &gsi))
+	return opt_result::failure_at (stmt,
+				       "can't safely apply code motion to "
+				       "dependencies of %G to vectorize "
+				       "the early exit.\n", stmt);
+
+      /* Save the destination as we go; BBs are visited in order and the last
+	 one is where statements should be moved to.  */
+      if (!dest_bb)
+	dest_bb = gimple_bb (c);
+      else
+	{
+	  basic_block curr_bb = gimple_bb (c);
+	  if (dominated_by_p (CDI_DOMINATORS, curr_bb, dest_bb))
+	    dest_bb = curr_bb;
+	}
+    }
+
+  dest_bb = FALLTHRU_EDGE (dest_bb)->dest;
+  gcc_assert (dest_bb);
+  LOOP_VINFO_EARLY_BRK_DEST_BB (loop_vinfo) = dest_bb;
+
+  /* TODO: Remove?  It's a useful debug statement but may be too much.
+   */
+  for (auto g : LOOP_VINFO_EARLY_BRK_VUSES (loop_vinfo))
+    {
+      if (dump_enabled_p ())
+	dump_printf_loc (MSG_NOTE, vect_location,
+			 "updated use: %T, mem_ref: %G",
+			 vuse, g);
+    }
+
+  if (dump_enabled_p ())
+    dump_printf_loc (MSG_NOTE, vect_location,
+		     "recorded statements to be moved to BB %d\n",
+		     LOOP_VINFO_EARLY_BRK_DEST_BB (loop_vinfo)->index);
+
+  return opt_result::success ();
+}
+
 /* Function vect_analyze_data_ref_dependences.
 
    Examine all the data references in the loop, and make sure there do not
@@ -657,6 +983,11 @@ vect_analyze_data_ref_dependences (loop_vec_info loop_vinfo,
 	return res;
     }
 
+  /* If we have early break statements in the loop, check to see if they
+     are of a form we can vectorize.  */
+  if (LOOP_VINFO_EARLY_BREAKS (loop_vinfo))
+    return vect_analyze_early_break_dependences (loop_vinfo);
+
   return opt_result::success ();
 }
 
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 40f167d279589a5b97f618720cfbc0d41b7f2342..c123398aad207082384a2079c5234033c3d825ea 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -1040,6 +1040,7 @@ _loop_vec_info::_loop_vec_info (class loop *loop_in, vec_info_shared *shared)
     partial_load_store_bias (0),
     peeling_for_gaps (false),
     peeling_for_niter (false),
+    early_breaks (false),
     no_data_dependencies (false),
     has_mask_store (false),
     scalar_loop_scaling (profile_probability::uninitialized ()),
@@ -11392,6 +11393,55 @@ update_epilogue_loop_vinfo (class loop *epilogue, tree advance)
   epilogue_vinfo->shared->save_datarefs ();
 }
 
+/* When vectorizing early break statements, instructions that happen before
+   the early break in the current BB need to be moved to after the early
+   break.  This function deals with that and assumes that any validity
+   checks have already been performed.
+
+   While moving the instructions, if it encounters a VUSE or VDEF it
+   corrects the VUSEs as it moves the statements along.  The block recorded
+   in LOOP_VINFO_EARLY_BRK_DEST_BB is the location in which to insert the
+   new statements.
+   */
+
+static void
+move_early_exit_stmts (loop_vec_info loop_vinfo)
+{
+  if (LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS (loop_vinfo).is_empty ())
+    return;
+
+  /* Move all stmts that need moving.  */
+  basic_block dest_bb = LOOP_VINFO_EARLY_BRK_DEST_BB (loop_vinfo);
+  gimple_stmt_iterator dest_gsi = gsi_start_bb (dest_bb);
+
+  for (gimple *stmt : LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS (loop_vinfo))
+    {
+      /* Check to see if the statement is still required for vect or has
+	 been elided.  */
+      auto stmt_info = loop_vinfo->lookup_stmt (stmt);
+      if (!stmt_info)
+	continue;
+
+      if (dump_enabled_p ())
+	dump_printf_loc (MSG_NOTE, vect_location, "moving stmt %G", stmt);
+
+      gimple_stmt_iterator stmt_gsi = gsi_for_stmt (stmt);
+      gsi_move_before (&stmt_gsi, &dest_gsi);
+      gsi_prev (&dest_gsi);
+      update_stmt (stmt);
+    }
+
+  /* Update all the stmts with their new reaching VUSES.  */
+  tree vuse
+    = gimple_vuse (LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS (loop_vinfo).last ());
+  for (auto p : LOOP_VINFO_EARLY_BRK_VUSES (loop_vinfo))
+    {
+      if (dump_enabled_p ())
+	dump_printf_loc (MSG_NOTE, vect_location,
+			 "updating vuse to %T for stmt %G", vuse, p);
+      unlink_stmt_vdef (p);
+      gimple_set_vuse (p, vuse);
+      update_stmt (p);
+    }
+}
+
 /* Function vect_transform_loop.
 
    The analysis phase has determined that the loop is vectorizable.
@@ -11541,6 +11591,11 @@ vect_transform_loop (loop_vec_info loop_vinfo, gimple *loop_vectorized_call)
       vect_schedule_slp (loop_vinfo, LOOP_VINFO_SLP_INSTANCES (loop_vinfo));
     }
 
+  /* Handle any code motion that we need to for early-break vectorization
+     after we've done peeling but just before we start vectorizing.  */
+  if (LOOP_VINFO_EARLY_BREAKS (loop_vinfo))
+    move_early_exit_stmts (loop_vinfo);
+
   /* FORNOW: the vectorizer supports only loops which body consist
      of one basic block (header + empty latch).
     When the vectorizer will support more involved loop forms, the
     order by which the BBs are
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 99ba75e98c0d185edd78c7b8b9947618d18576cc..42cebb92789247434a91cb8e74c0557e75d1ea2c 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -13511,6 +13511,9 @@ vect_is_simple_use (tree operand, vec_info *vinfo, enum vect_def_type *dt,
 	case vect_first_order_recurrence:
 	  dump_printf (MSG_NOTE, "first order recurrence\n");
 	  break;
+	case vect_early_exit_def:
+	  dump_printf (MSG_NOTE, "early exit\n");
+	  break;
 	case vect_unknown_def_type:
 	  dump_printf (MSG_NOTE, "unknown\n");
 	  break;
diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h
index a4043e4a6568a9e8cfaf9298fe940289e165f9e2..1418913d2c308b0cf78352e29dc9958746fb9c94 100644
--- a/gcc/tree-vectorizer.h
+++ b/gcc/tree-vectorizer.h
@@ -66,6 +66,7 @@ enum vect_def_type {
   vect_double_reduction_def,
   vect_nested_cycle,
   vect_first_order_recurrence,
+  vect_early_exit_def,
   vect_unknown_def_type
 };
 
@@ -888,6 +889,10 @@ public:
      we need to peel off iterations at the end to form an epilogue loop.  */
   bool peeling_for_niter;
 
+  /* When the loop has early breaks that we can vectorize, we need to peel
+     the loop for the break-finding loop.  */
+  bool early_breaks;
+
   /* List of loop additional IV conditionals found in the loop.  */
   auto_vec<gcond *> conds;
 
@@ -942,6 +947,20 @@ public:
   /* The controlling loop IV for the scalar loop being vectorized.  This IV
      controls the natural exits of the loop.  */
   edge scalar_loop_iv_exit;
+
+  /* Used to store the list of statements needing to be moved if doing early
+     break vectorization as they would violate the scalar loop semantics if
+     vectorized in their current location.  These are stored in the order in
+     which they need to be moved.  */
+  auto_vec<gimple *> early_break_conflict;
+
+  /* The final basic block to move statements to.  In the case of
+     multiple exits this could be pretty far away.
+     */
+  basic_block early_break_dest_bb;
+
+  /* Statements whose VUSES need updating if early break vectorization is
+     to happen.  */
+  auto_vec<gimple *> early_break_vuses;
 } *loop_vec_info;
 
 /* Access Functions.  */
@@ -996,6 +1015,10 @@ public:
 #define LOOP_VINFO_REDUCTION_CHAINS(L)     (L)->reduction_chains
 #define LOOP_VINFO_PEELING_FOR_GAPS(L)     (L)->peeling_for_gaps
 #define LOOP_VINFO_PEELING_FOR_NITER(L)    (L)->peeling_for_niter
+#define LOOP_VINFO_EARLY_BREAKS(L)         (L)->early_breaks
+#define LOOP_VINFO_EARLY_BRK_CONFLICT_STMTS(L) (L)->early_break_conflict
+#define LOOP_VINFO_EARLY_BRK_DEST_BB(L)    (L)->early_break_dest_bb
+#define LOOP_VINFO_EARLY_BRK_VUSES(L)      (L)->early_break_vuses
 #define LOOP_VINFO_LOOP_CONDS(L)           (L)->conds
 #define LOOP_VINFO_LOOP_IV_COND(L)         (L)->loop_iv_cond
 #define LOOP_VINFO_NO_DATA_DEPENDENCIES(L) (L)->no_data_dependencies
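For reference, a minimal sketch (not part of the patch; the function and array names are made up) of the kind of loop this series is aimed at: a scan over fixed-size, statically allocated buffers with an early break, where a store precedes the break in the loop body and would need to be sunk past the exit condition by move_early_exit_stmts for the vector loop to be valid.

```c
#include <assert.h>

#define N 803

/* Statically allocated, fixed-size objects: the only kind the
   conservative dependence check above accepts, since there is no
   first-faulting load or peeling-for-alignment support yet.  */
static int vect_a[N] = { 0, 0, 0, 0, 0, 100 };
static int vect_b[N];

/* Return the index of the first element of vect_a greater than X,
   or -1 if none.  The store to vect_b before the break is the kind of
   statement that must be moved below the exit check.  */
static int
early_break_find (int x)
{
  for (int i = 0; i < N; i++)
    {
      vect_b[i] = x + i;	/* Store preceding the early break.  */
      if (vect_a[i] > x)	/* Early exit: a second loop exit edge.  */
	return i;
      vect_a[i] = x;		/* Store after the break: unproblematic.  */
    }
  return -1;
}
```

With vect_a[5] pre-set to 100, early_break_find (42) runs five full iterations and then takes the early exit at i == 5; the intent of the series is that such a loop gets vectorized, with the conflicting store sunk into the block recorded in LOOP_VINFO_EARLY_BRK_DEST_BB.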