From patchwork Mon Jan 29 15:04:40 2024
X-Patchwork-Submitter: Tamar Christina
X-Patchwork-Id: 193531
Date: Mon, 29 Jan 2024 15:04:40 +0000
From: Tamar Christina
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, rguenther@suse.de, jlaw@ventanamicro.com
Subject: [PATCH]middle-end: check memory accesses in the destination block [PR113588].
Hi All,

When analyzing loads for early break it was always the intention that for the
exit where things get moved to we only check the loads that can be reached from
the condition.

However the main loop checks all loads and we skip the destination BB. As such
we never actually check the loads reachable from the COND in the last BB unless
this BB was also the exit chosen by the vectorizer.

This leads us to incorrectly vectorize the loop in the PR and in doing so access
out of bounds.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

        PR tree-optimization/113588
        * tree-vect-data-refs.cc (vect_analyze_early_break_dependences_1): New.
        (vect_analyze_data_ref_dependence): Use it.
        (vect_analyze_early_break_dependences): Update comments.

gcc/testsuite/ChangeLog:

        PR tree-optimization/113588
        * gcc.dg/vect/vect-early-break_108-pr113588.c: New test.
        * gcc.dg/vect/vect-early-break_109-pr113588.c: New test.
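
To make the failure mode concrete, the affected loop shape is the bounded
strlen-style loop from the PR, the same shape the new tests below exercise.
The snippet here is only an illustration (the name bounded_strlen and the
vector width of 16 are just for the example), not part of the patch:

    /* Bounded strlen-style loop with an early break on the loaded value.
       A vectorized version loads a full vector of chars (say 16) at once,
       before either exit condition (the NUL test or the n-- bound) has been
       evaluated for those later elements.  If s points just before an
       unmapped page, that wide load faults even though the scalar loop
       would have stopped at the NUL terminator.  */
    static int
    bounded_strlen (const char *s, unsigned long n)
    {
      unsigned long len = 0;
      while (*s++ && n--)
        ++len;
      return len;
    }

    int
    main (void)
    {
      char buf[2] = { 1, 0 };
      return bounded_strlen (buf, 1000) == 1 ? 0 : 1;
    }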
--- inline copy of patch --
diff --git a/gcc/testsuite/gcc.dg/vect/vect-early-break_108-pr113588.c b/gcc/testsuite/gcc.dg/vect/vect-early-break_108-pr113588.c
new file mode 100644
index 0000000000000000000000000000000000000000..e488619c9aac41fafbcf479818392a6bb7c6924f
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-early-break_108-pr113588.c
@@ -0,0 +1,15 @@
+/* { dg-do compile } */
+/* { dg-add-options vect_early_break } */
+/* { dg-require-effective-target vect_early_break } */
+/* { dg-require-effective-target vect_int } */
+
+/* { dg-final { scan-tree-dump-not "LOOP VECTORIZED" "vect" } } */
+
+int foo (const char *s, unsigned long n)
+{
+  unsigned long len = 0;
+  while (*s++ && n--)
+    ++len;
+  return len;
+}
+
diff --git a/gcc/testsuite/gcc.dg/vect/vect-early-break_109-pr113588.c b/gcc/testsuite/gcc.dg/vect/vect-early-break_109-pr113588.c
new file mode 100644
index 0000000000000000000000000000000000000000..488c19d3ede809631d1a7ede0e7f7bcdc7a1ae43
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-early-break_109-pr113588.c
@@ -0,0 +1,44 @@
+/* { dg-add-options vect_early_break } */
+/* { dg-require-effective-target vect_early_break } */
+/* { dg-require-effective-target vect_int } */
+/* { dg-require-effective-target mmap } */
+
+/* { dg-final { scan-tree-dump-not "LOOP VECTORIZED" "vect" } } */
+
+#include <sys/mman.h>
+#include <unistd.h>
+
+#include "tree-vect.h"
+
+__attribute__((noipa))
+int foo (const char *s, unsigned long n)
+{
+  unsigned long len = 0;
+  while (*s++ && n--)
+    ++len;
+  return len;
+}
+
+int main()
+{
+
+  check_vect ();
+
+  long pgsz = sysconf (_SC_PAGESIZE);
+  void *p = mmap (NULL, pgsz * 3, PROT_READ|PROT_WRITE,
+                  MAP_ANONYMOUS|MAP_PRIVATE, 0, 0);
+  if (p == MAP_FAILED)
+    return 0;
+  mprotect (p, pgsz, PROT_NONE);
+  mprotect (p+2*pgsz, pgsz, PROT_NONE);
+  char *p1 = p + pgsz;
+  p1[0] = 1;
+  p1[1] = 0;
+  foo (p1, 1000);
+  p1 = p + 2*pgsz - 2;
+  p1[0] = 1;
+  p1[1] = 0;
+  foo (p1, 1000);
+  return 0;
+}
+
diff --git a/gcc/tree-vect-data-refs.cc b/gcc/tree-vect-data-refs.cc
index f592aeb8028afd4fd70e2175104efab2a2c0d82e..52cef242a7ce5d0e525bff639fa1dc2f0a6f30b9 100644
--- a/gcc/tree-vect-data-refs.cc
+++ b/gcc/tree-vect-data-refs.cc
@@ -619,10 +619,69 @@ vect_analyze_data_ref_dependence (struct data_dependence_relation *ddr,
   return opt_result::success ();
 }
 
-/* Funcion vect_analyze_early_break_dependences.
+/* Function vect_analyze_early_break_dependences_1
 
-   Examime all the data references in the loop and make sure that if we have
-   mulitple exits that we are able to safely move stores such that they become
+   Helper function of vect_analyze_early_break_dependences which performs safety
+   analysis for load operations in an early break.  */
+
+static opt_result
+vect_analyze_early_break_dependences_1 (data_reference *dr_ref, gimple *stmt)
+{
+  /* We currently only support statically allocated objects due to
+     not having first-faulting loads support or peeling for
+     alignment support.  Compute the size of the referenced object
+     (it could be dynamically allocated).  */
+  tree obj = DR_BASE_ADDRESS (dr_ref);
+  if (!obj || TREE_CODE (obj) != ADDR_EXPR)
+    {
+      if (dump_enabled_p ())
+        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+                         "early breaks only supported on statically"
+                         " allocated objects.\n");
+      return opt_result::failure_at (stmt,
+                                     "can't safely apply code motion to "
+                                     "dependencies of %G to vectorize "
+                                     "the early exit.\n", stmt);
+    }
+
+  tree refop = TREE_OPERAND (obj, 0);
+  tree refbase = get_base_address (refop);
+  if (!refbase || !DECL_P (refbase) || !DECL_SIZE (refbase)
+      || TREE_CODE (DECL_SIZE (refbase)) != INTEGER_CST)
+    {
+      if (dump_enabled_p ())
+        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+                         "early breaks only supported on"
+                         " statically allocated objects.\n");
+      return opt_result::failure_at (stmt,
+                                     "can't safely apply code motion to "
+                                     "dependencies of %G to vectorize "
+                                     "the early exit.\n", stmt);
+    }
+
+  /* Check if vector accesses to the object will be within bounds.
+     must be a constant or assume loop will be versioned or niters
+     bounded by VF so accesses are within range.  */
+  if (!ref_within_array_bound (stmt, DR_REF (dr_ref)))
+    {
+      if (dump_enabled_p ())
+        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+                         "early breaks not supported: vectorization "
+                         "would %s beyond size of obj.",
+                         DR_IS_READ (dr_ref) ? "read" : "write");
+      return opt_result::failure_at (stmt,
+                                     "can't safely apply code motion to "
+                                     "dependencies of %G to vectorize "
+                                     "the early exit.\n", stmt);
+    }
+
+  return opt_result::success ();
+}
+
+/* Function vect_analyze_early_break_dependences.
+
+   Examine all the data references in the loop and make sure that if we have
+   multiple exits that we are able to safely move stores such that they become
    safe for vectorization.  The function also calculates the place where to move
    the instructions to and computes what the new vUSE chain should be.
 
@@ -639,7 +698,7 @@ vect_analyze_data_ref_dependence (struct data_dependence_relation *ddr,
      - Multiple loads are allowed as long as they don't alias.
 
    NOTE:
-     This implemementation is very conservative. Any overlappig loads/stores
+     This implementation is very conservative. Any overlapping loads/stores
      that take place before the early break statement gets rejected aside from
      WAR dependencies.
 
@@ -668,7 +727,6 @@ vect_analyze_early_break_dependences (loop_vec_info loop_vinfo)
 
   auto_vec<data_reference *> bases;
   basic_block dest_bb = NULL;
-  hash_set visited;
 
   class loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   class loop *loop_nest = loop_outer (loop);
@@ -683,6 +741,7 @@ vect_analyze_early_break_dependences (loop_vec_info loop_vinfo)
   dest_bb = single_pred (loop->latch);
   basic_block bb = dest_bb;
 
+  /* First analyse all blocks leading to dest_bb excluding dest_bb itself.  */
   do
     {
      /* If the destination block is also the header then we have nothing to do.  */
@@ -707,53 +766,11 @@ vect_analyze_early_break_dependences (loop_vec_info loop_vinfo)
          if (!dr_ref)
            continue;
 
-          /* We currently only support statically allocated objects due to
-             not having first-faulting loads support or peeling for
-             alignment support.  Compute the size of the referenced object
-             (it could be dynamically allocated).  */
-          tree obj = DR_BASE_ADDRESS (dr_ref);
-          if (!obj || TREE_CODE (obj) != ADDR_EXPR)
-            {
-              if (dump_enabled_p ())
-                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-                                 "early breaks only supported on statically"
-                                 " allocated objects.\n");
-              return opt_result::failure_at (stmt,
-                                             "can't safely apply code motion to "
-                                             "dependencies of %G to vectorize "
-                                             "the early exit.\n", stmt);
-            }
-
-          tree refop = TREE_OPERAND (obj, 0);
-          tree refbase = get_base_address (refop);
-          if (!refbase || !DECL_P (refbase) || !DECL_SIZE (refbase)
-              || TREE_CODE (DECL_SIZE (refbase)) != INTEGER_CST)
-            {
-              if (dump_enabled_p ())
-                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-                                 "early breaks only supported on"
-                                 " statically allocated objects.\n");
-              return opt_result::failure_at (stmt,
-                                             "can't safely apply code motion to "
-                                             "dependencies of %G to vectorize "
-                                             "the early exit.\n", stmt);
-            }
-
-          /* Check if vector accesses to the object will be within bounds.
-             must be a constant or assume loop will be versioned or niters
-             bounded by VF so accesses are within range.  */
-          if (!ref_within_array_bound (stmt, DR_REF (dr_ref)))
-            {
-              if (dump_enabled_p ())
-                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-                                 "early breaks not supported: vectorization "
-                                 "would %s beyond size of obj.",
-                                 DR_IS_READ (dr_ref) ? "read" : "write");
-              return opt_result::failure_at (stmt,
-                                             "can't safely apply code motion to "
-                                             "dependencies of %G to vectorize "
-                                             "the early exit.\n", stmt);
-            }
+          /* Check if the operation is one we can safely do.  */
+          opt_result res
+            = vect_analyze_early_break_dependences_1 (dr_ref, stmt);
+          if (!res)
+            return res;
 
           if (DR_IS_READ (dr_ref))
             bases.safe_push (dr_ref);
@@ -817,6 +834,51 @@ vect_analyze_early_break_dependences (loop_vec_info loop_vinfo)
     }
   while (bb != loop->header);
 
+  /* For the destination BB we need to only analyze loads reachable from the early
+     break statement itself.  */
+  auto_vec<tree> workset;
+  hash_set<tree> visited;
+  gimple *last_stmt = gsi_stmt (gsi_last_bb (dest_bb));
+  gcond *last_cond = dyn_cast <gcond *> (last_stmt);
+  /* If the cast fails we have a different control flow statement in the latch.  Most
+     commonly this is a switch.  */
+  if (!last_cond)
+    return opt_result::failure_at (last_stmt,
+                                   "can't safely apply code motion to dependencies"
+                                   " to vectorize the early exit, unknown control fow"
+                                   " in stmt %G", last_stmt);
+  workset.safe_push (gimple_cond_lhs (last_cond));
+  workset.safe_push (gimple_cond_rhs (last_cond));
+
+  imm_use_iterator imm_iter;
+  use_operand_p use_p;
+  tree lhs;
+  do
+    {
+      tree op = workset.pop ();
+      if (visited.add (op))
+        continue;
+      stmt_vec_info stmt_vinfo = loop_vinfo->lookup_def (op);
+
+      /* Not defined in loop, don't care.  */
+      if (!stmt_vinfo)
+        continue;
+      gimple *stmt = STMT_VINFO_STMT (stmt_vinfo);
+      auto dr_ref = STMT_VINFO_DATA_REF (stmt_vinfo);
+      if (dr_ref)
+        {
+          opt_result res
+            = vect_analyze_early_break_dependences_1 (dr_ref, stmt);
+          if (!res)
+            return res;
+        }
+      else
+        FOR_EACH_IMM_USE_FAST (use_p, imm_iter, op)
+          if ((lhs = gimple_get_lhs (USE_STMT (use_p))))
+            workset.safe_push (lhs);
+    }
+  while (!workset.is_empty ());
+
   /* We don't allow outer -> inner loop transitions which should have been
      trapped already during loop form analysis.  */
   gcc_assert (dest_bb->loop_father == loop);
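
As an aside on what the new helper vect_analyze_early_break_dependences_1
accepts: an early-break load passes only when its base is the address of a
declaration with a constant DECL_SIZE and ref_within_array_bound can show the
accesses stay inside that size.  The two loops below are an illustrative
sketch of that distinction (not part of the patch or the testsuite):

    int a[100];

    /* Accepted shape: the base of a[i] is &a, an ADDR_EXPR of a DECL with a
       known constant size, and every in-range access of the search stays
       inside it, so reading ahead of the early break stays within the
       declared object.  */
    int
    search_fixed (int x)
    {
      for (int i = 0; i < 100; i++)
        if (a[i] == x)
          return i;
      return -1;
    }

    /* Rejected shape: the base is an incoming pointer, not the address of a
       statically sized declaration, so the size of the underlying object is
       unknown and reading ahead of the early break cannot be proven safe.  */
    int
    search_ptr (const int *p, int x, int n)
    {
      for (int i = 0; i < n; i++)
        if (p[i] == x)
          return i;
      return -1;
    }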