From patchwork Mon Oct 31 11:56:42 2022
X-Patchwork-Submitter: Tamar Christina
X-Patchwork-Id: 13241
From: Tamar Christina
Date: Mon, 31 Oct 2022 11:56:42 +0000
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, rguenther@suse.de
Subject: [PATCH 1/8]middle-end: Recognize scalar reductions from bitfields and array_refs

Hi All,

This patch series adds recognition of pairwise operations (reductions) in
match.pd so that we can benefit from them even at -O1 when the vectorizer
isn't enabled.

The use of these allows for much simpler codegen on AArch64 and lets us avoid
quite a lot of codegen warts.

As an example, a simple:

typedef float v4sf __attribute__((vector_size (16)));

float
foo3 (v4sf x)
{
  return x[1] + x[2];
}

currently generates:

foo3:
        dup     s1, v0.s[1]
        dup     s0, v0.s[2]
        fadd    s0, s1, s0
        ret

while with this patch series it now generates:

foo3:
        ext     v0.16b, v0.16b, v0.16b, #4
        faddp   s0, v0.2s
        ret

This patch will not perform the operation if the source is not a gimple
register, and it leaves memory sources to the vectorizer, as the vectorizer is
able to deal correctly with clobbers.

The use of these instructions makes a significant difference in codegen
quality for AArch64 and Arm.

NOTE: The last entry in the series contains tests for all of the previous
patches, as it's a bit of an all-or-nothing thing.

Bootstrapped and regtested on aarch64-none-linux-gnu and x86_64-pc-linux-gnu
with no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

	* match.pd (adjacent_data_access_p): Import.
	Add new pattern for pairwise plus, min, max, fmax, fmin.
	* tree-cfg.cc (verify_gimple_call): Allow function arguments in IFNs.
	* tree.cc (adjacent_data_access_p): New.
	* tree.h (adjacent_data_access_p): New.
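
As an extra illustration (my own sketch, not taken from the patch or its
testsuite; the function name foo_min is made up), the second match.pd pattern
below is meant to catch an open-coded min/max of two adjacent lanes written as
a conditional:

typedef float v4sf __attribute__((vector_size (16)));

float
foo_min (v4sf x)
{
  /* Adjacent lanes compared and selected through a conditional; the
     (cond (lt @0 @1) @0 @1) pattern, via adjacent_data_access_p, should
     rewrite this as IFN_REDUC_FMIN over a two-lane BIT_FIELD_REF of x.  */
  return x[0] < x[1] ? x[0] : x[1];
}

On AArch64 one would then expect this to map onto the pairwise fminp form
rather than two dups and an fmin, though that expected assembly is my
assumption rather than output quoted from the series.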

--- inline copy of patch --

diff --git a/gcc/match.pd b/gcc/match.pd
index 2617d56091dfbd41ae49f980ee0af3757f5ec1cf..aecaa3520b36e770d11ea9a10eb18db23c0cd9f7 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -39,7 +39,8 @@ along with GCC; see the file COPYING3.  If not see
    HONOR_NANS
    uniform_vector_p
    expand_vec_cmp_expr_p
-   bitmask_inv_cst_vector_p)
+   bitmask_inv_cst_vector_p
+   adjacent_data_access_p)
 
 /* Operator lists.  */
 (define_operator_list tcc_comparison
@@ -7195,6 +7196,47 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 
 /* Canonicalizations of BIT_FIELD_REFs.  */
 
+/* Canonicalize BIT_FIELD_REFS to pairwise operations.  */
+(for op (plus min max FMIN_ALL FMAX_ALL)
+     ifn (IFN_REDUC_PLUS IFN_REDUC_MIN IFN_REDUC_MAX
+	  IFN_REDUC_FMIN IFN_REDUC_FMAX)
+ (simplify
+  (op @0 @1)
+   (if (INTEGRAL_TYPE_P (type) || SCALAR_FLOAT_TYPE_P (type))
+    (with { poly_uint64 nloc = 0;
+	    tree src = adjacent_data_access_p (@0, @1, &nloc, true);
+	    tree ntype = build_vector_type (type, 2);
+	    tree size = TYPE_SIZE (ntype);
+	    tree pos = build_int_cst (TREE_TYPE (size), nloc);
+	    poly_uint64 _sz;
+	    poly_uint64 _total; }
+     (if (src && is_gimple_reg (src) && ntype
+	  && poly_int_tree_p (size, &_sz)
+	  && poly_int_tree_p (TYPE_SIZE (TREE_TYPE (src)), &_total)
+	  && known_ge (_total, _sz + nloc))
+      (ifn (BIT_FIELD_REF:ntype { src; } { size; } { pos; })))))))
+
+(for op (lt gt)
+     ifni (IFN_REDUC_MIN IFN_REDUC_MAX)
+     ifnf (IFN_REDUC_FMIN IFN_REDUC_FMAX)
+ (simplify
+  (cond (op @0 @1) @0 @1)
+   (if (INTEGRAL_TYPE_P (type) || SCALAR_FLOAT_TYPE_P (type))
+    (with { poly_uint64 nloc = 0;
+	    tree src = adjacent_data_access_p (@0, @1, &nloc, false);
+	    tree ntype = build_vector_type (type, 2);
+	    tree size = TYPE_SIZE (ntype);
+	    tree pos = build_int_cst (TREE_TYPE (size), nloc);
+	    poly_uint64 _sz;
+	    poly_uint64 _total; }
+     (if (src && is_gimple_reg (src) && ntype
+	  && poly_int_tree_p (size, &_sz)
+	  && poly_int_tree_p (TYPE_SIZE (TREE_TYPE (src)), &_total)
+	  && known_ge (_total, _sz + nloc))
+      (if (SCALAR_FLOAT_MODE_P (TYPE_MODE (type)))
+       (ifnf (BIT_FIELD_REF:ntype { src; } { size; } { pos; }))
+       (ifni (BIT_FIELD_REF:ntype { src; } { size; } { pos; }))))))))
+
 (simplify
  (BIT_FIELD_REF (BIT_FIELD_REF @0 @1 @2) @3 @4)
  (BIT_FIELD_REF @0 @3 { const_binop (PLUS_EXPR, bitsizetype, @2, @4); }))
diff --git a/gcc/tree-cfg.cc b/gcc/tree-cfg.cc
index 91ec33c80a41e1e0cc6224e137dd42144724a168..b19710392940cf469de52d006603ae1e3deb6b76 100644
--- a/gcc/tree-cfg.cc
+++ b/gcc/tree-cfg.cc
@@ -3492,6 +3492,7 @@ verify_gimple_call (gcall *stmt)
     {
       tree arg = gimple_call_arg (stmt, i);
       if ((is_gimple_reg_type (TREE_TYPE (arg))
+	   && !is_gimple_variable (arg)
 	   && !is_gimple_val (arg))
 	  || (!is_gimple_reg_type (TREE_TYPE (arg))
 	      && !is_gimple_lvalue (arg)))
diff --git a/gcc/tree.h b/gcc/tree.h
index e6564aaccb7b69cd938ff60b6121aec41b7e8a59..8f8a9660c9e0605eb516de194640b8c1b531b798 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -5006,6 +5006,11 @@ extern bool integer_pow2p (const_tree);
 
 extern tree bitmask_inv_cst_vector_p (tree);
 
+/* TRUE if the two operands represent adjacent access of data such that a
+   pairwise operation can be used.  */
+
+extern tree adjacent_data_access_p (tree, tree, poly_uint64*, bool);
+
 /* integer_nonzerop (tree x) is nonzero if X is an integer constant
    with a nonzero value.  */
diff --git a/gcc/tree.cc b/gcc/tree.cc
index 007c9325b17076f474e6681c49966c59cf6b91c7..5315af38a1ead89ca5f75dc4b19de9841e29d311 100644
--- a/gcc/tree.cc
+++ b/gcc/tree.cc
@@ -10457,6 +10457,90 @@ bitmask_inv_cst_vector_p (tree t)
   return builder.build ();
 }
 
+/* Returns base address if the two operands represent adjacent access of data
+   such that a pairwise operation can be used.  OP1 must be a lower subpart
+   than OP2.  If POS is not NULL then on return if a value is returned POS
+   will indicate the position of the lower address.  If COMMUTATIVE_P then
+   the operation is also tried by flipping op1 and op2.  */
+tree adjacent_data_access_p (tree op1, tree op2, poly_uint64 *pos,
+			     bool commutative_p)
+{
+  gcc_assert (op1);
+  gcc_assert (op2);
+
+  if (TREE_CODE (op1) != TREE_CODE (op2)
+      || TREE_TYPE (op1) != TREE_TYPE (op2))
+    return NULL;
+
+  tree type = TREE_TYPE (op1);
+  gimple *stmt1 = NULL, *stmt2 = NULL;
+  unsigned int bits = GET_MODE_BITSIZE (GET_MODE_INNER (TYPE_MODE (type)));
+
+  if (TREE_CODE (op1) == BIT_FIELD_REF
+      && operand_equal_p (TREE_OPERAND (op1, 0), TREE_OPERAND (op2, 0), 0)
+      && operand_equal_p (TREE_OPERAND (op1, 1), TREE_OPERAND (op2, 1), 0)
+      && known_eq (bit_field_size (op1), bits))
+    {
+      poly_uint64 offset1 = bit_field_offset (op1);
+      poly_uint64 offset2 = bit_field_offset (op2);
+      if (known_eq (offset2 - offset1, bits))
+	{
+	  if (pos)
+	    *pos = offset1;
+	  return TREE_OPERAND (op1, 0);
+	}
+      else if (commutative_p && known_eq (offset1 - offset2, bits))
+	{
+	  if (pos)
+	    *pos = offset2;
+	  return TREE_OPERAND (op1, 0);
+	}
+    }
+  else if (TREE_CODE (op1) == ARRAY_REF
+	   && operand_equal_p (get_base_address (op1), get_base_address (op2)))
+    {
+      wide_int size1 = wi::to_wide (array_ref_element_size (op1));
+      wide_int size2 = wi::to_wide (array_ref_element_size (op2));
+      if (wi::ne_p (size1, size2) || wi::ne_p (size1, bits / 8)
+	  || !tree_fits_poly_uint64_p (TREE_OPERAND (op1, 1))
+	  || !tree_fits_poly_uint64_p (TREE_OPERAND (op2, 1)))
+	return NULL;
+
+      poly_uint64 offset1 = tree_to_poly_uint64 (TREE_OPERAND (op1, 1));
+      poly_uint64 offset2 = tree_to_poly_uint64 (TREE_OPERAND (op2, 1));
+      if (known_eq (offset2 - offset1, 1UL))
+	{
+	  if (pos)
+	    *pos = offset1 * bits;
+	  return TREE_OPERAND (op1, 0);
+	}
+      else if (commutative_p && known_eq (offset1 - offset2, 1UL))
+	{
+	  if (pos)
+	    *pos = offset2 * bits;
+	  return TREE_OPERAND (op1, 0);
+	}
+    }
+  else if (TREE_CODE (op1) == SSA_NAME
+	   && (stmt1 = SSA_NAME_DEF_STMT (op1)) != NULL
+	   && (stmt2 = SSA_NAME_DEF_STMT (op2)) != NULL
+	   && is_gimple_assign (stmt1)
+	   && is_gimple_assign (stmt2))
+    {
+      if (gimple_assign_rhs_code (stmt1) != ARRAY_REF
+	  && gimple_assign_rhs_code (stmt1) != BIT_FIELD_REF
+	  && gimple_assign_rhs_code (stmt2) != ARRAY_REF
+	  && gimple_assign_rhs_code (stmt2) != BIT_FIELD_REF)
+	return NULL;
+
+      return adjacent_data_access_p (gimple_assign_rhs1 (stmt1),
+				     gimple_assign_rhs1 (stmt2), pos,
+				     commutative_p);
+    }
+
+  return NULL;
+}
+
 /* If VECTOR_CST T has a single nonzero element, return the index of that
    element, otherwise return -1.  */