Message ID | Yy19Z/q/HPJ6wm5w@arm.com |
---|---|
State | New, archived |
Headers | show |
Series | [1/4] middle-end Support not decomposing specific divisions during vectorization. |
Commit Message
Tamar Christina
Sept. 23, 2022, 9:33 a.m. UTC
Hi All,

This adds an AArch64 implementation of the new optab for unsigned pow2
bitmask division.

The implementation rewrites:

   x = y / (2 ^ (sizeof (y)/2) - 1)

into e.g. (for bytes)

   (x + ((x + 257) >> 8)) >> 8

where it's required that the additions be done in double the precision of x
such that we don't lose any bits during an overflow.

Essentially the sequence decomposes the division into two smaller divisions,
one for the top and one for the bottom part of the number, and adds the
results back together.

To account for the fact that a shift by 8 would be a division by 256, we add
1 to both parts of x so that an input of 255 still gives 1 as the answer.

Because the amounts we shift by are half the width of the original datatype,
we can use the halving instructions the ISA provides to do the operation
instead of using actual shifts.

For AArch64 this means we generate for:

void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n)
{
  for (int i = 0; i < (n & -16); i+=1)
    pixel[i] = (pixel[i] * level) / 0xff;
}

the following:

	movi    v3.16b, 0x1
	umull2  v1.8h, v0.16b, v2.16b
	umull   v0.8h, v0.8b, v2.8b
	addhn   v5.8b, v1.8h, v3.8h
	addhn   v4.8b, v0.8h, v3.8h
	uaddw   v1.8h, v1.8h, v5.8b
	uaddw   v0.8h, v0.8h, v4.8b
	uzp2    v0.16b, v0.16b, v1.16b

instead of:

	umull   v2.8h, v1.8b, v5.8b
	umull2  v1.8h, v1.16b, v5.16b
	umull   v0.4s, v2.4h, v3.4h
	umull2  v2.4s, v2.8h, v3.8h
	umull   v4.4s, v1.4h, v3.4h
	umull2  v1.4s, v1.8h, v3.8h
	uzp2    v0.8h, v0.8h, v2.8h
	uzp2    v1.8h, v4.8h, v1.8h
	shrn    v0.8b, v0.8h, 7
	shrn2   v0.16b, v1.8h, 7

which results in significantly faster code.

Thanks to Wilco for the concept.

Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (@aarch64_bitmask_udiv<mode>3): New.
	* config/aarch64/aarch64.cc
	(aarch64_vectorize_can_special_div_by_constant): New.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/div-by-bitmask.c: New test.
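The identity behind the rewrite is easy to check exhaustively in scalar
code.  The following sketch is illustrative only and not part of the
patch (div255 is a made-up helper name): it models the add/shift
sequence for 16-bit inputs, widened to 32 bits as the patch requires,
and compares it against a real division by 255:

    #include <assert.h>
    #include <stdint.h>

    /* Scalar model of (x + ((x + 257) >> 8)) >> 8, computed in double
       the precision of the 16-bit input.  */
    static uint16_t div255 (uint16_t x)
    {
      uint32_t t = x;	/* widen so x + 257 cannot wrap */
      return (t + ((t + 257) >> 8)) >> 8;
    }

    int main (void)
    {
      /* The vectorized loop only divides products of two bytes
	 (at most 255 * 255), so all 16-bit values more than cover
	 the input space.  */
      for (uint32_t x = 0; x <= 0xffff; x++)
	assert (div255 (x) == x / 255);
      return 0;
    }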
--- inline copy of patch ---

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 587a45d77721e1b39accbad7dbeca4d741eccb10..f4152160084d6b6f34bd69f0ba6386c1ab50f77e 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -4831,6 +4831,65 @@ (define_expand "aarch64_<sur><addsub>hn2<mode>"
   }
 )
 
+;; div optimizations using narrowings
+;; we can do the division of e.g. shorts by 255 faster by calculating it as
+;; (x + ((x + 257) >> 8)) >> 8 assuming the operation is done in
+;; double the precision of x.
+;;
+;; If we imagine a short as being composed of two blocks of bytes then
+;; adding 257 or 0b0000_0001_0000_0001 to the number is equivalent to
+;; adding 1 to each sub component:
+;;
+;;      short value of 16-bits
+;; ┌──────────────┬────────────────┐
+;; │              │                │
+;; └──────────────┴────────────────┘
+;;   8-bit part1 ▲  8-bit part2   ▲
+;;               │                │
+;;               │                │
+;;              +1               +1
+;;
+;; after the first addition, we have to shift right by 8, and narrow the
+;; results back to a byte.  Remember that the addition must be done in
+;; double the precision of the input.  Since 8 is half the size of a short
+;; we can use a narrowing halving instruction in AArch64, addhn, which also
+;; does the addition in a wider precision and narrows back to a byte.  The
+;; shift itself is implicit in the operation as it writes back only the top
+;; half of the result, i.e. bits 2*esize-1:esize.
+;;
+;; Since we have narrowed the result of the first part back to a byte, for
+;; the second addition we can use a widening addition, uaddw.
+;;
+;; For the final shift, since it's unsigned arithmetic we emit an ushr by 8
+;; to perform the shift.
+;;
+;; The shift is later optimized by combine to a uzp2 with movi #0.
+(define_expand "@aarch64_bitmask_udiv<mode>3"
+ [(match_operand:VQN 0 "register_operand")
+  (match_operand:VQN 1 "register_operand")
+  (match_operand:VQN 2 "immediate_operand")]
+ "TARGET_SIMD"
+{
+  unsigned HOST_WIDE_INT size
+    = (1ULL << GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode)) - 1;
+  if (!CONST_VECTOR_P (operands[2])
+      || const_vector_encoded_nelts (operands[2]) != 1
+      || size != UINTVAL (CONST_VECTOR_ELT (operands[2], 0)))
+    FAIL;
+
+  rtx addend = gen_reg_rtx (<MODE>mode);
+  rtx val = aarch64_simd_gen_const_vector_dup (<VNARROWQ2>mode, 1);
+  emit_move_insn (addend, lowpart_subreg (<MODE>mode, val, <VNARROWQ2>mode));
+  rtx tmp1 = gen_reg_rtx (<VNARROWQ>mode);
+  rtx tmp2 = gen_reg_rtx (<MODE>mode);
+  emit_insn (gen_aarch64_addhn<mode> (tmp1, operands[1], addend));
+  unsigned bitsize = GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode);
+  rtx shift_vector = aarch64_simd_gen_const_vector_dup (<MODE>mode, bitsize);
+  emit_insn (gen_aarch64_uaddw<Vnarrowq> (tmp2, operands[1], tmp1));
+  emit_insn (gen_aarch64_simd_lshr<mode> (operands[0], tmp2, shift_vector));
+  DONE;
+})
+
 ;; pmul.
 
 (define_insn "aarch64_pmul<mode>"
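Per lane, the expander's three instructions map directly onto the steps
described in the comment above.  A scalar model of one 16-bit lane (a
sketch with made-up names, valid for the products of two bytes this
pattern divides) would be:

    #include <stdint.h>

    /* One 16-bit lane of the emitted sequence, for x = a * b with
       a, b <= 255 so none of the 16-bit additions can wrap.  */
    static uint8_t lane_div255 (uint16_t x)
    {
      uint8_t  tmp1 = (x + 0x0101) >> 8;  /* addhn: add, keep high byte  */
      uint16_t tmp2 = x + tmp1;           /* uaddw: widening add of byte */
      return tmp2 >> 8;                   /* ushr (later uzp2): >> 8     */
    }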
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 4b486aeea90ea2afb9cdd96a4dbe15c5bb2abd7a..91bb7d306f36dc4c9eeaafc37484b6fc6901bfb4 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -24146,6 +24146,51 @@ aarch64_vectorize_vec_perm_const (machine_mode vmode, machine_mode op_mode,
   return ret;
 }
 
+/* Implement TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST.  */
+
+bool
+aarch64_vectorize_can_special_div_by_constant (enum tree_code code,
+					       tree vectype,
+					       tree treeop0, tree treeop1,
+					       rtx *output, rtx in0, rtx in1)
+{
+
+  if ((!treeop0 || !treeop1) && (in0 == NULL_RTX || in1 == NULL_RTX))
+    return false;
+
+  tree cst = uniform_integer_cst_p (treeop1);
+  tree type;
+  if (code != TRUNC_DIV_EXPR
+      || !cst
+      || !TYPE_UNSIGNED ((type = TREE_TYPE (cst)))
+      || tree_int_cst_sgn (cst) != 1)
+    return false;
+
+  unsigned int flags = aarch64_classify_vector_mode (TYPE_MODE (vectype));
+  if ((flags & VEC_ANY_SVE) && !TARGET_SVE2)
+    return false;
+
+  if (in0 == NULL_RTX && in1 == NULL_RTX)
+    {
+      gcc_assert (treeop0 && treeop1);
+      wide_int icst = wi::to_wide (cst);
+      wide_int val = wi::add (icst, 1);
+      int pow = wi::exact_log2 (val);
+      return pow == (TYPE_PRECISION (type) / 2);
+    }
+
+  if (!VECTOR_TYPE_P (vectype))
+    return false;
+
+  gcc_assert (output);
+
+  if (!*output)
+    *output = gen_reg_rtx (TYPE_MODE (vectype));
+
+  emit_insn (gen_aarch64_bitmask_udiv3 (TYPE_MODE (vectype), *output, in0, in1));
+  return true;
+}
+
 /* Generate a byte permute mask for a register of mode MODE,
    which has NUNITS units.  */
 
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index 92bda1a7e14a3c9ea63e151e4a49a818bf4d1bdb..adba9fe97a9b43729c5e86d244a2a23e76cac097 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -6112,6 +6112,22 @@ instruction pattern.  There is no need for the hook to handle these two
 implementation approaches itself.
 @end deftypefn
 
+@deftypefn {Target Hook} bool TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST (enum @var{tree_code}, tree @var{vectype}, tree @var{treeop0}, tree @var{treeop1}, rtx *@var{output}, rtx @var{in0}, rtx @var{in1})
+This hook is used to test whether the target has a special method of
+division of vectors of type @var{vectype} using the two operands @code{treeop0},
+and @code{treeop1} and producing a vector of type @var{vectype}.  The division
+will then not be decomposed by the vectorizer and kept as a div.
+
+When the hook is being used to test whether the target supports a special
+divide, @var{in0}, @var{in1}, and @var{output} are all null.  When the hook
+is being used to emit a division, @var{in0} and @var{in1} are the source
+vectors of type @var{vectype} and @var{output} is the destination vector of
+type @var{vectype}.
+
+Return true if the operation is possible, emitting instructions for it
+if rtxes are provided and updating @var{output}.
+@end deftypefn
+
 @deftypefn {Target Hook} tree TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION (unsigned @var{code}, tree @var{vec_type_out}, tree @var{vec_type_in})
 This hook should return the decl of a function that implements the
 vectorized variant of the function with the @code{combined_fn} code
diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in
index 112462310b134705d860153294287cfd7d4af81d..d5a745a02acdf051ea1da1b04076d058c24ce093 100644
--- a/gcc/doc/tm.texi.in
+++ b/gcc/doc/tm.texi.in
@@ -4164,6 +4164,8 @@ address;  but often a machine-dependent strategy can generate better code.
 
 @hook TARGET_VECTORIZE_VEC_PERM_CONST
 
+@hook TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST
+
 @hook TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION
 
 @hook TARGET_VECTORIZE_BUILTIN_MD_VECTORIZED_FUNCTION
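In query mode (all rtxes null) the hook boils down to a single
arithmetic test: the divisor plus one must be exactly 2^(precision/2),
i.e. the divisor is the all-ones mask of the low half of the element.
A standalone sketch of that predicate (is_half_width_bitmask is a
made-up name; the hook itself uses wide_int rather than uint64_t):

    #include <stdbool.h>
    #include <stdint.h>

    /* True iff c == 2^(prec/2) - 1: 0xff for 16-bit elements,
       0xffff for 32-bit, 0xffffffff for 64-bit.  Mirrors
       wi::exact_log2 (cst + 1) == precision / 2 in the hook.  */
    static bool is_half_width_bitmask (uint64_t c, unsigned prec)
    {
      uint64_t v = c + 1;
      if (v == 0 || (v & (v - 1)) != 0)	/* not a power of two */
	return false;
      return __builtin_ctzll (v) == (int) (prec / 2);
    }

This is presumably also why the division by 0xfe in draw_bitmap2 of the
testcase further down is left alone: 0xfe + 1 is not a power of two.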
diff --git a/gcc/explow.cc b/gcc/explow.cc
index ddb4d6ae3600542f8d2bb5617cdd3933a9fae6c0..568e0eb1a158c696458ae678f5e346bf34ba0036 100644
--- a/gcc/explow.cc
+++ b/gcc/explow.cc
@@ -1037,7 +1037,7 @@ round_push (rtx size)
      TRUNC_DIV_EXPR.  */
   size = expand_binop (Pmode, add_optab, size, alignm1_rtx,
 		       NULL_RTX, 1, OPTAB_LIB_WIDEN);
-  size = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, size, align_rtx,
+  size = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, NULL, NULL, size, align_rtx,
 			NULL_RTX, 1);
   size = expand_mult (Pmode, size, align_rtx, NULL_RTX, 1);
 
@@ -1203,7 +1203,7 @@ align_dynamic_address (rtx target, unsigned required_align)
 			 gen_int_mode (required_align / BITS_PER_UNIT - 1,
 				       Pmode),
 			 NULL_RTX, 1, OPTAB_LIB_WIDEN);
-  target = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, target,
+  target = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, NULL, NULL, target,
 			  gen_int_mode (required_align / BITS_PER_UNIT,
 					Pmode),
 			  NULL_RTX, 1);
diff --git a/gcc/expmed.h b/gcc/expmed.h
index 0b2538c4c6bd51dfdc772ef70bdf631c0bed8717..0db2986f11ff4a4b10b59501c6f33cb3595659b5 100644
--- a/gcc/expmed.h
+++ b/gcc/expmed.h
@@ -708,8 +708,9 @@ extern rtx expand_variable_shift (enum tree_code, machine_mode,
 extern rtx expand_shift (enum tree_code, machine_mode, rtx, poly_int64, rtx,
 			 int);
 #ifdef GCC_OPTABS_H
-extern rtx expand_divmod (int, enum tree_code, machine_mode, rtx, rtx,
-			  rtx, int, enum optab_methods = OPTAB_LIB_WIDEN);
+extern rtx expand_divmod (int, enum tree_code, machine_mode, tree, tree,
+			  rtx, rtx, rtx, int,
+			  enum optab_methods = OPTAB_LIB_WIDEN);
 #endif
 #endif
diff --git a/gcc/expmed.cc b/gcc/expmed.cc
index 8d7418be418406e72a895ecddf2dc7fdb950c76c..b64ea5ac46a9da85770a5bb0990db8b97d3af414 100644
--- a/gcc/expmed.cc
+++ b/gcc/expmed.cc
@@ -4222,8 +4222,8 @@ expand_sdiv_pow2 (scalar_int_mode mode, rtx op0, HOST_WIDE_INT d)
 
 rtx
 expand_divmod (int rem_flag, enum tree_code code, machine_mode mode,
-	       rtx op0, rtx op1, rtx target, int unsignedp,
-	       enum optab_methods methods)
+	       tree treeop0, tree treeop1, rtx op0, rtx op1, rtx target,
+	       int unsignedp, enum optab_methods methods)
 {
   machine_mode compute_mode;
   rtx tquotient;
@@ -4375,6 +4375,14 @@ expand_divmod (int rem_flag, enum tree_code code, machine_mode mode,
 
   last_div_const = ! rem_flag && op1_is_constant ? INTVAL (op1) : 0;
 
+  /* Check if the target has specific expansions for the division.  */
+  if (treeop0
+      && targetm.vectorize.can_special_div_by_const (code, TREE_TYPE (treeop0),
+						     treeop0, treeop1,
+						     &target, op0, op1))
+    return target;
+
+
   /* Now convert to the best mode to use.  */
   if (compute_mode != mode)
     {
@@ -4618,8 +4626,8 @@ expand_divmod (int rem_flag, enum tree_code code, machine_mode mode,
 			|| (optab_handler (sdivmod_optab, int_mode)
 			    != CODE_FOR_nothing)))
 		  quotient = expand_divmod (0, TRUNC_DIV_EXPR,
-					    int_mode, op0,
-					    gen_int_mode (abs_d,
+					    int_mode, treeop0, treeop1,
+					    op0, gen_int_mode (abs_d,
 							  int_mode),
 					    NULL_RTX, 0);
 		else
@@ -4808,8 +4816,8 @@ expand_divmod (int rem_flag, enum tree_code code, machine_mode mode,
 				      size - 1, NULL_RTX, 0);
 		t3 = force_operand (gen_rtx_MINUS (int_mode, t1, nsign),
 				    NULL_RTX);
-		t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, t3, op1,
-				    NULL_RTX, 0);
+		t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, treeop0,
+				    treeop1, t3, op1, NULL_RTX, 0);
 		if (t4)
 		  {
 		    rtx t5;
diff --git a/gcc/expr.cc b/gcc/expr.cc
index 80bb1b8a4c5b8350fb1b8f57a99fd52e5882fcb6..b786f1d75e25f3410c0640cd96a8abc055fa34d9 100644
--- a/gcc/expr.cc
+++ b/gcc/expr.cc
@@ -8028,16 +8028,17 @@ force_operand (rtx value, rtx target)
       return expand_divmod (0,
 			    FLOAT_MODE_P (GET_MODE (value))
 			    ? RDIV_EXPR : TRUNC_DIV_EXPR,
-			    GET_MODE (value), op1, op2, target, 0);
+			    GET_MODE (value), NULL, NULL, op1, op2,
+			    target, 0);
     case MOD:
-      return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), op1, op2,
-			    target, 0);
+      return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), NULL, NULL,
+			    op1, op2, target, 0);
     case UDIV:
-      return expand_divmod (0, TRUNC_DIV_EXPR, GET_MODE (value), op1, op2,
-			    target, 1);
+      return expand_divmod (0, TRUNC_DIV_EXPR, GET_MODE (value), NULL, NULL,
+			    op1, op2, target, 1);
     case UMOD:
-      return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), op1, op2,
-			    target, 1);
+      return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), NULL, NULL,
+			    op1, op2, target, 1);
     case ASHIFTRT:
       return expand_simple_binop (GET_MODE (value), code, op1, op2,
 				  target, 0, OPTAB_LIB_WIDEN);
@@ -8990,11 +8991,13 @@ expand_expr_divmod (tree_code code, machine_mode mode, tree treeop0,
       bool speed_p = optimize_insn_for_speed_p ();
       do_pending_stack_adjust ();
       start_sequence ();
-      rtx uns_ret = expand_divmod (mod_p, code, mode, op0, op1, target, 1);
+      rtx uns_ret = expand_divmod (mod_p, code, mode, treeop0, treeop1,
+				   op0, op1, target, 1);
       rtx_insn *uns_insns = get_insns ();
       end_sequence ();
       start_sequence ();
-      rtx sgn_ret = expand_divmod (mod_p, code, mode, op0, op1, target, 0);
+      rtx sgn_ret = expand_divmod (mod_p, code, mode, treeop0, treeop1,
+				   op0, op1, target, 0);
       rtx_insn *sgn_insns = get_insns ();
       end_sequence ();
       unsigned uns_cost = seq_cost (uns_insns, speed_p);
@@ -9016,7 +9019,8 @@ expand_expr_divmod (tree_code code, machine_mode mode, tree treeop0,
       emit_insn (sgn_insns);
       return sgn_ret;
     }
-  return expand_divmod (mod_p, code, mode, op0, op1, target, unsignedp);
+  return expand_divmod (mod_p, code, mode, treeop0, treeop1,
+			op0, op1, target, unsignedp);
 }
 
 rtx
diff --git a/gcc/optabs.cc b/gcc/optabs.cc
index 165f8d1fa22432b96967c69a58dbb7b4bf18120d..cff37ccb0dfc3dd79b97d0abfd872f340855dc96 100644
--- a/gcc/optabs.cc
+++ b/gcc/optabs.cc
@@ -1104,8 +1104,9 @@ expand_doubleword_mod (machine_mode mode, rtx op0, rtx op1, bool unsignedp)
 		return NULL_RTX;
 	    }
 	}
-      rtx remainder = expand_divmod (1, TRUNC_MOD_EXPR, word_mode, sum,
-				     gen_int_mode (INTVAL (op1), word_mode),
+      rtx remainder = expand_divmod (1, TRUNC_MOD_EXPR, word_mode, NULL, NULL,
+				     sum, gen_int_mode (INTVAL (op1),
+							word_mode),
 				     NULL_RTX, 1, OPTAB_DIRECT);
       if (remainder == NULL_RTX)
 	return NULL_RTX;
 
@@ -1208,8 +1209,8 @@ expand_doubleword_divmod (machine_mode mode, rtx op0, rtx op1, rtx *rem,
 
   if (op11 != const1_rtx)
     {
-      rtx rem2 = expand_divmod (1, TRUNC_MOD_EXPR, mode, quot1, op11,
-				NULL_RTX, unsignedp, OPTAB_DIRECT);
+      rtx rem2 = expand_divmod (1, TRUNC_MOD_EXPR, mode, NULL, NULL, quot1,
+				op11, NULL_RTX, unsignedp, OPTAB_DIRECT);
       if (rem2 == NULL_RTX)
	return NULL_RTX;
 
@@ -1223,8 +1224,8 @@ expand_doubleword_divmod (machine_mode mode, rtx op0, rtx op1, rtx *rem,
       if (rem2 == NULL_RTX)
	return NULL_RTX;
 
-      rtx quot2 = expand_divmod (0, TRUNC_DIV_EXPR, mode, quot1, op11,
-				 NULL_RTX, unsignedp, OPTAB_DIRECT);
+      rtx quot2 = expand_divmod (0, TRUNC_DIV_EXPR, mode, NULL, NULL, quot1,
+				 op11, NULL_RTX, unsignedp, OPTAB_DIRECT);
       if (quot2 == NULL_RTX)
	return NULL_RTX;
 
diff --git a/gcc/target.def b/gcc/target.def
index 2a7fa68f83dd15dcdd2c332e8431e6142ec7d305..92ebd2af18fe8abb6ed95b07081cdd70113db9b1 100644
--- a/gcc/target.def
+++ b/gcc/target.def
@@ -1902,6 +1902,25 @@ implementation approaches itself.",
 	const vec_perm_indices &sel),
  NULL)
 
+DEFHOOK
+(can_special_div_by_const,
+ "This hook is used to test whether the target has a special method of\n\
+division of vectors of type @var{vectype} using the two operands @code{treeop0},\n\
+and @code{treeop1} and producing a vector of type @var{vectype}.  The division\n\
+will then not be decomposed by the vectorizer and kept as a div.\n\
+\n\
+When the hook is being used to test whether the target supports a special\n\
+divide, @var{in0}, @var{in1}, and @var{output} are all null.  When the hook\n\
+is being used to emit a division, @var{in0} and @var{in1} are the source\n\
+vectors of type @var{vectype} and @var{output} is the destination vector of\n\
+type @var{vectype}.\n\
+\n\
+Return true if the operation is possible, emitting instructions for it\n\
+if rtxes are provided and updating @var{output}.",
+ bool, (enum tree_code, tree vectype, tree treeop0, tree treeop1, rtx *output,
+	rtx in0, rtx in1),
+ default_can_special_div_by_const)
+
 /* Return true if the target supports misaligned store/load of a
    specific factor denoted in the third parameter.  The last parameter
    is true if the access is defined in a packed struct.  */
diff --git a/gcc/target.h b/gcc/target.h
index d6fa6931499d15edff3e5af3e429540d001c7058..c836036ac7fa7910d62bd3da56f39c061f68b665 100644
--- a/gcc/target.h
+++ b/gcc/target.h
@@ -51,6 +51,7 @@
 #include "insn-codes.h"
 #include "tm.h"
 #include "hard-reg-set.h"
+#include "tree-core.h"
 
 #if CHECKING_P
 
diff --git a/gcc/targhooks.h b/gcc/targhooks.h
index ecce55ebe797cedc940620e8d89816973a045d49..42451a3e22e86fee9da2f56e2640d63f936b336d 100644
--- a/gcc/targhooks.h
+++ b/gcc/targhooks.h
@@ -207,6 +207,8 @@ extern void default_addr_space_diagnose_usage (addr_space_t, location_t);
 extern rtx default_addr_space_convert (rtx, tree, tree);
 extern unsigned int default_case_values_threshold (void);
 extern bool default_have_conditional_execution (void);
+extern bool default_can_special_div_by_const (enum tree_code, tree, tree, tree,
+					      rtx *, rtx, rtx);
 
 extern bool default_libc_has_function (enum function_class, tree);
 extern bool default_libc_has_fast_function (int fcode);
diff --git a/gcc/targhooks.cc b/gcc/targhooks.cc
index b15ae19bcb60c59ae8112e67b5f06a241a9bdbf1..8206533382611a7640efba241279936ced41ee95 100644
--- a/gcc/targhooks.cc
+++ b/gcc/targhooks.cc
@@ -1807,6 +1807,14 @@ default_have_conditional_execution (void)
   return HAVE_conditional_execution;
 }
 
+/* Default that no division by constant operations are special.  */
+bool
+default_can_special_div_by_const (enum tree_code, tree, tree, tree, rtx *, rtx,
+				  rtx)
+{
+  return false;
+}
+
 /* By default we assume that c99 functions are present at the
    runtime, but sincos is not.  */
 bool
diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c
new file mode 100644
index 0000000000000000000000000000000000000000..472cd710534bc8aa9b1b4916f3d7b4d5b64a19b9
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c
@@ -0,0 +1,25 @@
+/* { dg-require-effective-target vect_int } */
+
+#include <stdint.h>
+#include "tree-vect.h"
+
+#define N 50
+#define TYPE uint8_t
+
+__attribute__((noipa, noinline, optimize("O1")))
+void fun1(TYPE* restrict pixel, TYPE level, int n)
+{
+  for (int i = 0; i < n; i+=1)
+    pixel[i] = (pixel[i] * level) / 0xff;
+}
+
+__attribute__((noipa, noinline, optimize("O3")))
+void fun2(TYPE* restrict pixel, TYPE level, int n)
+{
+  for (int i = 0; i < n; i+=1)
+    pixel[i] = (pixel[i] * level) / 0xff;
+}
+
+#include "vect-div-bitmask.h"
+
+/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: detected" "vect" { target aarch64*-*-* } } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c
new file mode 100644
index 0000000000000000000000000000000000000000..e904a71885b2e8487593a2cd3db75b3e4112e2cc
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c
@@ -0,0 +1,25 @@
+/* { dg-require-effective-target vect_int } */
+
+#include <stdint.h>
+#include "tree-vect.h"
+
+#define N 50
+#define TYPE uint16_t
+
+__attribute__((noipa, noinline, optimize("O1")))
+void fun1(TYPE* restrict pixel, TYPE level, int n)
+{
+  for (int i = 0; i < n; i+=1)
+    pixel[i] = (pixel[i] * level) / 0xffffU;
+}
+
+__attribute__((noipa, noinline, optimize("O3")))
+void fun2(TYPE* restrict pixel, TYPE level, int n)
+{
+  for (int i = 0; i < n; i+=1)
+    pixel[i] = (pixel[i] * level) / 0xffffU;
+}
+
+#include "vect-div-bitmask.h"
+
+/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: detected" "vect" { target aarch64*-*-* } } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c
new file mode 100644
index 0000000000000000000000000000000000000000..a1418ebbf5ea8731ed4e3e720157701d9d1cf852
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c
@@ -0,0 +1,26 @@
+/* { dg-require-effective-target vect_int } */
+/* { dg-additional-options "-fno-vect-cost-model" { target aarch64*-*-* } } */
+
+#include <stdint.h>
+#include "tree-vect.h"
+
+#define N 50
+#define TYPE uint32_t
+
+__attribute__((noipa, noinline, optimize("O1")))
+void fun1(TYPE* restrict pixel, TYPE level, int n)
+{
+  for (int i = 0; i < n; i+=1)
+    pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL;
+}
+
+__attribute__((noipa, noinline, optimize("O3")))
+void fun2(TYPE* restrict pixel, TYPE level, int n)
+{
+  for (int i = 0; i < n; i+=1)
+    pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL;
+}
+
+#include "vect-div-bitmask.h"
+
+/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: detected" "vect" { target aarch64*-*-* } } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h
new file mode 100644
index 0000000000000000000000000000000000000000..29a16739aa4b706616367bfd1832f28ebd07993e
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h
@@ -0,0 +1,43 @@
+#include <stdio.h>
+
+#ifndef N
+#define N 65
+#endif
+
+#ifndef TYPE
+#define TYPE uint32_t
+#endif
+
+#ifndef DEBUG
+#define DEBUG 0
+#endif
+
+#define BASE ((TYPE) -1 < 0 ? -126 : 4)
+
+int main ()
+{
+  TYPE a[N];
+  TYPE b[N];
+
+  for (int i = 0; i < N; ++i)
+    {
+      a[i] = BASE + i * 13;
+      b[i] = BASE + i * 13;
+      if (DEBUG)
+        printf ("%d: 0x%x\n", i, a[i]);
+    }
+
+  fun1 (a, N / 2, N);
+  fun2 (b, N / 2, N);
+
+  for (int i = 0; i < N; ++i)
+    {
+      if (DEBUG)
+        printf ("%d = 0x%x == 0x%x\n", i, a[i], b[i]);
+
+      if (a[i] != b[i])
+        __builtin_abort ();
+    }
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c
new file mode 100644
index 0000000000000000000000000000000000000000..2a535791ba7258302e0c2cf44ab211cd246d82d5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c
@@ -0,0 +1,61 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-O3 -std=c99" } */
+/* { dg-final { check-function-bodies "**" "" "" { target { le } } } } */
+
+#include <stdint.h>
+
+#pragma GCC target "+nosve"
+
+/*
+** draw_bitmap1:
+** ...
+** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h
+** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h
+** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b
+** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b
+** uzp2 v[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b
+** ...
+*/
+void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n)
+{
+  for (int i = 0; i < (n & -16); i+=1)
+    pixel[i] = (pixel[i] * level) / 0xff;
+}
+
+void draw_bitmap2(uint8_t* restrict pixel, uint8_t level, int n)
+{
+  for (int i = 0; i < (n & -16); i+=1)
+    pixel[i] = (pixel[i] * level) / 0xfe;
+}
+
+/*
+** draw_bitmap3:
+** ...
+** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s
+** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s
+** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h
+** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h
+** uzp2 v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8h
+** ...
+*/
+void draw_bitmap3(uint16_t* restrict pixel, uint16_t level, int n)
+{
+  for (int i = 0; i < (n & -16); i+=1)
+    pixel[i] = (pixel[i] * level) / 0xffffU;
+}
+
+/*
+** draw_bitmap4:
+** ...
+** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d
+** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d
+** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s
+** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s
+** uzp2 v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4s
+** ...
+*/
+void draw_bitmap4(uint32_t* restrict pixel, uint32_t level, int n)
+{
+  for (int i = 0; i < (n & -16); i+=1)
+    pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL;
+}
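Note the (uint64_t) cast in the 32-bit testcases above: the product has
to be formed in double precision before the division, both to avoid
wrapping in 32 bits and so the division the vectorizer sees really is a
64-bit value divided by the half-width mask 0xffffffff.  A minimal
contrast (illustrative only, not part of the patch):

    #include <stdint.h>

    uint32_t scaled (uint32_t pixel, uint32_t level)
    {
      /* Without the cast the multiply wraps modulo 2^32 before
	 the divide:
	 return (pixel * level) / 0xffffffffUL;  */

      /* As in the tests: widen first, then divide the full
	 64-bit product by the 32-bit all-ones mask.  */
      return (pixel * (uint64_t) level) / 0xffffffffUL;
    }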
diff --git a/gcc/tree-vect-generic.cc b/gcc/tree-vect-generic.cc
index 350129555a0c71c0896c4f1003163f3b3557c11b..ebee5e24b186915ebcb3a817c9a12046b6ec94f3 100644
--- a/gcc/tree-vect-generic.cc
+++ b/gcc/tree-vect-generic.cc
@@ -1237,6 +1237,14 @@ expand_vector_operation (gimple_stmt_iterator *gsi, tree type, tree compute_type
 	  tree rhs2 = gimple_assign_rhs2 (assign);
 	  tree ret;
 
+	  /* Check if the target was going to handle it through the special
+	     division callback hook.  */
+	  if (targetm.vectorize.can_special_div_by_const (code, type, rhs1,
+							  rhs2, NULL,
+							  NULL_RTX, NULL_RTX))
+	    return NULL_TREE;
+
+
 	  if (!optimize
 	      || !VECTOR_INTEGER_TYPE_P (type)
 	      || TREE_CODE (rhs2) != VECTOR_CST
diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 09574bb1a2696b3438a4ce9f09f74b42e784aca0..607acdf95eb30335d8bc0e85af0b1bfea10fe443 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -3596,6 +3596,12 @@ vect_recog_divmod_pattern (vec_info *vinfo,
 
       return pattern_stmt;
     }
+  else if (targetm.vectorize.can_special_div_by_const (rhs_code, vectype,
+						       oprnd0, oprnd1, NULL,
+						       NULL_RTX, NULL_RTX))
+    {
+      return NULL;
+    }
 
   if (prec > HOST_BITS_PER_WIDE_INT
      || integer_zerop (oprnd1))
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index c9dab217f059f17e91e9a7582523e627d7a45b66..6d05c48a7339de094d7288bd68e0e1c1e93faafe 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -6260,6 +6260,11 @@ vectorizable_operation (vec_info *vinfo,
 	}
       target_support_p = (optab_handler (optab, vec_mode)
 			  != CODE_FOR_nothing);
+      if (!target_support_p)
+	target_support_p
+	  = targetm.vectorize.can_special_div_by_const (code, vectype,
+							op0, op1, NULL,
+							NULL_RTX, NULL_RTX);
     }
 
   bool using_emulated_vectors_p = vect_emulated_vector_p (vectype);
Comments
Hi All,

Ping, and an updated patch based on the mid-end changes.

Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (@aarch64_bitmask_udiv<mode>3): New.
	* config/aarch64/aarch64.cc
	(aarch64_vectorize_can_special_div_by_constant): New.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/div-by-bitmask.c: New test.

--- inline copy of patch ---

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 587a45d77721e1b39accbad7dbeca4d741eccb10..f4152160084d6b6f34bd69f0ba6386c1ab50f77e 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -4831,6 +4831,65 @@ (define_expand "aarch64_<sur><addsub>hn2<mode>"
   }
 )
 
+;; div optimizations using narrowings
+;; we can do the division of e.g. shorts by 255 faster by calculating it as
+;; (x + ((x + 257) >> 8)) >> 8 assuming the operation is done in
+;; double the precision of x.
+;;
+;; If we imagine a short as being composed of two blocks of bytes then
+;; adding 257 or 0b0000_0001_0000_0001 to the number is equivalent to
+;; adding 1 to each sub component:
+;;
+;;      short value of 16-bits
+;; ┌──────────────┬────────────────┐
+;; │              │                │
+;; └──────────────┴────────────────┘
+;;   8-bit part1 ▲  8-bit part2   ▲
+;;               │                │
+;;               │                │
+;;              +1               +1
+;;
+;; after the first addition, we have to shift right by 8, and narrow the
+;; results back to a byte.  Remember that the addition must be done in
+;; double the precision of the input.  Since 8 is half the size of a short
+;; we can use a narrowing halving instruction in AArch64, addhn, which also
+;; does the addition in a wider precision and narrows back to a byte.  The
+;; shift itself is implicit in the operation as it writes back only the top
+;; half of the result, i.e. bits 2*esize-1:esize.
+;;
+;; Since we have narrowed the result of the first part back to a byte, for
+;; the second addition we can use a widening addition, uaddw.
+;;
+;; For the final shift, since it's unsigned arithmetic we emit an ushr by 8
+;; to perform the shift.
+;;
+;; The shift is later optimized by combine to a uzp2 with movi #0.
+(define_expand "@aarch64_bitmask_udiv<mode>3"
+ [(match_operand:VQN 0 "register_operand")
+  (match_operand:VQN 1 "register_operand")
+  (match_operand:VQN 2 "immediate_operand")]
+ "TARGET_SIMD"
+{
+  unsigned HOST_WIDE_INT size
+    = (1ULL << GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode)) - 1;
+  if (!CONST_VECTOR_P (operands[2])
+      || const_vector_encoded_nelts (operands[2]) != 1
+      || size != UINTVAL (CONST_VECTOR_ELT (operands[2], 0)))
+    FAIL;
+
+  rtx addend = gen_reg_rtx (<MODE>mode);
+  rtx val = aarch64_simd_gen_const_vector_dup (<VNARROWQ2>mode, 1);
+  emit_move_insn (addend, lowpart_subreg (<MODE>mode, val, <VNARROWQ2>mode));
+  rtx tmp1 = gen_reg_rtx (<VNARROWQ>mode);
+  rtx tmp2 = gen_reg_rtx (<MODE>mode);
+  emit_insn (gen_aarch64_addhn<mode> (tmp1, operands[1], addend));
+  unsigned bitsize = GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode);
+  rtx shift_vector = aarch64_simd_gen_const_vector_dup (<MODE>mode, bitsize);
+  emit_insn (gen_aarch64_uaddw<Vnarrowq> (tmp2, operands[1], tmp1));
+  emit_insn (gen_aarch64_simd_lshr<mode> (operands[0], tmp2, shift_vector));
+  DONE;
+})
+
 ;; pmul.
 
 (define_insn "aarch64_pmul<mode>"
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 4b486aeea90ea2afb9cdd96a4dbe15c5bb2abd7a..d3c3650d7d728f56adb65154127dc7b72386c5a7 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -24146,6 +24146,40 @@ aarch64_vectorize_vec_perm_const (machine_mode vmode, machine_mode op_mode,
   return ret;
 }
 
+/* Implement TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST.  */
+
+bool
+aarch64_vectorize_can_special_div_by_constant (enum tree_code code,
+					       tree vectype, wide_int cst,
+					       rtx *output, rtx in0, rtx in1)
+{
+  if (code != TRUNC_DIV_EXPR
+      || !TYPE_UNSIGNED (vectype))
+    return false;
+
+  unsigned int flags = aarch64_classify_vector_mode (TYPE_MODE (vectype));
+  if ((flags & VEC_ANY_SVE) && !TARGET_SVE2)
+    return false;
+
+  if (in0 == NULL_RTX && in1 == NULL_RTX)
+    {
+      wide_int val = wi::add (cst, 1);
+      int pow = wi::exact_log2 (val);
+      return pow == (int)(element_precision (vectype) / 2);
+    }
+
+  if (!VECTOR_TYPE_P (vectype))
+    return false;
+
+  gcc_assert (output);
+
+  if (!*output)
+    *output = gen_reg_rtx (TYPE_MODE (vectype));
+
+  emit_insn (gen_aarch64_bitmask_udiv3 (TYPE_MODE (vectype), *output, in0, in1));
+  return true;
+}
+
 /* Generate a byte permute mask for a register of mode MODE,
    which has NUNITS units.  */
 
@@ -27606,6 +27640,10 @@ aarch64_libgcc_floating_mode_supported_p
 #undef TARGET_VECTOR_ALIGNMENT
 #define TARGET_VECTOR_ALIGNMENT aarch64_simd_vector_alignment
 
+#undef TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST
+#define TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST \
+  aarch64_vectorize_can_special_div_by_constant
+
 #undef TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT
 #define TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT \
   aarch64_vectorize_preferred_vector_alignment
diff --git a/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c
new file mode 100644
index 0000000000000000000000000000000000000000..2a535791ba7258302e0c2cf44ab211cd246d82d5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c
@@ -0,0 +1,61 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-O3 -std=c99" } */
+/* { dg-final { check-function-bodies "**" "" "" { target { le } } } } */
+
+#include <stdint.h>
+
+#pragma GCC target "+nosve"
+
+/*
+** draw_bitmap1:
+** ...
+** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h
+** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h
+** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b
+** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b
+** uzp2 v[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b
+** ...
+*/
+void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n)
+{
+  for (int i = 0; i < (n & -16); i+=1)
+    pixel[i] = (pixel[i] * level) / 0xff;
+}
+
+void draw_bitmap2(uint8_t* restrict pixel, uint8_t level, int n)
+{
+  for (int i = 0; i < (n & -16); i+=1)
+    pixel[i] = (pixel[i] * level) / 0xfe;
+}
+
+/*
+** draw_bitmap3:
+** ...
+** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s
+** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s
+** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h
+** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h
+** uzp2 v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8h
+** ...
+*/
+void draw_bitmap3(uint16_t* restrict pixel, uint16_t level, int n)
+{
+  for (int i = 0; i < (n & -16); i+=1)
+    pixel[i] = (pixel[i] * level) / 0xffffU;
+}
+
+/*
+** draw_bitmap4:
+** ...
+** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d
+** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d
+** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s
+** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s
+** uzp2 v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4s
+** ...
+*/
+void draw_bitmap4(uint32_t* restrict pixel, uint32_t level, int n)
+{
+  for (int i = 0; i < (n & -16); i+=1)
+    pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL;
+}
> -----Original Message-----
> From: Tamar Christina <tamar.christina@arm.com>
> Sent: Friday, September 23, 2022 10:34 AM
> To: gcc-patches@gcc.gnu.org
> Cc: nd <nd@arm.com>; Richard Earnshaw <Richard.Earnshaw@arm.com>;
> Marcus Shawcroft <Marcus.Shawcroft@arm.com>; Kyrylo Tkachov
> <Kyrylo.Tkachov@arm.com>; Richard Sandiford <Richard.Sandiford@arm.com>
> Subject: [PATCH 2/4]AArch64 Add implementation for pow2 bitmask division.
> +;; > +;; If we imagine a short as being composed of two blocks of bytes then > +;; adding 257 or 0b0000_0001_0000_0001 to the number is equivalen to ;; > +adding 1 to each sub component: > +;; > +;; short value of 16-bits > +;; ┌──────────────┬────────────────┐ > +;; │ │ │ > +;; └──────────────┴────────────────┘ > +;; 8-bit part1 ▲ 8-bit part2 ▲ > +;; │ │ > +;; │ │ > +;; +1 +1 > +;; > +;; after the first addition, we have to shift right by 8, and narrow > +the ;; results back to a byte. Remember that the addition must be done > +in ;; double the precision of the input. Since 8 is half the size of a > +short ;; we can use a narrowing halfing instruction in AArch64, addhn > +which also ;; does the addition in a wider precision and narrows back > +to a byte. The ;; shift itself is implicit in the operation as it > +writes back only the top ;; half of the result. i.e. bits 2*esize-1:esize. > +;; > +;; Since we have narrowed the result of the first part back to a byte, > +for ;; the second addition we can use a widening addition, uaddw. > +;; > +;; For the finaly shift, since it's unsigned arithmatic we emit an ushr > +by 8 ;; to shift and the vectorizer. > +;; > +;; The shift is later optimized by combine to a uzp2 with movi #0. > +(define_expand "@aarch64_bitmask_udiv<mode>3" > + [(match_operand:VQN 0 "register_operand") > + (match_operand:VQN 1 "register_operand") > + (match_operand:VQN 2 "immediate_operand")] > + "TARGET_SIMD" > +{ > + unsigned HOST_WIDE_INT size > + = (1ULL << GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode)) - 1; > + if (!CONST_VECTOR_P (operands[2]) > + || const_vector_encoded_nelts (operands[2]) != 1 > + || size != UINTVAL (CONST_VECTOR_ELT (operands[2], 0))) > + FAIL; > + > + rtx addend = gen_reg_rtx (<MODE>mode); > + rtx val = aarch64_simd_gen_const_vector_dup (<VNARROWQ2>mode, 1); > + emit_move_insn (addend, lowpart_subreg (<MODE>mode, val, > +<VNARROWQ2>mode)); > + rtx tmp1 = gen_reg_rtx (<VNARROWQ>mode); > + rtx tmp2 = gen_reg_rtx (<MODE>mode); > + emit_insn (gen_aarch64_addhn<mode> (tmp1, operands[1], addend)); > + unsigned bitsize = GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode); > + rtx shift_vector = aarch64_simd_gen_const_vector_dup (<MODE>mode, > +bitsize); > + emit_insn (gen_aarch64_uaddw<Vnarrowq> (tmp2, operands[1], tmp1)); > + emit_insn (gen_aarch64_simd_lshr<mode> (operands[0], tmp2, > +shift_vector)); > + DONE; > +}) > + > ;; pmul. > > (define_insn "aarch64_pmul<mode>" > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc > index > 4b486aeea90ea2afb9cdd96a4dbe15c5bb2abd7a..91bb7d306f36dc4c9eeaafc3 > 7484b6fc6901bfb4 100644 > --- a/gcc/config/aarch64/aarch64.cc > +++ b/gcc/config/aarch64/aarch64.cc > @@ -24146,6 +24146,51 @@ aarch64_vectorize_vec_perm_const > (machine_mode vmode, machine_mode op_mode, > return ret; > } > > +/* Implement TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST. 
*/ > + > +bool > +aarch64_vectorize_can_special_div_by_constant (enum tree_code code, > + tree vectype, > + tree treeop0, tree treeop1, > + rtx *output, rtx in0, rtx in1) { > + > + if ((!treeop0 || !treeop1) && (in0 == NULL_RTX || in1 == NULL_RTX)) > + return false; > + > + tree cst = uniform_integer_cst_p (treeop1); tree type; if (code != > + TRUNC_DIV_EXPR > + || !cst > + || !TYPE_UNSIGNED ((type = TREE_TYPE (cst))) > + || tree_int_cst_sgn (cst) != 1) > + return false; > + > + unsigned int flags = aarch64_classify_vector_mode (TYPE_MODE > + (vectype)); if ((flags & VEC_ANY_SVE) && !TARGET_SVE2) > + return false; > + > + if (in0 == NULL_RTX && in1 == NULL_RTX) > + { > + gcc_assert (treeop0 && treeop1); > + wide_int icst = wi::to_wide (cst); > + wide_int val = wi::add (icst, 1); > + int pow = wi::exact_log2 (val); > + return pow == (TYPE_PRECISION (type) / 2); > + } > + > + if (!VECTOR_TYPE_P (vectype)) > + return false; > + > + gcc_assert (output); > + > + if (!*output) > + *output = gen_reg_rtx (TYPE_MODE (vectype)); > + > + emit_insn (gen_aarch64_bitmask_udiv3 (TYPE_MODE (vectype), *output, > +in0, in1)); > + return true; > +} > + > /* Generate a byte permute mask for a register of mode MODE, > which has NUNITS units. */ > > diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi index > 92bda1a7e14a3c9ea63e151e4a49a818bf4d1bdb..adba9fe97a9b43729c5e86d2 > 44a2a23e76cac097 100644 > --- a/gcc/doc/tm.texi > +++ b/gcc/doc/tm.texi > @@ -6112,6 +6112,22 @@ instruction pattern. There is no need for the hook > to handle these two implementation approaches itself. > @end deftypefn > > +@deftypefn {Target Hook} bool > TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST > +(enum @var{tree_code}, tree @var{vectype}, tree @var{treeop0}, tree > +@var{treeop1}, rtx *@var{output}, rtx @var{in0}, rtx @var{in1}) This > +hook is used to test whether the target has a special method of > +division of vectors of type @var{vectype} using the two operands > @code{treeop0}, and @code{treeop1} and producing a vector of type > @var{vectype}. The division will then not be decomposed by the and kept as > a div. > + > +When the hook is being used to test whether the target supports a > +special divide, @var{in0}, @var{in1}, and @var{output} are all null. > +When the hook is being used to emit a division, @var{in0} and @var{in1} > +are the source vectors of type @var{vecttype} and @var{output} is the > +destination vector of type @var{vectype}. > + > +Return true if the operation is possible, emitting instructions for it > +if rtxes are provided and updating @var{output}. > +@end deftypefn > + > @deftypefn {Target Hook} tree > TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION (unsigned > @var{code}, tree @var{vec_type_out}, tree @var{vec_type_in}) This hook > should return the decl of a function that implements the vectorized variant > of the function with the @code{combined_fn} code diff --git > a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in index > 112462310b134705d860153294287cfd7d4af81d..d5a745a02acdf051ea1da1b04 > 076d058c24ce093 100644 > --- a/gcc/doc/tm.texi.in > +++ b/gcc/doc/tm.texi.in > @@ -4164,6 +4164,8 @@ address; but often a machine-dependent strategy > can generate better code. 
> > @hook TARGET_VECTORIZE_VEC_PERM_CONST > > +@hook TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST > + > @hook TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION > > @hook TARGET_VECTORIZE_BUILTIN_MD_VECTORIZED_FUNCTION > diff --git a/gcc/explow.cc b/gcc/explow.cc index > ddb4d6ae3600542f8d2bb5617cdd3933a9fae6c0..568e0eb1a158c696458ae678f > 5e346bf34ba0036 100644 > --- a/gcc/explow.cc > +++ b/gcc/explow.cc > @@ -1037,7 +1037,7 @@ round_push (rtx size) > TRUNC_DIV_EXPR. */ > size = expand_binop (Pmode, add_optab, size, alignm1_rtx, > NULL_RTX, 1, OPTAB_LIB_WIDEN); > - size = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, size, align_rtx, > + size = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, NULL, NULL, size, > + align_rtx, > NULL_RTX, 1); > size = expand_mult (Pmode, size, align_rtx, NULL_RTX, 1); > > @@ -1203,7 +1203,7 @@ align_dynamic_address (rtx target, unsigned > required_align) > gen_int_mode (required_align / BITS_PER_UNIT - 1, > Pmode), > NULL_RTX, 1, OPTAB_LIB_WIDEN); > - target = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, target, > + target = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, NULL, NULL, > target, > gen_int_mode (required_align / BITS_PER_UNIT, > Pmode), > NULL_RTX, 1); > diff --git a/gcc/expmed.h b/gcc/expmed.h index > 0b2538c4c6bd51dfdc772ef70bdf631c0bed8717..0db2986f11ff4a4b10b59501c6 > f33cb3595659b5 100644 > --- a/gcc/expmed.h > +++ b/gcc/expmed.h > @@ -708,8 +708,9 @@ extern rtx expand_variable_shift (enum tree_code, > machine_mode, extern rtx expand_shift (enum tree_code, machine_mode, > rtx, poly_int64, rtx, > int); > #ifdef GCC_OPTABS_H > -extern rtx expand_divmod (int, enum tree_code, machine_mode, rtx, rtx, > - rtx, int, enum optab_methods = > OPTAB_LIB_WIDEN); > +extern rtx expand_divmod (int, enum tree_code, machine_mode, tree, > tree, > + rtx, rtx, rtx, int, > + enum optab_methods = OPTAB_LIB_WIDEN); > #endif > #endif > > diff --git a/gcc/expmed.cc b/gcc/expmed.cc index > 8d7418be418406e72a895ecddf2dc7fdb950c76c..b64ea5ac46a9da85770a5bb09 > 90db8b97d3af414 100644 > --- a/gcc/expmed.cc > +++ b/gcc/expmed.cc > @@ -4222,8 +4222,8 @@ expand_sdiv_pow2 (scalar_int_mode mode, rtx > op0, HOST_WIDE_INT d) > > rtx > expand_divmod (int rem_flag, enum tree_code code, machine_mode > mode, > - rtx op0, rtx op1, rtx target, int unsignedp, > - enum optab_methods methods) > + tree treeop0, tree treeop1, rtx op0, rtx op1, rtx target, > + int unsignedp, enum optab_methods methods) > { > machine_mode compute_mode; > rtx tquotient; > @@ -4375,6 +4375,14 @@ expand_divmod (int rem_flag, enum tree_code > code, machine_mode mode, > > last_div_const = ! rem_flag && op1_is_constant ? INTVAL (op1) : 0; > > + /* Check if the target has specific expansions for the division. */ > + if (treeop0 > + && targetm.vectorize.can_special_div_by_const (code, TREE_TYPE > (treeop0), > + treeop0, treeop1, > + &target, op0, op1)) > + return target; > + > + > /* Now convert to the best mode to use. 
*/ > if (compute_mode != mode) > { > @@ -4618,8 +4626,8 @@ expand_divmod (int rem_flag, enum tree_code > code, machine_mode mode, > || (optab_handler (sdivmod_optab, int_mode) > != CODE_FOR_nothing))) > quotient = expand_divmod (0, TRUNC_DIV_EXPR, > - int_mode, op0, > - gen_int_mode (abs_d, > + int_mode, treeop0, treeop1, > + op0, gen_int_mode (abs_d, > int_mode), > NULL_RTX, 0); > else > @@ -4808,8 +4816,8 @@ expand_divmod (int rem_flag, enum tree_code > code, machine_mode mode, > size - 1, NULL_RTX, 0); > t3 = force_operand (gen_rtx_MINUS (int_mode, t1, nsign), > NULL_RTX); > - t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, t3, > op1, > - NULL_RTX, 0); > + t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, > treeop0, > + treeop1, t3, op1, NULL_RTX, 0); > if (t4) > { > rtx t5; > diff --git a/gcc/expr.cc b/gcc/expr.cc > index > 80bb1b8a4c5b8350fb1b8f57a99fd52e5882fcb6..b786f1d75e25f3410c0640cd96 > a8abc055fa34d9 100644 > --- a/gcc/expr.cc > +++ b/gcc/expr.cc > @@ -8028,16 +8028,17 @@ force_operand (rtx value, rtx target) > return expand_divmod (0, > FLOAT_MODE_P (GET_MODE (value)) > ? RDIV_EXPR : TRUNC_DIV_EXPR, > - GET_MODE (value), op1, op2, target, 0); > + GET_MODE (value), NULL, NULL, op1, op2, > + target, 0); > case MOD: > - return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > op1, op2, > - target, 0); > + return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > NULL, NULL, > + op1, op2, target, 0); > case UDIV: > - return expand_divmod (0, TRUNC_DIV_EXPR, GET_MODE (value), > op1, op2, > - target, 1); > + return expand_divmod (0, TRUNC_DIV_EXPR, GET_MODE (value), > NULL, NULL, > + op1, op2, target, 1); > case UMOD: > - return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > op1, op2, > - target, 1); > + return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > NULL, NULL, > + op1, op2, target, 1); > case ASHIFTRT: > return expand_simple_binop (GET_MODE (value), code, op1, op2, > target, 0, OPTAB_LIB_WIDEN); > @@ -8990,11 +8991,13 @@ expand_expr_divmod (tree_code code, > machine_mode mode, tree treeop0, > bool speed_p = optimize_insn_for_speed_p (); > do_pending_stack_adjust (); > start_sequence (); > - rtx uns_ret = expand_divmod (mod_p, code, mode, op0, op1, target, 1); > + rtx uns_ret = expand_divmod (mod_p, code, mode, treeop0, treeop1, > + op0, op1, target, 1); > rtx_insn *uns_insns = get_insns (); > end_sequence (); > start_sequence (); > - rtx sgn_ret = expand_divmod (mod_p, code, mode, op0, op1, target, 0); > + rtx sgn_ret = expand_divmod (mod_p, code, mode, treeop0, treeop1, > + op0, op1, target, 0); > rtx_insn *sgn_insns = get_insns (); > end_sequence (); > unsigned uns_cost = seq_cost (uns_insns, speed_p); @@ -9016,7 +9019,8 > @@ expand_expr_divmod (tree_code code, machine_mode mode, tree > treeop0, > emit_insn (sgn_insns); > return sgn_ret; > } > - return expand_divmod (mod_p, code, mode, op0, op1, target, unsignedp); > + return expand_divmod (mod_p, code, mode, treeop0, treeop1, > + op0, op1, target, unsignedp); > } > > rtx > diff --git a/gcc/optabs.cc b/gcc/optabs.cc index > 165f8d1fa22432b96967c69a58dbb7b4bf18120d..cff37ccb0dfc3dd79b97d0abfd > 872f340855dc96 100644 > --- a/gcc/optabs.cc > +++ b/gcc/optabs.cc > @@ -1104,8 +1104,9 @@ expand_doubleword_mod (machine_mode mode, > rtx op0, rtx op1, bool unsignedp) > return NULL_RTX; > } > } > - rtx remainder = expand_divmod (1, TRUNC_MOD_EXPR, word_mode, > sum, > - gen_int_mode (INTVAL (op1), > word_mode), > + rtx remainder = expand_divmod (1, TRUNC_MOD_EXPR, word_mode, > NULL, NULL, > + sum, gen_int_mode 
(INTVAL (op1), > + word_mode), > NULL_RTX, 1, OPTAB_DIRECT); > if (remainder == NULL_RTX) > return NULL_RTX; > @@ -1208,8 +1209,8 @@ expand_doubleword_divmod (machine_mode > mode, rtx op0, rtx op1, rtx *rem, > > if (op11 != const1_rtx) > { > - rtx rem2 = expand_divmod (1, TRUNC_MOD_EXPR, mode, quot1, op11, > - NULL_RTX, unsignedp, OPTAB_DIRECT); > + rtx rem2 = expand_divmod (1, TRUNC_MOD_EXPR, mode, NULL, NULL, > quot1, > + op11, NULL_RTX, unsignedp, > OPTAB_DIRECT); > if (rem2 == NULL_RTX) > return NULL_RTX; > > @@ -1223,8 +1224,8 @@ expand_doubleword_divmod (machine_mode > mode, rtx op0, rtx op1, rtx *rem, > if (rem2 == NULL_RTX) > return NULL_RTX; > > - rtx quot2 = expand_divmod (0, TRUNC_DIV_EXPR, mode, quot1, op11, > - NULL_RTX, unsignedp, OPTAB_DIRECT); > + rtx quot2 = expand_divmod (0, TRUNC_DIV_EXPR, mode, NULL, NULL, > quot1, > + op11, NULL_RTX, unsignedp, > OPTAB_DIRECT); > if (quot2 == NULL_RTX) > return NULL_RTX; > > diff --git a/gcc/target.def b/gcc/target.def index > 2a7fa68f83dd15dcdd2c332e8431e6142ec7d305..92ebd2af18fe8abb6ed95b070 > 81cdd70113db9b1 100644 > --- a/gcc/target.def > +++ b/gcc/target.def > @@ -1902,6 +1902,25 @@ implementation approaches itself.", > const vec_perm_indices &sel), > NULL) > > +DEFHOOK > +(can_special_div_by_const, > + "This hook is used to test whether the target has a special method > +of\n\ division of vectors of type @var{vectype} using the two operands > +@code{treeop0}\n\ and @code{treeop1} and producing a vector of type > +@var{vectype}. The division\n\ will then not be decomposed by the vectorizer and > +kept as a div.\n\ \n\ When the hook is being used to test whether the > +target supports a special\n\ divide, @var{in0}, @var{in1}, and > +@var{output} are all null. When the hook\n\ is being used to emit a > +division, @var{in0} and @var{in1} are the source\n\ vectors of type > +@var{vectype} and @var{output} is the destination vector of\n\ type > +@var{vectype}.\n\ \n\ Return true if the operation is possible, > +emitting instructions for it\n\ if rtxes are provided and updating > +@var{output}.", bool, (enum tree_code, tree vectype, tree treeop0, > +tree treeop1, rtx *output, > + rtx in0, rtx in1), > + default_can_special_div_by_const) > + > /* Return true if the target supports misaligned store/load of a > specific factor denoted in the third parameter. The last parameter > is true if the access is defined in a packed struct. 
*/ diff --git a/gcc/target.h > b/gcc/target.h index > d6fa6931499d15edff3e5af3e429540d001c7058..c836036ac7fa7910d62bd3da56 > f39c061f68b665 100644 > --- a/gcc/target.h > +++ b/gcc/target.h > @@ -51,6 +51,7 @@ > #include "insn-codes.h" > #include "tm.h" > #include "hard-reg-set.h" > +#include "tree-core.h" > > #if CHECKING_P > > diff --git a/gcc/targhooks.h b/gcc/targhooks.h index > ecce55ebe797cedc940620e8d89816973a045d49..42451a3e22e86fee9da2f56e > 2640d63f936b336d 100644 > --- a/gcc/targhooks.h > +++ b/gcc/targhooks.h > @@ -207,6 +207,8 @@ extern void default_addr_space_diagnose_usage > (addr_space_t, location_t); extern rtx default_addr_space_convert (rtx, > tree, tree); extern unsigned int default_case_values_threshold (void); > extern bool default_have_conditional_execution (void); > +extern bool default_can_special_div_by_const (enum tree_code, tree, > tree, tree, > + rtx *, rtx, rtx); > > extern bool default_libc_has_function (enum function_class, tree); extern > bool default_libc_has_fast_function (int fcode); diff --git a/gcc/targhooks.cc > b/gcc/targhooks.cc index > b15ae19bcb60c59ae8112e67b5f06a241a9bdbf1..8206533382611a7640efba241 > 279936ced41ee95 100644 > --- a/gcc/targhooks.cc > +++ b/gcc/targhooks.cc > @@ -1807,6 +1807,14 @@ default_have_conditional_execution (void) > return HAVE_conditional_execution; > } > > +/* Default that no division by constant operations are special. */ > +bool default_can_special_div_by_const (enum tree_code, tree, tree, > +tree, rtx *, rtx, > + rtx) > +{ > + return false; > +} > + > /* By default we assume that c99 functions are present at the runtime, > but sincos is not. */ > bool > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > new file mode 100644 > index > 0000000000000000000000000000000000000000..472cd710534bc8aa9b1b4916f3 > d7b4d5b64a19b9 > --- /dev/null > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > @@ -0,0 +1,25 @@ > +/* { dg-require-effective-target vect_int } */ > + > +#include <stdint.h> > +#include "tree-vect.h" > + > +#define N 50 > +#define TYPE uint8_t > + > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > +restrict pixel, TYPE level, int n) { > + for (int i = 0; i < n; i+=1) > + pixel[i] = (pixel[i] * level) / 0xff; } > + > +__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > +restrict pixel, TYPE level, int n) { > + for (int i = 0; i < n; i+=1) > + pixel[i] = (pixel[i] * level) / 0xff; } > + > +#include "vect-div-bitmask.h" > + > +/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > +detected" "vect" { target aarch64*-*-* } } } */ > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > new file mode 100644 > index > 0000000000000000000000000000000000000000..e904a71885b2e8487593a2cd3 > db75b3e4112e2cc > --- /dev/null > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > @@ -0,0 +1,25 @@ > +/* { dg-require-effective-target vect_int } */ > + > +#include <stdint.h> > +#include "tree-vect.h" > + > +#define N 50 > +#define TYPE uint16_t > + > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > +restrict pixel, TYPE level, int n) { > + for (int i = 0; i < n; i+=1) > + pixel[i] = (pixel[i] * level) / 0xffffU; } > + > +__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > +restrict pixel, TYPE level, int n) { > + for (int i = 0; i < n; i+=1) > + pixel[i] = (pixel[i] * level) / 0xffffU; } > + > +#include "vect-div-bitmask.h" > + > 
+/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > +detected" "vect" { target aarch64*-*-* } } } */ > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > new file mode 100644 > index > 0000000000000000000000000000000000000000..a1418ebbf5ea8731ed4e3e720 > 157701d9d1cf852 > --- /dev/null > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > @@ -0,0 +1,26 @@ > +/* { dg-require-effective-target vect_int } */ > +/* { dg-additional-options "-fno-vect-cost-model" { target aarch64*-*-* > +} } */ > + > +#include <stdint.h> > +#include "tree-vect.h" > + > +#define N 50 > +#define TYPE uint32_t > + > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > +restrict pixel, TYPE level, int n) { > + for (int i = 0; i < n; i+=1) > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > + > +__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > +restrict pixel, TYPE level, int n) { > + for (int i = 0; i < n; i+=1) > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > + > +#include "vect-div-bitmask.h" > + > +/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > +detected" "vect" { target aarch64*-*-* } } } */ > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > new file mode 100644 > index > 0000000000000000000000000000000000000000..29a16739aa4b706616367bfd1 > 832f28ebd07993e > --- /dev/null > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > @@ -0,0 +1,43 @@ > +#include <stdio.h> > + > +#ifndef N > +#define N 65 > +#endif > + > +#ifndef TYPE > +#define TYPE uint32_t > +#endif > + > +#ifndef DEBUG > +#define DEBUG 0 > +#endif > + > +#define BASE ((TYPE) -1 < 0 ? -126 : 4) > + > +int main () > +{ > + TYPE a[N]; > + TYPE b[N]; > + > + for (int i = 0; i < N; ++i) > + { > + a[i] = BASE + i * 13; > + b[i] = BASE + i * 13; > + if (DEBUG) > + printf ("%d: 0x%x\n", i, a[i]); > + } > + > + fun1 (a, N / 2, N); > + fun2 (b, N / 2, N); > + > + for (int i = 0; i < N; ++i) > + { > + if (DEBUG) > + printf ("%d = 0x%x == 0x%x\n", i, a[i], b[i]); > + > + if (a[i] != b[i]) > + __builtin_abort (); > + } > + return 0; > +} > + > diff --git a/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > new file mode 100644 > index > 0000000000000000000000000000000000000000..2a535791ba7258302e0c2cf44a > b211cd246d82d5 > --- /dev/null > +++ b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > @@ -0,0 +1,61 @@ > +/* { dg-do compile } */ > +/* { dg-additional-options "-O3 -std=c99" } */ > +/* { dg-final { check-function-bodies "**" "" "" { target { le } } } } > +*/ > + > +#include <stdint.h> > + > +#pragma GCC target "+nosve" > + > +/* > +** draw_bitmap1: > +** ... > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > +** uzp2 v[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b > +** ... > +*/ > +void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n) { > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xff; } > + > +void draw_bitmap2(uint8_t* restrict pixel, uint8_t level, int n) { > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xfe; } > + > +/* > +** draw_bitmap3: > +** ... 
> +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > +** uzp2 v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8h > +** ... > +*/ > +void draw_bitmap3(uint16_t* restrict pixel, uint16_t level, int n) { > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xffffU; } > + > +/* > +** draw_bitmap4: > +** ... > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > +** uzp2 v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4s > +** ... > +*/ > +void draw_bitmap4(uint32_t* restrict pixel, uint32_t level, int n) { > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > diff --git a/gcc/tree-vect-generic.cc b/gcc/tree-vect-generic.cc index > 350129555a0c71c0896c4f1003163f3b3557c11b..ebee5e24b186915ebcb3a817c > 9a12046b6ec94f3 100644 > --- a/gcc/tree-vect-generic.cc > +++ b/gcc/tree-vect-generic.cc > @@ -1237,6 +1237,14 @@ expand_vector_operation (gimple_stmt_iterator > *gsi, tree type, tree compute_type > tree rhs2 = gimple_assign_rhs2 (assign); > tree ret; > > + /* Check if the target was going to handle it through the special > + division callback hook. */ > + if (targetm.vectorize.can_special_div_by_const (code, type, rhs1, > + rhs2, NULL, > + NULL_RTX, > NULL_RTX)) > + return NULL_TREE; > + > + > if (!optimize > || !VECTOR_INTEGER_TYPE_P (type) > || TREE_CODE (rhs2) != VECTOR_CST diff --git a/gcc/tree-vect- > patterns.cc b/gcc/tree-vect-patterns.cc index > 09574bb1a2696b3438a4ce9f09f74b42e784aca0..607acdf95eb30335d8bc0e85af > 0b1bfea10fe443 100644 > --- a/gcc/tree-vect-patterns.cc > +++ b/gcc/tree-vect-patterns.cc > @@ -3596,6 +3596,12 @@ vect_recog_divmod_pattern (vec_info *vinfo, > > return pattern_stmt; > } > + else if (targetm.vectorize.can_special_div_by_const (rhs_code, vectype, > + oprnd0, oprnd1, NULL, > + NULL_RTX, NULL_RTX)) > + { > + return NULL; > + } > > if (prec > HOST_BITS_PER_WIDE_INT > || integer_zerop (oprnd1)) > diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc index > c9dab217f059f17e91e9a7582523e627d7a45b66..6d05c48a7339de094d7288bd6 > 8e0e1c1e93faafe 100644 > --- a/gcc/tree-vect-stmts.cc > +++ b/gcc/tree-vect-stmts.cc > @@ -6260,6 +6260,11 @@ vectorizable_operation (vec_info *vinfo, > } > target_support_p = (optab_handler (optab, vec_mode) > != CODE_FOR_nothing); > + if (!target_support_p) > + target_support_p > + = targetm.vectorize.can_special_div_by_const (code, vectype, > + op0, op1, NULL, > + NULL_RTX, > NULL_RTX); > } > > bool using_emulated_vectors_p = vect_emulated_vector_p (vectype); > > > > > --
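For reference, the new hook above is invoked in two forms, per its documentation. A sketch of the calling convention (illustrative only; the surrounding variable names are assumed, not taken from the patch):

  /* Query form: all rtx arguments are null; this only asks whether the
     target can handle the division (the tree-vect-*.cc callers use this).  */
  bool can_do
    = targetm.vectorize.can_special_div_by_const (TRUNC_DIV_EXPR, vectype,
                                                  treeop0, treeop1,
                                                  NULL, NULL_RTX, NULL_RTX);

  /* Emit form: rtx inputs are supplied; the hook allocates *output if it
     is still null and emits the instructions (as in expand_divmod).  */
  rtx out = NULL_RTX;
  if (targetm.vectorize.can_special_div_by_const (TRUNC_DIV_EXPR, vectype,
                                                  treeop0, treeop1,
                                                  &out, op0, op1))
    return out;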
Ping > -----Original Message----- > From: Tamar Christina > Sent: Monday, October 31, 2022 11:35 AM > To: 'Tamar Christina' <tamar.christina@arm.com>; gcc-patches@gcc.gnu.org > Cc: nd <nd@arm.com>; Richard Earnshaw <Richard.Earnshaw@arm.com>; > Marcus Shawcroft <Marcus.Shawcroft@arm.com>; Kyrylo Tkachov > <Kyrylo.Tkachov@arm.com>; Richard Sandiford > <Richard.Sandiford@arm.com> > Subject: RE: [PATCH 2/4]AArch64 Add implementation for pow2 bitmask > division. > > Hi All, > > Ping, and updated patch based on mid-end changes. > > Bootstrapped Regtested on aarch64-none-linux-gnu and no issues. > > Ok for master? > > Thanks, > Tamar > > gcc/ChangeLog: > > * config/aarch64/aarch64-simd.md > (@aarch64_bitmask_udiv<mode>3): New. > * config/aarch64/aarch64.cc > (aarch64_vectorize_can_special_div_by_constant): New. > > gcc/testsuite/ChangeLog: > > * gcc.target/aarch64/div-by-bitmask.c: New test. > > --- inline copy of patch --- > > diff --git a/gcc/config/aarch64/aarch64-simd.md > b/gcc/config/aarch64/aarch64-simd.md > index > 587a45d77721e1b39accbad7dbeca4d741eccb10..f4152160084d6b6f34bd69f0b > a6386c1ab50f77e 100644 > --- a/gcc/config/aarch64/aarch64-simd.md > +++ b/gcc/config/aarch64/aarch64-simd.md > @@ -4831,6 +4831,65 @@ (define_expand > "aarch64_<sur><addsub>hn2<mode>" > } > ) > > +;; div optimizations using narrowings > +;; we can do the division e.g. shorts by 255 faster by calculating it > +as ;; (x + ((x + 257) >> 8)) >> 8 assuming the operation is done in ;; > +double the precision of x. > +;; > +;; If we imagine a short as being composed of two blocks of bytes then > +;; adding 257 or 0b0000_0001_0000_0001 to the number is equivalen to ;; > +adding 1 to each sub component: > +;; > +;; short value of 16-bits > +;; ┌──────────────┬────────────────┐ > +;; │ │ │ > +;; └──────────────┴────────────────┘ > +;; 8-bit part1 ▲ 8-bit part2 ▲ > +;; │ │ > +;; │ │ > +;; +1 +1 > +;; > +;; after the first addition, we have to shift right by 8, and narrow > +the ;; results back to a byte. Remember that the addition must be done > +in ;; double the precision of the input. Since 8 is half the size of a > +short ;; we can use a narrowing halfing instruction in AArch64, addhn > +which also ;; does the addition in a wider precision and narrows back > +to a byte. The ;; shift itself is implicit in the operation as it > +writes back only the top ;; half of the result. i.e. bits 2*esize-1:esize. > +;; > +;; Since we have narrowed the result of the first part back to a byte, > +for ;; the second addition we can use a widening addition, uaddw. > +;; > +;; For the finaly shift, since it's unsigned arithmatic we emit an ushr > +by 8 ;; to shift and the vectorizer. > +;; > +;; The shift is later optimized by combine to a uzp2 with movi #0. 
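As a scalar model of the three steps just described, for one 16-bit lane holding a byte product (a sketch for intuition only; the function name is illustrative):

  #include <stdint.h>

  /* x is a product of two bytes, so x <= 255 * 255 and neither
     16-bit addition below can wrap.  */
  uint8_t div_by_255 (uint16_t x)
  {
    uint8_t  hi  = (uint16_t) (x + 257) >> 8;  /* addhn: add 257, keep top half.  */
    uint16_t sum = x + hi;                     /* uaddw: widening add.  */
    return sum >> 8;                           /* final ushr #8, later a uzp2.  */
  }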
> +(define_expand "@aarch64_bitmask_udiv<mode>3" > + [(match_operand:VQN 0 "register_operand") > + (match_operand:VQN 1 "register_operand") > + (match_operand:VQN 2 "immediate_operand")] > + "TARGET_SIMD" > +{ > + unsigned HOST_WIDE_INT size > + = (1ULL << GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode)) - 1; > + if (!CONST_VECTOR_P (operands[2]) > + || const_vector_encoded_nelts (operands[2]) != 1 > + || size != UINTVAL (CONST_VECTOR_ELT (operands[2], 0))) > + FAIL; > + > + rtx addend = gen_reg_rtx (<MODE>mode); > + rtx val = aarch64_simd_gen_const_vector_dup (<VNARROWQ2>mode, 1); > + emit_move_insn (addend, lowpart_subreg (<MODE>mode, val, > +<VNARROWQ2>mode)); > + rtx tmp1 = gen_reg_rtx (<VNARROWQ>mode); > + rtx tmp2 = gen_reg_rtx (<MODE>mode); > + emit_insn (gen_aarch64_addhn<mode> (tmp1, operands[1], addend)); > + unsigned bitsize = GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode); > + rtx shift_vector = aarch64_simd_gen_const_vector_dup (<MODE>mode, > +bitsize); > + emit_insn (gen_aarch64_uaddw<Vnarrowq> (tmp2, operands[1], tmp1)); > + emit_insn (gen_aarch64_simd_lshr<mode> (operands[0], tmp2, > +shift_vector)); > + DONE; > +}) > + > ;; pmul. > > (define_insn "aarch64_pmul<mode>" > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc > index > 4b486aeea90ea2afb9cdd96a4dbe15c5bb2abd7a..d3c3650d7d728f56adb65154 > 127dc7b72386c5a7 100644 > --- a/gcc/config/aarch64/aarch64.cc > +++ b/gcc/config/aarch64/aarch64.cc > @@ -24146,6 +24146,40 @@ aarch64_vectorize_vec_perm_const > (machine_mode vmode, machine_mode op_mode, > return ret; > } > > +/* Implement TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST. */ > + > +bool > +aarch64_vectorize_can_special_div_by_constant (enum tree_code code, > + tree vectype, wide_int cst, > + rtx *output, rtx in0, rtx in1) { > + if (code != TRUNC_DIV_EXPR > + || !TYPE_UNSIGNED (vectype)) > + return false; > + > + unsigned int flags = aarch64_classify_vector_mode (TYPE_MODE > + (vectype)); if ((flags & VEC_ANY_SVE) && !TARGET_SVE2) > + return false; > + > + if (in0 == NULL_RTX && in1 == NULL_RTX) > + { > + wide_int val = wi::add (cst, 1); > + int pow = wi::exact_log2 (val); > + return pow == (int)(element_precision (vectype) / 2); > + } > + > + if (!VECTOR_TYPE_P (vectype)) > + return false; > + > + gcc_assert (output); > + > + if (!*output) > + *output = gen_reg_rtx (TYPE_MODE (vectype)); > + > + emit_insn (gen_aarch64_bitmask_udiv3 (TYPE_MODE (vectype), *output, > +in0, in1)); > + return true; > +} > + > /* Generate a byte permute mask for a register of mode MODE, > which has NUNITS units. 
*/ > > @@ -27606,6 +27640,10 @@ aarch64_libgcc_floating_mode_supported_p > #undef TARGET_VECTOR_ALIGNMENT > #define TARGET_VECTOR_ALIGNMENT aarch64_simd_vector_alignment > > +#undef TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST > +#define TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST \ > + aarch64_vectorize_can_special_div_by_constant > + > #undef TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT > #define TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT \ > aarch64_vectorize_preferred_vector_alignment > diff --git a/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > new file mode 100644 > index > 0000000000000000000000000000000000000000..2a535791ba7258302e0c2cf44a > b211cd246d82d5 > --- /dev/null > +++ b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > @@ -0,0 +1,61 @@ > +/* { dg-do compile } */ > +/* { dg-additional-options "-O3 -std=c99" } */ > +/* { dg-final { check-function-bodies "**" "" "" { target { le } } } } > +*/ > + > +#include <stdint.h> > + > +#pragma GCC target "+nosve" > + > +/* > +** draw_bitmap1: > +** ... > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > +** uzp2 v[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b > +** ... > +*/ > +void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n) { > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xff; } > + > +void draw_bitmap2(uint8_t* restrict pixel, uint8_t level, int n) { > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xfe; } > + > +/* > +** draw_bitmap3: > +** ... > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > +** uzp2 v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8h > +** ... > +*/ > +void draw_bitmap3(uint16_t* restrict pixel, uint16_t level, int n) { > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xffffU; } > + > +/* > +** draw_bitmap4: > +** ... > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > +** uzp2 v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4s > +** ... > +*/ > +void draw_bitmap4(uint32_t* restrict pixel, uint32_t level, int n) { > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > > > -----Original Message----- > > From: Tamar Christina <tamar.christina@arm.com> > > Sent: Friday, September 23, 2022 10:34 AM > > To: gcc-patches@gcc.gnu.org > > Cc: nd <nd@arm.com>; Richard Earnshaw <Richard.Earnshaw@arm.com>; > > Marcus Shawcroft <Marcus.Shawcroft@arm.com>; Kyrylo Tkachov > > <Kyrylo.Tkachov@arm.com>; Richard Sandiford > > <Richard.Sandiford@arm.com> > > Subject: [PATCH 2/4]AArch64 Add implementation for pow2 bitmask > division. > > > > Hi All, > > > > This adds an implementation for the new optab for unsigned pow2 > > bitmask for AArch64. > > > > The implementation rewrites: > > > > x = y / (2 ^ (sizeof (y)/2)-1 > > > > into e.g. (for bytes) > > > > (x + ((x + 257) >> 8)) >> 8 > > > > where it's required that the additions be done in double the precision > > of x such that we don't lose any bits during an overflow. 
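A quick exhaustive check of this rewrite over all byte products, as a standalone sketch (not part of the patch):

  #include <stdint.h>
  #include <stdio.h>

  int main (void)
  {
    for (uint32_t a = 0; a < 256; a++)
      for (uint32_t b = 0; b < 256; b++)
        {
          uint16_t x = a * b;
          uint16_t t = (uint16_t) (x + 257) >> 8;  /* 16-bit add, top half.  */
          uint8_t fast = (uint16_t) (x + t) >> 8;  /* 16-bit add, shift.     */
          if (fast != x / 0xff)
            {
              printf ("mismatch at %u * %u\n", a, b);
              return 1;
            }
        }
    puts ("ok");
    return 0;
  }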
> > > > Essentially the sequence decomposes the division into doing two > > smaller divisions, one for the top and bottom parts of the number and > > adding the results back together. > > > > To account for the fact that shift by 8 would be division by 256 we > > add 1 to both parts of x such that when 255 we still get 1 as the answer. > > > > Because the amount we shift are half the original datatype we can use > > the halfing instructions the ISA provides to do the operation instead > > of using actual shifts. > > > > For AArch64 this means we generate for: > > > > void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n) { > > for (int i = 0; i < (n & -16); i+=1) > > pixel[i] = (pixel[i] * level) / 0xff; } > > > > the following: > > > > movi v3.16b, 0x1 > > umull2 v1.8h, v0.16b, v2.16b > > umull v0.8h, v0.8b, v2.8b > > addhn v5.8b, v1.8h, v3.8h > > addhn v4.8b, v0.8h, v3.8h > > uaddw v1.8h, v1.8h, v5.8b > > uaddw v0.8h, v0.8h, v4.8b > > uzp2 v0.16b, v0.16b, v1.16b > > > > instead of: > > > > umull v2.8h, v1.8b, v5.8b > > umull2 v1.8h, v1.16b, v5.16b > > umull v0.4s, v2.4h, v3.4h > > umull2 v2.4s, v2.8h, v3.8h > > umull v4.4s, v1.4h, v3.4h > > umull2 v1.4s, v1.8h, v3.8h > > uzp2 v0.8h, v0.8h, v2.8h > > uzp2 v1.8h, v4.8h, v1.8h > > shrn v0.8b, v0.8h, 7 > > shrn2 v0.16b, v1.8h, 7 > > > > Which results in significantly faster code. > > > > Thanks for Wilco for the concept. > > > > Bootstrapped Regtested on aarch64-none-linux-gnu and no issues. > > > > Ok for master? > > > > Thanks, > > Tamar > > > > gcc/ChangeLog: > > > > * config/aarch64/aarch64-simd.md > > (@aarch64_bitmask_udiv<mode>3): New. > > * config/aarch64/aarch64.cc > > (aarch64_vectorize_can_special_div_by_constant): New. > > > > gcc/testsuite/ChangeLog: > > > > * gcc.target/aarch64/div-by-bitmask.c: New test. > > > > --- inline copy of patch -- > > diff --git a/gcc/config/aarch64/aarch64-simd.md > > b/gcc/config/aarch64/aarch64-simd.md > > index > > > 587a45d77721e1b39accbad7dbeca4d741eccb10..f4152160084d6b6f34bd69f0b > > a6386c1ab50f77e 100644 > > --- a/gcc/config/aarch64/aarch64-simd.md > > +++ b/gcc/config/aarch64/aarch64-simd.md > > @@ -4831,6 +4831,65 @@ (define_expand > > "aarch64_<sur><addsub>hn2<mode>" > > } > > ) > > > > +;; div optimizations using narrowings ;; we can do the division e.g. > > +shorts by 255 faster by calculating it as ;; (x + ((x + 257) >> 8)) > > +>> 8 assuming the operation is done in ;; double the precision of x. > > +;; > > +;; If we imagine a short as being composed of two blocks of bytes > > +then ;; adding 257 or 0b0000_0001_0000_0001 to the number is > > +equivalen to ;; adding 1 to each sub component: > > +;; > > +;; short value of 16-bits > > +;; ┌──────────────┬────────────────┐ > > +;; │ │ │ > > +;; └──────────────┴────────────────┘ > > +;; 8-bit part1 ▲ 8-bit part2 ▲ > > +;; │ │ > > +;; │ │ > > +;; +1 +1 > > +;; > > +;; after the first addition, we have to shift right by 8, and narrow > > +the ;; results back to a byte. Remember that the addition must be > > +done in ;; double the precision of the input. Since 8 is half the > > +size of a short ;; we can use a narrowing halfing instruction in > > +AArch64, addhn which also ;; does the addition in a wider precision > > +and narrows back to a byte. The ;; shift itself is implicit in the > > +operation as it writes back only the top ;; half of the result. i.e. bits > 2*esize-1:esize. > > +;; > > +;; Since we have narrowed the result of the first part back to a > > +byte, for ;; the second addition we can use a widening addition, uaddw. 
> > +;; > > +;; For the finaly shift, since it's unsigned arithmatic we emit an > > +ushr by 8 ;; to shift and the vectorizer. > > +;; > > +;; The shift is later optimized by combine to a uzp2 with movi #0. > > +(define_expand "@aarch64_bitmask_udiv<mode>3" > > + [(match_operand:VQN 0 "register_operand") > > + (match_operand:VQN 1 "register_operand") > > + (match_operand:VQN 2 "immediate_operand")] > > + "TARGET_SIMD" > > +{ > > + unsigned HOST_WIDE_INT size > > + = (1ULL << GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode)) - 1; > > + if (!CONST_VECTOR_P (operands[2]) > > + || const_vector_encoded_nelts (operands[2]) != 1 > > + || size != UINTVAL (CONST_VECTOR_ELT (operands[2], 0))) > > + FAIL; > > + > > + rtx addend = gen_reg_rtx (<MODE>mode); > > + rtx val = aarch64_simd_gen_const_vector_dup (<VNARROWQ2>mode, > 1); > > + emit_move_insn (addend, lowpart_subreg (<MODE>mode, val, > > +<VNARROWQ2>mode)); > > + rtx tmp1 = gen_reg_rtx (<VNARROWQ>mode); > > + rtx tmp2 = gen_reg_rtx (<MODE>mode); > > + emit_insn (gen_aarch64_addhn<mode> (tmp1, operands[1], addend)); > > + unsigned bitsize = GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode); > > + rtx shift_vector = aarch64_simd_gen_const_vector_dup > (<MODE>mode, > > +bitsize); > > + emit_insn (gen_aarch64_uaddw<Vnarrowq> (tmp2, operands[1], > tmp1)); > > + emit_insn (gen_aarch64_simd_lshr<mode> (operands[0], tmp2, > > +shift_vector)); > > + DONE; > > +}) > > + > > ;; pmul. > > > > (define_insn "aarch64_pmul<mode>" > > diff --git a/gcc/config/aarch64/aarch64.cc > > b/gcc/config/aarch64/aarch64.cc index > > > 4b486aeea90ea2afb9cdd96a4dbe15c5bb2abd7a..91bb7d306f36dc4c9eeaafc3 > > 7484b6fc6901bfb4 100644 > > --- a/gcc/config/aarch64/aarch64.cc > > +++ b/gcc/config/aarch64/aarch64.cc > > @@ -24146,6 +24146,51 @@ aarch64_vectorize_vec_perm_const > > (machine_mode vmode, machine_mode op_mode, > > return ret; > > } > > > > +/* Implement TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST. */ > > + > > +bool > > +aarch64_vectorize_can_special_div_by_constant (enum tree_code code, > > + tree vectype, > > + tree treeop0, tree treeop1, > > + rtx *output, rtx in0, rtx in1) { > > + > > + if ((!treeop0 || !treeop1) && (in0 == NULL_RTX || in1 == NULL_RTX)) > > + return false; > > + > > + tree cst = uniform_integer_cst_p (treeop1); tree type; if (code > > + != TRUNC_DIV_EXPR > > + || !cst > > + || !TYPE_UNSIGNED ((type = TREE_TYPE (cst))) > > + || tree_int_cst_sgn (cst) != 1) > > + return false; > > + > > + unsigned int flags = aarch64_classify_vector_mode (TYPE_MODE > > + (vectype)); if ((flags & VEC_ANY_SVE) && !TARGET_SVE2) > > + return false; > > + > > + if (in0 == NULL_RTX && in1 == NULL_RTX) > > + { > > + gcc_assert (treeop0 && treeop1); > > + wide_int icst = wi::to_wide (cst); > > + wide_int val = wi::add (icst, 1); > > + int pow = wi::exact_log2 (val); > > + return pow == (TYPE_PRECISION (type) / 2); > > + } > > + > > + if (!VECTOR_TYPE_P (vectype)) > > + return false; > > + > > + gcc_assert (output); > > + > > + if (!*output) > > + *output = gen_reg_rtx (TYPE_MODE (vectype)); > > + > > + emit_insn (gen_aarch64_bitmask_udiv3 (TYPE_MODE (vectype), > *output, > > +in0, in1)); > > + return true; > > +} > > + > > /* Generate a byte permute mask for a register of mode MODE, > > which has NUNITS units. */ > > > > diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi index > > > 92bda1a7e14a3c9ea63e151e4a49a818bf4d1bdb..adba9fe97a9b43729c5e86d2 > > 44a2a23e76cac097 100644 > > --- a/gcc/doc/tm.texi > > +++ b/gcc/doc/tm.texi > > @@ -6112,6 +6112,22 @@ instruction pattern. 
There is no need for the > > hook to handle these two implementation approaches itself. > > @end deftypefn > > > > +@deftypefn {Target Hook} bool > > TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST > > +(enum @var{tree_code}, tree @var{vectype}, tree @var{treeop0}, tree > > +@var{treeop1}, rtx *@var{output}, rtx @var{in0}, rtx @var{in1}) This > > +hook is used to test whether the target has a special method of > > +division of vectors of type @var{vectype} using the two operands > > @code{treeop0} and @code{treeop1} and producing a vector of type > > @var{vectype}. The division will then not be decomposed by the vectorizer and > > kept as a div. > > + > > +When the hook is being used to test whether the target supports a > > +special divide, @var{in0}, @var{in1}, and @var{output} are all null. > > +When the hook is being used to emit a division, @var{in0} and > > +@var{in1} are the source vectors of type @var{vectype} and > > +@var{output} is the destination vector of type @var{vectype}. > > + > > +Return true if the operation is possible, emitting instructions for > > +it if rtxes are provided and updating @var{output}. > > +@end deftypefn > > + > > @deftypefn {Target Hook} tree > > TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION (unsigned > @var{code}, > > tree @var{vec_type_out}, tree @var{vec_type_in}) This hook should > > return the decl of a function that implements the vectorized variant > > of the function with the @code{combined_fn} code diff --git > > a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in index > > 112462310b134705d860153294287cfd7d4af81d..d5a745a02acdf051ea1da1b04 > > 076d058c24ce093 100644 > > --- a/gcc/doc/tm.texi.in > > +++ b/gcc/doc/tm.texi.in > > @@ -4164,6 +4164,8 @@ address; but often a machine-dependent > strategy > > can generate better code. > > > > @hook TARGET_VECTORIZE_VEC_PERM_CONST > > > > +@hook TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST > > + > > @hook TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION > > > > @hook TARGET_VECTORIZE_BUILTIN_MD_VECTORIZED_FUNCTION > > diff --git a/gcc/explow.cc b/gcc/explow.cc index > > ddb4d6ae3600542f8d2bb5617cdd3933a9fae6c0..568e0eb1a158c696458ae678f > > 5e346bf34ba0036 100644 > > --- a/gcc/explow.cc > > +++ b/gcc/explow.cc > > @@ -1037,7 +1037,7 @@ round_push (rtx size) > > TRUNC_DIV_EXPR. 
*/ > > size = expand_binop (Pmode, add_optab, size, alignm1_rtx, > > NULL_RTX, 1, OPTAB_LIB_WIDEN); > > - size = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, size, align_rtx, > > + size = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, NULL, NULL, size, > > + align_rtx, > > NULL_RTX, 1); > > size = expand_mult (Pmode, size, align_rtx, NULL_RTX, 1); > > > > @@ -1203,7 +1203,7 @@ align_dynamic_address (rtx target, unsigned > > required_align) > > gen_int_mode (required_align / BITS_PER_UNIT - 1, > > Pmode), > > NULL_RTX, 1, OPTAB_LIB_WIDEN); > > - target = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, target, > > + target = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, NULL, NULL, > > target, > > gen_int_mode (required_align / BITS_PER_UNIT, > > Pmode), > > NULL_RTX, 1); > > diff --git a/gcc/expmed.h b/gcc/expmed.h index > > > 0b2538c4c6bd51dfdc772ef70bdf631c0bed8717..0db2986f11ff4a4b10b59501c6 > > f33cb3595659b5 100644 > > --- a/gcc/expmed.h > > +++ b/gcc/expmed.h > > @@ -708,8 +708,9 @@ extern rtx expand_variable_shift (enum tree_code, > > machine_mode, extern rtx expand_shift (enum tree_code, > machine_mode, > > rtx, poly_int64, rtx, > > int); > > #ifdef GCC_OPTABS_H > > -extern rtx expand_divmod (int, enum tree_code, machine_mode, rtx, rtx, > > - rtx, int, enum optab_methods = > > OPTAB_LIB_WIDEN); > > +extern rtx expand_divmod (int, enum tree_code, machine_mode, tree, > > tree, > > + rtx, rtx, rtx, int, > > + enum optab_methods = OPTAB_LIB_WIDEN); > > #endif > > #endif > > > > diff --git a/gcc/expmed.cc b/gcc/expmed.cc index > > > 8d7418be418406e72a895ecddf2dc7fdb950c76c..b64ea5ac46a9da85770a5bb09 > > 90db8b97d3af414 100644 > > --- a/gcc/expmed.cc > > +++ b/gcc/expmed.cc > > @@ -4222,8 +4222,8 @@ expand_sdiv_pow2 (scalar_int_mode mode, rtx > op0, > > HOST_WIDE_INT d) > > > > rtx > > expand_divmod (int rem_flag, enum tree_code code, machine_mode > mode, > > - rtx op0, rtx op1, rtx target, int unsignedp, > > - enum optab_methods methods) > > + tree treeop0, tree treeop1, rtx op0, rtx op1, rtx target, > > + int unsignedp, enum optab_methods methods) > > { > > machine_mode compute_mode; > > rtx tquotient; > > @@ -4375,6 +4375,14 @@ expand_divmod (int rem_flag, enum tree_code > > code, machine_mode mode, > > > > last_div_const = ! rem_flag && op1_is_constant ? INTVAL (op1) : 0; > > > > + /* Check if the target has specific expansions for the division. > > + */ if (treeop0 > > + && targetm.vectorize.can_special_div_by_const (code, TREE_TYPE > > (treeop0), > > + treeop0, treeop1, > > + &target, op0, op1)) > > + return target; > > + > > + > > /* Now convert to the best mode to use. 
*/ > > if (compute_mode != mode) > > { > > @@ -4618,8 +4626,8 @@ expand_divmod (int rem_flag, enum tree_code > > code, machine_mode mode, > > || (optab_handler (sdivmod_optab, int_mode) > > != CODE_FOR_nothing))) > > quotient = expand_divmod (0, TRUNC_DIV_EXPR, > > - int_mode, op0, > > - gen_int_mode (abs_d, > > + int_mode, treeop0, treeop1, > > + op0, gen_int_mode (abs_d, > > int_mode), > > NULL_RTX, 0); > > else > > @@ -4808,8 +4816,8 @@ expand_divmod (int rem_flag, enum tree_code > > code, machine_mode mode, > > size - 1, NULL_RTX, 0); > > t3 = force_operand (gen_rtx_MINUS (int_mode, t1, nsign), > > NULL_RTX); > > - t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, t3, > > op1, > > - NULL_RTX, 0); > > + t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, > > treeop0, > > + treeop1, t3, op1, NULL_RTX, 0); > > if (t4) > > { > > rtx t5; > > diff --git a/gcc/expr.cc b/gcc/expr.cc index > > > 80bb1b8a4c5b8350fb1b8f57a99fd52e5882fcb6..b786f1d75e25f3410c0640cd96 > > a8abc055fa34d9 100644 > > --- a/gcc/expr.cc > > +++ b/gcc/expr.cc > > @@ -8028,16 +8028,17 @@ force_operand (rtx value, rtx target) > > return expand_divmod (0, > > FLOAT_MODE_P (GET_MODE (value)) > > ? RDIV_EXPR : TRUNC_DIV_EXPR, > > - GET_MODE (value), op1, op2, target, 0); > > + GET_MODE (value), NULL, NULL, op1, op2, > > + target, 0); > > case MOD: > > - return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > > op1, op2, > > - target, 0); > > + return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > > NULL, NULL, > > + op1, op2, target, 0); > > case UDIV: > > - return expand_divmod (0, TRUNC_DIV_EXPR, GET_MODE (value), > > op1, op2, > > - target, 1); > > + return expand_divmod (0, TRUNC_DIV_EXPR, GET_MODE (value), > > NULL, NULL, > > + op1, op2, target, 1); > > case UMOD: > > - return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > > op1, op2, > > - target, 1); > > + return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > > NULL, NULL, > > + op1, op2, target, 1); > > case ASHIFTRT: > > return expand_simple_binop (GET_MODE (value), code, op1, op2, > > target, 0, OPTAB_LIB_WIDEN); @@ - > 8990,11 +8991,13 @@ > > expand_expr_divmod (tree_code code, machine_mode mode, tree > treeop0, > > bool speed_p = optimize_insn_for_speed_p (); > > do_pending_stack_adjust (); > > start_sequence (); > > - rtx uns_ret = expand_divmod (mod_p, code, mode, op0, op1, target, > 1); > > + rtx uns_ret = expand_divmod (mod_p, code, mode, treeop0, treeop1, > > + op0, op1, target, 1); > > rtx_insn *uns_insns = get_insns (); > > end_sequence (); > > start_sequence (); > > - rtx sgn_ret = expand_divmod (mod_p, code, mode, op0, op1, target, > 0); > > + rtx sgn_ret = expand_divmod (mod_p, code, mode, treeop0, treeop1, > > + op0, op1, target, 0); > > rtx_insn *sgn_insns = get_insns (); > > end_sequence (); > > unsigned uns_cost = seq_cost (uns_insns, speed_p); @@ -9016,7 > > +9019,8 @@ expand_expr_divmod (tree_code code, machine_mode > mode, tree > > treeop0, > > emit_insn (sgn_insns); > > return sgn_ret; > > } > > - return expand_divmod (mod_p, code, mode, op0, op1, target, > > unsignedp); > > + return expand_divmod (mod_p, code, mode, treeop0, treeop1, > > + op0, op1, target, unsignedp); > > } > > > > rtx > > diff --git a/gcc/optabs.cc b/gcc/optabs.cc index > > > 165f8d1fa22432b96967c69a58dbb7b4bf18120d..cff37ccb0dfc3dd79b97d0abfd > > 872f340855dc96 100644 > > --- a/gcc/optabs.cc > > +++ b/gcc/optabs.cc > > @@ -1104,8 +1104,9 @@ expand_doubleword_mod (machine_mode > mode, rtx > > op0, rtx op1, bool unsignedp) > > return NULL_RTX; > > } > > } > 
> - rtx remainder = expand_divmod (1, TRUNC_MOD_EXPR, word_mode, > > sum, > > - gen_int_mode (INTVAL (op1), > > word_mode), > > + rtx remainder = expand_divmod (1, TRUNC_MOD_EXPR, word_mode, > > NULL, NULL, > > + sum, gen_int_mode (INTVAL (op1), > > + word_mode), > > NULL_RTX, 1, OPTAB_DIRECT); > > if (remainder == NULL_RTX) > > return NULL_RTX; > > @@ -1208,8 +1209,8 @@ expand_doubleword_divmod (machine_mode > mode, rtx > > op0, rtx op1, rtx *rem, > > > > if (op11 != const1_rtx) > > { > > - rtx rem2 = expand_divmod (1, TRUNC_MOD_EXPR, mode, quot1, > op11, > > - NULL_RTX, unsignedp, OPTAB_DIRECT); > > + rtx rem2 = expand_divmod (1, TRUNC_MOD_EXPR, mode, NULL, > NULL, > > quot1, > > + op11, NULL_RTX, unsignedp, > > OPTAB_DIRECT); > > if (rem2 == NULL_RTX) > > return NULL_RTX; > > > > @@ -1223,8 +1224,8 @@ expand_doubleword_divmod (machine_mode > mode, rtx > > op0, rtx op1, rtx *rem, > > if (rem2 == NULL_RTX) > > return NULL_RTX; > > > > - rtx quot2 = expand_divmod (0, TRUNC_DIV_EXPR, mode, quot1, op11, > > - NULL_RTX, unsignedp, OPTAB_DIRECT); > > + rtx quot2 = expand_divmod (0, TRUNC_DIV_EXPR, mode, NULL, NULL, > > quot1, > > + op11, NULL_RTX, unsignedp, > > OPTAB_DIRECT); > > if (quot2 == NULL_RTX) > > return NULL_RTX; > > > > diff --git a/gcc/target.def b/gcc/target.def index > > 2a7fa68f83dd15dcdd2c332e8431e6142ec7d305..92ebd2af18fe8abb6ed95b070 > > 81cdd70113db9b1 100644 > > --- a/gcc/target.def > > +++ b/gcc/target.def > > @@ -1902,6 +1902,25 @@ implementation approaches itself.", > > const vec_perm_indices &sel), > > NULL) > > > > +DEFHOOK > > +(can_special_div_by_const, > > + "This hook is used to test whether the target has a special method > > +of\n\ division of vectors of type @var{vectype} using the two > > +operands @code{treeop0}\n\ and @code{treeop1} and producing a > vector > > +of type @var{vectype}. The division\n\ will then not be decomposed > > +by the vectorizer and kept as a div.\n\ \n\ When the hook is being used to test > > +whether the target supports a special\n\ divide, @var{in0}, > > +@var{in1}, and @var{output} are all null. When the hook\n\ is being > > +used to emit a division, @var{in0} and @var{in1} are the source\n\ > > +vectors of type @var{vectype} and @var{output} is the destination > > +vector of\n\ type @var{vectype}.\n\ \n\ Return true if the operation > > +is possible, emitting instructions for it\n\ if rtxes are provided > > +and updating @var{output}.", bool, (enum tree_code, tree vectype, > > +tree treeop0, tree treeop1, rtx *output, > > + rtx in0, rtx in1), > > + default_can_special_div_by_const) > > + > > /* Return true if the target supports misaligned store/load of a > > specific factor denoted in the third parameter. The last parameter > > is true if the access is defined in a packed struct. 
*/ diff > > --git a/gcc/target.h b/gcc/target.h index > > > d6fa6931499d15edff3e5af3e429540d001c7058..c836036ac7fa7910d62bd3da56 > > f39c061f68b665 100644 > > --- a/gcc/target.h > > +++ b/gcc/target.h > > @@ -51,6 +51,7 @@ > > #include "insn-codes.h" > > #include "tm.h" > > #include "hard-reg-set.h" > > +#include "tree-core.h" > > > > #if CHECKING_P > > > > diff --git a/gcc/targhooks.h b/gcc/targhooks.h index > > > ecce55ebe797cedc940620e8d89816973a045d49..42451a3e22e86fee9da2f56e > > 2640d63f936b336d 100644 > > --- a/gcc/targhooks.h > > +++ b/gcc/targhooks.h > > @@ -207,6 +207,8 @@ extern void default_addr_space_diagnose_usage > > (addr_space_t, location_t); extern rtx default_addr_space_convert > > (rtx, tree, tree); extern unsigned int default_case_values_threshold > > (void); extern bool default_have_conditional_execution (void); > > +extern bool default_can_special_div_by_const (enum tree_code, tree, > > tree, tree, > > + rtx *, rtx, rtx); > > > > extern bool default_libc_has_function (enum function_class, tree); > > extern bool default_libc_has_fast_function (int fcode); diff --git > > a/gcc/targhooks.cc b/gcc/targhooks.cc index > > > b15ae19bcb60c59ae8112e67b5f06a241a9bdbf1..8206533382611a7640efba241 > > 279936ced41ee95 100644 > > --- a/gcc/targhooks.cc > > +++ b/gcc/targhooks.cc > > @@ -1807,6 +1807,14 @@ default_have_conditional_execution (void) > > return HAVE_conditional_execution; > > } > > > > +/* Default that no division by constant operations are special. */ > > +bool default_can_special_div_by_const (enum tree_code, tree, tree, > > +tree, rtx *, rtx, > > + rtx) > > +{ > > + return false; > > +} > > + > > /* By default we assume that c99 functions are present at the runtime, > > but sincos is not. */ > > bool > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..472cd710534bc8aa9b1b4916f3 > > d7b4d5b64a19b9 > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > > @@ -0,0 +1,25 @@ > > +/* { dg-require-effective-target vect_int } */ > > + > > +#include <stdint.h> > > +#include "tree-vect.h" > > + > > +#define N 50 > > +#define TYPE uint8_t > > + > > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * level) / 0xff; } > > + > > +__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * level) / 0xff; } > > + > > +#include "vect-div-bitmask.h" > > + > > +/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > > +detected" "vect" { target aarch64*-*-* } } } */ > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..e904a71885b2e8487593a2cd3 > > db75b3e4112e2cc > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > > @@ -0,0 +1,25 @@ > > +/* { dg-require-effective-target vect_int } */ > > + > > +#include <stdint.h> > > +#include "tree-vect.h" > > + > > +#define N 50 > > +#define TYPE uint16_t > > + > > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * level) / 0xffffU; } > > + > > 
+__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * level) / 0xffffU; } > > + > > +#include "vect-div-bitmask.h" > > + > > +/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > > +detected" "vect" { target aarch64*-*-* } } } */ > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..a1418ebbf5ea8731ed4e3e720 > > 157701d9d1cf852 > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > > @@ -0,0 +1,26 @@ > > +/* { dg-require-effective-target vect_int } */ > > +/* { dg-additional-options "-fno-vect-cost-model" { target > > +aarch64*-*-* } } */ > > + > > +#include <stdint.h> > > +#include "tree-vect.h" > > + > > +#define N 50 > > +#define TYPE uint32_t > > + > > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > > + > > +__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > > + > > +#include "vect-div-bitmask.h" > > + > > +/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > > +detected" "vect" { target aarch64*-*-* } } } */ > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..29a16739aa4b706616367bfd1 > > 832f28ebd07993e > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > > @@ -0,0 +1,43 @@ > > +#include <stdio.h> > > + > > +#ifndef N > > +#define N 65 > > +#endif > > + > > +#ifndef TYPE > > +#define TYPE uint32_t > > +#endif > > + > > +#ifndef DEBUG > > +#define DEBUG 0 > > +#endif > > + > > +#define BASE ((TYPE) -1 < 0 ? -126 : 4) > > + > > +int main () > > +{ > > + TYPE a[N]; > > + TYPE b[N]; > > + > > + for (int i = 0; i < N; ++i) > > + { > > + a[i] = BASE + i * 13; > > + b[i] = BASE + i * 13; > > + if (DEBUG) > > + printf ("%d: 0x%x\n", i, a[i]); > > + } > > + > > + fun1 (a, N / 2, N); > > + fun2 (b, N / 2, N); > > + > > + for (int i = 0; i < N; ++i) > > + { > > + if (DEBUG) > > + printf ("%d = 0x%x == 0x%x\n", i, a[i], b[i]); > > + > > + if (a[i] != b[i]) > > + __builtin_abort (); > > + } > > + return 0; > > +} > > + > > diff --git a/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > > b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..2a535791ba7258302e0c2cf44a > > b211cd246d82d5 > > --- /dev/null > > +++ b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > > @@ -0,0 +1,61 @@ > > +/* { dg-do compile } */ > > +/* { dg-additional-options "-O3 -std=c99" } */ > > +/* { dg-final { check-function-bodies "**" "" "" { target { le } } } > > +} */ > > + > > +#include <stdint.h> > > + > > +#pragma GCC target "+nosve" > > + > > +/* > > +** draw_bitmap1: > > +** ... > > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > > +** uzp2 v[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b > > +** ... 
> > +*/ > > +void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n) { > > + for (int i = 0; i < (n & -16); i+=1) > > + pixel[i] = (pixel[i] * level) / 0xff; } > > + > > +void draw_bitmap2(uint8_t* restrict pixel, uint8_t level, int n) { > > + for (int i = 0; i < (n & -16); i+=1) > > + pixel[i] = (pixel[i] * level) / 0xfe; } > > + > > +/* > > +** draw_bitmap3: > > +** ... > > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > > +** uzp2 v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8h > > +** ... > > +*/ > > +void draw_bitmap3(uint16_t* restrict pixel, uint16_t level, int n) { > > + for (int i = 0; i < (n & -16); i+=1) > > + pixel[i] = (pixel[i] * level) / 0xffffU; } > > + > > +/* > > +** draw_bitmap4: > > +** ... > > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > > +** uzp2 v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4s > > +** ... > > +*/ > > +void draw_bitmap4(uint32_t* restrict pixel, uint32_t level, int n) { > > + for (int i = 0; i < (n & -16); i+=1) > > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > > diff --git a/gcc/tree-vect-generic.cc b/gcc/tree-vect-generic.cc index > > > 350129555a0c71c0896c4f1003163f3b3557c11b..ebee5e24b186915ebcb3a817c > > 9a12046b6ec94f3 100644 > > --- a/gcc/tree-vect-generic.cc > > +++ b/gcc/tree-vect-generic.cc > > @@ -1237,6 +1237,14 @@ expand_vector_operation > (gimple_stmt_iterator > > *gsi, tree type, tree compute_type > > tree rhs2 = gimple_assign_rhs2 (assign); > > tree ret; > > > > + /* Check if the target was going to handle it through the special > > + division callback hook. */ > > + if (targetm.vectorize.can_special_div_by_const (code, type, rhs1, > > + rhs2, NULL, > > + NULL_RTX, > > NULL_RTX)) > > + return NULL_TREE; > > + > > + > > if (!optimize > > || !VECTOR_INTEGER_TYPE_P (type) > > || TREE_CODE (rhs2) != VECTOR_CST diff --git a/gcc/tree-vect- > > patterns.cc b/gcc/tree-vect-patterns.cc index > > > 09574bb1a2696b3438a4ce9f09f74b42e784aca0..607acdf95eb30335d8bc0e85af > > 0b1bfea10fe443 100644 > > --- a/gcc/tree-vect-patterns.cc > > +++ b/gcc/tree-vect-patterns.cc > > @@ -3596,6 +3596,12 @@ vect_recog_divmod_pattern (vec_info *vinfo, > > > > return pattern_stmt; > > } > > + else if (targetm.vectorize.can_special_div_by_const (rhs_code, vectype, > > + oprnd0, oprnd1, NULL, > > + NULL_RTX, NULL_RTX)) > > + { > > + return NULL; > > + } > > > > if (prec > HOST_BITS_PER_WIDE_INT > > || integer_zerop (oprnd1)) > > diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc index > > > c9dab217f059f17e91e9a7582523e627d7a45b66..6d05c48a7339de094d7288bd6 > > 8e0e1c1e93faafe 100644 > > --- a/gcc/tree-vect-stmts.cc > > +++ b/gcc/tree-vect-stmts.cc > > @@ -6260,6 +6260,11 @@ vectorizable_operation (vec_info *vinfo, > > } > > target_support_p = (optab_handler (optab, vec_mode) > > != CODE_FOR_nothing); > > + if (!target_support_p) > > + target_support_p > > + = targetm.vectorize.can_special_div_by_const (code, vectype, > > + op0, op1, NULL, > > + NULL_RTX, > > NULL_RTX); > > } > > > > bool using_emulated_vectors_p = vect_emulated_vector_p (vectype); > > > > > > > > > > --
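Note that the query form of the hook accepts exactly the divisors d for which d + 1 equals 2 to the power of half the element precision of the (widened) dividend type. A scalar sketch of that test, with illustrative names:

  #include <stdbool.h>
  #include <stdint.h>

  /* Mirrors exact_log2 (cst + 1) == precision / 2 from the hook.  */
  static bool
  is_bitmask_divisor (uint64_t d, unsigned prec)
  {
    return prec / 2 < 64 && d + 1 == (1ULL << (prec / 2));
  }

  /* is_bitmask_divisor (0xff, 16)         -> true
     is_bitmask_divisor (0xfe, 16)         -> false (draw_bitmap2 stays scalar)
     is_bitmask_divisor (0xffffffffUL, 64) -> true   */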
Hi Tamar, > -----Original Message----- > From: Tamar Christina <Tamar.Christina@arm.com> > Sent: Monday, October 31, 2022 11:35 AM > To: Tamar Christina <Tamar.Christina@arm.com>; gcc-patches@gcc.gnu.org > Cc: nd <nd@arm.com>; Richard Earnshaw <Richard.Earnshaw@arm.com>; > Marcus Shawcroft <Marcus.Shawcroft@arm.com>; Kyrylo Tkachov > <Kyrylo.Tkachov@arm.com>; Richard Sandiford > <Richard.Sandiford@arm.com> > Subject: RE: [PATCH 2/4]AArch64 Add implementation for pow2 bitmask > division. > > Hi All, > > Ping, and updated patch based on mid-end changes. > > Bootstrapped Regtested on aarch64-none-linux-gnu and no issues. > > Ok for master? > > Thanks, > Tamar > > gcc/ChangeLog: > > * config/aarch64/aarch64-simd.md > (@aarch64_bitmask_udiv<mode>3): New. > * config/aarch64/aarch64.cc > (aarch64_vectorize_can_special_div_by_constant): New. > > gcc/testsuite/ChangeLog: > > * gcc.target/aarch64/div-by-bitmask.c: New test. > > --- inline copy of patch --- > > diff --git a/gcc/config/aarch64/aarch64-simd.md > b/gcc/config/aarch64/aarch64-simd.md > index > 587a45d77721e1b39accbad7dbeca4d741eccb10..f4152160084d6b6f34bd69f > 0ba6386c1ab50f77e 100644 > --- a/gcc/config/aarch64/aarch64-simd.md > +++ b/gcc/config/aarch64/aarch64-simd.md > @@ -4831,6 +4831,65 @@ (define_expand > "aarch64_<sur><addsub>hn2<mode>" > } > ) Some editorial comments. > > +;; div optimizations using narrowings > +;; we can do the division e.g. shorts by 255 faster by calculating it as > +;; (x + ((x + 257) >> 8)) >> 8 assuming the operation is done in > +;; double the precision of x. > +;; > +;; If we imagine a short as being composed of two blocks of bytes then > +;; adding 257 or 0b0000_0001_0000_0001 to the number is equivalen to Typo "equivalent" > +;; adding 1 to each sub component: > +;; > +;; short value of 16-bits > +;; ┌──────────────┬────────────────┐ > +;; │ │ │ > +;; └──────────────┴────────────────┘ > +;; 8-bit part1 ▲ 8-bit part2 ▲ > +;; │ │ > +;; │ │ > +;; +1 +1 > +;; > +;; after the first addition, we have to shift right by 8, and narrow the > +;; results back to a byte. Remember that the addition must be done in > +;; double the precision of the input. Since 8 is half the size of a short > +;; we can use a narrowing halfing instruction in AArch64, addhn which also > +;; does the addition in a wider precision and narrows back to a byte. The > +;; shift itself is implicit in the operation as it writes back only the top > +;; half of the result. i.e. bits 2*esize-1:esize. > +;; > +;; Since we have narrowed the result of the first part back to a byte, for > +;; the second addition we can use a widening addition, uaddw. > +;; > +;; For the finaly shift, since it's unsigned arithmatic we emit an ushr by 8 "final shift", "unsigned arithmetic" > +;; to shift and the vectorizer. Incomplete sentence? > +;; > +;; The shift is later optimized by combine to a uzp2 with movi #0. 
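On the uzp2 point in the quoted comment: a byte-layout sketch of why the final shift right by 8 plus a narrow amounts to taking every odd byte (illustrative only, and it assumes little-endian lane layout, which also relates to the big-endian question below):

  #include <stdint.h>

  /* On little-endian, the high byte of 16-bit lane i sits at byte
     index 2*i + 1, so extracting the odd bytes (what uzp2 does) equals
     lane >> 8 followed by a narrowing to bytes.  */
  void
  take_high_bytes (uint8_t out[8], const uint16_t in[8])
  {
    const uint8_t *bytes = (const uint8_t *) in;
    for (int i = 0; i < 8; i++)
      out[i] = bytes[2 * i + 1];   /* == (uint8_t) (in[i] >> 8) */
  }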
> +(define_expand "@aarch64_bitmask_udiv<mode>3" > + [(match_operand:VQN 0 "register_operand") > + (match_operand:VQN 1 "register_operand") > + (match_operand:VQN 2 "immediate_operand")] > + "TARGET_SIMD" > +{ > + unsigned HOST_WIDE_INT size > + = (1ULL << GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode)) - 1; > + if (!CONST_VECTOR_P (operands[2]) > + || const_vector_encoded_nelts (operands[2]) != 1 > + || size != UINTVAL (CONST_VECTOR_ELT (operands[2], 0))) > + FAIL; > + > + rtx addend = gen_reg_rtx (<MODE>mode); > + rtx val = aarch64_simd_gen_const_vector_dup (<VNARROWQ2>mode, 1); > + emit_move_insn (addend, lowpart_subreg (<MODE>mode, val, > <VNARROWQ2>mode)); > + rtx tmp1 = gen_reg_rtx (<VNARROWQ>mode); > + rtx tmp2 = gen_reg_rtx (<MODE>mode); > + emit_insn (gen_aarch64_addhn<mode> (tmp1, operands[1], addend)); > + unsigned bitsize = GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode); > + rtx shift_vector = aarch64_simd_gen_const_vector_dup (<MODE>mode, > bitsize); > + emit_insn (gen_aarch64_uaddw<Vnarrowq> (tmp2, operands[1], tmp1)); > + emit_insn (gen_aarch64_simd_lshr<mode> (operands[0], tmp2, > shift_vector)); > + DONE; > +}) Does all this work for big-endian too? I think it does, but wonder whether you've tested. Ok if so, with the comments addressed. Thanks, Kyrill > + > ;; pmul. > > (define_insn "aarch64_pmul<mode>" > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc > index > 4b486aeea90ea2afb9cdd96a4dbe15c5bb2abd7a..d3c3650d7d728f56adb651 > 54127dc7b72386c5a7 100644 > --- a/gcc/config/aarch64/aarch64.cc > +++ b/gcc/config/aarch64/aarch64.cc > @@ -24146,6 +24146,40 @@ aarch64_vectorize_vec_perm_const > (machine_mode vmode, machine_mode op_mode, > return ret; > } > > +/* Implement TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST. */ > + > +bool > +aarch64_vectorize_can_special_div_by_constant (enum tree_code code, > + tree vectype, wide_int cst, > + rtx *output, rtx in0, rtx in1) > +{ > + if (code != TRUNC_DIV_EXPR > + || !TYPE_UNSIGNED (vectype)) > + return false; > + > + unsigned int flags = aarch64_classify_vector_mode (TYPE_MODE > (vectype)); > + if ((flags & VEC_ANY_SVE) && !TARGET_SVE2) > + return false; > + > + if (in0 == NULL_RTX && in1 == NULL_RTX) > + { > + wide_int val = wi::add (cst, 1); > + int pow = wi::exact_log2 (val); > + return pow == (int)(element_precision (vectype) / 2); > + } > + > + if (!VECTOR_TYPE_P (vectype)) > + return false; > + > + gcc_assert (output); > + > + if (!*output) > + *output = gen_reg_rtx (TYPE_MODE (vectype)); > + > + emit_insn (gen_aarch64_bitmask_udiv3 (TYPE_MODE (vectype), *output, > in0, in1)); > + return true; > +} > + > /* Generate a byte permute mask for a register of mode MODE, > which has NUNITS units. 
*/ > > @@ -27606,6 +27640,10 @@ aarch64_libgcc_floating_mode_supported_p > #undef TARGET_VECTOR_ALIGNMENT > #define TARGET_VECTOR_ALIGNMENT aarch64_simd_vector_alignment > > +#undef TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST > +#define TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST \ > + aarch64_vectorize_can_special_div_by_constant > + > #undef TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT > #define TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT \ > aarch64_vectorize_preferred_vector_alignment > diff --git a/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > new file mode 100644 > index > 0000000000000000000000000000000000000000..2a535791ba7258302e0c2cf > 44ab211cd246d82d5 > --- /dev/null > +++ b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > @@ -0,0 +1,61 @@ > +/* { dg-do compile } */ > +/* { dg-additional-options "-O3 -std=c99" } */ > +/* { dg-final { check-function-bodies "**" "" "" { target { le } } } } */ > + > +#include <stdint.h> > + > +#pragma GCC target "+nosve" > + > +/* > +** draw_bitmap1: > +** ... > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > +** uzp2 v[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b > +** ... > +*/ > +void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n) > +{ > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xff; > +} > + > +void draw_bitmap2(uint8_t* restrict pixel, uint8_t level, int n) > +{ > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xfe; > +} > + > +/* > +** draw_bitmap3: > +** ... > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > +** uzp2 v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8h > +** ... > +*/ > +void draw_bitmap3(uint16_t* restrict pixel, uint16_t level, int n) > +{ > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * level) / 0xffffU; > +} > + > +/* > +** draw_bitmap4: > +** ... > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > +** uzp2 v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4s > +** ... > +*/ > +void draw_bitmap4(uint32_t* restrict pixel, uint32_t level, int n) > +{ > + for (int i = 0; i < (n & -16); i+=1) > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; > +} > > > -----Original Message----- > > From: Tamar Christina <tamar.christina@arm.com> > > Sent: Friday, September 23, 2022 10:34 AM > > To: gcc-patches@gcc.gnu.org > > Cc: nd <nd@arm.com>; Richard Earnshaw <Richard.Earnshaw@arm.com>; > > Marcus Shawcroft <Marcus.Shawcroft@arm.com>; Kyrylo Tkachov > > <Kyrylo.Tkachov@arm.com>; Richard Sandiford > > <Richard.Sandiford@arm.com> > > Subject: [PATCH 2/4]AArch64 Add implementation for pow2 bitmask > division. > > > > Hi All, > > > > This adds an implementation for the new optab for unsigned pow2 bitmask > > for AArch64. > > > > The implementation rewrites: > > > > x = y / (2 ^ (sizeof (y)/2)-1 > > > > into e.g. (for bytes) > > > > (x + ((x + 257) >> 8)) >> 8 > > > > where it's required that the additions be done in double the precision of x > > such that we don't lose any bits during an overflow. 
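To make the double-precision requirement concrete (an editorial illustration, not from the patch): once x + 257 exceeds 0xffff, doing the first addition in the element width itself wraps and the quotient comes out off by one. For example:

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint32_t x = 0xff00;                          /* 65280 */
    uint32_t wide = (x + ((x + 257) >> 8)) >> 8;  /* 32-bit arithmetic: 256, exact */
    uint16_t t = (uint16_t) (x + 257);            /* 16-bit wrap: 65537 % 65536 = 1 */
    uint32_t narrow = (x + (t >> 8)) >> 8;        /* 255, off by one */
    printf ("wide=%u narrow=%u exact=%u\n",
            (unsigned) wide, (unsigned) narrow, (unsigned) (x / 255));
    return 0;
  }

This is why the expander relies on addhn, which performs the addition at 2*esize before narrowing.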
> > > > Essentially the sequence decomposes the division into doing two smaller > > divisions, one for the top and bottom parts of the number and adding the > > results back together. > > > > To account for the fact that a shift by 8 would be division by 256, we add 1 to > > both parts of x such that when x is 255 we still get 1 as the answer. > > > > Because the amounts we shift by are half the width of the original datatype, we can use the > > halving instructions the ISA provides to do the operation instead of using > > actual shifts. > > > > For AArch64 this means we generate for: > > > > void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n) { > > for (int i = 0; i < (n & -16); i+=1) > > pixel[i] = (pixel[i] * level) / 0xff; } > > > > the following: > > > > movi v3.16b, 0x1 > > umull2 v1.8h, v0.16b, v2.16b > > umull v0.8h, v0.8b, v2.8b > > addhn v5.8b, v1.8h, v3.8h > > addhn v4.8b, v0.8h, v3.8h > > uaddw v1.8h, v1.8h, v5.8b > > uaddw v0.8h, v0.8h, v4.8b > > uzp2 v0.16b, v0.16b, v1.16b > > > > instead of: > > > > umull v2.8h, v1.8b, v5.8b > > umull2 v1.8h, v1.16b, v5.16b > > umull v0.4s, v2.4h, v3.4h > > umull2 v2.4s, v2.8h, v3.8h > > umull v4.4s, v1.4h, v3.4h > > umull2 v1.4s, v1.8h, v3.8h > > uzp2 v0.8h, v0.8h, v2.8h > > uzp2 v1.8h, v4.8h, v1.8h > > shrn v0.8b, v0.8h, 7 > > shrn2 v0.16b, v1.8h, 7 > > > > This results in significantly faster code. > > > > Thanks to Wilco for the concept. > > > > Bootstrapped Regtested on aarch64-none-linux-gnu and no issues. > > > > Ok for master? > > > > Thanks, > > Tamar > > > > gcc/ChangeLog: > > > > * config/aarch64/aarch64-simd.md > > (@aarch64_bitmask_udiv<mode>3): New. > > * config/aarch64/aarch64.cc > > (aarch64_vectorize_can_special_div_by_constant): New. > > > > gcc/testsuite/ChangeLog: > > > > * gcc.target/aarch64/div-by-bitmask.c: New test. > > > > --- inline copy of patch -- > > diff --git a/gcc/config/aarch64/aarch64-simd.md > > b/gcc/config/aarch64/aarch64-simd.md > > index > > > 587a45d77721e1b39accbad7dbeca4d741eccb10..f4152160084d6b6f34bd69f > 0b > > a6386c1ab50f77e 100644 > > --- a/gcc/config/aarch64/aarch64-simd.md > > +++ b/gcc/config/aarch64/aarch64-simd.md > > @@ -4831,6 +4831,65 @@ (define_expand > > "aarch64_<sur><addsub>hn2<mode>" > > } > > ) > > > > +;; div optimizations using narrowings > > +;; we can do the division e.g. shorts by 255 faster by calculating it > > +as ;; (x + ((x + 257) >> 8)) >> 8 assuming the operation is done in ;; > > +double the precision of x. > > +;; > > +;; If we imagine a short as being composed of two blocks of bytes then > > +;; adding 257 or 0b0000_0001_0000_0001 to the number is equivalen to ;; > > +adding 1 to each sub component: > > +;; > > +;; short value of 16-bits > > +;; ┌──────────────┬────────────────┐ > > +;; │ │ │ > > +;; └──────────────┴────────────────┘ > > +;; 8-bit part1 ▲ 8-bit part2 ▲ > > +;; │ │ > > +;; │ │ > > +;; +1 +1 > > +;; > > +;; after the first addition, we have to shift right by 8, and narrow > > +the ;; results back to a byte. Remember that the addition must be done > > +in ;; double the precision of the input. Since 8 is half the size of a > > +short ;; we can use a narrowing halfing instruction in AArch64, addhn > > +which also ;; does the addition in a wider precision and narrows back > > +to a byte. The ;; shift itself is implicit in the operation as it > > +writes back only the top ;; half of the result. i.e. bits 2*esize-1:esize.
> > +;; > > +;; Since we have narrowed the result of the first part back to a byte, > > +for ;; the second addition we can use a widening addition, uaddw. > > +;; > > +;; For the finaly shift, since it's unsigned arithmatic we emit an ushr > > +by 8 ;; to shift and the vectorizer. > > +;; > > +;; The shift is later optimized by combine to a uzp2 with movi #0. > > +(define_expand "@aarch64_bitmask_udiv<mode>3" > > + [(match_operand:VQN 0 "register_operand") > > + (match_operand:VQN 1 "register_operand") > > + (match_operand:VQN 2 "immediate_operand")] > > + "TARGET_SIMD" > > +{ > > + unsigned HOST_WIDE_INT size > > + = (1ULL << GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode)) - 1; > > + if (!CONST_VECTOR_P (operands[2]) > > + || const_vector_encoded_nelts (operands[2]) != 1 > > + || size != UINTVAL (CONST_VECTOR_ELT (operands[2], 0))) > > + FAIL; > > + > > + rtx addend = gen_reg_rtx (<MODE>mode); > > + rtx val = aarch64_simd_gen_const_vector_dup (<VNARROWQ2>mode, > 1); > > + emit_move_insn (addend, lowpart_subreg (<MODE>mode, val, > > +<VNARROWQ2>mode)); > > + rtx tmp1 = gen_reg_rtx (<VNARROWQ>mode); > > + rtx tmp2 = gen_reg_rtx (<MODE>mode); > > + emit_insn (gen_aarch64_addhn<mode> (tmp1, operands[1], addend)); > > + unsigned bitsize = GET_MODE_UNIT_BITSIZE (<VNARROWQ>mode); > > + rtx shift_vector = aarch64_simd_gen_const_vector_dup (<MODE>mode, > > +bitsize); > > + emit_insn (gen_aarch64_uaddw<Vnarrowq> (tmp2, operands[1], > tmp1)); > > + emit_insn (gen_aarch64_simd_lshr<mode> (operands[0], tmp2, > > +shift_vector)); > > + DONE; > > +}) > > + > > ;; pmul. > > > > (define_insn "aarch64_pmul<mode>" > > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc > > index > > > 4b486aeea90ea2afb9cdd96a4dbe15c5bb2abd7a..91bb7d306f36dc4c9eeaafc > 3 > > 7484b6fc6901bfb4 100644 > > --- a/gcc/config/aarch64/aarch64.cc > > +++ b/gcc/config/aarch64/aarch64.cc > > @@ -24146,6 +24146,51 @@ aarch64_vectorize_vec_perm_const > > (machine_mode vmode, machine_mode op_mode, > > return ret; > > } > > > > +/* Implement TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST. */ > > + > > +bool > > +aarch64_vectorize_can_special_div_by_constant (enum tree_code code, > > + tree vectype, > > + tree treeop0, tree treeop1, > > + rtx *output, rtx in0, rtx in1) { > > + > > + if ((!treeop0 || !treeop1) && (in0 == NULL_RTX || in1 == NULL_RTX)) > > + return false; > > + > > + tree cst = uniform_integer_cst_p (treeop1); tree type; if (code != > > + TRUNC_DIV_EXPR > > + || !cst > > + || !TYPE_UNSIGNED ((type = TREE_TYPE (cst))) > > + || tree_int_cst_sgn (cst) != 1) > > + return false; > > + > > + unsigned int flags = aarch64_classify_vector_mode (TYPE_MODE > > + (vectype)); if ((flags & VEC_ANY_SVE) && !TARGET_SVE2) > > + return false; > > + > > + if (in0 == NULL_RTX && in1 == NULL_RTX) > > + { > > + gcc_assert (treeop0 && treeop1); > > + wide_int icst = wi::to_wide (cst); > > + wide_int val = wi::add (icst, 1); > > + int pow = wi::exact_log2 (val); > > + return pow == (TYPE_PRECISION (type) / 2); > > + } > > + > > + if (!VECTOR_TYPE_P (vectype)) > > + return false; > > + > > + gcc_assert (output); > > + > > + if (!*output) > > + *output = gen_reg_rtx (TYPE_MODE (vectype)); > > + > > + emit_insn (gen_aarch64_bitmask_udiv3 (TYPE_MODE (vectype), *output, > > +in0, in1)); > > + return true; > > +} > > + > > /* Generate a byte permute mask for a register of mode MODE, > > which has NUNITS units. 
*/ > > > > diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi index > > > 92bda1a7e14a3c9ea63e151e4a49a818bf4d1bdb..adba9fe97a9b43729c5e86d > 2 > > 44a2a23e76cac097 100644 > > --- a/gcc/doc/tm.texi > > +++ b/gcc/doc/tm.texi > > @@ -6112,6 +6112,22 @@ instruction pattern. There is no need for the > hook > > to handle these two implementation approaches itself. > > @end deftypefn > > > > +@deftypefn {Target Hook} bool > > TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST > > +(enum @var{tree_code}, tree @var{vectype}, tree @var{treeop0}, tree > > +@var{treeop1}, rtx *@var{output}, rtx @var{in0}, rtx @var{in1}) This > > +hook is used to test whether the target has a special method of > > +division of vectors of type @var{vectype} using the two operands > > @code{treeop0}, and @code{treeop1} and producing a vector of type > > @var{vectype}. The division will then not be decomposed by the and kept > as > > a div. > > + > > +When the hook is being used to test whether the target supports a > > +special divide, @var{in0}, @var{in1}, and @var{output} are all null. > > +When the hook is being used to emit a division, @var{in0} and @var{in1} > > +are the source vectors of type @var{vecttype} and @var{output} is the > > +destination vector of type @var{vectype}. > > + > > +Return true if the operation is possible, emitting instructions for it > > +if rtxes are provided and updating @var{output}. > > +@end deftypefn > > + > > @deftypefn {Target Hook} tree > > TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION (unsigned > > @var{code}, tree @var{vec_type_out}, tree @var{vec_type_in}) This hook > > should return the decl of a function that implements the vectorized variant > > of the function with the @code{combined_fn} code diff --git > > a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in index > > > 112462310b134705d860153294287cfd7d4af81d..d5a745a02acdf051ea1da1b > 04 > > 076d058c24ce093 100644 > > --- a/gcc/doc/tm.texi.in > > +++ b/gcc/doc/tm.texi.in > > @@ -4164,6 +4164,8 @@ address; but often a machine-dependent > strategy > > can generate better code. > > > > @hook TARGET_VECTORIZE_VEC_PERM_CONST > > > > +@hook TARGET_VECTORIZE_CAN_SPECIAL_DIV_BY_CONST > > + > > @hook TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION > > > > @hook TARGET_VECTORIZE_BUILTIN_MD_VECTORIZED_FUNCTION > > diff --git a/gcc/explow.cc b/gcc/explow.cc index > > > ddb4d6ae3600542f8d2bb5617cdd3933a9fae6c0..568e0eb1a158c696458ae67 > 8f > > 5e346bf34ba0036 100644 > > --- a/gcc/explow.cc > > +++ b/gcc/explow.cc > > @@ -1037,7 +1037,7 @@ round_push (rtx size) > > TRUNC_DIV_EXPR. 
*/ > > size = expand_binop (Pmode, add_optab, size, alignm1_rtx, > > NULL_RTX, 1, OPTAB_LIB_WIDEN); > > - size = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, size, align_rtx, > > + size = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, NULL, NULL, size, > > + align_rtx, > > NULL_RTX, 1); > > size = expand_mult (Pmode, size, align_rtx, NULL_RTX, 1); > > > > @@ -1203,7 +1203,7 @@ align_dynamic_address (rtx target, unsigned > > required_align) > > gen_int_mode (required_align / BITS_PER_UNIT - 1, > > Pmode), > > NULL_RTX, 1, OPTAB_LIB_WIDEN); > > - target = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, target, > > + target = expand_divmod (0, TRUNC_DIV_EXPR, Pmode, NULL, NULL, > > target, > > gen_int_mode (required_align / BITS_PER_UNIT, > > Pmode), > > NULL_RTX, 1); > > diff --git a/gcc/expmed.h b/gcc/expmed.h index > > > 0b2538c4c6bd51dfdc772ef70bdf631c0bed8717..0db2986f11ff4a4b10b59501 > c6 > > f33cb3595659b5 100644 > > --- a/gcc/expmed.h > > +++ b/gcc/expmed.h > > @@ -708,8 +708,9 @@ extern rtx expand_variable_shift (enum tree_code, > > machine_mode, extern rtx expand_shift (enum tree_code, > machine_mode, > > rtx, poly_int64, rtx, > > int); > > #ifdef GCC_OPTABS_H > > -extern rtx expand_divmod (int, enum tree_code, machine_mode, rtx, rtx, > > - rtx, int, enum optab_methods = > > OPTAB_LIB_WIDEN); > > +extern rtx expand_divmod (int, enum tree_code, machine_mode, tree, > > tree, > > + rtx, rtx, rtx, int, > > + enum optab_methods = OPTAB_LIB_WIDEN); > > #endif > > #endif > > > > diff --git a/gcc/expmed.cc b/gcc/expmed.cc index > > > 8d7418be418406e72a895ecddf2dc7fdb950c76c..b64ea5ac46a9da85770a5bb > 09 > > 90db8b97d3af414 100644 > > --- a/gcc/expmed.cc > > +++ b/gcc/expmed.cc > > @@ -4222,8 +4222,8 @@ expand_sdiv_pow2 (scalar_int_mode mode, rtx > > op0, HOST_WIDE_INT d) > > > > rtx > > expand_divmod (int rem_flag, enum tree_code code, machine_mode > > mode, > > - rtx op0, rtx op1, rtx target, int unsignedp, > > - enum optab_methods methods) > > + tree treeop0, tree treeop1, rtx op0, rtx op1, rtx target, > > + int unsignedp, enum optab_methods methods) > > { > > machine_mode compute_mode; > > rtx tquotient; > > @@ -4375,6 +4375,14 @@ expand_divmod (int rem_flag, enum tree_code > > code, machine_mode mode, > > > > last_div_const = ! rem_flag && op1_is_constant ? INTVAL (op1) : 0; > > > > + /* Check if the target has specific expansions for the division. */ > > + if (treeop0 > > + && targetm.vectorize.can_special_div_by_const (code, TREE_TYPE > > (treeop0), > > + treeop0, treeop1, > > + &target, op0, op1)) > > + return target; > > + > > + > > /* Now convert to the best mode to use. 
*/ > > if (compute_mode != mode) > > { > > @@ -4618,8 +4626,8 @@ expand_divmod (int rem_flag, enum tree_code > > code, machine_mode mode, > > || (optab_handler (sdivmod_optab, int_mode) > > != CODE_FOR_nothing))) > > quotient = expand_divmod (0, TRUNC_DIV_EXPR, > > - int_mode, op0, > > - gen_int_mode (abs_d, > > + int_mode, treeop0, treeop1, > > + op0, gen_int_mode (abs_d, > > int_mode), > > NULL_RTX, 0); > > else > > @@ -4808,8 +4816,8 @@ expand_divmod (int rem_flag, enum tree_code > > code, machine_mode mode, > > size - 1, NULL_RTX, 0); > > t3 = force_operand (gen_rtx_MINUS (int_mode, t1, nsign), > > NULL_RTX); > > - t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, t3, > > op1, > > - NULL_RTX, 0); > > + t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, > > treeop0, > > + treeop1, t3, op1, NULL_RTX, 0); > > if (t4) > > { > > rtx t5; > > diff --git a/gcc/expr.cc b/gcc/expr.cc > > index > > > 80bb1b8a4c5b8350fb1b8f57a99fd52e5882fcb6..b786f1d75e25f3410c0640cd > 96 > > a8abc055fa34d9 100644 > > --- a/gcc/expr.cc > > +++ b/gcc/expr.cc > > @@ -8028,16 +8028,17 @@ force_operand (rtx value, rtx target) > > return expand_divmod (0, > > FLOAT_MODE_P (GET_MODE (value)) > > ? RDIV_EXPR : TRUNC_DIV_EXPR, > > - GET_MODE (value), op1, op2, target, 0); > > + GET_MODE (value), NULL, NULL, op1, op2, > > + target, 0); > > case MOD: > > - return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > > op1, op2, > > - target, 0); > > + return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > > NULL, NULL, > > + op1, op2, target, 0); > > case UDIV: > > - return expand_divmod (0, TRUNC_DIV_EXPR, GET_MODE (value), > > op1, op2, > > - target, 1); > > + return expand_divmod (0, TRUNC_DIV_EXPR, GET_MODE (value), > > NULL, NULL, > > + op1, op2, target, 1); > > case UMOD: > > - return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > > op1, op2, > > - target, 1); > > + return expand_divmod (1, TRUNC_MOD_EXPR, GET_MODE (value), > > NULL, NULL, > > + op1, op2, target, 1); > > case ASHIFTRT: > > return expand_simple_binop (GET_MODE (value), code, op1, op2, > > target, 0, OPTAB_LIB_WIDEN); > > @@ -8990,11 +8991,13 @@ expand_expr_divmod (tree_code code, > > machine_mode mode, tree treeop0, > > bool speed_p = optimize_insn_for_speed_p (); > > do_pending_stack_adjust (); > > start_sequence (); > > - rtx uns_ret = expand_divmod (mod_p, code, mode, op0, op1, target, > 1); > > + rtx uns_ret = expand_divmod (mod_p, code, mode, treeop0, treeop1, > > + op0, op1, target, 1); > > rtx_insn *uns_insns = get_insns (); > > end_sequence (); > > start_sequence (); > > - rtx sgn_ret = expand_divmod (mod_p, code, mode, op0, op1, target, > 0); > > + rtx sgn_ret = expand_divmod (mod_p, code, mode, treeop0, treeop1, > > + op0, op1, target, 0); > > rtx_insn *sgn_insns = get_insns (); > > end_sequence (); > > unsigned uns_cost = seq_cost (uns_insns, speed_p); @@ -9016,7 > +9019,8 > > @@ expand_expr_divmod (tree_code code, machine_mode mode, tree > > treeop0, > > emit_insn (sgn_insns); > > return sgn_ret; > > } > > - return expand_divmod (mod_p, code, mode, op0, op1, target, > unsignedp); > > + return expand_divmod (mod_p, code, mode, treeop0, treeop1, > > + op0, op1, target, unsignedp); > > } > > > > rtx > > diff --git a/gcc/optabs.cc b/gcc/optabs.cc index > > > 165f8d1fa22432b96967c69a58dbb7b4bf18120d..cff37ccb0dfc3dd79b97d0abf > d > > 872f340855dc96 100644 > > --- a/gcc/optabs.cc > > +++ b/gcc/optabs.cc > > @@ -1104,8 +1104,9 @@ expand_doubleword_mod (machine_mode > mode, > > rtx op0, rtx op1, bool unsignedp) > > return NULL_RTX; > > } 
> > } > > - rtx remainder = expand_divmod (1, TRUNC_MOD_EXPR, word_mode, > > sum, > > - gen_int_mode (INTVAL (op1), > > word_mode), > > + rtx remainder = expand_divmod (1, TRUNC_MOD_EXPR, word_mode, > > NULL, NULL, > > + sum, gen_int_mode (INTVAL (op1), > > + word_mode), > > NULL_RTX, 1, OPTAB_DIRECT); > > if (remainder == NULL_RTX) > > return NULL_RTX; > > @@ -1208,8 +1209,8 @@ expand_doubleword_divmod (machine_mode > > mode, rtx op0, rtx op1, rtx *rem, > > > > if (op11 != const1_rtx) > > { > > - rtx rem2 = expand_divmod (1, TRUNC_MOD_EXPR, mode, quot1, op11, > > - NULL_RTX, unsignedp, OPTAB_DIRECT); > > + rtx rem2 = expand_divmod (1, TRUNC_MOD_EXPR, mode, NULL, NULL, > > quot1, > > + op11, NULL_RTX, unsignedp, > > OPTAB_DIRECT); > > if (rem2 == NULL_RTX) > > return NULL_RTX; > > > > @@ -1223,8 +1224,8 @@ expand_doubleword_divmod (machine_mode > > mode, rtx op0, rtx op1, rtx *rem, > > if (rem2 == NULL_RTX) > > return NULL_RTX; > > > > - rtx quot2 = expand_divmod (0, TRUNC_DIV_EXPR, mode, quot1, op11, > > - NULL_RTX, unsignedp, OPTAB_DIRECT); > > + rtx quot2 = expand_divmod (0, TRUNC_DIV_EXPR, mode, NULL, NULL, > > quot1, > > + op11, NULL_RTX, unsignedp, > > OPTAB_DIRECT); > > if (quot2 == NULL_RTX) > > return NULL_RTX; > > > > diff --git a/gcc/target.def b/gcc/target.def index > > > 2a7fa68f83dd15dcdd2c332e8431e6142ec7d305..92ebd2af18fe8abb6ed95b0 > 70 > > 81cdd70113db9b1 100644 > > --- a/gcc/target.def > > +++ b/gcc/target.def > > @@ -1902,6 +1902,25 @@ implementation approaches itself.", > > const vec_perm_indices &sel), > > NULL) > > > > +DEFHOOK > > +(can_special_div_by_const, > > + "This hook is used to test whether the target has a special method > > +of\n\ division of vectors of type @var{vectype} using the two operands > > +@code{treeop0},\n\ and @code{treeop1} and producing a vector of type > > +@var{vectype}. The division\n\ will then not be decomposed by the and > > +kept as a div.\n\ \n\ When the hook is being used to test whether the > > +target supports a special\n\ divide, @var{in0}, @var{in1}, and > > +@var{output} are all null. When the hook\n\ is being used to emit a > > +division, @var{in0} and @var{in1} are the source\n\ vectors of type > > +@var{vecttype} and @var{output} is the destination vector of\n\ type > > +@var{vectype}.\n\ \n\ Return true if the operation is possible, > > +emitting instructions for it\n\ if rtxes are provided and updating > > +@var{output}.", bool, (enum tree_code, tree vectype, tree treeop0, > > +tree treeop1, rtx *output, > > + rtx in0, rtx in1), > > + default_can_special_div_by_const) > > + > > /* Return true if the target supports misaligned store/load of a > > specific factor denoted in the third parameter. The last parameter > > is true if the access is defined in a packed struct. 
*/ diff --git > a/gcc/target.h > > b/gcc/target.h index > > > d6fa6931499d15edff3e5af3e429540d001c7058..c836036ac7fa7910d62bd3da > 56 > > f39c061f68b665 100644 > > --- a/gcc/target.h > > +++ b/gcc/target.h > > @@ -51,6 +51,7 @@ > > #include "insn-codes.h" > > #include "tm.h" > > #include "hard-reg-set.h" > > +#include "tree-core.h" > > > > #if CHECKING_P > > > > diff --git a/gcc/targhooks.h b/gcc/targhooks.h index > > > ecce55ebe797cedc940620e8d89816973a045d49..42451a3e22e86fee9da2f56e > > 2640d63f936b336d 100644 > > --- a/gcc/targhooks.h > > +++ b/gcc/targhooks.h > > @@ -207,6 +207,8 @@ extern void default_addr_space_diagnose_usage > > (addr_space_t, location_t); extern rtx default_addr_space_convert (rtx, > > tree, tree); extern unsigned int default_case_values_threshold (void); > > extern bool default_have_conditional_execution (void); > > +extern bool default_can_special_div_by_const (enum tree_code, tree, > > tree, tree, > > + rtx *, rtx, rtx); > > > > extern bool default_libc_has_function (enum function_class, tree); extern > > bool default_libc_has_fast_function (int fcode); diff --git a/gcc/targhooks.cc > > b/gcc/targhooks.cc index > > > b15ae19bcb60c59ae8112e67b5f06a241a9bdbf1..8206533382611a7640efba2 > 41 > > 279936ced41ee95 100644 > > --- a/gcc/targhooks.cc > > +++ b/gcc/targhooks.cc > > @@ -1807,6 +1807,14 @@ default_have_conditional_execution (void) > > return HAVE_conditional_execution; > > } > > > > +/* Default that no division by constant operations are special. */ > > +bool default_can_special_div_by_const (enum tree_code, tree, tree, > > +tree, rtx *, rtx, > > + rtx) > > +{ > > + return false; > > +} > > + > > /* By default we assume that c99 functions are present at the runtime, > > but sincos is not. */ > > bool > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..472cd710534bc8aa9b1b491 > 6f3 > > d7b4d5b64a19b9 > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-1.c > > @@ -0,0 +1,25 @@ > > +/* { dg-require-effective-target vect_int } */ > > + > > +#include <stdint.h> > > +#include "tree-vect.h" > > + > > +#define N 50 > > +#define TYPE uint8_t > > + > > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * level) / 0xff; } > > + > > +__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * level) / 0xff; } > > + > > +#include "vect-div-bitmask.h" > > + > > +/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > > +detected" "vect" { target aarch64*-*-* } } } */ > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..e904a71885b2e8487593a2c > d3 > > db75b3e4112e2cc > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-2.c > > @@ -0,0 +1,25 @@ > > +/* { dg-require-effective-target vect_int } */ > > + > > +#include <stdint.h> > > +#include "tree-vect.h" > > + > > +#define N 50 > > +#define TYPE uint16_t > > + > > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * level) / 0xffffU; } > > + > > 
+__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * level) / 0xffffU; } > > + > > +#include "vect-div-bitmask.h" > > + > > +/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > > +detected" "vect" { target aarch64*-*-* } } } */ > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..a1418ebbf5ea8731ed4e3e7 > 20 > > 157701d9d1cf852 > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask-3.c > > @@ -0,0 +1,26 @@ > > +/* { dg-require-effective-target vect_int } */ > > +/* { dg-additional-options "-fno-vect-cost-model" { target aarch64*-*-* > > +} } */ > > + > > +#include <stdint.h> > > +#include "tree-vect.h" > > + > > +#define N 50 > > +#define TYPE uint32_t > > + > > +__attribute__((noipa, noinline, optimize("O1"))) void fun1(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > > + > > +__attribute__((noipa, noinline, optimize("O3"))) void fun2(TYPE* > > +restrict pixel, TYPE level, int n) { > > + for (int i = 0; i < n; i+=1) > > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > > + > > +#include "vect-div-bitmask.h" > > + > > +/* { dg-final { scan-tree-dump-not "vect_recog_divmod_pattern: > > +detected" "vect" { target aarch64*-*-* } } } */ > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > > b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..29a16739aa4b706616367bf > d1 > > 832f28ebd07993e > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/vect/vect-div-bitmask.h > > @@ -0,0 +1,43 @@ > > +#include <stdio.h> > > + > > +#ifndef N > > +#define N 65 > > +#endif > > + > > +#ifndef TYPE > > +#define TYPE uint32_t > > +#endif > > + > > +#ifndef DEBUG > > +#define DEBUG 0 > > +#endif > > + > > +#define BASE ((TYPE) -1 < 0 ? -126 : 4) > > + > > +int main () > > +{ > > + TYPE a[N]; > > + TYPE b[N]; > > + > > + for (int i = 0; i < N; ++i) > > + { > > + a[i] = BASE + i * 13; > > + b[i] = BASE + i * 13; > > + if (DEBUG) > > + printf ("%d: 0x%x\n", i, a[i]); > > + } > > + > > + fun1 (a, N / 2, N); > > + fun2 (b, N / 2, N); > > + > > + for (int i = 0; i < N; ++i) > > + { > > + if (DEBUG) > > + printf ("%d = 0x%x == 0x%x\n", i, a[i], b[i]); > > + > > + if (a[i] != b[i]) > > + __builtin_abort (); > > + } > > + return 0; > > +} > > + > > diff --git a/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > > b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > > new file mode 100644 > > index > > > 0000000000000000000000000000000000000000..2a535791ba7258302e0c2cf > 44a > > b211cd246d82d5 > > --- /dev/null > > +++ b/gcc/testsuite/gcc.target/aarch64/div-by-bitmask.c > > @@ -0,0 +1,61 @@ > > +/* { dg-do compile } */ > > +/* { dg-additional-options "-O3 -std=c99" } */ > > +/* { dg-final { check-function-bodies "**" "" "" { target { le } } } } > > +*/ > > + > > +#include <stdint.h> > > + > > +#pragma GCC target "+nosve" > > + > > +/* > > +** draw_bitmap1: > > +** ... > > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > > +** addhn v[0-9]+.8b, v[0-9]+.8h, v[0-9]+.8h > > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > > +** uaddw v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8b > > +** uzp2 v[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b > > +** ... 
> > +*/ > > +void draw_bitmap1(uint8_t* restrict pixel, uint8_t level, int n) { > > + for (int i = 0; i < (n & -16); i+=1) > > + pixel[i] = (pixel[i] * level) / 0xff; } > > + > > +void draw_bitmap2(uint8_t* restrict pixel, uint8_t level, int n) { > > + for (int i = 0; i < (n & -16); i+=1) > > + pixel[i] = (pixel[i] * level) / 0xfe; } > > + > > +/* > > +** draw_bitmap3: > > +** ... > > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > > +** addhn v[0-9]+.4h, v[0-9]+.4s, v[0-9]+.4s > > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > > +** uaddw v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4h > > +** uzp2 v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8h > > +** ... > > +*/ > > +void draw_bitmap3(uint16_t* restrict pixel, uint16_t level, int n) { > > + for (int i = 0; i < (n & -16); i+=1) > > + pixel[i] = (pixel[i] * level) / 0xffffU; } > > + > > +/* > > +** draw_bitmap4: > > +** ... > > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > > +** addhn v[0-9]+.2s, v[0-9]+.2d, v[0-9]+.2d > > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > > +** uaddw v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2s > > +** uzp2 v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4s > > +** ... > > +*/ > > +void draw_bitmap4(uint32_t* restrict pixel, uint32_t level, int n) { > > + for (int i = 0; i < (n & -16); i+=1) > > + pixel[i] = (pixel[i] * (uint64_t)level) / 0xffffffffUL; } > > diff --git a/gcc/tree-vect-generic.cc b/gcc/tree-vect-generic.cc index > > > 350129555a0c71c0896c4f1003163f3b3557c11b..ebee5e24b186915ebcb3a817 > c > > 9a12046b6ec94f3 100644 > > --- a/gcc/tree-vect-generic.cc > > +++ b/gcc/tree-vect-generic.cc > > @@ -1237,6 +1237,14 @@ expand_vector_operation > (gimple_stmt_iterator > > *gsi, tree type, tree compute_type > > tree rhs2 = gimple_assign_rhs2 (assign); > > tree ret; > > > > + /* Check if the target was going to handle it through the special > > + division callback hook. */ > > + if (targetm.vectorize.can_special_div_by_const (code, type, rhs1, > > + rhs2, NULL, > > + NULL_RTX, > > NULL_RTX)) > > + return NULL_TREE; > > + > > + > > if (!optimize > > || !VECTOR_INTEGER_TYPE_P (type) > > || TREE_CODE (rhs2) != VECTOR_CST diff --git a/gcc/tree-vect- > > patterns.cc b/gcc/tree-vect-patterns.cc index > > > 09574bb1a2696b3438a4ce9f09f74b42e784aca0..607acdf95eb30335d8bc0e85 > af > > 0b1bfea10fe443 100644 > > --- a/gcc/tree-vect-patterns.cc > > +++ b/gcc/tree-vect-patterns.cc > > @@ -3596,6 +3596,12 @@ vect_recog_divmod_pattern (vec_info *vinfo, > > > > return pattern_stmt; > > } > > + else if (targetm.vectorize.can_special_div_by_const (rhs_code, vectype, > > + oprnd0, oprnd1, NULL, > > + NULL_RTX, NULL_RTX)) > > + { > > + return NULL; > > + } > > > > if (prec > HOST_BITS_PER_WIDE_INT > > || integer_zerop (oprnd1)) > > diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc index > > > c9dab217f059f17e91e9a7582523e627d7a45b66..6d05c48a7339de094d7288b > d6 > > 8e0e1c1e93faafe 100644 > > --- a/gcc/tree-vect-stmts.cc > > +++ b/gcc/tree-vect-stmts.cc > > @@ -6260,6 +6260,11 @@ vectorizable_operation (vec_info *vinfo, > > } > > target_support_p = (optab_handler (optab, vec_mode) > > != CODE_FOR_nothing); > > + if (!target_support_p) > > + target_support_p > > + = targetm.vectorize.can_special_div_by_const (code, vectype, > > + op0, op1, NULL, > > + NULL_RTX, > > NULL_RTX); > > } > > > > bool using_emulated_vectors_p = vect_emulated_vector_p (vectype); > > > > > > > > > > --
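As a closing reference for readers of the thread, the sequence the expander emits (addhn, uaddw, then the shift) can be written out with arm_neon.h intrinsics for the 16-bit-element case. This is an editorial sketch of the same dataflow, not code from the patch, and the function name is illustrative:

  #include <arm_neon.h>

  /* (x + ((x + 257) >> 8)) >> 8 across eight unsigned 16-bit lanes,
     mirroring what @aarch64_bitmask_udiv<mode>3 expands to for V8HI.  */
  static inline uint16x8_t
  bitmask_udiv255_u16 (uint16x8_t x)
  {
    uint16x8_t c257 = vdupq_n_u16 (257);
    uint8x8_t t1 = vaddhn_u16 (x, c257);  /* addhn: (x + 257) >> 8, narrowed */
    uint16x8_t t2 = vaddw_u8 (x, t1);     /* uaddw: x + t1 widened back */
    return vshrq_n_u16 (t2, 8);           /* ushr: the final >> 8 */
  }

With two such vectors live in a vectorized loop, combine folds the trailing shifts and the pack of the two results into the uzp2-with-movi-#0 form that the div-by-bitmask.c scan patterns check for.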