From patchwork Mon Jun 12 08:27:56 2023
From: Tejas Belagod <tejas.belagod@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [PATCH v2] [PR96339] Optimise svlast[ab]
Date: Mon, 12 Jun 2023 09:27:56 +0100
Message-ID: <20230612082756.27638-1-tejas.belagod@arm.com>
X-Mailer: git-send-email 2.17.1

This PR optimizes an SVE intrinsics sequence in which a scalar is
selected from a variable vector based on a constant predicate, as in
svlasta (svptrue_pat_b8 (SV_VL1), x).  Such a sequence is optimized to
return the corresponding element of a NEON (Advanced SIMD) vector.  For
example:
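As a minimal ACLE sketch of the kind of source involved (the wrapper
functions last_a_vl1 and last_b_vl1 are hypothetical; only the intrinsic
calls come from this description):

  #include <arm_sve.h>
  #include <stdint.h>

  /* LASTA selects the element *after* the last active one.  With the
     SV_VL1 pattern only element 0 is active, so this is element 1.  */
  int8_t last_a_vl1 (svint8_t x)
  {
    return svlasta (svptrue_pat_b8 (SV_VL1), x);
  }

  /* LASTB selects the last active element itself: element 0 here.  */
  int8_t last_b_vl1 (svint8_t x)
  {
    return svlastb (svptrue_pat_b8 (SV_VL1), x);
  }

With this fold and optimization enabled, each wrapper should compile down
to the single Advanced SIMD element move quoted below: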
svlasta (svptrue_pat_b8 (SV_VL1), x) returns umov w0, v0.b[1] Likewise, svlastb (svptrue_pat_b8 (SV_VL1), x) returns umov w0, v0.b[0] This optimization only works provided the constant predicate maps to a range that is within the bounds of a 128-bit NEON register. gcc/ChangeLog: PR target/96339 * config/aarch64/aarch64-sve-builtins-base.cc (svlast_impl::fold): Fold sve calls that have a constant input predicate vector. (svlast_impl::is_lasta): Query to check if intrinsic is svlasta. (svlast_impl::is_lastb): Query to check if intrinsic is svlastb. (svlast_impl::vect_all_same): Check if all vector elements are equal. gcc/testsuite/ChangeLog: PR target/96339 * gcc.target/aarch64/sve/acle/general-c/svlast.c: New. * gcc.target/aarch64/sve/acle/general-c/svlast128_run.c: New. * gcc.target/aarch64/sve/acle/general-c/svlast256_run.c: New. * gcc.target/aarch64/sve/pcs/return_4.c (caller_bf16): Fix asm to expect optimized code for function body. * gcc.target/aarch64/sve/pcs/return_4_128.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_4_256.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_4_512.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_4_1024.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_4_2048.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_5.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_5_128.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_5_256.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_5_512.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_5_1024.c (caller_bf16): Likewise. * gcc.target/aarch64/sve/pcs/return_5_2048.c (caller_bf16): Likewise. --- .../aarch64/aarch64-sve-builtins-base.cc | 133 ++++++++ .../aarch64/sve/acle/general-c/svlast.c | 63 ++++ .../sve/acle/general-c/svlast128_run.c | 313 +++++++++++++++++ .../sve/acle/general-c/svlast256_run.c | 314 ++++++++++++++++++ .../gcc.target/aarch64/sve/pcs/return_4.c | 2 - .../aarch64/sve/pcs/return_4_1024.c | 2 - .../gcc.target/aarch64/sve/pcs/return_4_128.c | 2 - .../aarch64/sve/pcs/return_4_2048.c | 2 - .../gcc.target/aarch64/sve/pcs/return_4_256.c | 2 - .../gcc.target/aarch64/sve/pcs/return_4_512.c | 2 - .../gcc.target/aarch64/sve/pcs/return_5.c | 2 - .../aarch64/sve/pcs/return_5_1024.c | 2 - .../gcc.target/aarch64/sve/pcs/return_5_128.c | 2 - .../aarch64/sve/pcs/return_5_2048.c | 2 - .../gcc.target/aarch64/sve/pcs/return_5_256.c | 2 - .../gcc.target/aarch64/sve/pcs/return_5_512.c | 2 - 16 files changed, 823 insertions(+), 24 deletions(-) create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc index cd9cace3c9b..9b766ffa817 100644 --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc @@ -1056,6 +1056,139 @@ class svlast_impl : public quiet public: CONSTEXPR svlast_impl (int unspec) : m_unspec (unspec) {} + bool is_lasta () const { return m_unspec == UNSPEC_LASTA; } + bool is_lastb () const { return m_unspec == UNSPEC_LASTB; } + + bool vect_all_same (tree v, int step) const + { + int i; + int nelts = vector_cst_encoded_nelts (v); + tree first_el = VECTOR_CST_ENCODED_ELT (v, 0); + + for (i = 0; i < nelts; i += step) + if (!operand_equal_p 
(VECTOR_CST_ENCODED_ELT (v, i), first_el, 0)) + return false; + + return true; + } + + /* Fold a svlast{a/b} call with constant predicate to a BIT_FIELD_REF. + BIT_FIELD_REF lowers to Advanced SIMD element extract, so we have to + ensure the index of the element being accessed is in the range of a + Advanced SIMD vector width. */ + gimple *fold (gimple_folder & f) const override + { + tree pred = gimple_call_arg (f.call, 0); + tree val = gimple_call_arg (f.call, 1); + + if (TREE_CODE (pred) == VECTOR_CST) + { + HOST_WIDE_INT pos; + int i = 0; + int step = f.type_suffix (0).element_bytes; + int step_1 = gcd (step, VECTOR_CST_NPATTERNS (pred)); + int npats = VECTOR_CST_NPATTERNS (pred); + unsigned HOST_WIDE_INT enelts = vector_cst_encoded_nelts (pred); + tree b = NULL_TREE; + unsigned HOST_WIDE_INT nelts; + + /* We can optimize 2 cases common to variable and fixed-length cases + without a linear search of the predicate vector: + 1. LASTA if predicate is all true, return element 0. + 2. LASTA if predicate all false, return element 0. */ + if (is_lasta () && vect_all_same (pred, step_1)) + { + b = build3 (BIT_FIELD_REF, TREE_TYPE (f.lhs), val, + bitsize_int (step * BITS_PER_UNIT), bitsize_int (0)); + return gimple_build_assign (f.lhs, b); + } + + /* Handle the all-false case for LASTB where SVE VL == 128b - + return the highest numbered element. */ + if (is_lastb () && known_eq (BYTES_PER_SVE_VECTOR, 16) + && vect_all_same (pred, step_1) + && integer_zerop (VECTOR_CST_ENCODED_ELT (pred, 0))) + { + b = build3 (BIT_FIELD_REF, TREE_TYPE (f.lhs), val, + bitsize_int (step * BITS_PER_UNIT), + bitsize_int ((16 - step) * BITS_PER_UNIT)); + + return gimple_build_assign (f.lhs, b); + } + + /* Determine if there are any repeating non-zero elements in variable + length vectors. */ + if (!VECTOR_CST_NELTS (pred).is_constant (&nelts)) + { + /* If VECTOR_CST_NELTS_PER_PATTERN (pred) == 2 and every multiple of + 'step_1' in + [VECTOR_CST_NPATTERNS .. VECTOR_CST_ENCODED_NELTS - 1] + is zero, then we can treat the vector as VECTOR_CST_NPATTERNS + elements followed by all inactive elements. */ + if (VECTOR_CST_NELTS_PER_PATTERN (pred) == 2) + { + /* Restrict the scope of search to NPATS if vector is + variable-length for linear search later. */ + nelts = npats; + for (i = npats; i < enelts; i += step_1) + { + /* If there are active elements in the repeated pattern of a + variable-length vector, then return NULL as there is no + way to be sure statically if this falls within the + Advanced SIMD range. */ + if (!integer_zerop (VECTOR_CST_ENCODED_ELT (pred, i))) + return NULL; + } + } + else + /* If we're here, it means that for NELTS_PER_PATTERN != 2, there + is a repeating non-zero element. */ + return NULL; + } + + /* If we're here, it means either: + 1. The vector is variable-length and there's no active element in the + repeated part of the pattern, or + 2. The vector is fixed-length. + + Fall through to finding the last active element linearly for + for all cases where the last active element is known to be + within a statically-determinable range. */ + i = MAX ((int)nelts - step, 0); + for (; i >= 0; i -= step) + if (!integer_zerop (VECTOR_CST_ELT (pred, i))) + break; + + if (is_lastb ()) + { + /* For LASTB, the element is the last active element. */ + pos = i; + } + else + { + /* For LASTA, the element is one after last active element. */ + pos = i + step; + + /* If last active element is + last element, wrap-around and return first Advanced SIMD + element. 
*/ + if (known_ge (pos, BYTES_PER_SVE_VECTOR)) + pos = 0; + } + + /* Out of Advanced SIMD range. */ + if (pos < 0 || pos > 15) + return NULL; + + b = build3 (BIT_FIELD_REF, TREE_TYPE (f.lhs), val, + bitsize_int (step * BITS_PER_UNIT), + bitsize_int (pos * BITS_PER_UNIT)); + + return gimple_build_assign (f.lhs, b); + } + return NULL; + } + rtx expand (function_expander &e) const override { diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c new file mode 100644 index 00000000000..fdbe5e309af --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c @@ -0,0 +1,63 @@ +/* { dg-do compile } */ +/* { dg-options "-O3 -msve-vector-bits=256" } */ + +#include +#include "arm_sve.h" + +#define NAME(name, size, pat, sign, ab) \ + name ## _ ## size ## _ ## pat ## _ ## sign ## _ ## ab + +#define NAMEF(name, size, pat, sign, ab) \ + name ## _ ## size ## _ ## pat ## _ ## sign ## _ ## ab ## _false + +#define SVTYPE(size, sign) \ + sv ## sign ## int ## size ## _t + +#define STYPE(size, sign) sign ## int ## size ##_t + +#define SVELAST_DEF(size, pat, sign, ab, su) \ + STYPE (size, sign) __attribute__((noinline)) \ + NAME (foo, size, pat, sign, ab) (SVTYPE (size, sign) x) \ + { \ + return svlast ## ab (svptrue_pat_b ## size (pat), x); \ + } \ + STYPE (size, sign) __attribute__((noinline)) \ + NAMEF (foo, size, pat, sign, ab) (SVTYPE (size, sign) x) \ + { \ + return svlast ## ab (svpfalse (), x); \ + } + +#define ALL_PATS(SIZE, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL1, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL2, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL3, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL4, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL5, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL6, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL7, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL8, SIGN, AB, SU) \ + SVELAST_DEF (SIZE, SV_VL16, SIGN, AB, SU) + +#define ALL_SIGN(SIZE, AB) \ + ALL_PATS (SIZE, , AB, s) \ + ALL_PATS (SIZE, u, AB, u) + +#define ALL_SIZE(AB) \ + ALL_SIGN (8, AB) \ + ALL_SIGN (16, AB) \ + ALL_SIGN (32, AB) \ + ALL_SIGN (64, AB) + +#define ALL_POS() \ + ALL_SIZE (a) \ + ALL_SIZE (b) + + +ALL_POS() + +/* { dg-final { scan-assembler-times {\tumov\tw[0-9]+, v[0-9]+\.b} 52 } } */ +/* { dg-final { scan-assembler-times {\tumov\tw[0-9]+, v[0-9]+\.h} 50 } } */ +/* { dg-final { scan-assembler-times {\tumov\tw[0-9]+, v[0-9]+\.s} 12 } } */ +/* { dg-final { scan-assembler-times {\tfmov\tw0, s0} 24 } } */ +/* { dg-final { scan-assembler-times {\tumov\tx[0-9]+, v[0-9]+\.d} 4 } } */ +/* { dg-final { scan-assembler-times {\tfmov\tx0, d0} 32 } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c new file mode 100644 index 00000000000..5e1e9303d7b --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c @@ -0,0 +1,313 @@ +/* { dg-do run { target aarch64_sve128_hw } } */ +/* { dg-options "-O3 -msve-vector-bits=128 -std=gnu99" } */ + +#include "svlast.c" + +int +main (void) +{ + int8_t res_8_SV_VL1__a = 1; + int8_t res_8_SV_VL2__a = 2; + int8_t res_8_SV_VL3__a = 3; + int8_t res_8_SV_VL4__a = 4; + int8_t res_8_SV_VL5__a = 5; + int8_t res_8_SV_VL6__a = 6; + int8_t res_8_SV_VL7__a = 7; + int8_t res_8_SV_VL8__a = 8; + int8_t res_8_SV_VL16__a = 0; + uint8_t res_8_SV_VL1_u_a = 1; + uint8_t res_8_SV_VL2_u_a = 2; + uint8_t res_8_SV_VL3_u_a = 3; + uint8_t res_8_SV_VL4_u_a = 4; + uint8_t 
res_8_SV_VL5_u_a = 5; + uint8_t res_8_SV_VL6_u_a = 6; + uint8_t res_8_SV_VL7_u_a = 7; + uint8_t res_8_SV_VL8_u_a = 8; + uint8_t res_8_SV_VL16_u_a = 0; + int16_t res_16_SV_VL1__a = 1; + int16_t res_16_SV_VL2__a = 2; + int16_t res_16_SV_VL3__a = 3; + int16_t res_16_SV_VL4__a = 4; + int16_t res_16_SV_VL5__a = 5; + int16_t res_16_SV_VL6__a = 6; + int16_t res_16_SV_VL7__a = 7; + int16_t res_16_SV_VL8__a = 0; + int16_t res_16_SV_VL16__a = 0; + uint16_t res_16_SV_VL1_u_a = 1; + uint16_t res_16_SV_VL2_u_a = 2; + uint16_t res_16_SV_VL3_u_a = 3; + uint16_t res_16_SV_VL4_u_a = 4; + uint16_t res_16_SV_VL5_u_a = 5; + uint16_t res_16_SV_VL6_u_a = 6; + uint16_t res_16_SV_VL7_u_a = 7; + uint16_t res_16_SV_VL8_u_a = 0; + uint16_t res_16_SV_VL16_u_a = 0; + int32_t res_32_SV_VL1__a = 1; + int32_t res_32_SV_VL2__a = 2; + int32_t res_32_SV_VL3__a = 3; + int32_t res_32_SV_VL4__a = 0; + int32_t res_32_SV_VL5__a = 0; + int32_t res_32_SV_VL6__a = 0; + int32_t res_32_SV_VL7__a = 0; + int32_t res_32_SV_VL8__a = 0; + int32_t res_32_SV_VL16__a = 0; + uint32_t res_32_SV_VL1_u_a = 1; + uint32_t res_32_SV_VL2_u_a = 2; + uint32_t res_32_SV_VL3_u_a = 3; + uint32_t res_32_SV_VL4_u_a = 0; + uint32_t res_32_SV_VL5_u_a = 0; + uint32_t res_32_SV_VL6_u_a = 0; + uint32_t res_32_SV_VL7_u_a = 0; + uint32_t res_32_SV_VL8_u_a = 0; + uint32_t res_32_SV_VL16_u_a = 0; + int64_t res_64_SV_VL1__a = 1; + int64_t res_64_SV_VL2__a = 0; + int64_t res_64_SV_VL3__a = 0; + int64_t res_64_SV_VL4__a = 0; + int64_t res_64_SV_VL5__a = 0; + int64_t res_64_SV_VL6__a = 0; + int64_t res_64_SV_VL7__a = 0; + int64_t res_64_SV_VL8__a = 0; + int64_t res_64_SV_VL16__a = 0; + uint64_t res_64_SV_VL1_u_a = 1; + uint64_t res_64_SV_VL2_u_a = 0; + uint64_t res_64_SV_VL3_u_a = 0; + uint64_t res_64_SV_VL4_u_a = 0; + uint64_t res_64_SV_VL5_u_a = 0; + uint64_t res_64_SV_VL6_u_a = 0; + uint64_t res_64_SV_VL7_u_a = 0; + uint64_t res_64_SV_VL8_u_a = 0; + uint64_t res_64_SV_VL16_u_a = 0; + int8_t res_8_SV_VL1__b = 0; + int8_t res_8_SV_VL2__b = 1; + int8_t res_8_SV_VL3__b = 2; + int8_t res_8_SV_VL4__b = 3; + int8_t res_8_SV_VL5__b = 4; + int8_t res_8_SV_VL6__b = 5; + int8_t res_8_SV_VL7__b = 6; + int8_t res_8_SV_VL8__b = 7; + int8_t res_8_SV_VL16__b = 15; + uint8_t res_8_SV_VL1_u_b = 0; + uint8_t res_8_SV_VL2_u_b = 1; + uint8_t res_8_SV_VL3_u_b = 2; + uint8_t res_8_SV_VL4_u_b = 3; + uint8_t res_8_SV_VL5_u_b = 4; + uint8_t res_8_SV_VL6_u_b = 5; + uint8_t res_8_SV_VL7_u_b = 6; + uint8_t res_8_SV_VL8_u_b = 7; + uint8_t res_8_SV_VL16_u_b = 15; + int16_t res_16_SV_VL1__b = 0; + int16_t res_16_SV_VL2__b = 1; + int16_t res_16_SV_VL3__b = 2; + int16_t res_16_SV_VL4__b = 3; + int16_t res_16_SV_VL5__b = 4; + int16_t res_16_SV_VL6__b = 5; + int16_t res_16_SV_VL7__b = 6; + int16_t res_16_SV_VL8__b = 7; + int16_t res_16_SV_VL16__b = 7; + uint16_t res_16_SV_VL1_u_b = 0; + uint16_t res_16_SV_VL2_u_b = 1; + uint16_t res_16_SV_VL3_u_b = 2; + uint16_t res_16_SV_VL4_u_b = 3; + uint16_t res_16_SV_VL5_u_b = 4; + uint16_t res_16_SV_VL6_u_b = 5; + uint16_t res_16_SV_VL7_u_b = 6; + uint16_t res_16_SV_VL8_u_b = 7; + uint16_t res_16_SV_VL16_u_b = 7; + int32_t res_32_SV_VL1__b = 0; + int32_t res_32_SV_VL2__b = 1; + int32_t res_32_SV_VL3__b = 2; + int32_t res_32_SV_VL4__b = 3; + int32_t res_32_SV_VL5__b = 3; + int32_t res_32_SV_VL6__b = 3; + int32_t res_32_SV_VL7__b = 3; + int32_t res_32_SV_VL8__b = 3; + int32_t res_32_SV_VL16__b = 3; + uint32_t res_32_SV_VL1_u_b = 0; + uint32_t res_32_SV_VL2_u_b = 1; + uint32_t res_32_SV_VL3_u_b = 2; + uint32_t res_32_SV_VL4_u_b = 3; + uint32_t res_32_SV_VL5_u_b = 
3; + uint32_t res_32_SV_VL6_u_b = 3; + uint32_t res_32_SV_VL7_u_b = 3; + uint32_t res_32_SV_VL8_u_b = 3; + uint32_t res_32_SV_VL16_u_b = 3; + int64_t res_64_SV_VL1__b = 0; + int64_t res_64_SV_VL2__b = 1; + int64_t res_64_SV_VL3__b = 1; + int64_t res_64_SV_VL4__b = 1; + int64_t res_64_SV_VL5__b = 1; + int64_t res_64_SV_VL6__b = 1; + int64_t res_64_SV_VL7__b = 1; + int64_t res_64_SV_VL8__b = 1; + int64_t res_64_SV_VL16__b = 1; + uint64_t res_64_SV_VL1_u_b = 0; + uint64_t res_64_SV_VL2_u_b = 1; + uint64_t res_64_SV_VL3_u_b = 1; + uint64_t res_64_SV_VL4_u_b = 1; + uint64_t res_64_SV_VL5_u_b = 1; + uint64_t res_64_SV_VL6_u_b = 1; + uint64_t res_64_SV_VL7_u_b = 1; + uint64_t res_64_SV_VL8_u_b = 1; + uint64_t res_64_SV_VL16_u_b = 1; + + int8_t res_8_SV_VL1__a_false = 0; + int8_t res_8_SV_VL2__a_false = 0; + int8_t res_8_SV_VL3__a_false = 0; + int8_t res_8_SV_VL4__a_false = 0; + int8_t res_8_SV_VL5__a_false = 0; + int8_t res_8_SV_VL6__a_false = 0; + int8_t res_8_SV_VL7__a_false = 0; + int8_t res_8_SV_VL8__a_false = 0; + int8_t res_8_SV_VL16__a_false = 0; + uint8_t res_8_SV_VL1_u_a_false = 0; + uint8_t res_8_SV_VL2_u_a_false = 0; + uint8_t res_8_SV_VL3_u_a_false = 0; + uint8_t res_8_SV_VL4_u_a_false = 0; + uint8_t res_8_SV_VL5_u_a_false = 0; + uint8_t res_8_SV_VL6_u_a_false = 0; + uint8_t res_8_SV_VL7_u_a_false = 0; + uint8_t res_8_SV_VL8_u_a_false = 0; + uint8_t res_8_SV_VL16_u_a_false = 0; + int16_t res_16_SV_VL1__a_false = 0; + int16_t res_16_SV_VL2__a_false = 0; + int16_t res_16_SV_VL3__a_false = 0; + int16_t res_16_SV_VL4__a_false = 0; + int16_t res_16_SV_VL5__a_false = 0; + int16_t res_16_SV_VL6__a_false = 0; + int16_t res_16_SV_VL7__a_false = 0; + int16_t res_16_SV_VL8__a_false = 0; + int16_t res_16_SV_VL16__a_false = 0; + uint16_t res_16_SV_VL1_u_a_false = 0; + uint16_t res_16_SV_VL2_u_a_false = 0; + uint16_t res_16_SV_VL3_u_a_false = 0; + uint16_t res_16_SV_VL4_u_a_false = 0; + uint16_t res_16_SV_VL5_u_a_false = 0; + uint16_t res_16_SV_VL6_u_a_false = 0; + uint16_t res_16_SV_VL7_u_a_false = 0; + uint16_t res_16_SV_VL8_u_a_false = 0; + uint16_t res_16_SV_VL16_u_a_false = 0; + int32_t res_32_SV_VL1__a_false = 0; + int32_t res_32_SV_VL2__a_false = 0; + int32_t res_32_SV_VL3__a_false = 0; + int32_t res_32_SV_VL4__a_false = 0; + int32_t res_32_SV_VL5__a_false = 0; + int32_t res_32_SV_VL6__a_false = 0; + int32_t res_32_SV_VL7__a_false = 0; + int32_t res_32_SV_VL8__a_false = 0; + int32_t res_32_SV_VL16__a_false = 0; + uint32_t res_32_SV_VL1_u_a_false = 0; + uint32_t res_32_SV_VL2_u_a_false = 0; + uint32_t res_32_SV_VL3_u_a_false = 0; + uint32_t res_32_SV_VL4_u_a_false = 0; + uint32_t res_32_SV_VL5_u_a_false = 0; + uint32_t res_32_SV_VL6_u_a_false = 0; + uint32_t res_32_SV_VL7_u_a_false = 0; + uint32_t res_32_SV_VL8_u_a_false = 0; + uint32_t res_32_SV_VL16_u_a_false = 0; + int64_t res_64_SV_VL1__a_false = 0; + int64_t res_64_SV_VL2__a_false = 0; + int64_t res_64_SV_VL3__a_false = 0; + int64_t res_64_SV_VL4__a_false = 0; + int64_t res_64_SV_VL5__a_false = 0; + int64_t res_64_SV_VL6__a_false = 0; + int64_t res_64_SV_VL7__a_false = 0; + int64_t res_64_SV_VL8__a_false = 0; + int64_t res_64_SV_VL16__a_false = 0; + uint64_t res_64_SV_VL1_u_a_false = 0; + uint64_t res_64_SV_VL2_u_a_false = 0; + uint64_t res_64_SV_VL3_u_a_false = 0; + uint64_t res_64_SV_VL4_u_a_false = 0; + uint64_t res_64_SV_VL5_u_a_false = 0; + uint64_t res_64_SV_VL6_u_a_false = 0; + uint64_t res_64_SV_VL7_u_a_false = 0; + uint64_t res_64_SV_VL8_u_a_false = 0; + uint64_t res_64_SV_VL16_u_a_false = 0; + int8_t res_8_SV_VL1__b_false = 15; 
+ int8_t res_8_SV_VL2__b_false = 15; + int8_t res_8_SV_VL3__b_false = 15; + int8_t res_8_SV_VL4__b_false = 15; + int8_t res_8_SV_VL5__b_false = 15; + int8_t res_8_SV_VL6__b_false = 15; + int8_t res_8_SV_VL7__b_false = 15; + int8_t res_8_SV_VL8__b_false = 15; + int8_t res_8_SV_VL16__b_false = 15; + uint8_t res_8_SV_VL1_u_b_false = 15; + uint8_t res_8_SV_VL2_u_b_false = 15; + uint8_t res_8_SV_VL3_u_b_false = 15; + uint8_t res_8_SV_VL4_u_b_false = 15; + uint8_t res_8_SV_VL5_u_b_false = 15; + uint8_t res_8_SV_VL6_u_b_false = 15; + uint8_t res_8_SV_VL7_u_b_false = 15; + uint8_t res_8_SV_VL8_u_b_false = 15; + uint8_t res_8_SV_VL16_u_b_false = 15; + int16_t res_16_SV_VL1__b_false = 7; + int16_t res_16_SV_VL2__b_false = 7; + int16_t res_16_SV_VL3__b_false = 7; + int16_t res_16_SV_VL4__b_false = 7; + int16_t res_16_SV_VL5__b_false = 7; + int16_t res_16_SV_VL6__b_false = 7; + int16_t res_16_SV_VL7__b_false = 7; + int16_t res_16_SV_VL8__b_false = 7; + int16_t res_16_SV_VL16__b_false = 7; + uint16_t res_16_SV_VL1_u_b_false = 7; + uint16_t res_16_SV_VL2_u_b_false = 7; + uint16_t res_16_SV_VL3_u_b_false = 7; + uint16_t res_16_SV_VL4_u_b_false = 7; + uint16_t res_16_SV_VL5_u_b_false = 7; + uint16_t res_16_SV_VL6_u_b_false = 7; + uint16_t res_16_SV_VL7_u_b_false = 7; + uint16_t res_16_SV_VL8_u_b_false = 7; + uint16_t res_16_SV_VL16_u_b_false = 7; + int32_t res_32_SV_VL1__b_false = 3; + int32_t res_32_SV_VL2__b_false = 3; + int32_t res_32_SV_VL3__b_false = 3; + int32_t res_32_SV_VL4__b_false = 3; + int32_t res_32_SV_VL5__b_false = 3; + int32_t res_32_SV_VL6__b_false = 3; + int32_t res_32_SV_VL7__b_false = 3; + int32_t res_32_SV_VL8__b_false = 3; + int32_t res_32_SV_VL16__b_false = 3; + uint32_t res_32_SV_VL1_u_b_false = 3; + uint32_t res_32_SV_VL2_u_b_false = 3; + uint32_t res_32_SV_VL3_u_b_false = 3; + uint32_t res_32_SV_VL4_u_b_false = 3; + uint32_t res_32_SV_VL5_u_b_false = 3; + uint32_t res_32_SV_VL6_u_b_false = 3; + uint32_t res_32_SV_VL7_u_b_false = 3; + uint32_t res_32_SV_VL8_u_b_false = 3; + uint32_t res_32_SV_VL16_u_b_false = 3; + int64_t res_64_SV_VL1__b_false = 1; + int64_t res_64_SV_VL2__b_false = 1; + int64_t res_64_SV_VL3__b_false = 1; + int64_t res_64_SV_VL4__b_false = 1; + int64_t res_64_SV_VL5__b_false = 1; + int64_t res_64_SV_VL6__b_false = 1; + int64_t res_64_SV_VL7__b_false = 1; + int64_t res_64_SV_VL8__b_false = 1; + int64_t res_64_SV_VL16__b_false = 1; + uint64_t res_64_SV_VL1_u_b_false = 1; + uint64_t res_64_SV_VL2_u_b_false = 1; + uint64_t res_64_SV_VL3_u_b_false = 1; + uint64_t res_64_SV_VL4_u_b_false = 1; + uint64_t res_64_SV_VL5_u_b_false = 1; + uint64_t res_64_SV_VL6_u_b_false = 1; + uint64_t res_64_SV_VL7_u_b_false = 1; + uint64_t res_64_SV_VL8_u_b_false = 1; + uint64_t res_64_SV_VL16_u_b_false = 1; + +#undef SVELAST_DEF +#define SVELAST_DEF(size, pat, sign, ab, su) \ + if (NAME (foo, size, pat, sign, ab) \ + (svindex_ ## su ## size (0, 1)) != \ + NAME (res, size, pat, sign, ab)) \ + __builtin_abort (); \ + if (NAMEF (foo, size, pat, sign, ab) \ + (svindex_ ## su ## size (0, 1)) != \ + NAMEF (res, size, pat, sign, ab)) \ + __builtin_abort (); + + ALL_POS () + + return 0; +} diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c new file mode 100644 index 00000000000..f6ba7ea7d89 --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c @@ -0,0 +1,314 @@ +/* { dg-do run { target aarch64_sve256_hw } } */ +/* { dg-options "-O3 -msve-vector-bits=256 
-std=gnu99" } */ + +#include "svlast.c" + +int +main (void) +{ + int8_t res_8_SV_VL1__a = 1; + int8_t res_8_SV_VL2__a = 2; + int8_t res_8_SV_VL3__a = 3; + int8_t res_8_SV_VL4__a = 4; + int8_t res_8_SV_VL5__a = 5; + int8_t res_8_SV_VL6__a = 6; + int8_t res_8_SV_VL7__a = 7; + int8_t res_8_SV_VL8__a = 8; + int8_t res_8_SV_VL16__a = 16; + uint8_t res_8_SV_VL1_u_a = 1; + uint8_t res_8_SV_VL2_u_a = 2; + uint8_t res_8_SV_VL3_u_a = 3; + uint8_t res_8_SV_VL4_u_a = 4; + uint8_t res_8_SV_VL5_u_a = 5; + uint8_t res_8_SV_VL6_u_a = 6; + uint8_t res_8_SV_VL7_u_a = 7; + uint8_t res_8_SV_VL8_u_a = 8; + uint8_t res_8_SV_VL16_u_a = 16; + int16_t res_16_SV_VL1__a = 1; + int16_t res_16_SV_VL2__a = 2; + int16_t res_16_SV_VL3__a = 3; + int16_t res_16_SV_VL4__a = 4; + int16_t res_16_SV_VL5__a = 5; + int16_t res_16_SV_VL6__a = 6; + int16_t res_16_SV_VL7__a = 7; + int16_t res_16_SV_VL8__a = 8; + int16_t res_16_SV_VL16__a = 0; + uint16_t res_16_SV_VL1_u_a = 1; + uint16_t res_16_SV_VL2_u_a = 2; + uint16_t res_16_SV_VL3_u_a = 3; + uint16_t res_16_SV_VL4_u_a = 4; + uint16_t res_16_SV_VL5_u_a = 5; + uint16_t res_16_SV_VL6_u_a = 6; + uint16_t res_16_SV_VL7_u_a = 7; + uint16_t res_16_SV_VL8_u_a = 8; + uint16_t res_16_SV_VL16_u_a = 0; + int32_t res_32_SV_VL1__a = 1; + int32_t res_32_SV_VL2__a = 2; + int32_t res_32_SV_VL3__a = 3; + int32_t res_32_SV_VL4__a = 4; + int32_t res_32_SV_VL5__a = 5; + int32_t res_32_SV_VL6__a = 6; + int32_t res_32_SV_VL7__a = 7; + int32_t res_32_SV_VL8__a = 0; + int32_t res_32_SV_VL16__a = 0; + uint32_t res_32_SV_VL1_u_a = 1; + uint32_t res_32_SV_VL2_u_a = 2; + uint32_t res_32_SV_VL3_u_a = 3; + uint32_t res_32_SV_VL4_u_a = 4; + uint32_t res_32_SV_VL5_u_a = 5; + uint32_t res_32_SV_VL6_u_a = 6; + uint32_t res_32_SV_VL7_u_a = 7; + uint32_t res_32_SV_VL8_u_a = 0; + uint32_t res_32_SV_VL16_u_a = 0; + int64_t res_64_SV_VL1__a = 1; + int64_t res_64_SV_VL2__a = 2; + int64_t res_64_SV_VL3__a = 3; + int64_t res_64_SV_VL4__a = 0; + int64_t res_64_SV_VL5__a = 0; + int64_t res_64_SV_VL6__a = 0; + int64_t res_64_SV_VL7__a = 0; + int64_t res_64_SV_VL8__a = 0; + int64_t res_64_SV_VL16__a = 0; + uint64_t res_64_SV_VL1_u_a = 1; + uint64_t res_64_SV_VL2_u_a = 2; + uint64_t res_64_SV_VL3_u_a = 3; + uint64_t res_64_SV_VL4_u_a = 0; + uint64_t res_64_SV_VL5_u_a = 0; + uint64_t res_64_SV_VL6_u_a = 0; + uint64_t res_64_SV_VL7_u_a = 0; + uint64_t res_64_SV_VL8_u_a = 0; + uint64_t res_64_SV_VL16_u_a = 0; + int8_t res_8_SV_VL1__b = 0; + int8_t res_8_SV_VL2__b = 1; + int8_t res_8_SV_VL3__b = 2; + int8_t res_8_SV_VL4__b = 3; + int8_t res_8_SV_VL5__b = 4; + int8_t res_8_SV_VL6__b = 5; + int8_t res_8_SV_VL7__b = 6; + int8_t res_8_SV_VL8__b = 7; + int8_t res_8_SV_VL16__b = 15; + uint8_t res_8_SV_VL1_u_b = 0; + uint8_t res_8_SV_VL2_u_b = 1; + uint8_t res_8_SV_VL3_u_b = 2; + uint8_t res_8_SV_VL4_u_b = 3; + uint8_t res_8_SV_VL5_u_b = 4; + uint8_t res_8_SV_VL6_u_b = 5; + uint8_t res_8_SV_VL7_u_b = 6; + uint8_t res_8_SV_VL8_u_b = 7; + uint8_t res_8_SV_VL16_u_b = 15; + int16_t res_16_SV_VL1__b = 0; + int16_t res_16_SV_VL2__b = 1; + int16_t res_16_SV_VL3__b = 2; + int16_t res_16_SV_VL4__b = 3; + int16_t res_16_SV_VL5__b = 4; + int16_t res_16_SV_VL6__b = 5; + int16_t res_16_SV_VL7__b = 6; + int16_t res_16_SV_VL8__b = 7; + int16_t res_16_SV_VL16__b = 15; + uint16_t res_16_SV_VL1_u_b = 0; + uint16_t res_16_SV_VL2_u_b = 1; + uint16_t res_16_SV_VL3_u_b = 2; + uint16_t res_16_SV_VL4_u_b = 3; + uint16_t res_16_SV_VL5_u_b = 4; + uint16_t res_16_SV_VL6_u_b = 5; + uint16_t res_16_SV_VL7_u_b = 6; + uint16_t res_16_SV_VL8_u_b = 7; + uint16_t 
res_16_SV_VL16_u_b = 15; + int32_t res_32_SV_VL1__b = 0; + int32_t res_32_SV_VL2__b = 1; + int32_t res_32_SV_VL3__b = 2; + int32_t res_32_SV_VL4__b = 3; + int32_t res_32_SV_VL5__b = 4; + int32_t res_32_SV_VL6__b = 5; + int32_t res_32_SV_VL7__b = 6; + int32_t res_32_SV_VL8__b = 7; + int32_t res_32_SV_VL16__b = 7; + uint32_t res_32_SV_VL1_u_b = 0; + uint32_t res_32_SV_VL2_u_b = 1; + uint32_t res_32_SV_VL3_u_b = 2; + uint32_t res_32_SV_VL4_u_b = 3; + uint32_t res_32_SV_VL5_u_b = 4; + uint32_t res_32_SV_VL6_u_b = 5; + uint32_t res_32_SV_VL7_u_b = 6; + uint32_t res_32_SV_VL8_u_b = 7; + uint32_t res_32_SV_VL16_u_b = 7; + int64_t res_64_SV_VL1__b = 0; + int64_t res_64_SV_VL2__b = 1; + int64_t res_64_SV_VL3__b = 2; + int64_t res_64_SV_VL4__b = 3; + int64_t res_64_SV_VL5__b = 3; + int64_t res_64_SV_VL6__b = 3; + int64_t res_64_SV_VL7__b = 3; + int64_t res_64_SV_VL8__b = 3; + int64_t res_64_SV_VL16__b = 3; + uint64_t res_64_SV_VL1_u_b = 0; + uint64_t res_64_SV_VL2_u_b = 1; + uint64_t res_64_SV_VL3_u_b = 2; + uint64_t res_64_SV_VL4_u_b = 3; + uint64_t res_64_SV_VL5_u_b = 3; + uint64_t res_64_SV_VL6_u_b = 3; + uint64_t res_64_SV_VL7_u_b = 3; + uint64_t res_64_SV_VL8_u_b = 3; + uint64_t res_64_SV_VL16_u_b = 3; + + int8_t res_8_SV_VL1__a_false = 0; + int8_t res_8_SV_VL2__a_false = 0; + int8_t res_8_SV_VL3__a_false = 0; + int8_t res_8_SV_VL4__a_false = 0; + int8_t res_8_SV_VL5__a_false = 0; + int8_t res_8_SV_VL6__a_false = 0; + int8_t res_8_SV_VL7__a_false = 0; + int8_t res_8_SV_VL8__a_false = 0; + int8_t res_8_SV_VL16__a_false = 0; + uint8_t res_8_SV_VL1_u_a_false = 0; + uint8_t res_8_SV_VL2_u_a_false = 0; + uint8_t res_8_SV_VL3_u_a_false = 0; + uint8_t res_8_SV_VL4_u_a_false = 0; + uint8_t res_8_SV_VL5_u_a_false = 0; + uint8_t res_8_SV_VL6_u_a_false = 0; + uint8_t res_8_SV_VL7_u_a_false = 0; + uint8_t res_8_SV_VL8_u_a_false = 0; + uint8_t res_8_SV_VL16_u_a_false = 0; + int16_t res_16_SV_VL1__a_false = 0; + int16_t res_16_SV_VL2__a_false = 0; + int16_t res_16_SV_VL3__a_false = 0; + int16_t res_16_SV_VL4__a_false = 0; + int16_t res_16_SV_VL5__a_false = 0; + int16_t res_16_SV_VL6__a_false = 0; + int16_t res_16_SV_VL7__a_false = 0; + int16_t res_16_SV_VL8__a_false = 0; + int16_t res_16_SV_VL16__a_false = 0; + uint16_t res_16_SV_VL1_u_a_false = 0; + uint16_t res_16_SV_VL2_u_a_false = 0; + uint16_t res_16_SV_VL3_u_a_false = 0; + uint16_t res_16_SV_VL4_u_a_false = 0; + uint16_t res_16_SV_VL5_u_a_false = 0; + uint16_t res_16_SV_VL6_u_a_false = 0; + uint16_t res_16_SV_VL7_u_a_false = 0; + uint16_t res_16_SV_VL8_u_a_false = 0; + uint16_t res_16_SV_VL16_u_a_false = 0; + int32_t res_32_SV_VL1__a_false = 0; + int32_t res_32_SV_VL2__a_false = 0; + int32_t res_32_SV_VL3__a_false = 0; + int32_t res_32_SV_VL4__a_false = 0; + int32_t res_32_SV_VL5__a_false = 0; + int32_t res_32_SV_VL6__a_false = 0; + int32_t res_32_SV_VL7__a_false = 0; + int32_t res_32_SV_VL8__a_false = 0; + int32_t res_32_SV_VL16__a_false = 0; + uint32_t res_32_SV_VL1_u_a_false = 0; + uint32_t res_32_SV_VL2_u_a_false = 0; + uint32_t res_32_SV_VL3_u_a_false = 0; + uint32_t res_32_SV_VL4_u_a_false = 0; + uint32_t res_32_SV_VL5_u_a_false = 0; + uint32_t res_32_SV_VL6_u_a_false = 0; + uint32_t res_32_SV_VL7_u_a_false = 0; + uint32_t res_32_SV_VL8_u_a_false = 0; + uint32_t res_32_SV_VL16_u_a_false = 0; + int64_t res_64_SV_VL1__a_false = 0; + int64_t res_64_SV_VL2__a_false = 0; + int64_t res_64_SV_VL3__a_false = 0; + int64_t res_64_SV_VL4__a_false = 0; + int64_t res_64_SV_VL5__a_false = 0; + int64_t res_64_SV_VL6__a_false = 0; + int64_t 
res_64_SV_VL7__a_false = 0; + int64_t res_64_SV_VL8__a_false = 0; + int64_t res_64_SV_VL16__a_false = 0; + uint64_t res_64_SV_VL1_u_a_false = 0; + uint64_t res_64_SV_VL2_u_a_false = 0; + uint64_t res_64_SV_VL3_u_a_false = 0; + uint64_t res_64_SV_VL4_u_a_false = 0; + uint64_t res_64_SV_VL5_u_a_false = 0; + uint64_t res_64_SV_VL6_u_a_false = 0; + uint64_t res_64_SV_VL7_u_a_false = 0; + uint64_t res_64_SV_VL8_u_a_false = 0; + uint64_t res_64_SV_VL16_u_a_false = 0; + int8_t res_8_SV_VL1__b_false = 31; + int8_t res_8_SV_VL2__b_false = 31; + int8_t res_8_SV_VL3__b_false = 31; + int8_t res_8_SV_VL4__b_false = 31; + int8_t res_8_SV_VL5__b_false = 31; + int8_t res_8_SV_VL6__b_false = 31; + int8_t res_8_SV_VL7__b_false = 31; + int8_t res_8_SV_VL8__b_false = 31; + int8_t res_8_SV_VL16__b_false = 31; + uint8_t res_8_SV_VL1_u_b_false = 31; + uint8_t res_8_SV_VL2_u_b_false = 31; + uint8_t res_8_SV_VL3_u_b_false = 31; + uint8_t res_8_SV_VL4_u_b_false = 31; + uint8_t res_8_SV_VL5_u_b_false = 31; + uint8_t res_8_SV_VL6_u_b_false = 31; + uint8_t res_8_SV_VL7_u_b_false = 31; + uint8_t res_8_SV_VL8_u_b_false = 31; + uint8_t res_8_SV_VL16_u_b_false = 31; + int16_t res_16_SV_VL1__b_false = 15; + int16_t res_16_SV_VL2__b_false = 15; + int16_t res_16_SV_VL3__b_false = 15; + int16_t res_16_SV_VL4__b_false = 15; + int16_t res_16_SV_VL5__b_false = 15; + int16_t res_16_SV_VL6__b_false = 15; + int16_t res_16_SV_VL7__b_false = 15; + int16_t res_16_SV_VL8__b_false = 15; + int16_t res_16_SV_VL16__b_false = 15; + uint16_t res_16_SV_VL1_u_b_false = 15; + uint16_t res_16_SV_VL2_u_b_false = 15; + uint16_t res_16_SV_VL3_u_b_false = 15; + uint16_t res_16_SV_VL4_u_b_false = 15; + uint16_t res_16_SV_VL5_u_b_false = 15; + uint16_t res_16_SV_VL6_u_b_false = 15; + uint16_t res_16_SV_VL7_u_b_false = 15; + uint16_t res_16_SV_VL8_u_b_false = 15; + uint16_t res_16_SV_VL16_u_b_false = 15; + int32_t res_32_SV_VL1__b_false = 7; + int32_t res_32_SV_VL2__b_false = 7; + int32_t res_32_SV_VL3__b_false = 7; + int32_t res_32_SV_VL4__b_false = 7; + int32_t res_32_SV_VL5__b_false = 7; + int32_t res_32_SV_VL6__b_false = 7; + int32_t res_32_SV_VL7__b_false = 7; + int32_t res_32_SV_VL8__b_false = 7; + int32_t res_32_SV_VL16__b_false = 7; + uint32_t res_32_SV_VL1_u_b_false = 7; + uint32_t res_32_SV_VL2_u_b_false = 7; + uint32_t res_32_SV_VL3_u_b_false = 7; + uint32_t res_32_SV_VL4_u_b_false = 7; + uint32_t res_32_SV_VL5_u_b_false = 7; + uint32_t res_32_SV_VL6_u_b_false = 7; + uint32_t res_32_SV_VL7_u_b_false = 7; + uint32_t res_32_SV_VL8_u_b_false = 7; + uint32_t res_32_SV_VL16_u_b_false = 7; + int64_t res_64_SV_VL1__b_false = 3; + int64_t res_64_SV_VL2__b_false = 3; + int64_t res_64_SV_VL3__b_false = 3; + int64_t res_64_SV_VL4__b_false = 3; + int64_t res_64_SV_VL5__b_false = 3; + int64_t res_64_SV_VL6__b_false = 3; + int64_t res_64_SV_VL7__b_false = 3; + int64_t res_64_SV_VL8__b_false = 3; + int64_t res_64_SV_VL16__b_false = 3; + uint64_t res_64_SV_VL1_u_b_false = 3; + uint64_t res_64_SV_VL2_u_b_false = 3; + uint64_t res_64_SV_VL3_u_b_false = 3; + uint64_t res_64_SV_VL4_u_b_false = 3; + uint64_t res_64_SV_VL5_u_b_false = 3; + uint64_t res_64_SV_VL6_u_b_false = 3; + uint64_t res_64_SV_VL7_u_b_false = 3; + uint64_t res_64_SV_VL8_u_b_false = 3; + uint64_t res_64_SV_VL16_u_b_false = 3; + + +#undef SVELAST_DEF +#define SVELAST_DEF(size, pat, sign, ab, su) \ + if (NAME (foo, size, pat, sign, ab) \ + (svindex_ ## su ## size (0 ,1)) != \ + NAME (res, size, pat, sign, ab)) \ + __builtin_abort (); \ + if (NAMEF (foo, size, pat, sign, ab) \ + (svindex_ ## su 
## size (0 ,1)) != \ + NAMEF (res, size, pat, sign, ab)) \ + __builtin_abort (); + + ALL_POS () + + return 0; +} diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4.c index 1e38371842f..91fdd3c202e 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, all -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_1024.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_1024.c index 491c35af221..7d824caae1b 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_1024.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_1024.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl128 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_128.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_128.c index eebb913273a..e0aa3a5fa68 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_128.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_128.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl16 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_2048.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_2048.c index 73c3b2ec045..3238015d9eb 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_2048.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_2048.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl256 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_256.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_256.c index 29744c81402..50861098934 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_256.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_256.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl32 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_512.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_512.c index cf25c31bcbf..300dacce955 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_512.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_512.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl64 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5.c index 9ad3e227654..0a840a38384 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... 
** bl callee_bf16 -** ptrue (p[0-7])\.b, all -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_1024.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_1024.c index d573e5fc69c..18cefbff1e6 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_1024.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_1024.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl128 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_128.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_128.c index 200b0eb8242..c622ed55674 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_128.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_128.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl16 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_2048.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_2048.c index f6f8858fd47..3286280687d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_2048.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_2048.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl256 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_256.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_256.c index e62f59cc885..3c6afa2fdf1 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_256.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_256.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl32 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_512.c b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_512.c index 483558cb576..bb7d3ebf9d4 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_512.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_512.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl64 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */
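
For reference, a standalone run-time sanity check in the spirit of the new
svlast*_run.c tests could look like the sketch below (check_vl1 and the
file itself are illustrative only, not part of the patch).  It feeds an
index vector { 0, 1, 2, ... } to svlasta/svlastb under an SV_VL1 predicate
and checks the two scalar results quoted in the description; the expected
values 1 and 0 match res_8_SV_VL1__a and res_8_SV_VL1__b in the run tests.

  #include <arm_sve.h>
  #include <stdint.h>
  #include <stdlib.h>

  static void check_vl1 (void)
  {
    svint8_t x = svindex_s8 (0, 1);           /* x = { 0, 1, 2, ... }  */
    svbool_t vl1 = svptrue_pat_b8 (SV_VL1);   /* only element 0 active  */

    if (svlasta (vl1, x) != 1)                /* element after last active  */
      abort ();
    if (svlastb (vl1, x) != 0)                /* last active element  */
      abort ();
  }

  int main (void)
  {
    check_vl1 ();
    return 0;
  }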