From patchwork Tue May  9 17:58:24 2023
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 91705
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 1/2] aarch64: Fix cut-&-pasto in aarch64-sve2-acle-asm.exp
Date: Tue, 09 May 2023 18:58:24 +0100
From: Richard Sandiford

aarch64-sve2-acle-asm.exp tried to prevent --with-cpu/tune from
affecting the results, but it used sve_flags rather than sve2_flags.
This was a silent failure when running the full testsuite, but was
a fatal error when running the harness individually.

Tested on aarch64-linux-gnu, pushed to trunk.

Richard

gcc/testsuite/
	* gcc.target/aarch64/sve2/acle/aarch64-sve2-acle-asm.exp: Use
	sve2_flags instead of sve_flags.
---
 .../gcc.target/aarch64/sve2/acle/aarch64-sve2-acle-asm.exp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/aarch64-sve2-acle-asm.exp b/gcc/testsuite/gcc.target/aarch64/sve2/acle/aarch64-sve2-acle-asm.exp
index 2e8d78904c5..0ad6463d832 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/aarch64-sve2-acle-asm.exp
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/aarch64-sve2-acle-asm.exp
@@ -39,7 +39,7 @@ if { [check_effective_target_aarch64_sve2] } {
 # Turn off any codegen tweaks by default that may affect expected assembly.
 # Tests relying on those should turn them on explicitly.
-set sve_flags "$sve_flags -mtune=generic -moverride=tune=none"
+set sve2_flags "$sve2_flags -mtune=generic -moverride=tune=none"
 lappend extra_flags "-fno-ipa-icf"

From patchwork Tue May  9 17:59:31 2023
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 91706
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 2/2] aarch64: Improve register allocation for lane instructions
Date: Tue, 09 May 2023 18:59:31 +0100
From: Richard Sandiford

REG_ALLOC_ORDER is much less important than it used to be, but it is
still used as a tie-breaker when multiple registers in a class are
equally good.

Previously aarch64 used the default approach of allocating in order of
increasing register number.  But as the comment in the patch says, it
is better to allocate FP and predicate registers in the opposite order,
so that we don't eat into smaller register classes unnecessarily.

This fixes some existing FIXMEs and improves the register allocation
for some Arm ACLE code.

Doing this also showed that *vcond_mask_ (predicated MOV/SEL)
unnecessarily required p0-p7 rather than p0-p15 for the unpredicated
movprfx alternatives.
Only the predicated movprfx alternative requires p0-p7 (due to the
movprfx itself, rather than due to the main instruction).

Tested on aarch64-linux-gnu, pushed to trunk.

Richard

gcc/
	* config/aarch64/aarch64-protos.h (aarch64_adjust_reg_alloc_order):
	Declare.
	* config/aarch64/aarch64.h (REG_ALLOC_ORDER): Define.
	(ADJUST_REG_ALLOC_ORDER): Likewise.
	* config/aarch64/aarch64.cc (aarch64_adjust_reg_alloc_order): New
	function.
	* config/aarch64/aarch64-sve.md (*vcond_mask_): Use Upa rather
	than Upl for unpredicated movprfx alternatives.

gcc/testsuite/
	* gcc.target/aarch64/sve/acle/asm/abd_f16.c: Remove XFAILs.
	* gcc.target/aarch64/sve/acle/asm/abd_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/asr_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/asr_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/dot_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/dot_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/dot_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/dot_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsr_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsr_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulx_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulx_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulx_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmad_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmad_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmad_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmla_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmla_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmla_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmls_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmls_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmls_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmsb_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmsb_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmsb_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/scale_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/scale_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/scale_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_u8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_s8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_u16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_u32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_u64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_u8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_s8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_u16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_u32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_u64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_u8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_s8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_u16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_u32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_u64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_u8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c: Likewise.
---
 gcc/config/aarch64/aarch64-protos.h | 2 +
 gcc/config/aarch64/aarch64-sve.md | 2 +-
 gcc/config/aarch64/aarch64.cc | 38 +++++++++++++++++++
 gcc/config/aarch64/aarch64.h | 5 +++
 .../gcc.target/aarch64/sve/acle/asm/abd_f16.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_f32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_f64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/abd_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/add_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/add_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/add_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/add_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/add_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/add_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/add_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/add_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/and_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/and_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/and_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/and_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/and_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/and_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/and_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/and_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/asr_s16.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/asr_s8.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/bic_s16.c | 6 +--
 .../gcc.target/aarch64/sve/acle/asm/bic_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/bic_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/bic_s8.c | 6 +--
 .../gcc.target/aarch64/sve/acle/asm/bic_u16.c | 6 +--
 .../gcc.target/aarch64/sve/acle/asm/bic_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/bic_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/bic_u8.c | 6 +--
 .../gcc.target/aarch64/sve/acle/asm/div_f16.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/div_f32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/div_f64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/div_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/div_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/div_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/div_u64.c | 2 +-
 .../aarch64/sve/acle/asm/divr_f16.c | 4 +-
 .../aarch64/sve/acle/asm/divr_f32.c | 4 +-
 .../aarch64/sve/acle/asm/divr_f64.c | 4 +-
 .../aarch64/sve/acle/asm/divr_s32.c | 2 +-
 .../aarch64/sve/acle/asm/divr_s64.c | 2 +-
 .../aarch64/sve/acle/asm/divr_u32.c | 2 +-
 .../aarch64/sve/acle/asm/divr_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/dot_s32.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/dot_s64.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/dot_u32.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/dot_u64.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/eor_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/eor_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/eor_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/eor_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/eor_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/eor_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/eor_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/eor_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/lsl_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/lsl_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/lsl_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/lsl_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/lsl_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/lsl_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/lsl_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/lsl_u8.c | 4 +-
 .../aarch64/sve/acle/asm/lsl_wide_s16.c | 4 +-
 .../aarch64/sve/acle/asm/lsl_wide_s32.c | 4 +-
 .../aarch64/sve/acle/asm/lsl_wide_s8.c | 4 +-
 .../aarch64/sve/acle/asm/lsl_wide_u16.c | 4 +-
 .../aarch64/sve/acle/asm/lsl_wide_u32.c | 4 +-
 .../aarch64/sve/acle/asm/lsl_wide_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/lsr_u16.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/lsr_u8.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_f16.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_f32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_f64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mad_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/max_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/max_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/max_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/max_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/max_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/max_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/max_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/max_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/min_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/min_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/min_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/min_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/min_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/min_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/min_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/min_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_f16.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_f32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_f64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mla_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_f16.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_f32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_f64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mls_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_f16.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_f32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_f64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/msb_u8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_f16.c | 2 +-
 .../aarch64/sve/acle/asm/mul_f16_notrap.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_f32.c | 2 +-
 .../aarch64/sve/acle/asm/mul_f32_notrap.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_f64.c | 2 +-
 .../aarch64/sve/acle/asm/mul_f64_notrap.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mul_u8.c | 4 +-
 .../aarch64/sve/acle/asm/mulh_s16.c | 4 +-
 .../aarch64/sve/acle/asm/mulh_s32.c | 2 +-
 .../aarch64/sve/acle/asm/mulh_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mulh_s8.c | 4 +-
 .../aarch64/sve/acle/asm/mulh_u16.c | 4 +-
 .../aarch64/sve/acle/asm/mulh_u32.c | 2 +-
 .../aarch64/sve/acle/asm/mulh_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/mulh_u8.c | 4 +-
 .../aarch64/sve/acle/asm/mulx_f16.c | 6 +--
 .../aarch64/sve/acle/asm/mulx_f32.c | 6 +--
 .../aarch64/sve/acle/asm/mulx_f64.c | 6 +--
 .../aarch64/sve/acle/asm/nmad_f16.c | 2 +-
 .../aarch64/sve/acle/asm/nmad_f32.c | 2 +-
 .../aarch64/sve/acle/asm/nmad_f64.c | 2 +-
 .../aarch64/sve/acle/asm/nmla_f16.c | 2 +-
 .../aarch64/sve/acle/asm/nmla_f32.c | 2 +-
 .../aarch64/sve/acle/asm/nmla_f64.c | 2 +-
 .../aarch64/sve/acle/asm/nmls_f16.c | 2 +-
 .../aarch64/sve/acle/asm/nmls_f32.c | 2 +-
 .../aarch64/sve/acle/asm/nmls_f64.c | 2 +-
 .../aarch64/sve/acle/asm/nmsb_f16.c | 2 +-
 .../aarch64/sve/acle/asm/nmsb_f32.c | 2 +-
 .../aarch64/sve/acle/asm/nmsb_f64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/orr_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/orr_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/orr_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/orr_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/orr_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/orr_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/orr_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/orr_u8.c | 4 +-
 .../aarch64/sve/acle/asm/scale_f16.c | 12 +++---
 .../aarch64/sve/acle/asm/scale_f32.c | 6 +--
 .../aarch64/sve/acle/asm/scale_f64.c | 6 +--
 .../gcc.target/aarch64/sve/acle/asm/sub_s16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/sub_s32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/sub_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/sub_s8.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/sub_u16.c | 4 +-
 .../gcc.target/aarch64/sve/acle/asm/sub_u32.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/sub_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/sub_u8.c | 4 +-
 .../aarch64/sve/acle/asm/subr_f16.c | 2 +-
 .../aarch64/sve/acle/asm/subr_f16_notrap.c | 2 +-
 .../aarch64/sve/acle/asm/subr_f32.c | 2 +-
 .../aarch64/sve/acle/asm/subr_f32_notrap.c | 2 +-
 .../aarch64/sve/acle/asm/subr_f64.c | 2 +-
 .../aarch64/sve/acle/asm/subr_f64_notrap.c | 2 +-
 .../aarch64/sve/acle/asm/subr_s16.c | 4 +-
 .../aarch64/sve/acle/asm/subr_s32.c | 2 +-
 .../aarch64/sve/acle/asm/subr_s64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/subr_s8.c | 4 +-
 .../aarch64/sve/acle/asm/subr_u16.c | 4 +-
 .../aarch64/sve/acle/asm/subr_u32.c | 2 +-
 .../aarch64/sve/acle/asm/subr_u64.c | 2 +-
 .../gcc.target/aarch64/sve/acle/asm/subr_u8.c | 4 +-
 .../aarch64/sve2/acle/asm/bcax_s16.c | 4 +-
 .../aarch64/sve2/acle/asm/bcax_s32.c | 2 +-
 .../aarch64/sve2/acle/asm/bcax_s64.c | 2 +-
 .../aarch64/sve2/acle/asm/bcax_s8.c | 4 +-
 .../aarch64/sve2/acle/asm/bcax_u16.c | 4 +-
 .../aarch64/sve2/acle/asm/bcax_u32.c | 2 +-
 .../aarch64/sve2/acle/asm/bcax_u64.c | 2 +-
 .../aarch64/sve2/acle/asm/bcax_u8.c | 4 +-
 .../aarch64/sve2/acle/asm/qadd_s16.c | 4 +-
 .../aarch64/sve2/acle/asm/qadd_s32.c | 2 +-
 .../aarch64/sve2/acle/asm/qadd_s64.c | 2 +-
 .../aarch64/sve2/acle/asm/qadd_s8.c | 4 +-
 .../aarch64/sve2/acle/asm/qadd_u16.c | 4 +-
 .../aarch64/sve2/acle/asm/qadd_u32.c | 2 +-
 .../aarch64/sve2/acle/asm/qadd_u64.c | 2 +-
 .../aarch64/sve2/acle/asm/qadd_u8.c | 4 +-
 .../aarch64/sve2/acle/asm/qdmlalb_s16.c | 4 +-
 .../aarch64/sve2/acle/asm/qdmlalb_s32.c | 4 +-
 .../aarch64/sve2/acle/asm/qdmlalb_s64.c | 2 +-
 .../aarch64/sve2/acle/asm/qdmlalbt_s16.c | 4 +-
 .../aarch64/sve2/acle/asm/qdmlalbt_s32.c | 4 +-
 .../aarch64/sve2/acle/asm/qdmlalbt_s64.c | 2 +-
 .../aarch64/sve2/acle/asm/qsub_s16.c | 4 +-
 .../aarch64/sve2/acle/asm/qsub_s32.c | 2 +-
 .../aarch64/sve2/acle/asm/qsub_s64.c | 2 +-
 .../aarch64/sve2/acle/asm/qsub_s8.c | 4 +-
 .../aarch64/sve2/acle/asm/qsub_u16.c | 4 +-
 .../aarch64/sve2/acle/asm/qsub_u32.c | 2 +-
 .../aarch64/sve2/acle/asm/qsub_u64.c | 2 +-
 .../aarch64/sve2/acle/asm/qsub_u8.c | 4 +-
 .../aarch64/sve2/acle/asm/qsubr_s16.c | 4 +-
 .../aarch64/sve2/acle/asm/qsubr_s32.c | 2 +-
 .../aarch64/sve2/acle/asm/qsubr_s64.c | 2 +-
 .../aarch64/sve2/acle/asm/qsubr_s8.c | 4 +-
 .../aarch64/sve2/acle/asm/qsubr_u16.c | 4 +-
 .../aarch64/sve2/acle/asm/qsubr_u32.c | 2 +-
 .../aarch64/sve2/acle/asm/qsubr_u64.c | 2 +-
 .../aarch64/sve2/acle/asm/qsubr_u8.c | 4 +-
 251 files changed, 413 insertions(+), 368 deletions(-)

diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
index b138494384b..2f055a26f92 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -1067,4 +1067,6 @@ extern bool aarch64_harden_sls_blr_p (void);
 extern void aarch64_output_patchable_area (unsigned int, bool);
 
+extern void aarch64_adjust_reg_alloc_order ();
+
 #endif /* GCC_AARCH64_PROTOS_H */
diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md
index 4b4c02c90fe..2898b85376b 100644
--- a/gcc/config/aarch64/aarch64-sve.md
+++ b/gcc/config/aarch64/aarch64-sve.md
@@ -7624,7 +7624,7 @@ (define_expand "@vcond_mask_"
 (define_insn "*vcond_mask_"
   [(set (match_operand:SVE_ALL 0 "register_operand" "=w, w, w, w, ?w, ?&w, ?&w")
	(unspec:SVE_ALL
-	  [(match_operand: 3 "register_operand" "Upa, Upa, Upa, Upa, Upl, Upl, Upl")
+	  [(match_operand: 3 "register_operand" "Upa, Upa, Upa, Upa, Upl, Upa, Upa")
	   (match_operand:SVE_ALL 1 "aarch64_sve_reg_or_dup_imm" "w, vss, vss, Ufc, Ufc, vss, Ufc")
	   (match_operand:SVE_ALL 2 "aarch64_simd_reg_or_zero" "w, 0, Dz, 0, Dz, w, w")]
	  UNSPEC_SEL))]
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 546cb121331..bf3d1b39d26 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -27501,6 +27501,44 @@ aarch64_output_load_tp (rtx dest)
   return "";
 }
 
+/* Set up the value of REG_ALLOC_ORDER from scratch.
+
+   It was previously good practice to put call-clobbered registers ahead
+   of call-preserved registers, but that isn't necessary these days.
+   IRA's model of register save/restore costs is much more sophisticated
+   than the model that a simple ordering could provide.  We leave
+   HONOR_REG_ALLOC_ORDER undefined so that we can get the full benefit
+   of IRA's model.
+
+   However, it is still useful to list registers that are members of
+   multiple classes after registers that are members of fewer classes.
+   For example, we have:
+
+   - FP_LO8_REGS: v0-v7
+   - FP_LO_REGS: v0-v15
+   - FP_REGS: v0-v31
+
+   If, as a tie-breaker, we allocate FP_REGS in the order v0-v31,
+   we run the risk of starving other (lower-priority) pseudos that
+   require FP_LO8_REGS or FP_LO_REGS.  Allocating FP_LO_REGS in the
+   order v0-v15 could similarly starve pseudos that require FP_LO8_REGS.
+   Allocating downwards rather than upwards avoids this problem, at least
+   in code that has reasonable register pressure.
+
+   The situation for predicate registers is similar.  */
+
+void
+aarch64_adjust_reg_alloc_order ()
+{
+  for (int i = 0; i < FIRST_PSEUDO_REGISTER; ++i)
+    if (IN_RANGE (i, V0_REGNUM, V31_REGNUM))
+      reg_alloc_order[i] = V31_REGNUM - (i - V0_REGNUM);
+    else if (IN_RANGE (i, P0_REGNUM, P15_REGNUM))
+      reg_alloc_order[i] = P15_REGNUM - (i - P0_REGNUM);
+    else
+      reg_alloc_order[i] = i;
+}
+
 /* Target-specific selftests.  */
 
 #if CHECKING_P
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index 155cace6afe..801f9ebc572 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -1292,4 +1292,9 @@ extern poly_uint16 aarch64_sve_vg;
	 STACK_BOUNDARY / BITS_PER_UNIT) \
    : (crtl->outgoing_args_size + STACK_POINTER_OFFSET))
 
+/* Filled in by aarch64_adjust_reg_alloc_order, which is called before
+   the first relevant use.  */
+#define REG_ALLOC_ORDER {}
+#define ADJUST_REG_ALLOC_ORDER aarch64_adjust_reg_alloc_order ()
+
 #endif /* GCC_AARCH64_H */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f16.c
index c019f248d20..e84df047b6e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f16.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_f16_m_tied1, svfloat16_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_f16_m_untied: { xfail *-*-* }
+** abd_1_f16_m_untied:
 ** fmov (z[0-9]+\.h), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fabd z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f32.c
index bff37580c43..f2fcb34216a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_f32_m_tied1, svfloat32_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_f32_m_untied: { xfail *-*-* }
+** abd_1_f32_m_untied:
 ** fmov (z[0-9]+\.s), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fabd z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f64.c
index c1e5f14e619..952bd46a333 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_f64_m_tied1, svfloat64_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_f64_m_untied: { xfail *-*-* }
+** abd_1_f64_m_untied:
 ** fmov (z[0-9]+\.d), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fabd z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s16.c
index e2d0c0fb7ef..7d055eb31ed 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (abd_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svabd_m (p0, z0, x0))
 
 /*
-** abd_w0_s16_m_untied: { xfail *-*-* }
+** abd_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** sabd z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_s16_m_tied1, svint16_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_s16_m_untied: { xfail *-*-* }
+** abd_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** sabd z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s32.c
index 5c95ec04df1..2489b24e379 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_s32_m_tied1, svint32_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_s32_m_untied: { xfail *-*-* }
+** abd_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** sabd z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s64.c
index 2402ecf2918..0d324c99937 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_s64_m_tied1, svint64_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_s64_m_untied: { xfail *-*-* }
+** abd_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** sabd z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s8.c
index 49a2cc388f9..51e4a8aa6ff 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (abd_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svabd_m (p0, z0, x0))
 
 /*
-** abd_w0_s8_m_untied: { xfail *-*-* }
+** abd_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** sabd z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_s8_m_tied1, svint8_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_s8_m_untied: { xfail *-*-* }
+** abd_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** sabd z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u16.c
index 60aa9429ea6..89dc58dcc17 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (abd_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svabd_m (p0, z0, x0))
 
 /*
-** abd_w0_u16_m_untied: { xfail *-*-* }
+** abd_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** uabd z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_u16_m_tied1, svuint16_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_u16_m_untied: { xfail *-*-* }
+** abd_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** uabd z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u32.c
index bc24107837c..4e4d0bc649a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_u32_m_tied1, svuint32_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_u32_m_untied: { xfail *-*-* }
+** abd_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** uabd z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u64.c
index d2cdaa06a5a..2aa9937743f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_u64_m_tied1, svuint64_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_u64_m_untied: { xfail *-*-* }
+** abd_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** uabd z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u8.c
index 454ef153cc3..78a16324a07 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (abd_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svabd_m (p0, z0, x0))
 
 /*
-** abd_w0_u8_m_untied: { xfail *-*-* }
+** abd_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** uabd z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_u8_m_tied1, svuint8_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_u8_m_untied: { xfail *-*-* }
+** abd_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** uabd z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s16.c
index c0883edf9ab..85a63f34006 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (add_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svadd_m (p0, z0, x0))
 
 /*
-** add_w0_s16_m_untied: { xfail *-*-* }
+** add_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** add z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_s16_m_tied1, svint16_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_s16_m_untied: { xfail *-*-* }
+** add_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** add z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s32.c
index 887038ba3c7..4ba210cd24b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_s32_m_tied1, svint32_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_s32_m_untied: { xfail *-*-* }
+** add_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** add z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s64.c
index aab63ef6211..ff8cc6d5aad 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_s64_m_tied1, svint64_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_s64_m_untied: { xfail *-*-* }
+** add_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** add z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s8.c
index 0889c189d59..2e79ba2b12b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (add_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svadd_m (p0, z0, x0))
 
 /*
-** add_w0_s8_m_untied: { xfail *-*-* }
+** add_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** add z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_s8_m_tied1, svint8_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_s8_m_untied: { xfail *-*-* }
+** add_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** add z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u16.c
index 25cb90353d3..85880c8ab53 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (add_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svadd_m (p0, z0, x0))
 
 /*
-** add_w0_u16_m_untied: { xfail *-*-* }
+** add_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** add z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_u16_m_tied1, svuint16_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_u16_m_untied: { xfail *-*-* }
+** add_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** add z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u32.c
index ee979489b52..74dfe0cd8d5 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_u32_m_tied1, svuint32_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_u32_m_untied: { xfail *-*-* }
+** add_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** add z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u64.c
index 25d2972a695..efb8820669c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_u64_m_tied1, svuint64_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_u64_m_untied: { xfail *-*-* }
+** add_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** add z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u8.c
index 06b68c97ce8..812c6a526b6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (add_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svadd_m (p0, z0, x0))
 
 /*
-** add_w0_u8_m_untied: { xfail *-*-* }
+** add_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** add z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_u8_m_tied1, svuint8_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_u8_m_untied: { xfail *-*-* }
+** add_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** add z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s16.c
index d54613e915d..02d830a200c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (and_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svand_m (p0, z0, x0))
 
 /*
-** and_w0_s16_m_untied: { xfail *-*-* }
+** and_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** and z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_s16_m_tied1, svint16_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_s16_m_untied: { xfail *-*-* }
+** and_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** and z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s32.c
index 7f4082b327b..c78c18664ce 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_s32_m_tied1, svint32_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_s32_m_untied: { xfail *-*-* }
+** and_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** and z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s64.c
index 8868258dca6..8ef1f63c607 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_s64_m_tied1, svint64_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_s64_m_untied: { xfail *-*-* }
+** and_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** and z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s8.c
index 61d168d3fdf..a2856cd0b0f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (and_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svand_m (p0, z0, x0))
 
 /*
-** and_w0_s8_m_untied: { xfail *-*-* }
+** and_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** and z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_s8_m_tied1, svint8_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_s8_m_untied: { xfail *-*-* }
+** and_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** and z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u16.c
index 875a08d71d1..443a2a8b070 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (and_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svand_m (p0, z0, x0))
 
 /*
-** and_w0_u16_m_untied: { xfail *-*-* }
+** and_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** and z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_u16_m_tied1, svuint16_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_u16_m_untied: { xfail *-*-* }
+** and_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** and z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u32.c
index 80ff503963f..07d251e8b6f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_u32_m_tied1, svuint32_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_u32_m_untied: { xfail *-*-* }
+** and_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** and z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u64.c
index 906b19c3735..5e2ee4d1a25 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_u64_m_tied1, svuint64_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_u64_m_untied: { xfail *-*-* }
+** and_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** and z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u8.c
index b0f1c9529f0..373aafe357c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (and_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svand_m (p0, z0, x0))
 
 /*
-** and_w0_u8_m_untied: { xfail *-*-* }
+** and_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** and z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_u8_m_tied1, svuint8_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_u8_m_untied: { xfail *-*-* }
+** and_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** and z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s16.c
index 877bf10685a..f9ce790da95 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (asr_w0_s16_m_tied1, svint16_t, uint16_t,
 		 z0 = svasr_m (p0, z0, x0))
 
 /*
-** asr_w0_s16_m_untied: { xfail *-*-* }
+** asr_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** asr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s8.c
index 992e93fdef7..5cf3a712c28 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (asr_w0_s8_m_tied1, svint8_t, uint8_t,
 		 z0 = svasr_m (p0, z0, x0))
 
 /*
-** asr_w0_s8_m_untied: { xfail *-*-* }
+** asr_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** asr z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s16.c
index c80f5697f5f..79848b15b85 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (bic_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svbic_m (p0, z0, x0))
 
 /*
-** bic_w0_s16_m_untied: { xfail *-*-* }
+** bic_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** bic z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_s16_m_tied1, svint16_t,
 		z0 = svbic_m (p0, z0, 1))
 
 /*
-** bic_1_s16_m_untied: { xfail *-*-* }
+** bic_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #-2
 ** movprfx z0, z1
 ** and z0\.h, p0/m, z0\.h, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (bic_w0_s16_z_tied1, svint16_t, int16_t,
 		 z0 = svbic_z (p0, z0, x0))
 
 /*
-** bic_w0_s16_z_untied: { xfail *-*-* }
+** bic_w0_s16_z_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0\.h, p0/z, z1\.h
 ** bic z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s32.c
index e02c66947d6..04367a8fad0 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_s32_m_tied1, svint32_t,
 		z0 = svbic_m (p0, z0, 1))
 
 /*
-** bic_1_s32_m_untied: { xfail *-*-* }
+** bic_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #-2
 ** movprfx z0, z1
 ** and z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s64.c
index 57c1e535fea..b4c19d19064 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_s64_m_tied1, svint64_t,
 		z0 = svbic_m (p0, z0, 1))
 
 /*
-** bic_1_s64_m_untied: { xfail *-*-* }
+** bic_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #-2
 ** movprfx z0, z1
 ** and z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s8.c
index 0958a340393..d1ffefa77ee 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (bic_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svbic_m (p0, z0, x0))
 
 /*
-** bic_w0_s8_m_untied: { xfail *-*-* }
+** bic_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** bic z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_s8_m_tied1, svint8_t,
 		z0 = svbic_m (p0, z0, 1))
 
 /*
-** bic_1_s8_m_untied: { xfail *-*-* }
+** bic_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #-2
 ** movprfx z0, z1
 ** and z0\.b, p0/m, z0\.b, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (bic_w0_s8_z_tied1, svint8_t, int8_t,
 		 z0 = svbic_z (p0, z0, x0))
 
 /*
-** bic_w0_s8_z_untied: { xfail *-*-* }
+** bic_w0_s8_z_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0\.b, p0/z, z1\.b
 ** bic z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u16.c
index 30209ffb418..fb16646e205 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (bic_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svbic_m (p0, z0, x0))
 
 /*
-** bic_w0_u16_m_untied: { xfail *-*-* }
+** bic_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** bic z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_u16_m_tied1, svuint16_t,
 		z0 = svbic_m (p0, z0, 1))
 
 /*
-** bic_1_u16_m_untied: { xfail *-*-* }
+** bic_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #-2
 ** movprfx z0, z1
 ** and z0\.h, p0/m, z0\.h, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (bic_w0_u16_z_tied1, svuint16_t, uint16_t,
 		 z0 = svbic_z (p0, z0, x0))
 
 /*
-** bic_w0_u16_z_untied: { xfail *-*-* }
+** bic_w0_u16_z_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0\.h, p0/z, z1\.h
 ** bic z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u32.c
index 9f08ab40a8c..764fd193852 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_u32_m_tied1, svuint32_t,
 		z0 = svbic_m (p0, z0, 1))
 
 /*
-** bic_1_u32_m_untied: { xfail *-*-* }
+** bic_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #-2
 ** movprfx z0, z1
 ** and z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u64.c
index de84f3af6ff..e4399807ad4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_u64_m_tied1, svuint64_t,
 		z0 = svbic_m (p0, z0, 1))
 
 /*
-** bic_1_u64_m_untied: { xfail *-*-* }
+** bic_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #-2
 ** movprfx z0, z1
 ** and z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u8.c
index 80c489b9cdb..b7528ceac33 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (bic_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svbic_m (p0, z0, x0))
 
 /*
-** bic_w0_u8_m_untied: { xfail *-*-* }
+** bic_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** bic z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_u8_m_tied1, svuint8_t,
 		z0 = svbic_m (p0, z0, 1))
 
 /*
-** bic_1_u8_m_untied: { xfail *-*-* }
+** bic_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #-2
 ** movprfx z0, z1
 ** and z0\.b, p0/m, z0\.b, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (bic_w0_u8_z_tied1, svuint8_t, uint8_t,
 		 z0 = svbic_z (p0, z0, x0))
 
 /*
-** bic_w0_u8_z_untied: { xfail *-*-* }
+** bic_w0_u8_z_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0\.b, p0/z, z1\.b
 ** bic z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f16.c
index 8bcd094c996..90f93643a44 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f16.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_1_f16_m_tied1, svfloat16_t,
 		z0 = svdiv_m (p0, z0, 1))
 
 /*
-** div_1_f16_m_untied: { xfail *-*-* }
+** div_1_f16_m_untied:
 ** fmov (z[0-9]+\.h), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fdiv z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f32.c
index 546c61dc783..7c1894ebe52 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_1_f32_m_tied1, svfloat32_t,
 		z0 = svdiv_m (p0, z0, 1))
 
 /*
-** div_1_f32_m_untied: { xfail *-*-* }
+** div_1_f32_m_untied:
 ** fmov (z[0-9]+\.s), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fdiv z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f64.c
index 1e24bc26484..93517c5b50f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_1_f64_m_tied1, svfloat64_t,
 		z0 = svdiv_m (p0, z0, 1))
 
 /*
-** div_1_f64_m_untied: { xfail *-*-* }
+** div_1_f64_m_untied:
 ** fmov (z[0-9]+\.d), #1\.0(?:e\+0)?
** movprfx z0, z1 ** fdiv z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c index 8e70ae797a7..c49ca1aa524 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_2_s32_m_tied1, svint32_t, z0 = svdiv_m (p0, z0, 2)) /* -** div_2_s32_m_untied: { xfail *-*-* } +** div_2_s32_m_untied: ** mov (z[0-9]+\.s), #2 ** movprfx z0, z1 ** sdiv z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c index 439da1f571f..464dca28d74 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_2_s64_m_tied1, svint64_t, z0 = svdiv_m (p0, z0, 2)) /* -** div_2_s64_m_untied: { xfail *-*-* } +** div_2_s64_m_untied: ** mov (z[0-9]+\.d), #2 ** movprfx z0, z1 ** sdiv z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c index 8e8e464b777..232ccacf524 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_2_u32_m_tied1, svuint32_t, z0 = svdiv_m (p0, z0, 2)) /* -** div_2_u32_m_untied: { xfail *-*-* } +** div_2_u32_m_untied: ** mov (z[0-9]+\.s), #2 ** movprfx z0, z1 ** udiv z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c index fc152e8e57b..ac7c026eea3 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_2_u64_m_tied1, svuint64_t, z0 = svdiv_m (p0, z0, 2)) /* -** 
div_2_u64_m_untied: { xfail *-*-* } +** div_2_u64_m_untied: ** mov (z[0-9]+\.d), #2 ** movprfx z0, z1 ** udiv z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f16.c index e293be65a06..ad6eb656b10 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f16.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_1_f16_m_tied1, svfloat16_t, z0 = svdivr_m (p0, z0, 1)) /* -** divr_1_f16_m_untied: { xfail *-*-* } +** divr_1_f16_m_untied: ** fmov (z[0-9]+\.h), #1\.0(?:e\+0)? ** movprfx z0, z1 ** fdivr z0\.h, p0/m, z0\.h, \1 @@ -85,7 +85,7 @@ TEST_UNIFORM_Z (divr_0p5_f16_m_tied1, svfloat16_t, z0 = svdivr_m (p0, z0, 0.5)) /* -** divr_0p5_f16_m_untied: { xfail *-*-* } +** divr_0p5_f16_m_untied: ** fmov (z[0-9]+\.h), #(?:0\.5|5\.0e-1) ** movprfx z0, z1 ** fdivr z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f32.c index 04a7ac40bb2..60fd70711ec 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_1_f32_m_tied1, svfloat32_t, z0 = svdivr_m (p0, z0, 1)) /* -** divr_1_f32_m_untied: { xfail *-*-* } +** divr_1_f32_m_untied: ** fmov (z[0-9]+\.s), #1\.0(?:e\+0)? 
** movprfx z0, z1 ** fdivr z0\.s, p0/m, z0\.s, \1 @@ -85,7 +85,7 @@ TEST_UNIFORM_Z (divr_0p5_f32_m_tied1, svfloat32_t, z0 = svdivr_m (p0, z0, 0.5)) /* -** divr_0p5_f32_m_untied: { xfail *-*-* } +** divr_0p5_f32_m_untied: ** fmov (z[0-9]+\.s), #(?:0\.5|5\.0e-1) ** movprfx z0, z1 ** fdivr z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f64.c index bef1a9b059c..f465a27b941 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_1_f64_m_tied1, svfloat64_t, z0 = svdivr_m (p0, z0, 1)) /* -** divr_1_f64_m_untied: { xfail *-*-* } +** divr_1_f64_m_untied: ** fmov (z[0-9]+\.d), #1\.0(?:e\+0)? ** movprfx z0, z1 ** fdivr z0\.d, p0/m, z0\.d, \1 @@ -85,7 +85,7 @@ TEST_UNIFORM_Z (divr_0p5_f64_m_tied1, svfloat64_t, z0 = svdivr_m (p0, z0, 0.5)) /* -** divr_0p5_f64_m_untied: { xfail *-*-* } +** divr_0p5_f64_m_untied: ** fmov (z[0-9]+\.d), #(?:0\.5|5\.0e-1) ** movprfx z0, z1 ** fdivr z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s32.c index 75a6c1d979d..dab18b0fd9f 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_2_s32_m_tied1, svint32_t, z0 = svdivr_m (p0, z0, 2)) /* -** divr_2_s32_m_untied: { xfail *-*-* } +** divr_2_s32_m_untied: ** mov (z[0-9]+\.s), #2 ** movprfx z0, z1 ** sdivr z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s64.c index 8f4939a91fb..4668437dce3 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_2_s64_m_tied1, 
svint64_t, z0 = svdivr_m (p0, z0, 2)) /* -** divr_2_s64_m_untied: { xfail *-*-* } +** divr_2_s64_m_untied: ** mov (z[0-9]+\.d), #2 ** movprfx z0, z1 ** sdivr z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u32.c index 84c243b44c2..c6d4b04f546 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_2_u32_m_tied1, svuint32_t, z0 = svdivr_m (p0, z0, 2)) /* -** divr_2_u32_m_untied: { xfail *-*-* } +** divr_2_u32_m_untied: ** mov (z[0-9]+\.s), #2 ** movprfx z0, z1 ** udivr z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u64.c index 03bb624726f..ace600adf03 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_2_u64_m_tied1, svuint64_t, z0 = svdivr_m (p0, z0, 2)) /* -** divr_2_u64_m_untied: { xfail *-*-* } +** divr_2_u64_m_untied: ** mov (z[0-9]+\.d), #2 ** movprfx z0, z1 ** udivr z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s32.c index 605bd1b30f2..0d9d6afe2f2 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s32.c @@ -54,7 +54,7 @@ TEST_DUAL_ZX (dot_w0_s32_tied1, svint32_t, svint8_t, int8_t, z0 = svdot (z0, z4, x0)) /* -** dot_w0_s32_untied: { xfail *-*-* } +** dot_w0_s32_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** sdot z0\.s, z4\.b, \1 @@ -75,7 +75,7 @@ TEST_DUAL_Z (dot_9_s32_tied1, svint32_t, svint8_t, z0 = svdot (z0, z4, 9)) /* -** dot_9_s32_untied: { xfail *-*-* } +** dot_9_s32_untied: ** mov (z[0-9]+\.b), #9 ** movprfx z0, z1 ** sdot z0\.s, z4\.b, \1 diff --git 
a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s64.c index b6574740b7e..a119d9cc94d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s64.c @@ -54,7 +54,7 @@ TEST_DUAL_ZX (dot_w0_s64_tied1, svint64_t, svint16_t, int16_t, z0 = svdot (z0, z4, x0)) /* -** dot_w0_s64_untied: { xfail *-*-* } +** dot_w0_s64_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** sdot z0\.d, z4\.h, \1 @@ -75,7 +75,7 @@ TEST_DUAL_Z (dot_9_s64_tied1, svint64_t, svint16_t, z0 = svdot (z0, z4, 9)) /* -** dot_9_s64_untied: { xfail *-*-* } +** dot_9_s64_untied: ** mov (z[0-9]+\.h), #9 ** movprfx z0, z1 ** sdot z0\.d, z4\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u32.c index 541e71cc212..3e57074e699 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u32.c @@ -54,7 +54,7 @@ TEST_DUAL_ZX (dot_w0_u32_tied1, svuint32_t, svuint8_t, uint8_t, z0 = svdot (z0, z4, x0)) /* -** dot_w0_u32_untied: { xfail *-*-* } +** dot_w0_u32_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** udot z0\.s, z4\.b, \1 @@ -75,7 +75,7 @@ TEST_DUAL_Z (dot_9_u32_tied1, svuint32_t, svuint8_t, z0 = svdot (z0, z4, 9)) /* -** dot_9_u32_untied: { xfail *-*-* } +** dot_9_u32_untied: ** mov (z[0-9]+\.b), #9 ** movprfx z0, z1 ** udot z0\.s, z4\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u64.c index cc0e853737d..88d9047ba00 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u64.c @@ -54,7 +54,7 @@ TEST_DUAL_ZX (dot_w0_u64_tied1, svuint64_t, svuint16_t, uint16_t, z0 = svdot (z0, z4, x0)) /* -** dot_w0_u64_untied: { xfail *-*-* } +** dot_w0_u64_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 
** udot z0\.d, z4\.h, \1 @@ -75,7 +75,7 @@ TEST_DUAL_Z (dot_9_u64_tied1, svuint64_t, svuint16_t, z0 = svdot (z0, z4, 9)) /* -** dot_9_u64_untied: { xfail *-*-* } +** dot_9_u64_untied: ** mov (z[0-9]+\.h), #9 ** movprfx z0, z1 ** udot z0\.d, z4\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s16.c index 7cf73609a1a..683248d0887 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (eor_w0_s16_m_tied1, svint16_t, int16_t, z0 = sveor_m (p0, z0, x0)) /* -** eor_w0_s16_m_untied: { xfail *-*-* } +** eor_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** eor z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_s16_m_tied1, svint16_t, z0 = sveor_m (p0, z0, 1)) /* -** eor_1_s16_m_untied: { xfail *-*-* } +** eor_1_s16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** eor z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s32.c index d5aecb20133..4c3ba9ab422 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_s32_m_tied1, svint32_t, z0 = sveor_m (p0, z0, 1)) /* -** eor_1_s32_m_untied: { xfail *-*-* } +** eor_1_s32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** eor z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s64.c index 157128974bf..83817cc6694 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_s64_m_tied1, svint64_t, z0 = sveor_m (p0, z0, 1)) /* -** eor_1_s64_m_untied: { xfail *-*-* } +** eor_1_s64_m_untied: ** mov (z[0-9]+\.d), #1 
** movprfx z0, z1 ** eor z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s8.c index 083ac2dde06..91f3ea8459b 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (eor_w0_s8_m_tied1, svint8_t, int8_t, z0 = sveor_m (p0, z0, x0)) /* -** eor_w0_s8_m_untied: { xfail *-*-* } +** eor_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** eor z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_s8_m_tied1, svint8_t, z0 = sveor_m (p0, z0, 1)) /* -** eor_1_s8_m_untied: { xfail *-*-* } +** eor_1_s8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** eor z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u16.c index 40b43a5f89b..875b8d0c4cb 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (eor_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = sveor_m (p0, z0, x0)) /* -** eor_w0_u16_m_untied: { xfail *-*-* } +** eor_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** eor z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_u16_m_tied1, svuint16_t, z0 = sveor_m (p0, z0, 1)) /* -** eor_1_u16_m_untied: { xfail *-*-* } +** eor_1_u16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** eor z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u32.c index 8e46d08cacc..6add2b7c1eb 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_u32_m_tied1, svuint32_t, z0 = sveor_m (p0, z0, 1)) /* -** eor_1_u32_m_untied: { xfail *-*-* } +** 
eor_1_u32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** eor z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u64.c index a82398f919a..ee0bda271b2 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_u64_m_tied1, svuint64_t, z0 = sveor_m (p0, z0, 1)) /* -** eor_1_u64_m_untied: { xfail *-*-* } +** eor_1_u64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** eor z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u8.c index 006637699e8..fdb0fb1022a 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (eor_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = sveor_m (p0, z0, x0)) /* -** eor_w0_u8_m_untied: { xfail *-*-* } +** eor_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** eor z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_u8_m_tied1, svuint8_t, z0 = sveor_m (p0, z0, 1)) /* -** eor_1_u8_m_untied: { xfail *-*-* } +** eor_1_u8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** eor z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s16.c index edaaca5f155..d5c5fd54e79 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsl_w0_s16_m_tied1, svint16_t, uint16_t, z0 = svlsl_m (p0, z0, x0)) /* -** lsl_w0_s16_m_untied: { xfail *-*-* } +** lsl_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** lsl z0\.h, p0/m, z0\.h, \1 @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_16_s16_m_tied1, svint16_t, z0 = svlsl_m (p0, z0, 16)) /* -** 
lsl_16_s16_m_untied: { xfail *-*-* } +** lsl_16_s16_m_untied: ** mov (z[0-9]+\.h), #16 ** movprfx z0, z1 ** lsl z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s32.c index f98f1f94b44..b5df8a84318 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s32.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_32_s32_m_tied1, svint32_t, z0 = svlsl_m (p0, z0, 32)) /* -** lsl_32_s32_m_untied: { xfail *-*-* } +** lsl_32_s32_m_untied: ** mov (z[0-9]+\.s), #32 ** movprfx z0, z1 ** lsl z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s64.c index 39753986b1b..850a798fe1f 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s64.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_64_s64_m_tied1, svint64_t, z0 = svlsl_m (p0, z0, 64)) /* -** lsl_64_s64_m_untied: { xfail *-*-* } +** lsl_64_s64_m_untied: ** mov (z[0-9]+\.d), #64 ** movprfx z0, z1 ** lsl z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s8.c index 9a9cc959c33..d8776597129 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsl_w0_s8_m_tied1, svint8_t, uint8_t, z0 = svlsl_m (p0, z0, x0)) /* -** lsl_w0_s8_m_untied: { xfail *-*-* } +** lsl_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** lsl z0\.b, p0/m, z0\.b, \1 @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_8_s8_m_tied1, svint8_t, z0 = svlsl_m (p0, z0, 8)) /* -** lsl_8_s8_m_untied: { xfail *-*-* } +** lsl_8_s8_m_untied: ** mov (z[0-9]+\.b), #8 ** movprfx z0, z1 ** lsl z0\.b, p0/m, z0\.b, \1 diff --git 
a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u16.c index 57db0fda66a..068e49b8812 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsl_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svlsl_m (p0, z0, x0)) /* -** lsl_w0_u16_m_untied: { xfail *-*-* } +** lsl_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** lsl z0\.h, p0/m, z0\.h, \1 @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_16_u16_m_tied1, svuint16_t, z0 = svlsl_m (p0, z0, 16)) /* -** lsl_16_u16_m_untied: { xfail *-*-* } +** lsl_16_u16_m_untied: ** mov (z[0-9]+\.h), #16 ** movprfx z0, z1 ** lsl z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u32.c index 8773f15db44..9c2be1de967 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u32.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_32_u32_m_tied1, svuint32_t, z0 = svlsl_m (p0, z0, 32)) /* -** lsl_32_u32_m_untied: { xfail *-*-* } +** lsl_32_u32_m_untied: ** mov (z[0-9]+\.s), #32 ** movprfx z0, z1 ** lsl z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u64.c index 7b12bd43e1a..0c1e473ce9d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u64.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_64_u64_m_tied1, svuint64_t, z0 = svlsl_m (p0, z0, 64)) /* -** lsl_64_u64_m_untied: { xfail *-*-* } +** lsl_64_u64_m_untied: ** mov (z[0-9]+\.d), #64 ** movprfx z0, z1 ** lsl z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u8.c index 894b5513857..59d386c0f77 100644 --- 
a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsl_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svlsl_m (p0, z0, x0)) /* -** lsl_w0_u8_m_untied: { xfail *-*-* } +** lsl_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** lsl z0\.b, p0/m, z0\.b, \1 @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_8_u8_m_tied1, svuint8_t, z0 = svlsl_m (p0, z0, 8)) /* -** lsl_8_u8_m_untied: { xfail *-*-* } +** lsl_8_u8_m_untied: ** mov (z[0-9]+\.b), #8 ** movprfx z0, z1 ** lsl z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c index a0207726144..7244f64fb1d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_16_s16_m_tied1, svint16_t, z0 = svlsl_wide_m (p0, z0, 16)) /* -** lsl_wide_16_s16_m_untied: { xfail *-*-* } +** lsl_wide_16_s16_m_untied: ** mov (z[0-9]+\.d), #16 ** movprfx z0, z1 ** lsl z0\.h, p0/m, z0\.h, \1 @@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_16_s16_z_tied1, svint16_t, z0 = svlsl_wide_z (p0, z0, 16)) /* -** lsl_wide_16_s16_z_untied: { xfail *-*-* } +** lsl_wide_16_s16_z_untied: ** mov (z[0-9]+\.d), #16 ** movprfx z0\.h, p0/z, z1\.h ** lsl z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c index bd67b7006b5..04333ce477a 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_32_s32_m_tied1, svint32_t, z0 = svlsl_wide_m (p0, z0, 32)) /* -** lsl_wide_32_s32_m_untied: { xfail *-*-* } +** lsl_wide_32_s32_m_untied: ** mov (z[0-9]+\.d), #32 ** movprfx z0, z1 ** lsl z0\.s, p0/m, z0\.s, \1 @@ -217,7 
+217,7 @@ TEST_UNIFORM_Z (lsl_wide_32_s32_z_tied1, svint32_t, z0 = svlsl_wide_z (p0, z0, 32)) /* -** lsl_wide_32_s32_z_untied: { xfail *-*-* } +** lsl_wide_32_s32_z_untied: ** mov (z[0-9]+\.d), #32 ** movprfx z0\.s, p0/z, z1\.s ** lsl z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c index 7eb8627041d..5847db7bd97 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_8_s8_m_tied1, svint8_t, z0 = svlsl_wide_m (p0, z0, 8)) /* -** lsl_wide_8_s8_m_untied: { xfail *-*-* } +** lsl_wide_8_s8_m_untied: ** mov (z[0-9]+\.d), #8 ** movprfx z0, z1 ** lsl z0\.b, p0/m, z0\.b, \1 @@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_8_s8_z_tied1, svint8_t, z0 = svlsl_wide_z (p0, z0, 8)) /* -** lsl_wide_8_s8_z_untied: { xfail *-*-* } +** lsl_wide_8_s8_z_untied: ** mov (z[0-9]+\.d), #8 ** movprfx z0\.b, p0/z, z1\.b ** lsl z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c index 482f8d0557b..2c047b7f7e5 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_16_u16_m_tied1, svuint16_t, z0 = svlsl_wide_m (p0, z0, 16)) /* -** lsl_wide_16_u16_m_untied: { xfail *-*-* } +** lsl_wide_16_u16_m_untied: ** mov (z[0-9]+\.d), #16 ** movprfx z0, z1 ** lsl z0\.h, p0/m, z0\.h, \1 @@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_16_u16_z_tied1, svuint16_t, z0 = svlsl_wide_z (p0, z0, 16)) /* -** lsl_wide_16_u16_z_untied: { xfail *-*-* } +** lsl_wide_16_u16_z_untied: ** mov (z[0-9]+\.d), #16 ** movprfx z0\.h, p0/z, z1\.h ** lsl z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c 
b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c index 612897d24df..1e149633473 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_32_u32_m_tied1, svuint32_t, z0 = svlsl_wide_m (p0, z0, 32)) /* -** lsl_wide_32_u32_m_untied: { xfail *-*-* } +** lsl_wide_32_u32_m_untied: ** mov (z[0-9]+\.d), #32 ** movprfx z0, z1 ** lsl z0\.s, p0/m, z0\.s, \1 @@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_32_u32_z_tied1, svuint32_t, z0 = svlsl_wide_z (p0, z0, 32)) /* -** lsl_wide_32_u32_z_untied: { xfail *-*-* } +** lsl_wide_32_u32_z_untied: ** mov (z[0-9]+\.d), #32 ** movprfx z0\.s, p0/z, z1\.s ** lsl z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c index 6ca2f9e7da2..55f27217077 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c @@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_8_u8_m_tied1, svuint8_t, z0 = svlsl_wide_m (p0, z0, 8)) /* -** lsl_wide_8_u8_m_untied: { xfail *-*-* } +** lsl_wide_8_u8_m_untied: ** mov (z[0-9]+\.d), #8 ** movprfx z0, z1 ** lsl z0\.b, p0/m, z0\.b, \1 @@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_8_u8_z_tied1, svuint8_t, z0 = svlsl_wide_z (p0, z0, 8)) /* -** lsl_wide_8_u8_z_untied: { xfail *-*-* } +** lsl_wide_8_u8_z_untied: ** mov (z[0-9]+\.d), #8 ** movprfx z0\.b, p0/z, z1\.b ** lsl z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u16.c index 61575645fad..a41411986f7 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsr_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svlsr_m (p0, z0, x0)) /* -** lsr_w0_u16_m_untied: { 
xfail *-*-* } +** lsr_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** lsr z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u8.c index a049ca90556..b773eedba7f 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsr_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svlsr_m (p0, z0, x0)) /* -** lsr_w0_u8_m_untied: { xfail *-*-* } +** lsr_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** lsr z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f16.c index 4b3148419c5..60d23b35698 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f16.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_2_f16_m_tied1, svfloat16_t, z0 = svmad_m (p0, z0, z1, 2)) /* -** mad_2_f16_m_untied: { xfail *-*-* } +** mad_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmad z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f32.c index d5dbc85d5a3..1c89ac8cbf9 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_2_f32_m_tied1, svfloat32_t, z0 = svmad_m (p0, z0, z1, 2)) /* -** mad_2_f32_m_untied: { xfail *-*-* } +** mad_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fmad z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f64.c index 7b5dc22826e..cc5f8dd9034 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_2_f64_m_tied1, svfloat64_t, z0 = svmad_m (p0, z0, z1, 2)) /* -** mad_2_f64_m_untied: { xfail *-*-* } +** mad_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmad z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s16.c index 02a6d4588b8..4644fa9866c 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s16.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mad_w0_s16_m_tied1, svint16_t, int16_t, z0 = svmad_m (p0, z0, z1, x0)) /* -** mad_w0_s16_m_untied: { xfail *-*-* } +** mad_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** mad z0\.h, p0/m, z2\.h, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_s16_m_tied1, svint16_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_s16_m_untied: { xfail *-*-* } +** mad_11_s16_m_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** mad z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s32.c index d676a0c1142..36efef54df7 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_s32_m_tied1, svint32_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_s32_m_untied: { xfail *-*-* } +** mad_11_s32_m_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** mad z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s64.c 
b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s64.c index 7aa017536af..4df7bc41772 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_s64_m_tied1, svint64_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_s64_m_untied: { xfail *-*-* } +** mad_11_s64_m_untied: ** mov (z[0-9]+\.d), #11 ** movprfx z0, z1 ** mad z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s8.c index 90d712686ca..7e3dd676799 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s8.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mad_w0_s8_m_tied1, svint8_t, int8_t, z0 = svmad_m (p0, z0, z1, x0)) /* -** mad_w0_s8_m_untied: { xfail *-*-* } +** mad_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** mad z0\.b, p0/m, z2\.b, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_s8_m_tied1, svint8_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_s8_m_untied: { xfail *-*-* } +** mad_11_s8_m_untied: ** mov (z[0-9]+\.b), #11 ** movprfx z0, z1 ** mad z0\.b, p0/m, z2\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u16.c index 1d2ad9c5fc9..bebb8995c48 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u16.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mad_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svmad_m (p0, z0, z1, x0)) /* -** mad_w0_u16_m_untied: { xfail *-*-* } +** mad_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** mad z0\.h, p0/m, z2\.h, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_u16_m_tied1, svuint16_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_u16_m_untied: { xfail *-*-* } +** mad_11_u16_m_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** mad z0\.h, p0/m, z2\.h, \1 
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u32.c
index 4b51958b176..3f4486d3f4f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_u32_m_tied1, svuint32_t,
  z0 = svmad_m (p0, z0, z1, 11))

 /*
-** mad_11_u32_m_untied: { xfail *-*-* }
+** mad_11_u32_m_untied:
 ** mov (z[0-9]+\.s), #11
 ** movprfx z0, z1
 ** mad z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u64.c
index c4939093eff..e4d9a73fbac 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_u64_m_tied1, svuint64_t,
  z0 = svmad_m (p0, z0, z1, 11))

 /*
-** mad_11_u64_m_untied: { xfail *-*-* }
+** mad_11_u64_m_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** mad z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u8.c
index 0b4b1b8cfe6..01ce99845ae 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mad_w0_u8_m_tied1, svuint8_t, uint8_t,
  z0 = svmad_m (p0, z0, z1, x0))

 /*
-** mad_w0_u8_m_untied: { xfail *-*-* }
+** mad_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** mad z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_u8_m_tied1, svuint8_t,
  z0 = svmad_m (p0, z0, z1, 11))

 /*
-** mad_11_u8_m_untied: { xfail *-*-* }
+** mad_11_u8_m_untied:
 ** mov (z[0-9]+\.b), #11
 ** movprfx z0, z1
 ** mad z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s16.c
index 6a216752282..637715edb32 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (max_w0_s16_m_tied1, svint16_t, int16_t,
  z0 = svmax_m (p0, z0, x0))

 /*
-** max_w0_s16_m_untied: { xfail *-*-* }
+** max_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** smax z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_s16_m_tied1, svint16_t,
  z0 = svmax_m (p0, z0, 1))

 /*
-** max_1_s16_m_untied: { xfail *-*-* }
+** max_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** smax z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s32.c
index 07402c7a901..428709fc74f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_s32_m_tied1, svint32_t,
  z0 = svmax_m (p0, z0, 1))

 /*
-** max_1_s32_m_untied: { xfail *-*-* }
+** max_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** smax z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s64.c
index 66f00fdf170..284e097de03 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_s64_m_tied1, svint64_t,
  z0 = svmax_m (p0, z0, 1))

 /*
-** max_1_s64_m_untied: { xfail *-*-* }
+** max_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** smax z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s8.c
index c651a26f0d1..123f1a96ea6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (max_w0_s8_m_tied1, svint8_t, int8_t,
  z0 = svmax_m (p0, z0, x0))

 /*
-** max_w0_s8_m_untied: { xfail *-*-* }
+** max_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** smax z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_s8_m_tied1, svint8_t,
  z0 = svmax_m (p0, z0, 1))

 /*
-** max_1_s8_m_untied: { xfail *-*-* }
+** max_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** smax z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u16.c
index 9a0b9543169..459f89a1f0b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (max_w0_u16_m_tied1, svuint16_t, uint16_t,
  z0 = svmax_m (p0, z0, x0))

 /*
-** max_w0_u16_m_untied: { xfail *-*-* }
+** max_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** umax z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_u16_m_tied1, svuint16_t,
  z0 = svmax_m (p0, z0, 1))

 /*
-** max_1_u16_m_untied: { xfail *-*-* }
+** max_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** umax z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u32.c
index 91eba25c131..1ed5c28b941 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_u32_m_tied1, svuint32_t,
  z0 = svmax_m (p0, z0, 1))

 /*
-** max_1_u32_m_untied: { xfail *-*-* }
+** max_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** umax z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u64.c
index 5be4c9fb77f..47d7c8398d7 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_u64_m_tied1, svuint64_t,
  z0 = svmax_m (p0, z0, 1))

 /*
-** max_1_u64_m_untied: { xfail *-*-* }
+** max_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** umax z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u8.c
index 04c9ddb36a2..4301f3eb641 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (max_w0_u8_m_tied1, svuint8_t, uint8_t,
  z0 = svmax_m (p0, z0, x0))

 /*
-** max_w0_u8_m_untied: { xfail *-*-* }
+** max_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** umax z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_u8_m_tied1, svuint8_t,
  z0 = svmax_m (p0, z0, 1))

 /*
-** max_1_u8_m_untied: { xfail *-*-* }
+** max_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** umax z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s16.c
index 14dfcc4c333..a6c41cce07c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (min_w0_s16_m_tied1, svint16_t, int16_t,
  z0 = svmin_m (p0, z0, x0))

 /*
-** min_w0_s16_m_untied: { xfail *-*-* }
+** min_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** smin z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_s16_m_tied1, svint16_t,
  z0 = svmin_m (p0, z0, 1))

 /*
-** min_1_s16_m_untied: { xfail *-*-* }
+** min_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** smin z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s32.c
index cee2b649d4f..ae9d13e342a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_s32_m_tied1, svint32_t,
  z0 = svmin_m (p0, z0, 1))

 /*
-** min_1_s32_m_untied: { xfail *-*-* }
+** min_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** smin z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s64.c
index 0d20bd0b28d..dc2150040b0 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_s64_m_tied1, svint64_t,
  z0 = svmin_m (p0, z0, 1))

 /*
-** min_1_s64_m_untied: { xfail *-*-* }
+** min_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** smin z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s8.c
index 714b1576d5c..0c0107e3ce2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (min_w0_s8_m_tied1, svint8_t, int8_t,
  z0 = svmin_m (p0, z0, x0))

 /*
-** min_w0_s8_m_untied: { xfail *-*-* }
+** min_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** smin z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_s8_m_tied1, svint8_t,
  z0 = svmin_m (p0, z0, 1))

 /*
-** min_1_s8_m_untied: { xfail *-*-* }
+** min_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** smin z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u16.c
index df35cf1135e..97c22427eb3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (min_w0_u16_m_tied1, svuint16_t, uint16_t,
  z0 = svmin_m (p0, z0, x0))

 /*
-** min_w0_u16_m_untied: { xfail *-*-* }
+** min_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** umin z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_u16_m_tied1, svuint16_t,
  z0 = svmin_m (p0, z0, 1))

 /*
-** min_1_u16_m_untied: { xfail *-*-* }
+** min_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** umin z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u32.c
index 7f84d099d61..e5abd3c5619 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_u32_m_tied1, svuint32_t,
  z0 = svmin_m (p0, z0, 1))

 /*
-** min_1_u32_m_untied: { xfail *-*-* }
+** min_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** umin z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u64.c
index 06e6e509920..b8b6829507b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_u64_m_tied1, svuint64_t,
  z0 = svmin_m (p0, z0, 1))

 /*
-** min_1_u64_m_untied: { xfail *-*-* }
+** min_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** umin z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u8.c
index 2ca274278a2..3179dad35dd 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (min_w0_u8_m_tied1, svuint8_t, uint8_t,
  z0 = svmin_m (p0, z0, x0))

 /*
-** min_w0_u8_m_untied: { xfail *-*-* }
+** min_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** umin z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_u8_m_tied1, svuint8_t,
  z0 = svmin_m (p0, z0, 1))

 /*
-** min_1_u8_m_untied: { xfail *-*-* }
+** min_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** umin z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f16.c
index d32ce5845d1..a1d06c09871 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f16.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_2_f16_m_tied1, svfloat16_t,
  z0 = svmla_m (p0, z0, z1, 2))

 /*
-** mla_2_f16_m_untied: { xfail *-*-* }
+** mla_2_f16_m_untied:
 ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmla z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f32.c
index d10ba69a53e..8741a3523b7 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_2_f32_m_tied1, svfloat32_t,
  z0 = svmla_m (p0, z0, z1, 2))

 /*
-** mla_2_f32_m_untied: { xfail *-*-* }
+** mla_2_f32_m_untied:
 ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmla z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f64.c
index 94c1e0b0753..505f77a871c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_2_f64_m_tied1, svfloat64_t,
  z0 = svmla_m (p0, z0, z1, 2))

 /*
-** mla_2_f64_m_untied: { xfail *-*-* }
+** mla_2_f64_m_untied:
 ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmla z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s16.c
index f3ed191db6a..9905f6e3ac3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mla_w0_s16_m_tied1, svint16_t, int16_t,
  z0 = svmla_m (p0, z0, z1, x0))

 /*
-** mla_w0_s16_m_untied: { xfail *-*-* }
+** mla_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** mla z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_s16_m_tied1, svint16_t,
  z0 = svmla_m (p0, z0, z1, 11))

 /*
-** mla_11_s16_m_untied: { xfail *-*-* }
+** mla_11_s16_m_untied:
 ** mov (z[0-9]+\.h), #11
 ** movprfx z0, z1
 ** mla z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s32.c
index 5e8001a71d8..a9c32cca1ba 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_s32_m_tied1, svint32_t,
  z0 = svmla_m (p0, z0, z1, 11))

 /*
-** mla_11_s32_m_untied: { xfail *-*-* }
+** mla_11_s32_m_untied:
 ** mov (z[0-9]+\.s), #11
 ** movprfx z0, z1
 ** mla z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s64.c
index 7b619e52119..ed2693b01b4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_s64_m_tied1, svint64_t,
  z0 = svmla_m (p0, z0, z1, 11))

 /*
-** mla_11_s64_m_untied: { xfail *-*-* }
+** mla_11_s64_m_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** mla z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s8.c
index 47468947d78..151cf6547b6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mla_w0_s8_m_tied1, svint8_t, int8_t,
  z0 = svmla_m (p0, z0, z1, x0))

 /*
-** mla_w0_s8_m_untied: { xfail *-*-* }
+** mla_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** mla z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_s8_m_tied1, svint8_t,
  z0 = svmla_m (p0, z0, z1, 11))

 /*
-** mla_11_s8_m_untied: { xfail *-*-* }
+** mla_11_s8_m_untied:
 ** mov (z[0-9]+\.b), #11
 ** movprfx z0, z1
 ** mla z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u16.c
index 7238e428f68..36c60ba7264 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mla_w0_u16_m_tied1, svuint16_t, uint16_t,
  z0 = svmla_m (p0, z0, z1, x0))

 /*
-** mla_w0_u16_m_untied: { xfail *-*-* }
+** mla_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** mla z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_u16_m_tied1, svuint16_t,
  z0 = svmla_m (p0, z0, z1, 11))

 /*
-** mla_11_u16_m_untied: { xfail *-*-* }
+** mla_11_u16_m_untied:
 ** mov (z[0-9]+\.h), #11
 ** movprfx z0, z1
 ** mla z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u32.c
index 7a68bce3d1f..69503c438c8 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_u32_m_tied1, svuint32_t,
  z0 = svmla_m (p0, z0, z1, 11))

 /*
-** mla_11_u32_m_untied: { xfail *-*-* }
+** mla_11_u32_m_untied:
 ** mov (z[0-9]+\.s), #11
 ** movprfx z0, z1
 ** mla z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u64.c
index 6233265c830..5fcbcf6f69f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_u64_m_tied1, svuint64_t,
  z0 = svmla_m (p0, z0, z1, 11))

 /*
-** mla_11_u64_m_untied: { xfail *-*-* }
+** mla_11_u64_m_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** mla z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u8.c
index 832ed41410e..ec92434fb7a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mla_w0_u8_m_tied1, svuint8_t, uint8_t,
  z0 = svmla_m (p0, z0, z1, x0))

 /*
-** mla_w0_u8_m_untied: { xfail *-*-* }
+** mla_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** mla z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_u8_m_tied1, svuint8_t,
  z0 = svmla_m (p0, z0, z1, 11))

 /*
-** mla_11_u8_m_untied: { xfail *-*-* }
+** mla_11_u8_m_untied:
 ** mov (z[0-9]+\.b), #11
 ** movprfx z0, z1
 ** mla z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f16.c
index b58104d5eaf..1b217dcea3b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f16.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_2_f16_m_tied1, svfloat16_t,
  z0 = svmls_m (p0, z0, z1, 2))

 /*
-** mls_2_f16_m_untied: { xfail *-*-* }
+** mls_2_f16_m_untied:
 ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmls z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f32.c
index 7d6e60519b0..dddfb2cfbec 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_2_f32_m_tied1, svfloat32_t,
  z0 = svmls_m (p0, z0, z1, 2))

 /*
-** mls_2_f32_m_untied: { xfail *-*-* }
+** mls_2_f32_m_untied:
 ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmls z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f64.c
index a6ed28eec5c..1836674ac97 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_2_f64_m_tied1, svfloat64_t,
  z0 = svmls_m (p0, z0, z1, 2))

 /*
-** mls_2_f64_m_untied: { xfail *-*-* }
+** mls_2_f64_m_untied:
 ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmls z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s16.c
index e199829c4ad..1cf387c38f8 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mls_w0_s16_m_tied1, svint16_t, int16_t,
  z0 = svmls_m (p0, z0, z1, x0))

 /*
-** mls_w0_s16_m_untied: { xfail *-*-* }
+** mls_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** mls z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_s16_m_tied1, svint16_t,
  z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_s16_m_untied: { xfail *-*-* }
+** mls_11_s16_m_untied:
 ** mov (z[0-9]+\.h), #11
 ** movprfx z0, z1
 ** mls z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s32.c
index fe386d01cd9..35c3cc248a1 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_s32_m_tied1, svint32_t,
  z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_s32_m_untied: { xfail *-*-* }
+** mls_11_s32_m_untied:
 ** mov (z[0-9]+\.s), #11
 ** movprfx z0, z1
 ** mls z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s64.c
index 2998d733fbc..2c51d530341 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_s64_m_tied1, svint64_t,
  z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_s64_m_untied: { xfail *-*-* }
+** mls_11_s64_m_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** mls z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s8.c
index c60c431455f..c1151e9299d 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mls_w0_s8_m_tied1, svint8_t, int8_t,
  z0 = svmls_m (p0, z0, z1, x0))

 /*
-** mls_w0_s8_m_untied: { xfail *-*-* }
+** mls_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** mls z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_s8_m_tied1, svint8_t,
  z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_s8_m_untied: { xfail *-*-* }
+** mls_11_s8_m_untied:
 ** mov (z[0-9]+\.b), #11
 ** movprfx z0, z1
 ** mls z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u16.c
index e8a9f5cd94c..48aabf85e56 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mls_w0_u16_m_tied1, svuint16_t, uint16_t,
  z0 = svmls_m (p0, z0, z1, x0))

 /*
-** mls_w0_u16_m_untied: { xfail *-*-* }
+** mls_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** mls z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_u16_m_tied1, svuint16_t,
  z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_u16_m_untied: { xfail *-*-* }
+** mls_11_u16_m_untied:
 ** mov (z[0-9]+\.h), #11
 ** movprfx z0, z1
 ** mls z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u32.c
index 47e885012ef..4748372a398 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_u32_m_tied1, svuint32_t,
  z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_u32_m_untied: { xfail *-*-* }
+** mls_11_u32_m_untied:
 ** mov (z[0-9]+\.s), #11
 ** movprfx z0, z1
 ** mls z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u64.c
index 4d441b75920..25a43a54901 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_u64_m_tied1, svuint64_t,
  z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_u64_m_untied: { xfail *-*-* }
+** mls_11_u64_m_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** mls z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u8.c
index 0489aaa7cf9..5bf03f5a42e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mls_w0_u8_m_tied1, svuint8_t, uint8_t,
  z0 = svmls_m (p0, z0, z1, x0))

 /*
-** mls_w0_u8_m_untied: { xfail *-*-* }
+** mls_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** mls z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_u8_m_tied1, svuint8_t,
  z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_u8_m_untied: { xfail *-*-* }
+** mls_11_u8_m_untied:
 ** mov (z[0-9]+\.b), #11
 ** movprfx z0, z1
 ** mls z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f16.c
index 894961a9ec5..b8be34459ff 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f16.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_2_f16_m_tied1, svfloat16_t,
  z0 = svmsb_m (p0, z0, z1, 2))

 /*
-** msb_2_f16_m_untied: { xfail *-*-* }
+** msb_2_f16_m_untied:
 ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmsb z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f32.c
index 0d0915958a3..d1bd768dca2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_2_f32_m_tied1, svfloat32_t,
  z0 = svmsb_m (p0, z0, z1, 2))

 /*
-** msb_2_f32_m_untied: { xfail *-*-* }
+** msb_2_f32_m_untied:
 ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmsb z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f64.c
index 52dc3968e24..902558807bc 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_2_f64_m_tied1, svfloat64_t,
  z0 = svmsb_m (p0, z0, z1, 2))

 /*
-** msb_2_f64_m_untied: { xfail *-*-* }
+** msb_2_f64_m_untied:
 ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fmsb z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s16.c
index 56347cfb918..e2b8e8b5352 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (msb_w0_s16_m_tied1, svint16_t, int16_t,
  z0 = svmsb_m (p0, z0, z1, x0))

 /*
-** msb_w0_s16_m_untied: { xfail *-*-* }
+** msb_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** msb z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_s16_m_tied1, svint16_t,
  z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_s16_m_untied: { xfail *-*-* }
+** msb_11_s16_m_untied:
 ** mov (z[0-9]+\.h), #11
 ** movprfx z0, z1
 ** msb z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s32.c
index fb7a7815b57..afb4d5e8cb5 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_s32_m_tied1, svint32_t,
  z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_s32_m_untied: { xfail *-*-* }
+** msb_11_s32_m_untied:
 ** mov (z[0-9]+\.s), #11
 ** movprfx z0, z1
 ** msb z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s64.c
index 6829fab3655..c3343aff20f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_s64_m_tied1, svint64_t,
  z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_s64_m_untied: { xfail *-*-* }
+** msb_11_s64_m_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** msb z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s8.c
index d7fcafdd0df..255535e41b4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (msb_w0_s8_m_tied1, svint8_t, int8_t,
  z0 = svmsb_m (p0, z0, z1, x0))

 /*
-** msb_w0_s8_m_untied: { xfail *-*-* }
+** msb_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** msb z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_s8_m_tied1, svint8_t,
  z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_s8_m_untied: { xfail *-*-* }
+** msb_11_s8_m_untied:
 ** mov (z[0-9]+\.b), #11
 ** movprfx z0, z1
 ** msb z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u16.c
index 437a96040e1..d7fe8f081b6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (msb_w0_u16_m_tied1, svuint16_t, uint16_t,
  z0 = svmsb_m (p0, z0, z1, x0))

 /*
-** msb_w0_u16_m_untied: { xfail *-*-* }
+** msb_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** msb z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_u16_m_tied1, svuint16_t,
  z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_u16_m_untied: { xfail *-*-* }
+** msb_11_u16_m_untied:
 ** mov (z[0-9]+\.h), #11
 ** movprfx z0, z1
 ** msb z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u32.c
index aaaf0344aea..99b61193f2e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_u32_m_tied1, svuint32_t,
  z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_u32_m_untied: { xfail *-*-* }
+** msb_11_u32_m_untied:
 ** mov (z[0-9]+\.s), #11
 ** movprfx z0, z1
 ** msb z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u64.c
index 5c5d3307378..a7aa611977b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_u64_m_tied1, svuint64_t,
  z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_u64_m_untied: { xfail *-*-* }
+** msb_11_u64_m_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** msb z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u8.c
index 5665ec9e320..17ce5e99aa4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (msb_w0_u8_m_tied1, svuint8_t, uint8_t,
  z0 = svmsb_m (p0, z0, z1, x0))

 /*
-** msb_w0_u8_m_untied: { xfail *-*-* }
+** msb_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** msb z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_u8_m_tied1, svuint8_t,
  z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_u8_m_untied: { xfail *-*-* }
+** msb_11_u8_m_untied:
 ** mov (z[0-9]+\.b), #11
 ** movprfx z0, z1
 ** msb z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16.c
index ef3de0c5953..fd9753b0ee2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_1_f16_m_tied1, svfloat16_t,
  z0 = svmul_m (p0, z0, 1))

 /*
-** mul_1_f16_m_untied: { xfail *-*-* }
+** mul_1_f16_m_untied:
 ** fmov (z[0-9]+\.h), #1\.0(?:e\+0)?
** movprfx z0, z1 ** fmul z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c index 481fe999c47..6520aa8601a 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c @@ -65,7 +65,7 @@ TEST_UNIFORM_Z (mul_1_f16_m_tied1, svfloat16_t, z0 = svmul_m (p0, z0, 1)) /* -** mul_1_f16_m_untied: { xfail *-*-* } +** mul_1_f16_m_untied: ** fmov (z[0-9]+\.h), #1\.0(?:e\+0)? ** movprfx z0, z1 ** fmul z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32.c index 5b3df6fde9a..3c643375359 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_1_f32_m_tied1, svfloat32_t, z0 = svmul_m (p0, z0, 1)) /* -** mul_1_f32_m_untied: { xfail *-*-* } +** mul_1_f32_m_untied: ** fmov (z[0-9]+\.s), #1\.0(?:e\+0)? ** movprfx z0, z1 ** fmul z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c index eb2d240efd6..137fb054d73 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c @@ -65,7 +65,7 @@ TEST_UNIFORM_Z (mul_1_f32_m_tied1, svfloat32_t, z0 = svmul_m (p0, z0, 1)) /* -** mul_1_f32_m_untied: { xfail *-*-* } +** mul_1_f32_m_untied: ** fmov (z[0-9]+\.s), #1\.0(?:e\+0)? 
** movprfx z0, z1 ** fmul z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64.c index f5654a9f19d..00a46c22d1d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_1_f64_m_tied1, svfloat64_t, z0 = svmul_m (p0, z0, 1)) /* -** mul_1_f64_m_untied: { xfail *-*-* } +** mul_1_f64_m_untied: ** fmov (z[0-9]+\.d), #1\.0(?:e\+0)? ** movprfx z0, z1 ** fmul z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c index d865618d465..0a6b92a2686 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c @@ -65,7 +65,7 @@ TEST_UNIFORM_Z (mul_1_f64_m_tied1, svfloat64_t, z0 = svmul_m (p0, z0, 1)) /* -** mul_1_f64_m_untied: { xfail *-*-* } +** mul_1_f64_m_untied: ** fmov (z[0-9]+\.d), #1\.0(?:e\+0)? 
** movprfx z0, z1 ** fmul z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c index aa08bc27405..80295f7bec3 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mul_w0_s16_m_tied1, svint16_t, int16_t, z0 = svmul_m (p0, z0, x0)) /* -** mul_w0_s16_m_untied: { xfail *-*-* } +** mul_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** mul z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_s16_m_tied1, svint16_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_s16_m_untied: { xfail *-*-* } +** mul_2_s16_m_untied: ** mov (z[0-9]+\.h), #2 ** movprfx z0, z1 ** mul z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c index 7acf77fdbbf..01c224932d9 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_s32_m_tied1, svint32_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_s32_m_untied: { xfail *-*-* } +** mul_2_s32_m_untied: ** mov (z[0-9]+\.s), #2 ** movprfx z0, z1 ** mul z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c index 549105f1efd..c3cf581a0a4 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_s64_m_tied1, svint64_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_s64_m_untied: { xfail *-*-* } +** mul_2_s64_m_untied: ** mov (z[0-9]+\.d), #2 ** movprfx z0, z1 ** mul z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c index 
012e6f25098..4ac4c8eeb2a 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mul_w0_s8_m_tied1, svint8_t, int8_t, z0 = svmul_m (p0, z0, x0)) /* -** mul_w0_s8_m_untied: { xfail *-*-* } +** mul_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** mul z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_s8_m_tied1, svint8_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_s8_m_untied: { xfail *-*-* } +** mul_2_s8_m_untied: ** mov (z[0-9]+\.b), #2 ** movprfx z0, z1 ** mul z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c index 300987eb6e6..affee965005 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mul_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svmul_m (p0, z0, x0)) /* -** mul_w0_u16_m_untied: { xfail *-*-* } +** mul_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** mul z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_u16_m_tied1, svuint16_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_u16_m_untied: { xfail *-*-* } +** mul_2_u16_m_untied: ** mov (z[0-9]+\.h), #2 ** movprfx z0, z1 ** mul z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c index 288d17b163c..38b4bc71b40 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_u32_m_tied1, svuint32_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_u32_m_untied: { xfail *-*-* } +** mul_2_u32_m_untied: ** mov (z[0-9]+\.s), #2 ** movprfx z0, z1 ** mul z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c 
b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c index f6959dbc723..ab655554db7 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_u64_m_tied1, svuint64_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_u64_m_untied: { xfail *-*-* } +** mul_2_u64_m_untied: ** mov (z[0-9]+\.d), #2 ** movprfx z0, z1 ** mul z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c index b2745a48f50..ef0a5220dc0 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mul_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svmul_m (p0, z0, x0)) /* -** mul_w0_u8_m_untied: { xfail *-*-* } +** mul_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** mul z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_u8_m_tied1, svuint8_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_u8_m_untied: { xfail *-*-* } +** mul_2_u8_m_untied: ** mov (z[0-9]+\.b), #2 ** movprfx z0, z1 ** mul z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s16.c index a81532f5d89..576aedce8dd 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mulh_w0_s16_m_tied1, svint16_t, int16_t, z0 = svmulh_m (p0, z0, x0)) /* -** mulh_w0_s16_m_untied: { xfail *-*-* } +** mulh_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** smulh z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_s16_m_tied1, svint16_t, z0 = svmulh_m (p0, z0, 11)) /* -** mulh_11_s16_m_untied: { xfail *-*-* } +** mulh_11_s16_m_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** smulh z0\.h, p0/m, z0\.h, \1 diff --git 
a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s32.c index 078feeb6a32..331a46fad76 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_s32_m_tied1, svint32_t, z0 = svmulh_m (p0, z0, 11)) /* -** mulh_11_s32_m_untied: { xfail *-*-* } +** mulh_11_s32_m_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** smulh z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s64.c index a87d4d5ce0b..c284bcf789d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_s64_m_tied1, svint64_t, z0 = svmulh_m (p0, z0, 11)) /* -** mulh_11_s64_m_untied: { xfail *-*-* } +** mulh_11_s64_m_untied: ** mov (z[0-9]+\.d), #11 ** movprfx z0, z1 ** smulh z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s8.c index f9cd01afdc9..43271097e12 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mulh_w0_s8_m_tied1, svint8_t, int8_t, z0 = svmulh_m (p0, z0, x0)) /* -** mulh_w0_s8_m_untied: { xfail *-*-* } +** mulh_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** smulh z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_s8_m_tied1, svint8_t, z0 = svmulh_m (p0, z0, 11)) /* -** mulh_11_s8_m_untied: { xfail *-*-* } +** mulh_11_s8_m_untied: ** mov (z[0-9]+\.b), #11 ** movprfx z0, z1 ** smulh z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u16.c index e9173eb243e..7f239984ca8 100644 --- 
a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mulh_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svmulh_m (p0, z0, x0)) /* -** mulh_w0_u16_m_untied: { xfail *-*-* } +** mulh_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** umulh z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_u16_m_tied1, svuint16_t, z0 = svmulh_m (p0, z0, 11)) /* -** mulh_11_u16_m_untied: { xfail *-*-* } +** mulh_11_u16_m_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** umulh z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u32.c index de1f24f090c..2c187d62041 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_u32_m_tied1, svuint32_t, z0 = svmulh_m (p0, z0, 11)) /* -** mulh_11_u32_m_untied: { xfail *-*-* } +** mulh_11_u32_m_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** umulh z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u64.c index 0d7e12a7c84..1176a31317e 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_u64_m_tied1, svuint64_t, z0 = svmulh_m (p0, z0, 11)) /* -** mulh_11_u64_m_untied: { xfail *-*-* } +** mulh_11_u64_m_untied: ** mov (z[0-9]+\.d), #11 ** movprfx z0, z1 ** umulh z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u8.c index db7b1be1bdf..5bd1009a284 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u8.c @@ -43,7 +43,7 @@ 
TEST_UNIFORM_ZX (mulh_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svmulh_m (p0, z0, x0)) /* -** mulh_w0_u8_m_untied: { xfail *-*-* } +** mulh_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** umulh z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_u8_m_tied1, svuint8_t, z0 = svmulh_m (p0, z0, 11)) /* -** mulh_11_u8_m_untied: { xfail *-*-* } +** mulh_11_u8_m_untied: ** mov (z[0-9]+\.b), #11 ** movprfx z0, z1 ** umulh z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f16.c index b8d6bf5d92c..174c10e83dc 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f16.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulx_1_f16_m_tied1, svfloat16_t, z0 = svmulx_m (p0, z0, 1)) /* -** mulx_1_f16_m_untied: { xfail *-*-* } +** mulx_1_f16_m_untied: ** fmov (z[0-9]+\.h), #1\.0(?:e\+0)? ** movprfx z0, z1 ** fmulx z0\.h, p0/m, z0\.h, \1 @@ -85,7 +85,7 @@ TEST_UNIFORM_Z (mulx_0p5_f16_m_tied1, svfloat16_t, z0 = svmulx_m (p0, z0, 0.5)) /* -** mulx_0p5_f16_m_untied: { xfail *-*-* } +** mulx_0p5_f16_m_untied: ** fmov (z[0-9]+\.h), #(?:0\.5|5\.0e-1) ** movprfx z0, z1 ** fmulx z0\.h, p0/m, z0\.h, \1 @@ -106,7 +106,7 @@ TEST_UNIFORM_Z (mulx_2_f16_m_tied1, svfloat16_t, z0 = svmulx_m (p0, z0, 2)) /* -** mulx_2_f16_m_untied: { xfail *-*-* } +** mulx_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fmulx z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f32.c index b8f5c1310d7..8baf4e849d2 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulx_1_f32_m_tied1, svfloat32_t, z0 = svmulx_m (p0, z0, 1)) /* -** mulx_1_f32_m_untied: { xfail *-*-* } +** mulx_1_f32_m_untied: ** fmov (z[0-9]+\.s), #1\.0(?:e\+0)? ** movprfx z0, z1 ** fmulx z0\.s, p0/m, z0\.s, \1 @@ -85,7 +85,7 @@ TEST_UNIFORM_Z (mulx_0p5_f32_m_tied1, svfloat32_t, z0 = svmulx_m (p0, z0, 0.5)) /* -** mulx_0p5_f32_m_untied: { xfail *-*-* } +** mulx_0p5_f32_m_untied: ** fmov (z[0-9]+\.s), #(?:0\.5|5\.0e-1) ** movprfx z0, z1 ** fmulx z0\.s, p0/m, z0\.s, \1 @@ -106,7 +106,7 @@ TEST_UNIFORM_Z (mulx_2_f32_m_tied1, svfloat32_t, z0 = svmulx_m (p0, z0, 2)) /* -** mulx_2_f32_m_untied: { xfail *-*-* } +** mulx_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmulx z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f64.c index 746cc94143d..1ab13caba56 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulx_1_f64_m_tied1, svfloat64_t, z0 = svmulx_m (p0, z0, 1)) /* -** mulx_1_f64_m_untied: { xfail *-*-* } +** mulx_1_f64_m_untied: ** fmov (z[0-9]+\.d), #1\.0(?:e\+0)? 
** movprfx z0, z1 ** fmulx z0\.d, p0/m, z0\.d, \1 @@ -85,7 +85,7 @@ TEST_UNIFORM_Z (mulx_0p5_f64_m_tied1, svfloat64_t, z0 = svmulx_m (p0, z0, 0.5)) /* -** mulx_0p5_f64_m_untied: { xfail *-*-* } +** mulx_0p5_f64_m_untied: ** fmov (z[0-9]+\.d), #(?:0\.5|5\.0e-1) ** movprfx z0, z1 ** fmulx z0\.d, p0/m, z0\.d, \1 @@ -106,7 +106,7 @@ TEST_UNIFORM_Z (mulx_2_f64_m_tied1, svfloat64_t, z0 = svmulx_m (p0, z0, 2)) /* -** mulx_2_f64_m_untied: { xfail *-*-* } +** mulx_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmulx z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f16.c index 92e0664e647..b280f2685ff 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f16.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmad_2_f16_m_tied1, svfloat16_t, z0 = svnmad_m (p0, z0, z1, 2)) /* -** nmad_2_f16_m_untied: { xfail *-*-* } +** nmad_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fnmad z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f32.c index cef731ebcfe..f8c91b5b52f 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmad_2_f32_m_tied1, svfloat32_t, z0 = svnmad_m (p0, z0, z1, 2)) /* -** nmad_2_f32_m_untied: { xfail *-*-* } +** nmad_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fnmad z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f64.c index 43b97c0de50..4ff6471b2e1 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmad_2_f64_m_tied1, svfloat64_t, z0 = svnmad_m (p0, z0, z1, 2)) /* -** nmad_2_f64_m_untied: { xfail *-*-* } +** nmad_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fnmad z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f16.c index 75d0ec7d3ab..cd5bb6fd5ba 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f16.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmla_2_f16_m_tied1, svfloat16_t, z0 = svnmla_m (p0, z0, z1, 2)) /* -** nmla_2_f16_m_untied: { xfail *-*-* } +** nmla_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fnmla z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f32.c index da594d3eb95..f8d44fd4d25 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmla_2_f32_m_tied1, svfloat32_t, z0 = svnmla_m (p0, z0, z1, 2)) /* -** nmla_2_f32_m_untied: { xfail *-*-* } +** nmla_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fnmla z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f64.c index 73f15f41762..4e599be327c 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmla_2_f64_m_tied1, svfloat64_t, z0 = svnmla_m (p0, z0, z1, 2)) /* -** nmla_2_f64_m_untied: { xfail *-*-* } +** nmla_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fnmla z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f16.c index ccf7e51ffc9..dc8b1fea7c5 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f16.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmls_2_f16_m_tied1, svfloat16_t, z0 = svnmls_m (p0, z0, z1, 2)) /* -** nmls_2_f16_m_untied: { xfail *-*-* } +** nmls_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fnmls z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f32.c index 10d345026f7..84e74e13aa6 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmls_2_f32_m_tied1, svfloat32_t, z0 = svnmls_m (p0, z0, z1, 2)) /* -** nmls_2_f32_m_untied: { xfail *-*-* } +** nmls_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fnmls z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f64.c index bf2a4418a9f..27d4682d28f 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmls_2_f64_m_tied1, svfloat64_t, z0 = svnmls_m (p0, z0, z1, 2)) /* -** nmls_2_f64_m_untied: { xfail *-*-* } +** nmls_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fnmls z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f16.c index 5311ceb4408..c485fb6b654 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f16.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmsb_2_f16_m_tied1, svfloat16_t, z0 = svnmsb_m (p0, z0, z1, 2)) /* -** nmsb_2_f16_m_untied: { xfail *-*-* } +** nmsb_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fnmsb z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f32.c index 6f1407a8717..1c1294d5458 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmsb_2_f32_m_tied1, svfloat32_t, z0 = svnmsb_m (p0, z0, z1, 2)) /* -** nmsb_2_f32_m_untied: { xfail *-*-* } +** nmsb_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fnmsb z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f64.c index 5e4e1dd7ea6..50c55a09306 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmsb_2_f64_m_tied1, svfloat64_t, z0 = svnmsb_m (p0, z0, z1, 2)) /* -** nmsb_2_f64_m_untied: { xfail *-*-* } +** nmsb_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fnmsb z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s16.c index 62b707a9c69..f91af0a2494 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (orr_w0_s16_m_tied1, svint16_t, int16_t, z0 = svorr_m (p0, z0, x0)) /* -** orr_w0_s16_m_untied: { xfail *-*-* } +** orr_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** orr z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_s16_m_tied1, svint16_t, z0 = svorr_m (p0, z0, 1)) /* -** orr_1_s16_m_untied: { xfail *-*-* } +** orr_1_s16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** orr z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s32.c index 2e0e1e8883d..514e65a788e 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_s32_m_tied1, svint32_t, z0 = svorr_m (p0, z0, 1)) /* -** orr_1_s32_m_untied: { xfail *-*-* } +** orr_1_s32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** orr z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s64.c 
b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s64.c index 1538fdd14b1..4f6cad749c5 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_s64_m_tied1, svint64_t, z0 = svorr_m (p0, z0, 1)) /* -** orr_1_s64_m_untied: { xfail *-*-* } +** orr_1_s64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** orr z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s8.c index b6483b6e76e..d8a175b9a03 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (orr_w0_s8_m_tied1, svint8_t, int8_t, z0 = svorr_m (p0, z0, x0)) /* -** orr_w0_s8_m_untied: { xfail *-*-* } +** orr_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** orr z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_s8_m_tied1, svint8_t, z0 = svorr_m (p0, z0, 1)) /* -** orr_1_s8_m_untied: { xfail *-*-* } +** orr_1_s8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** orr z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u16.c index 000a0444c9b..4f2e28d10dc 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (orr_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svorr_m (p0, z0, x0)) /* -** orr_w0_u16_m_untied: { xfail *-*-* } +** orr_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** orr z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_u16_m_tied1, svuint16_t, z0 = svorr_m (p0, z0, 1)) /* -** orr_1_u16_m_untied: { xfail *-*-* } +** orr_1_u16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** orr z0\.h, p0/m, z0\.h, \1 diff --git 
a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u32.c index 8e2351d162b..0f155c6e9d7 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_u32_m_tied1, svuint32_t, z0 = svorr_m (p0, z0, 1)) /* -** orr_1_u32_m_untied: { xfail *-*-* } +** orr_1_u32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** orr z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u64.c index 323e2101e47..eec5e98444b 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_u64_m_tied1, svuint64_t, z0 = svorr_m (p0, z0, 1)) /* -** orr_1_u64_m_untied: { xfail *-*-* } +** orr_1_u64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** orr z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u8.c index efe5591b472..17be109914d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (orr_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svorr_m (p0, z0, x0)) /* -** orr_w0_u8_m_untied: { xfail *-*-* } +** orr_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** orr z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_u8_m_tied1, svuint8_t, z0 = svorr_m (p0, z0, 1)) /* -** orr_1_u8_m_untied: { xfail *-*-* } +** orr_1_u8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** orr z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f16.c index 9c554255b44..cb4225c9a47 100644 --- 
a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (scale_w0_f16_m_tied1, svfloat16_t, int16_t,
 		z0 = svscale_m (p0, z0, x0))
 
 /*
-** scale_w0_f16_m_untied:	{ xfail *-*-* }
+** scale_w0_f16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	fscale	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (scale_3_f16_m_tied1, svfloat16_t,
 		z0 = svscale_m (p0, z0, 3))
 
 /*
-** scale_3_f16_m_untied:	{ xfail *-*-* }
+** scale_3_f16_m_untied:
 **	mov	(z[0-9]+\.h), #3
 **	movprfx	z0, z1
 **	fscale	z0\.h, p0/m, z0\.h, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (scale_w0_f16_z_tied1, svfloat16_t, int16_t,
 		z0 = svscale_z (p0, z0, x0))
 
 /*
-** scale_w0_f16_z_untied:	{ xfail *-*-* }
+** scale_w0_f16_z_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0\.h, p0/z, z1\.h
 **	fscale	z0\.h, p0/m, z0\.h, \1
@@ -149,7 +149,7 @@ TEST_UNIFORM_Z (scale_3_f16_z_tied1, svfloat16_t,
 		z0 = svscale_z (p0, z0, 3))
 
 /*
-** scale_3_f16_z_untied:	{ xfail *-*-* }
+** scale_3_f16_z_untied:
 **	mov	(z[0-9]+\.h), #3
 **	movprfx	z0\.h, p0/z, z1\.h
 **	fscale	z0\.h, p0/m, z0\.h, \1
@@ -211,7 +211,7 @@ TEST_UNIFORM_ZX (scale_w0_f16_x_tied1, svfloat16_t, int16_t,
 		z0 = svscale_x (p0, z0, x0))
 
 /*
-** scale_w0_f16_x_untied:	{ xfail *-*-* }
+** scale_w0_f16_x_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	fscale	z0\.h, p0/m, z0\.h, \1
@@ -232,7 +232,7 @@ TEST_UNIFORM_Z (scale_3_f16_x_tied1, svfloat16_t,
 		z0 = svscale_x (p0, z0, 3))
 
 /*
-** scale_3_f16_x_untied:	{ xfail *-*-* }
+** scale_3_f16_x_untied:
 **	mov	(z[0-9]+\.h), #3
 **	movprfx	z0, z1
 **	fscale	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f32.c
index 12a1b1d8686..5079ee36493 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (scale_3_f32_m_tied1, svfloat32_t,
 		z0 = svscale_m (p0, z0, 3))
 
 /*
-** scale_3_f32_m_untied:	{ xfail *-*-* }
+** scale_3_f32_m_untied:
 **	mov	(z[0-9]+\.s), #3
 **	movprfx	z0, z1
 **	fscale	z0\.s, p0/m, z0\.s, \1
@@ -149,7 +149,7 @@ TEST_UNIFORM_Z (scale_3_f32_z_tied1, svfloat32_t,
 		z0 = svscale_z (p0, z0, 3))
 
 /*
-** scale_3_f32_z_untied:	{ xfail *-*-* }
+** scale_3_f32_z_untied:
 **	mov	(z[0-9]+\.s), #3
 **	movprfx	z0\.s, p0/z, z1\.s
 **	fscale	z0\.s, p0/m, z0\.s, \1
@@ -232,7 +232,7 @@ TEST_UNIFORM_Z (scale_3_f32_x_tied1, svfloat32_t,
 		z0 = svscale_x (p0, z0, 3))
 
 /*
-** scale_3_f32_x_untied:	{ xfail *-*-* }
+** scale_3_f32_x_untied:
 **	mov	(z[0-9]+\.s), #3
 **	movprfx	z0, z1
 **	fscale	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f64.c
index f6b11718584..4d6235bfbaf 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (scale_3_f64_m_tied1, svfloat64_t,
 		z0 = svscale_m (p0, z0, 3))
 
 /*
-** scale_3_f64_m_untied:	{ xfail *-*-* }
+** scale_3_f64_m_untied:
 **	mov	(z[0-9]+\.d), #3
 **	movprfx	z0, z1
 **	fscale	z0\.d, p0/m, z0\.d, \1
@@ -149,7 +149,7 @@ TEST_UNIFORM_Z (scale_3_f64_z_tied1, svfloat64_t,
 		z0 = svscale_z (p0, z0, 3))
 
 /*
-** scale_3_f64_z_untied:	{ xfail *-*-* }
+** scale_3_f64_z_untied:
 **	mov	(z[0-9]+\.d), #3
 **	movprfx	z0\.d, p0/z, z1\.d
 **	fscale	z0\.d, p0/m, z0\.d, \1
@@ -232,7 +232,7 @@ TEST_UNIFORM_Z (scale_3_f64_x_tied1, svfloat64_t,
 		z0 = svscale_x (p0, z0, 3))
 
 /*
-** scale_3_f64_x_untied:	{ xfail *-*-* }
+** scale_3_f64_x_untied:
 **	mov	(z[0-9]+\.d), #3
 **	movprfx	z0, z1
 **	fscale	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s16.c
index aea8ea2b4aa..5b156a79612 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (sub_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svsub_m (p0, z0, x0))
 
 /*
-** sub_w0_s16_m_untied:	{ xfail *-*-* }
+** sub_w0_s16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	sub	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_s16_m_tied1, svint16_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_s16_m_untied:	{ xfail *-*-* }
+** sub_1_s16_m_untied:
 **	mov	(z[0-9]+)\.b, #-1
 **	movprfx	z0, z1
 **	add	z0\.h, p0/m, z0\.h, \1\.h
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s32.c
index db6f3df9019..344be4fa50b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_s32_m_tied1, svint32_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_s32_m_untied:	{ xfail *-*-* }
+** sub_1_s32_m_untied:
 **	mov	(z[0-9]+)\.b, #-1
 **	movprfx	z0, z1
 **	add	z0\.s, p0/m, z0\.s, \1\.s
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s64.c
index b9184c3a821..b6eb7f2fc22 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_s64_m_tied1, svint64_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_s64_m_untied:	{ xfail *-*-* }
+** sub_1_s64_m_untied:
 **	mov	(z[0-9]+)\.b, #-1
 **	movprfx	z0, z1
 **	add	z0\.d, p0/m, z0\.d, \1\.d
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s8.c
index 0d7ba99aa56..3edd4b09a96 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (sub_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svsub_m (p0, z0, x0))
 
 /*
-** sub_w0_s8_m_untied:	{ xfail *-*-* }
+** sub_w0_s8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	sub	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_s8_m_tied1, svint8_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_s8_m_untied:	{ xfail *-*-* }
+** sub_1_s8_m_untied:
 **	mov	(z[0-9]+\.b), #-1
 **	movprfx	z0, z1
 **	add	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u16.c
index 89620e159bf..77cf40891c2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (sub_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svsub_m (p0, z0, x0))
 
 /*
-** sub_w0_u16_m_untied:	{ xfail *-*-* }
+** sub_w0_u16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	sub	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_u16_m_tied1, svuint16_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_u16_m_untied:	{ xfail *-*-* }
+** sub_1_u16_m_untied:
 **	mov	(z[0-9]+)\.b, #-1
 **	movprfx	z0, z1
 **	add	z0\.h, p0/m, z0\.h, \1\.h
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u32.c
index c4b405d4dd4..0befdd72ec5 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_u32_m_tied1, svuint32_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_u32_m_untied:	{ xfail *-*-* }
+** sub_1_u32_m_untied:
 **	mov	(z[0-9]+)\.b, #-1
 **	movprfx	z0, z1
 **	add	z0\.s, p0/m, z0\.s, \1\.s
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u64.c
index fb7f7173a00..3602c112cea 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_u64_m_tied1, svuint64_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_u64_m_untied:	{ xfail *-*-* }
+** sub_1_u64_m_untied:
 **	mov	(z[0-9]+)\.b, #-1
 **	movprfx	z0, z1
 **	add	z0\.d, p0/m, z0\.d, \1\.d
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u8.c
index 4552041910f..036fca2bb29 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (sub_w0_u8_m_tied1, svuint8_t, uint8_t,
 		z0 = svsub_m (p0, z0, x0))
 
 /*
-** sub_w0_u8_m_untied:	{ xfail *-*-* }
+** sub_w0_u8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	sub	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_u8_m_tied1, svuint8_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_u8_m_untied:	{ xfail *-*-* }
+** sub_1_u8_m_untied:
 **	mov	(z[0-9]+\.b), #-1
 **	movprfx	z0, z1
 **	add	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16.c
index 6929b286218..b4d6f7bdd7e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (subr_m1_f16_m_tied1, svfloat16_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f16_m_untied:	{ xfail *-*-* }
+** subr_m1_f16_m_untied:
 **	fmov	(z[0-9]+\.h), #-1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fsubr	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c
index a31ebd2ef7f..78985a1311b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c
@@ -103,7 +103,7 @@ TEST_UNIFORM_Z (subr_m1_f16_m_tied1, svfloat16_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f16_m_untied:	{ xfail *-*-* }
+** subr_m1_f16_m_untied:
 **	fmov	(z[0-9]+\.h), #-1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fsubr	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32.c
index 5bf90a39145..a0a4b98675c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (subr_m1_f32_m_tied1, svfloat32_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f32_m_untied:	{ xfail *-*-* }
+** subr_m1_f32_m_untied:
 **	fmov	(z[0-9]+\.s), #-1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fsubr	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c
index 75ae0dc6164..04aec038aad 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c
@@ -103,7 +103,7 @@ TEST_UNIFORM_Z (subr_m1_f32_m_tied1, svfloat32_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f32_m_untied:	{ xfail *-*-* }
+** subr_m1_f32_m_untied:
 **	fmov	(z[0-9]+\.s), #-1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fsubr	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64.c
index 7091c40bbb2..64806b395d2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (subr_m1_f64_m_tied1, svfloat64_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f64_m_untied:	{ xfail *-*-* }
+** subr_m1_f64_m_untied:
 **	fmov	(z[0-9]+\.d), #-1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fsubr	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c
index 98598dd7702..7458e5cc66d 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c
@@ -103,7 +103,7 @@ TEST_UNIFORM_Z (subr_m1_f64_m_tied1, svfloat64_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f64_m_untied:	{ xfail *-*-* }
+** subr_m1_f64_m_untied:
 **	fmov	(z[0-9]+\.d), #-1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fsubr	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s16.c
index d3dad62dafe..a63a9bca787 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (subr_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svsubr_m (p0, z0, x0))
 
 /*
-** subr_w0_s16_m_untied:	{ xfail *-*-* }
+** subr_w0_s16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	subr	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_s16_m_tied1, svint16_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_s16_m_untied:	{ xfail *-*-* }
+** subr_1_s16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	subr	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s32.c
index ce62e2f210a..e709abe424f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_s32_m_tied1, svint32_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_s32_m_untied:	{ xfail *-*-* }
+** subr_1_s32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	subr	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s64.c
index ada9e977c99..bafcd8ecd41 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_s64_m_tied1, svint64_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_s64_m_untied:	{ xfail *-*-* }
+** subr_1_s64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	subr	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s8.c
index 90d2a6de9a5..b9615de6655 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (subr_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svsubr_m (p0, z0, x0))
 
 /*
-** subr_w0_s8_m_untied:	{ xfail *-*-* }
+** subr_w0_s8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	subr	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_s8_m_tied1, svint8_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_s8_m_untied:	{ xfail *-*-* }
+** subr_1_s8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	subr	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u16.c
index 379a80fb189..0c344c4d10f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (subr_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svsubr_m (p0, z0, x0))
 
 /*
-** subr_w0_u16_m_untied:	{ xfail *-*-* }
+** subr_w0_u16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	subr	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_u16_m_tied1, svuint16_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_u16_m_untied:	{ xfail *-*-* }
+** subr_1_u16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	subr	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u32.c
index 215f8b44922..9d3a69cf9ea 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_u32_m_tied1, svuint32_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_u32_m_untied:	{ xfail *-*-* }
+** subr_1_u32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	subr	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u64.c
index 78d94515bd4..4d48e944657 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_u64_m_tied1, svuint64_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_u64_m_untied:	{ xfail *-*-* }
+** subr_1_u64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	subr	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u8.c
index fe5f96da833..65606b6dda0 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (subr_w0_u8_m_tied1, svuint8_t, uint8_t,
 		z0 = svsubr_m (p0, z0, x0))
 
 /*
-** subr_w0_u8_m_untied:	{ xfail *-*-* }
+** subr_w0_u8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	subr	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_u8_m_tied1, svuint8_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_u8_m_untied:	{ xfail *-*-* }
+** subr_1_u8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	subr	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s16.c
index acad87d9635..5716b89bf71 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s16.c
@@ -66,7 +66,7 @@ TEST_UNIFORM_ZX (bcax_w0_s16_tied2, svint16_t, int16_t,
 		z0 = svbcax (z1, z0, x0))
 
 /*
-** bcax_w0_s16_untied:	{ xfail *-*-*}
+** bcax_w0_s16_untied:
 **	mov	(z[0-9]+)\.h, w0
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_s16_tied2, svint16_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_s16_untied:	{ xfail *-*-*}
+** bcax_11_s16_untied:
 **	mov	(z[0-9]+)\.h, #11
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s32.c
index aeb43574656..16123401555 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s32.c
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_s32_tied2, svint32_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_s32_untied:	{ xfail *-*-*}
+** bcax_11_s32_untied:
 **	mov	(z[0-9]+)\.s, #11
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s64.c
index 2087e583342..54ca151da23 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s64.c
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_s64_tied2, svint64_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_s64_untied:	{ xfail *-*-*}
+** bcax_11_s64_untied:
 **	mov	(z[0-9]+\.d), #11
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1|\1, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s8.c
index 548aafad857..3e2a0ee77d8 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s8.c
@@ -66,7 +66,7 @@ TEST_UNIFORM_ZX (bcax_w0_s8_tied2, svint8_t, int8_t,
 		z0 = svbcax (z1, z0, x0))
 
 /*
-** bcax_w0_s8_untied:	{ xfail *-*-*}
+** bcax_w0_s8_untied:
 **	mov	(z[0-9]+)\.b, w0
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_s8_tied2, svint8_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_s8_untied:	{ xfail *-*-*}
+** bcax_11_s8_untied:
 **	mov	(z[0-9]+)\.b, #11
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u16.c
index b63a4774ba7..72c40ace304 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u16.c
@@ -66,7 +66,7 @@ TEST_UNIFORM_ZX (bcax_w0_u16_tied2, svuint16_t, uint16_t,
 		z0 = svbcax (z1, z0, x0))
 
 /*
-** bcax_w0_u16_untied:	{ xfail *-*-*}
+** bcax_w0_u16_untied:
 **	mov	(z[0-9]+)\.h, w0
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_u16_tied2, svuint16_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_u16_untied:	{ xfail *-*-*}
+** bcax_11_u16_untied:
 **	mov	(z[0-9]+)\.h, #11
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u32.c
index d03c938b77e..ca75164eca2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u32.c
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_u32_tied2, svuint32_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_u32_untied:	{ xfail *-*-*}
+** bcax_11_u32_untied:
 **	mov	(z[0-9]+)\.s, #11
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u64.c
index e03906214e8..8145a0c6258 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u64.c
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_u64_tied2, svuint64_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_u64_untied:	{ xfail *-*-*}
+** bcax_11_u64_untied:
 **	mov	(z[0-9]+\.d), #11
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1|\1, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u8.c
index 0957d58bd0e..655d271a92b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u8.c
@@ -66,7 +66,7 @@ TEST_UNIFORM_ZX (bcax_w0_u8_tied2, svuint8_t, uint8_t,
 		z0 = svbcax (z1, z0, x0))
 
 /*
-** bcax_w0_u8_untied:	{ xfail *-*-*}
+** bcax_w0_u8_untied:
 **	mov	(z[0-9]+)\.b, w0
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_u8_tied2, svuint8_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_u8_untied:	{ xfail *-*-*}
+** bcax_11_u8_untied:
 **	mov	(z[0-9]+)\.b, #11
 **	movprfx	z0, z1
 **	bcax	z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s16.c
index 6330c4265bb..5c53cac7608 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s16.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qadd_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svqadd_m (p0, z0, x0))
 
 /*
-** qadd_w0_s16_m_untied:	{ xfail *-*-* }
+** qadd_w0_s16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	sqadd	z0\.h, p0/m, z0\.h, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_s16_m_tied1, svint16_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_s16_m_untied:	{ xfail *-*-* }
+** qadd_1_s16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	sqadd	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s32.c
index bab4874bc39..bb355c5a76d 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s32.c
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_s32_m_tied1, svint32_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_s32_m_untied:	{ xfail *-*-* }
+** qadd_1_s32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	sqadd	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s64.c
index c2ad92123e5..8c350987985 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s64.c
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_s64_m_tied1, svint64_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_s64_m_untied:	{ xfail *-*-* }
+** qadd_1_s64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	sqadd	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s8.c
index 61343beacb8..2a514e32480 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s8.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qadd_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svqadd_m (p0, z0, x0))
 
 /*
-** qadd_w0_s8_m_untied:	{ xfail *-*-* }
+** qadd_w0_s8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	sqadd	z0\.b, p0/m, z0\.b, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_s8_m_tied1, svint8_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_s8_m_untied:	{ xfail *-*-* }
+** qadd_1_s8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	sqadd	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u16.c
index f6c7ca9e075..870a9106325 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u16.c
@@ -166,7 +166,7 @@ TEST_UNIFORM_ZX (qadd_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svqadd_m (p0, z0, x0))
 
 /*
-** qadd_w0_u16_m_untied:	{ xfail *-*-* }
+** qadd_w0_u16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	uqadd	z0\.h, p0/m, z0\.h, \1
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qadd_1_u16_m_tied1, svuint16_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_u16_m_untied:	{ xfail *-*-* }
+** qadd_1_u16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	uqadd	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u32.c
index 7701d13a051..94c05fdc137 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u32.c
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qadd_1_u32_m_tied1, svuint32_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_u32_m_untied:	{ xfail *-*-* }
+** qadd_1_u32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	uqadd	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u64.c
index df8c3f8637b..cf5b2d27b74 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u64.c
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qadd_1_u64_m_tied1, svuint64_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_u64_m_untied:	{ xfail *-*-* }
+** qadd_1_u64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	uqadd	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u8.c
index 6c856e2871c..77cb1b71dd4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u8.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qadd_w0_u8_m_tied1, svuint8_t, uint8_t,
 		z0 = svqadd_m (p0, z0, x0))
 
 /*
-** qadd_w0_u8_m_untied:	{ xfail *-*-* }
+** qadd_w0_u8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	uqadd	z0\.b, p0/m, z0\.b, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_u8_m_tied1, svuint8_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_u8_m_untied:	{ xfail *-*-* }
+** qadd_1_u8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	uqadd	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c
index 4d1e90395e2..a37743be9d8 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (qdmlalb_w0_s16_tied1, svint16_t, svint8_t, int8_t,
 		z0 = svqdmlalb (z0, z4, x0))
 
 /*
-** qdmlalb_w0_s16_untied:	{ xfail *-*-* }
+** qdmlalb_w0_s16_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	sqdmlalb	z0\.h, z4\.b, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalb_11_s16_tied1, svint16_t, svint8_t,
 		z0 = svqdmlalb (z0, z4, 11))
 
 /*
-** qdmlalb_11_s16_untied:	{ xfail *-*-* }
+** qdmlalb_11_s16_untied:
 **	mov	(z[0-9]+\.b), #11
 **	movprfx	z0, z1
 **	sqdmlalb	z0\.h, z4\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c
index 94373773e61..1c319eaac05 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (qdmlalb_w0_s32_tied1, svint32_t, svint16_t, int16_t,
 		z0 = svqdmlalb (z0, z4, x0))
 
 /*
-** qdmlalb_w0_s32_untied:	{ xfail *-*-* }
+** qdmlalb_w0_s32_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	sqdmlalb	z0\.s, z4\.h, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalb_11_s32_tied1, svint32_t, svint16_t,
 		z0 = svqdmlalb (z0, z4, 11))
 
 /*
-** qdmlalb_11_s32_untied:	{ xfail *-*-* }
+** qdmlalb_11_s32_untied:
 **	mov	(z[0-9]+\.h), #11
 **	movprfx	z0, z1
 **	sqdmlalb	z0\.s, z4\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c
index 8ac848b0b75..3f2ab886578 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c
@@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalb_11_s64_tied1, svint64_t, svint32_t,
 		z0 = svqdmlalb (z0, z4, 11))
 
 /*
-** qdmlalb_11_s64_untied:	{ xfail *-*-* }
+** qdmlalb_11_s64_untied:
 **	mov	(z[0-9]+\.s), #11
 **	movprfx	z0, z1
 **	sqdmlalb	z0\.d, z4\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c
index d591db3cfb8..e21d31fdbab 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (qdmlalbt_w0_s16_tied1, svint16_t, svint8_t, int8_t,
 		z0 = svqdmlalbt (z0, z4, x0))
 
 /*
-** qdmlalbt_w0_s16_untied:	{ xfail *-*-*}
+** qdmlalbt_w0_s16_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	sqdmlalbt	z0\.h, z4\.b, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalbt_11_s16_tied1, svint16_t, svint8_t,
 		z0 = svqdmlalbt (z0, z4, 11))
 
 /*
-** qdmlalbt_11_s16_untied:	{ xfail *-*-*}
+** qdmlalbt_11_s16_untied:
 **	mov	(z[0-9]+\.b), #11
 **	movprfx	z0, z1
 **	sqdmlalbt	z0\.h, z4\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c
index e8326fed617..32978e0913e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (qdmlalbt_w0_s32_tied1, svint32_t, svint16_t, int16_t,
 		z0 = svqdmlalbt (z0, z4, x0))
 
 /*
-** qdmlalbt_w0_s32_untied:	{ xfail *-*-*}
+** qdmlalbt_w0_s32_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	sqdmlalbt	z0\.s, z4\.h, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalbt_11_s32_tied1, svint32_t, svint16_t,
 		z0 = svqdmlalbt (z0, z4, 11))
 
 /*
-** qdmlalbt_11_s32_untied:	{ xfail *-*-*}
+** qdmlalbt_11_s32_untied:
 **	mov	(z[0-9]+\.h), #11
 **	movprfx	z0, z1
 **	sqdmlalbt	z0\.s, z4\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c
index f29e4de18dc..22886bca504 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c
@@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalbt_11_s64_tied1, svint64_t, svint32_t,
 		z0 = svqdmlalbt (z0, z4, 11))
 
 /*
-** qdmlalbt_11_s64_untied:	{ xfail *-*-*}
+** qdmlalbt_11_s64_untied:
 **	mov	(z[0-9]+\.s), #11
 **	movprfx	z0, z1
 **	sqdmlalbt	z0\.d, z4\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s16.c
index c102e58ed91..624f8bc3dce 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s16.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qsub_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svqsub_m (p0, z0, x0))
 
 /*
-** qsub_w0_s16_m_untied:	{ xfail *-*-* }
+** qsub_w0_s16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	sqsub	z0\.h, p0/m, z0\.h, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_s16_m_tied1, svint16_t,
 		z0 = svqsub_m (p0, z0, 1))
 
 /*
-** qsub_1_s16_m_untied:	{ xfail *-*-* }
+** qsub_1_s16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	sqsub	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s32.c
index e703ce9be7c..b435f692b8c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s32.c
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_s32_m_tied1, svint32_t,
 		z0 = svqsub_m (p0, z0, 1))
 
 /*
-** qsub_1_s32_m_untied:	{ xfail *-*-* }
+** qsub_1_s32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	sqsub	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s64.c
index e901013f7fa..07eac9d0bdc 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s64.c
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_s64_m_tied1, svint64_t,
 		z0 = svqsub_m (p0, z0, 1))
 
 /*
-** qsub_1_s64_m_untied:	{ xfail *-*-* }
+** qsub_1_s64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	sqsub	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s8.c
index 067ee6e6cb1..71eec645eeb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s8.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qsub_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svqsub_m (p0, z0, x0))
 
 /*
-** qsub_w0_s8_m_untied:	{ xfail *-*-* }
+** qsub_w0_s8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	sqsub	z0\.b, p0/m, z0\.b, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_s8_m_tied1, svint8_t,
 		z0 = svqsub_m (p0, z0, 1))
 
 /*
-** qsub_1_s8_m_untied:	{ xfail *-*-* }
+** qsub_1_s8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	sqsub	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u16.c
index 61be7463472..a544d8cfcf8 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u16.c
@@ -166,7 +166,7 @@ TEST_UNIFORM_ZX (qsub_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svqsub_m (p0, z0, x0))
 
 /*
-** qsub_w0_u16_m_untied:	{ xfail *-*-* }
+** qsub_w0_u16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	uqsub	z0\.h, p0/m, z0\.h, \1
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qsub_1_u16_m_tied1, svuint16_t,
 		z0 = svqsub_m (p0, z0, 1))
 
 /*
-** qsub_1_u16_m_untied:	{ xfail *-*-* }
+** qsub_1_u16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	uqsub	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u32.c
index d90dcadb263..20c95d22cce 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u32.c
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qsub_1_u32_m_tied1, svuint32_t,
 		z0 = svqsub_m (p0, z0, 1))
 
 /*
-** qsub_1_u32_m_untied:	{ xfail *-*-* }
+** qsub_1_u32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	uqsub	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u64.c
index b25c6a569ba..a5a0d242821 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u64.c
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qsub_1_u64_m_tied1, svuint64_t,
 		z0 = svqsub_m (p0, z0, 1))
 
 /*
-** qsub_1_u64_m_untied:	{ xfail *-*-* }
+** qsub_1_u64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	uqsub	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u8.c
index 686b2b425fb..cdcf039bbaa 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u8.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qsub_w0_u8_m_tied1, svuint8_t, uint8_t,
 		z0 = svqsub_m (p0, z0, x0))
 
 /*
-** qsub_w0_u8_m_untied:	{ xfail *-*-* }
+** qsub_w0_u8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	uqsub	z0\.b, p0/m, z0\.b, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_u8_m_tied1, svuint8_t,
 		z0 = svqsub_m (p0, z0, 1))
 
 /*
-** qsub_1_u8_m_untied:	{ xfail *-*-* }
+** qsub_1_u8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	uqsub	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c
index 577310d9614..ed315171d3b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (qsubr_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svqsubr_m (p0, z0, x0))
 
 /*
-** qsubr_w0_s16_m_untied:	{ xfail *-*-* }
+** qsubr_w0_s16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	sqsubr	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_s16_m_tied1, svint16_t,
 		z0 = svqsubr_m (p0, z0, 1))
 
 /*
-** qsubr_1_s16_m_untied:	{ xfail *-*-* }
+** qsubr_1_s16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	sqsubr	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c
index f6a06c38061..810e01e829a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_s32_m_tied1, svint32_t,
 		z0 = svqsubr_m (p0, z0, 1))
 
 /*
-** qsubr_1_s32_m_untied:	{ xfail *-*-* }
+** qsubr_1_s32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	sqsubr	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c
index 12b06356a6c..03a4eebd31d 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_s64_m_tied1, svint64_t,
 		z0 = svqsubr_m (p0, z0, 1))
 
 /*
-** qsubr_1_s64_m_untied:	{ xfail *-*-* }
+** qsubr_1_s64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	sqsubr	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c
index ce814a8393e..88c5387506b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (qsubr_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svqsubr_m (p0, z0, x0))
 
 /*
-** qsubr_w0_s8_m_untied:	{ xfail *-*-* }
+** qsubr_w0_s8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	sqsubr	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_s8_m_tied1, svint8_t,
 		z0 = svqsubr_m (p0, z0, 1))
 
 /*
-** qsubr_1_s8_m_untied:	{ xfail *-*-* }
+** qsubr_1_s8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	sqsubr	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c
index f406bf2ed86..974e564ff10 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (qsubr_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svqsubr_m (p0, z0, x0))
 
 /*
-** qsubr_w0_u16_m_untied:	{ xfail *-*-* }
+** qsubr_w0_u16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	uqsubr	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_u16_m_tied1, svuint16_t,
 		z0 = svqsubr_m (p0, z0, 1))
 
 /*
-** qsubr_1_u16_m_untied:	{ xfail *-*-* }
+** qsubr_1_u16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	uqsubr	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c
index 5c4bc9ee197..54c9bdabc64 100644
---
a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_u32_m_tied1, svuint32_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_u32_m_untied: { xfail *-*-* } +** qsubr_1_u32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** uqsubr z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c index d0d146ea5e6..75769d5aa57 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_u64_m_tied1, svuint64_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_u64_m_untied: { xfail *-*-* } +** qsubr_1_u64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** uqsubr z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c index 7b487fd93b1..279d611af27 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (qsubr_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svqsubr_m (p0, z0, x0)) /* -** qsubr_w0_u8_m_untied: { xfail *-*-* } +** qsubr_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** uqsubr z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_u8_m_tied1, svuint8_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_u8_m_untied: { xfail *-*-* } +** qsubr_1_u8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** uqsubr z0\.b, p0/m, z0\.b, \1