From patchwork Tue Jun 20 20:49:31 2023
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 110693
To: gcc-patches@gcc.gnu.org
Subject: [pushed] aarch64: Robustify stack tie handling
Date: Tue, 20 Jun 2023 21:49:31 +0100
From: Richard Sandiford <richard.sandiford@arm.com>

The SVE handling of stack clash protection copied the stack pointer
to X11 before the probe and set up X11 as the CFA for unwind purposes:

  /* This is done to provide unwinding information for the stack
     adjustments we're about to do, however to prevent the optimizers
     from removing the R11 move and leaving the CFA note (which would be very
     wrong) we tie the old and new stack pointer together.  The tie will
     expand to nothing but the optimizers will not touch the
     instruction.  */
  rtx stack_ptr_copy = gen_rtx_REG (Pmode, STACK_CLASH_SVE_CFA_REGNUM);
  emit_move_insn (stack_ptr_copy, stack_pointer_rtx);
  emit_insn (gen_stack_tie (stack_ptr_copy, stack_pointer_rtx));

  /* We want the CFA independent of the stack pointer for the duration
     of the loop.  */
  add_reg_note (insn, REG_CFA_DEF_CFA, stack_ptr_copy);
  RTX_FRAME_RELATED_P (insn) = 1;

-fcprop-registers is now smart enough to realise that X11 = SP,
replace X11 with SP in the stack tie, and delete the instruction
created above.

This patch tries to prevent that by making stack_tie fussy about the
register numbers.  It fixes failures in
gcc.target/aarch64/sve/pcs/stack_clash*.c.

Tested on aarch64-linux-gnu & pushed.

Richard


gcc/
        * config/aarch64/aarch64.md (stack_tie): Hard-code the first
        register operand to the stack pointer.  Require the second
        register operand to have the number specified in a separate
        const_int operand.
        * config/aarch64/aarch64.cc (aarch64_emit_stack_tie): New function.
        (aarch64_allocate_and_probe_stack_space): Use it.
        (aarch64_expand_prologue, aarch64_expand_epilogue): Likewise.
---
 gcc/config/aarch64/aarch64.cc | 18 ++++++++++++++----
 gcc/config/aarch64/aarch64.md |  7 ++++---
 2 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index ee37ceaa255..b99f12c99e9 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -9664,6 +9664,16 @@ aarch64_stack_clash_protection_alloca_probe_range (void)
   return STACK_CLASH_CALLER_GUARD;
 }

+/* Emit a stack tie that acts as a scheduling barrier for all previous and
+   subsequent memory accesses and that requires the stack pointer and REG
+   to have their current values.  REG can be stack_pointer_rtx if no
+   other register's value needs to be fixed.  */
+
+static void
+aarch64_emit_stack_tie (rtx reg)
+{
+  emit_insn (gen_stack_tie (reg, gen_int_mode (REGNO (reg), DImode)));
+}

 /* Allocate POLY_SIZE bytes of stack space using TEMP1 and TEMP2 as scratch
    registers.  If POLY_SIZE is not large enough to require a probe this function
@@ -9776,7 +9786,7 @@ aarch64_allocate_and_probe_stack_space (rtx temp1, rtx temp2,
          the instruction.  */
       rtx stack_ptr_copy = gen_rtx_REG (Pmode, STACK_CLASH_SVE_CFA_REGNUM);
       emit_move_insn (stack_ptr_copy, stack_pointer_rtx);
-      emit_insn (gen_stack_tie (stack_ptr_copy, stack_pointer_rtx));
+      aarch64_emit_stack_tie (stack_ptr_copy);

       /* We want the CFA independent of the stack pointer for the duration
          of the loop.  */
@@ -10145,7 +10155,7 @@ aarch64_expand_prologue (void)
             aarch64_add_cfa_expression (insn, regno_reg_rtx[reg1],
                                         hard_frame_pointer_rtx, 0);
         }
-      emit_insn (gen_stack_tie (stack_pointer_rtx, hard_frame_pointer_rtx));
+      aarch64_emit_stack_tie (hard_frame_pointer_rtx);
     }

   aarch64_save_callee_saves (saved_regs_offset, R0_REGNUM, R30_REGNUM,
@@ -10248,7 +10258,7 @@ aarch64_expand_epilogue (bool for_sibcall)
       || cfun->calls_alloca
       || crtl->calls_eh_return)
     {
-      emit_insn (gen_stack_tie (stack_pointer_rtx, stack_pointer_rtx));
+      aarch64_emit_stack_tie (stack_pointer_rtx);
       need_barrier_p = false;
     }

@@ -10287,7 +10297,7 @@ aarch64_expand_epilogue (bool for_sibcall)
                                 callee_adjust != 0, &cfi_ops);

   if (need_barrier_p)
-    emit_insn (gen_stack_tie (stack_pointer_rtx, stack_pointer_rtx));
+    aarch64_emit_stack_tie (stack_pointer_rtx);

   if (callee_adjust != 0)
     aarch64_pop_regs (reg1, reg2, callee_adjust, &cfi_ops);
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 25f7905c6a0..01cf989641f 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -7325,10 +7325,11 @@ (define_insn "tlsdesc_small_sve_<mode>"

 (define_insn "stack_tie"
   [(set (mem:BLK (scratch))
-	(unspec:BLK [(match_operand:DI 0 "register_operand" "rk")
-		     (match_operand:DI 1 "register_operand" "rk")]
+	(unspec:BLK [(reg:DI SP_REGNUM)
+		     (match_operand:DI 0 "register_operand" "rk")
+		     (match_operand:DI 1 "const_int_operand")]
 		    UNSPEC_PRLG_STK))]
-  ""
+  "REGNO (operands[0]) == INTVAL (operands[1])"
   ""
   [(set_attr "length" "0")]
 )
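
[Editorial illustration, not part of the posted patch: a minimal sketch of
how the tie emitted for the SVE probe loop changes shape in the RTL stream.
This is hand-written for illustration rather than actual compiler output,
and the hard register numbers are assumptions based on the usual aarch64
numbering (x11 = 11, sp = 31).]

  ;; Before: both operands of the tie are plain register_operands, so
  ;; -fcprop-registers may substitute sp for x11, after which the
  ;; x11 = sp move is dead and gets deleted together with its CFA note.
  (set (mem:BLK (scratch))
       (unspec:BLK [(reg:DI 11 x11)
                    (reg:DI 31 sp)] UNSPEC_PRLG_STK))

  ;; After: sp is hard-coded in the pattern and the const_int records the
  ;; register number that operand 0 must keep.  The insn condition
  ;; REGNO (operands[0]) == INTVAL (operands[1]) makes any substitution of
  ;; another register for x11 fail to re-recognize, so the x11 = sp move
  ;; (and its CFA note) stays live.
  (set (mem:BLK (scratch))
       (unspec:BLK [(reg:DI 31 sp)
                    (reg:DI 11 x11)
                    (const_int 11)] UNSPEC_PRLG_STK))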