From patchwork Wed Apr 26 17:01:29 2023
X-Patchwork-Submitter: Patrick O'Neill
X-Patchwork-Id: 87900
From: Patrick O'Neill
To: jeffreyalaw@gmail.com, gcc-patches@gcc.gnu.org
Cc: palmer@rivosinc.com, kito.cheng@gmail.com, david.abd@gmail.com, schwab@linux-m68k.org, Patrick O'Neill
Subject: [committed] RISCV: Inline subword atomic ops
Date: Wed, 26 Apr 2023 10:01:29 -0700
Message-Id: <20230426170129.1076929-1-patrick@rivosinc.com>
In-Reply-To: <01c38548-7c08-a975-e9fb-a7d2d622168b@gmail.com>
References: <01c38548-7c08-a975-e9fb-a7d2d622168b@gmail.com>

Committed - I had to reformat the changelog so it would push, and I
resolved a trivial merge conflict in riscv.opt.

---
RISC-V has no support for subword atomic operations, so GCC currently
generates libatomic library calls for them.  This patch changes the
default behavior to inline subword atomic sequences (using the same
logic as the existing library call).  The behavior can be selected with
the -minline-atomics and -mno-inline-atomics command-line flags.

gcc/libgcc/config/riscv/atomic.c has the same logic implemented in asm.
It will need to stay for backwards compatibility and for the
-mno-inline-atomics flag.

2023-04-18  Patrick O'Neill

gcc/ChangeLog:

	PR target/104338
	* config/riscv/riscv-protos.h: Add helper function stubs.
	* config/riscv/riscv.cc: Add helper functions for subword masking.
	* config/riscv/riscv.opt: Add command-line flag.
	* config/riscv/sync.md: Add masking logic and inline asm for
	fetch_and_op, fetch_and_nand, CAS, and exchange ops.
	* doc/invoke.texi: Add blurb regarding command-line flag.

libgcc/ChangeLog:

	PR target/104338
	* config/riscv/atomic.c: Add reference to duplicate logic.

gcc/testsuite/ChangeLog:

	PR target/104338
	* gcc.target/riscv/inline-atomics-1.c: New test.
	* gcc.target/riscv/inline-atomics-2.c: New test.
	* gcc.target/riscv/inline-atomics-3.c: New test.
	* gcc.target/riscv/inline-atomics-4.c: New test.
	* gcc.target/riscv/inline-atomics-5.c: New test.
	* gcc.target/riscv/inline-atomics-6.c: New test.
	* gcc.target/riscv/inline-atomics-7.c: New test.
	* gcc.target/riscv/inline-atomics-8.c: New test.
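As a stand-alone illustration of the approach (not part of the patch): the
same aligned-word/shift/mask arithmetic that riscv_subword_address emits can
be written in plain C around a word-wide compare-and-swap loop.  The function
name below is made up for the example, and a little-endian target (as on
RISC-V) is assumed:

/* Illustrative sketch only: emulate a 1-byte __atomic_fetch_add with a
   word-sized CAS loop, mirroring the mask/shift logic the patch emits.  */
#include <stdint.h>

static uint8_t
emulated_fetch_add_u8 (uint8_t *p, uint8_t v)
{
  uint32_t *wp = (uint32_t *) ((uintptr_t) p & ~(uintptr_t) 3); /* addr & -4 */
  unsigned shift = ((uintptr_t) p & 3) * 8;	/* byte offset in bits */
  uint32_t mask = 0xffu << shift;		/* selects the subword */
  uint32_t old = __atomic_load_n (wp, __ATOMIC_RELAXED);
  uint32_t desired;

  do
    {
      /* Do the op on the whole word, then splice the updated byte back
	 into the untouched bytes, as the generated loop does with
	 mask/not_mask.  */
      uint32_t sum = old + ((uint32_t) v << shift);
      desired = (old & ~mask) | (sum & mask);
    }
  while (!__atomic_compare_exchange_n (wp, &old, desired, 0,
				       __ATOMIC_SEQ_CST, __ATOMIC_RELAXED));

  return (uint8_t) (old >> shift);		/* value before the op */
}

The generated code in the patch uses an LR/SC retry loop rather than the CAS
builtin, but the mask-and-splice step is the same.  With -mno-inline-atomics
the compiler keeps emitting the out-of-line calls (e.g.
__sync_fetch_and_add_1), which the inline-atomics-1.c and inline-atomics-2.c
tests below verify.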
Signed-off-by: Patrick O'Neill
Signed-off-by: Palmer Dabbelt
---
 gcc/config/riscv/riscv-protos.h               |   2 +
 gcc/config/riscv/riscv.cc                     |  49 ++
 gcc/config/riscv/riscv.opt                    |   4 +
 gcc/config/riscv/sync.md                      | 301 +++
 gcc/doc/invoke.texi                           |  10 +-
 .../gcc.target/riscv/inline-atomics-1.c       |  18 +
 .../gcc.target/riscv/inline-atomics-2.c       |   9 +
 .../gcc.target/riscv/inline-atomics-3.c       | 569 ++++++++++++++++++
 .../gcc.target/riscv/inline-atomics-4.c       | 566 +++++++++++++++++
 .../gcc.target/riscv/inline-atomics-5.c       |  87 +++
 .../gcc.target/riscv/inline-atomics-6.c       |  87 +++
 .../gcc.target/riscv/inline-atomics-7.c       |  69 +++
 .../gcc.target/riscv/inline-atomics-8.c       |  69 +++
 libgcc/config/riscv/atomic.c                  |   2 +
 14 files changed, 1841 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-8.c

--
2.34.1

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 607ff6ea697..f87661bde2c 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -79,6 +79,8 @@ extern void riscv_reinit (void);
 extern poly_uint64 riscv_regmode_natural_size (machine_mode);
 extern bool riscv_v_ext_vector_mode_p (machine_mode);
 extern bool riscv_shamt_matches_mask_p (int, HOST_WIDE_INT);
+extern void riscv_subword_address (rtx, rtx *, rtx *, rtx *, rtx *);
+extern void riscv_lshift_subword (machine_mode, rtx, rtx, rtx *);

 /* Routines implemented in riscv-c.cc.  */
 void riscv_cpu_cpp_builtins (cpp_reader *);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index a2d2dd0bb67..0f890469d7a 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -7161,6 +7161,55 @@ riscv_zero_call_used_regs (HARD_REG_SET need_zeroed_hardregs)
				     & ~zeroed_hardregs);
 }

+/* Given memory reference MEM, expand code to compute the aligned
+   memory address, shift and mask values and store them into
+   *ALIGNED_MEM, *SHIFT, *MASK and *NOT_MASK.  */
+
+void
+riscv_subword_address (rtx mem, rtx *aligned_mem, rtx *shift, rtx *mask,
+		       rtx *not_mask)
+{
+  /* Align the memory address to a word.  */
+  rtx addr = force_reg (Pmode, XEXP (mem, 0));
+
+  rtx addr_mask = gen_int_mode (-4, Pmode);
+
+  rtx aligned_addr = gen_reg_rtx (Pmode);
+  emit_move_insn (aligned_addr, gen_rtx_AND (Pmode, addr, addr_mask));
+
+  *aligned_mem = change_address (mem, SImode, aligned_addr);
+
+  /* Calculate the shift amount.  */
+  emit_move_insn (*shift, gen_rtx_AND (SImode, gen_lowpart (SImode, addr),
+				       gen_int_mode (3, SImode)));
+  emit_move_insn (*shift, gen_rtx_ASHIFT (SImode, *shift,
+					  gen_int_mode (3, SImode)));
+
+  /* Calculate the mask.  */
+  int unshifted_mask = GET_MODE_MASK (GET_MODE (mem));
+
+  emit_move_insn (*mask, gen_int_mode (unshifted_mask, SImode));
+
+  emit_move_insn (*mask, gen_rtx_ASHIFT (SImode, *mask,
+					 gen_lowpart (QImode, *shift)));
+
+  emit_move_insn (*not_mask, gen_rtx_NOT (SImode, *mask));
+}
+
+/* Leftshift a subword within an SImode register.  */
+
+void
+riscv_lshift_subword (machine_mode mode, rtx value, rtx shift,
+		      rtx *shifted_value)
+{
+  rtx value_reg = gen_reg_rtx (SImode);
+  emit_move_insn (value_reg, simplify_gen_subreg (SImode, value,
+						  mode, 0));
+
+  emit_move_insn (*shifted_value, gen_rtx_ASHIFT (SImode, value_reg,
+						  gen_lowpart (QImode, shift)));
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
diff --git a/gcc/config/riscv/riscv.opt b/gcc/config/riscv/riscv.opt
index ef1bdfcfe28..63d4710cb15 100644
--- a/gcc/config/riscv/riscv.opt
+++ b/gcc/config/riscv/riscv.opt
@@ -255,6 +255,10 @@ misa-spec=
 Target RejectNegative Joined Enum(isa_spec_class) Var(riscv_isa_spec) Init(TARGET_DEFAULT_ISA_SPEC)
 Set the version of RISC-V ISA spec.

+minline-atomics
+Target Var(TARGET_INLINE_SUBWORD_ATOMIC) Init(1)
+Always inline subword atomic operations.
+
 Enum
 Name(riscv_autovec_preference) Type(enum riscv_autovec_preference_enum)
 The RISC-V auto-vectorization preference:
diff --git a/gcc/config/riscv/sync.md b/gcc/config/riscv/sync.md
index c932ef87b9d..83be6431cb6 100644
--- a/gcc/config/riscv/sync.md
+++ b/gcc/config/riscv/sync.md
@@ -21,8 +21,11 @@
 (define_c_enum "unspec" [
   UNSPEC_COMPARE_AND_SWAP
+  UNSPEC_COMPARE_AND_SWAP_SUBWORD
   UNSPEC_SYNC_OLD_OP
+  UNSPEC_SYNC_OLD_OP_SUBWORD
   UNSPEC_SYNC_EXCHANGE
+  UNSPEC_SYNC_EXCHANGE_SUBWORD
   UNSPEC_ATOMIC_STORE
   UNSPEC_MEMORY_BARRIER
 ])
@@ -91,6 +94,135 @@
   [(set_attr "type" "atomic")
    (set (attr "length") (const_int 8))])

+(define_insn "subword_atomic_fetch_strong_<atomic_optab>"
+  [(set (match_operand:SI 0 "register_operand" "=&r")		   ;; old value at mem
+	(match_operand:SI 1 "memory_operand" "+A"))		   ;; mem location
+   (set (match_dup 1)
+	(unspec_volatile:SI
+	  [(any_atomic:SI (match_dup 1)
+		     (match_operand:SI 2 "register_operand" "rI")) ;; value for op
+	   (match_operand:SI 3 "register_operand" "rI")]	   ;; mask
+	 UNSPEC_SYNC_OLD_OP_SUBWORD))
+    (match_operand:SI 4 "register_operand" "rI")		   ;; not_mask
+    (clobber (match_scratch:SI 5 "=&r"))			   ;; tmp_1
+    (clobber (match_scratch:SI 6 "=&r"))]			   ;; tmp_2
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+  {
+    return "1:\;"
+	   "lr.w.aq\t%0, %1\;"
+	   "<insn>\t%5, %0, %2\;"
+	   "and\t%5, %5, %3\;"
+	   "and\t%6, %0, %4\;"
+	   "or\t%6, %6, %5\;"
+	   "sc.w.rl\t%5, %6, %1\;"
+	   "bnez\t%5, 1b";
+  }
+  [(set (attr "length") (const_int 28))])
+
+(define_expand "atomic_fetch_nand<mode>"
+  [(match_operand:SHORT 0 "register_operand")			      ;; old value at mem
+   (not:SHORT (and:SHORT (match_operand:SHORT 1 "memory_operand")     ;; mem location
+			 (match_operand:SHORT 2 "reg_or_0_operand"))) ;; value for op
+   (match_operand:SI 3 "const_int_operand")]			      ;; model
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+{
+  /* We have no QImode/HImode atomics, so form a mask, then use
+     subword_atomic_fetch_strong_nand to implement a LR/SC version of the
+     operation.  */
+
+  /* Logic duplicated in gcc/libgcc/config/riscv/atomic.c for use when inlining
+     is disabled */
+
+  rtx old = gen_reg_rtx (SImode);
+  rtx mem = operands[1];
+  rtx value = operands[2];
+  rtx aligned_mem = gen_reg_rtx (SImode);
+  rtx shift = gen_reg_rtx (SImode);
+  rtx mask = gen_reg_rtx (SImode);
+  rtx not_mask = gen_reg_rtx (SImode);
+
+  riscv_subword_address (mem, &aligned_mem, &shift, &mask, &not_mask);
+
+  rtx shifted_value = gen_reg_rtx (SImode);
+  riscv_lshift_subword (<MODE>mode, value, shift, &shifted_value);
+
+  emit_insn (gen_subword_atomic_fetch_strong_nand (old, aligned_mem,
+						   shifted_value,
+						   mask, not_mask));
+
+  emit_move_insn (old, gen_rtx_ASHIFTRT (SImode, old,
+					 gen_lowpart (QImode, shift)));
+
+  emit_move_insn (operands[0], gen_lowpart (<MODE>mode, old));
+
+  DONE;
+})
+
+(define_insn "subword_atomic_fetch_strong_nand"
+  [(set (match_operand:SI 0 "register_operand" "=&r")		      ;; old value at mem
+	(match_operand:SI 1 "memory_operand" "+A"))		      ;; mem location
+   (set (match_dup 1)
+	(unspec_volatile:SI
+	  [(not:SI (and:SI (match_dup 1)
+			   (match_operand:SI 2 "register_operand" "rI"))) ;; value for op
+	   (match_operand:SI 3 "register_operand" "rI")]	      ;; mask
+	 UNSPEC_SYNC_OLD_OP_SUBWORD))
+    (match_operand:SI 4 "register_operand" "rI")		      ;; not_mask
+    (clobber (match_scratch:SI 5 "=&r"))			      ;; tmp_1
+    (clobber (match_scratch:SI 6 "=&r"))]			      ;; tmp_2
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+  {
+    return "1:\;"
+	   "lr.w.aq\t%0, %1\;"
+	   "and\t%5, %0, %2\;"
+	   "not\t%5, %5\;"
+	   "and\t%5, %5, %3\;"
+	   "and\t%6, %0, %4\;"
+	   "or\t%6, %6, %5\;"
+	   "sc.w.rl\t%5, %6, %1\;"
+	   "bnez\t%5, 1b";
+  }
+  [(set (attr "length") (const_int 32))])
+
+(define_expand "atomic_fetch_<atomic_optab><mode>"
+  [(match_operand:SHORT 0 "register_operand")			 ;; old value at mem
+   (any_atomic:SHORT (match_operand:SHORT 1 "memory_operand")	 ;; mem location
+		     (match_operand:SHORT 2 "reg_or_0_operand")) ;; value for op
+   (match_operand:SI 3 "const_int_operand")]			 ;; model
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+{
+  /* We have no QImode/HImode atomics, so form a mask, then use
+     subword_atomic_fetch_strong_<atomic_optab> to implement a LR/SC version
+     of the operation.  */
+
+  /* Logic duplicated in gcc/libgcc/config/riscv/atomic.c for use when inlining
+     is disabled */
+
+  rtx old = gen_reg_rtx (SImode);
+  rtx mem = operands[1];
+  rtx value = operands[2];
+  rtx aligned_mem = gen_reg_rtx (SImode);
+  rtx shift = gen_reg_rtx (SImode);
+  rtx mask = gen_reg_rtx (SImode);
+  rtx not_mask = gen_reg_rtx (SImode);
+
+  riscv_subword_address (mem, &aligned_mem, &shift, &mask, &not_mask);
+
+  rtx shifted_value = gen_reg_rtx (SImode);
+  riscv_lshift_subword (<MODE>mode, value, shift, &shifted_value);
+
+  emit_insn (gen_subword_atomic_fetch_strong_<atomic_optab> (old, aligned_mem,
+							      shifted_value,
+							      mask, not_mask));
+
+  emit_move_insn (old, gen_rtx_ASHIFTRT (SImode, old,
+					 gen_lowpart (QImode, shift)));
+
+  emit_move_insn (operands[0], gen_lowpart (<MODE>mode, old));
+
+  DONE;
+})
+
 (define_insn "atomic_exchange<mode>"
   [(set (match_operand:GPR 0 "register_operand" "=&r")
	(unspec_volatile:GPR
@@ -104,6 +236,56 @@
   [(set_attr "type" "atomic")
    (set (attr "length") (const_int 8))])

+(define_expand "atomic_exchange<mode>"
+  [(match_operand:SHORT 0 "register_operand")	;; old value at mem
+   (match_operand:SHORT 1 "memory_operand")	;; mem location
+   (match_operand:SHORT 2 "register_operand")	;; value
+   (match_operand:SI 3 "const_int_operand")]	;; model
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+{
+  rtx old = gen_reg_rtx (SImode);
+  rtx mem = operands[1];
+  rtx value = operands[2];
+  rtx aligned_mem = gen_reg_rtx (SImode);
+  rtx shift = gen_reg_rtx (SImode);
+  rtx mask = gen_reg_rtx (SImode);
+  rtx not_mask = gen_reg_rtx (SImode);
+
+  riscv_subword_address (mem, &aligned_mem, &shift, &mask, &not_mask);
+
+  rtx shifted_value = gen_reg_rtx (SImode);
+  riscv_lshift_subword (<MODE>mode, value, shift, &shifted_value);
+
+  emit_insn (gen_subword_atomic_exchange_strong (old, aligned_mem,
+						 shifted_value, not_mask));
+
+  emit_move_insn (old, gen_rtx_ASHIFTRT (SImode, old,
+					 gen_lowpart (QImode, shift)));
+
+  emit_move_insn (operands[0], gen_lowpart (<MODE>mode, old));
+  DONE;
+})
+
+(define_insn "subword_atomic_exchange_strong"
+  [(set (match_operand:SI 0 "register_operand" "=&r")	 ;; old value at mem
+	(match_operand:SI 1 "memory_operand" "+A"))	 ;; mem location
+   (set (match_dup 1)
+	(unspec_volatile:SI
+	  [(match_operand:SI 2 "reg_or_0_operand" "rI")	 ;; value
+	   (match_operand:SI 3 "reg_or_0_operand" "rI")] ;; not_mask
+	 UNSPEC_SYNC_EXCHANGE_SUBWORD))
+   (clobber (match_scratch:SI 4 "=&r"))]		 ;; tmp_1
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+  {
+    return "1:\;"
+	   "lr.w.aq\t%0, %1\;"
+	   "and\t%4, %0, %3\;"
+	   "or\t%4, %4, %2\;"
+	   "sc.w.rl\t%4, %4, %1\;"
+	   "bnez\t%4, 1b";
+  }
+  [(set (attr "length") (const_int 20))])
+
 (define_insn "atomic_cas_value_strong<mode>"
   [(set (match_operand:GPR 0 "register_operand" "=&r")
	(match_operand:GPR 1 "memory_operand" "+A")
@@ -153,6 +335,125 @@
   DONE;
 })

+(define_expand "atomic_compare_and_swap<mode>"
+  [(match_operand:SI 0 "register_operand")	;; bool output
+   (match_operand:SHORT 1 "register_operand")	;; val output
+   (match_operand:SHORT 2 "memory_operand")	;; memory
+   (match_operand:SHORT 3 "reg_or_0_operand")	;; expected value
+   (match_operand:SHORT 4 "reg_or_0_operand")	;; desired value
+   (match_operand:SI 5 "const_int_operand")	;; is_weak
+   (match_operand:SI 6 "const_int_operand")	;; mod_s
+   (match_operand:SI 7 "const_int_operand")]	;; mod_f
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+{
+  emit_insn (gen_atomic_cas_value_strong<mode> (operands[1], operands[2],
+						operands[3], operands[4],
+						operands[6], operands[7]));
+
+  rtx val = gen_reg_rtx (SImode);
+  if (operands[1] != const0_rtx)
+    emit_move_insn (val, gen_rtx_SIGN_EXTEND (SImode, operands[1]));
+  else
+    emit_move_insn (val, const0_rtx);
+
+  rtx exp = gen_reg_rtx (SImode);
+  if (operands[3] != const0_rtx)
+    emit_move_insn (exp, gen_rtx_SIGN_EXTEND (SImode, operands[3]));
+  else
+    emit_move_insn (exp, const0_rtx);
+
+  rtx compare = val;
+  if (exp != const0_rtx)
+    {
+      rtx difference = gen_rtx_MINUS (SImode, val, exp);
+      compare = gen_reg_rtx (SImode);
+      emit_move_insn (compare, difference);
+    }
+
+  if (word_mode != SImode)
+    {
+      rtx reg = gen_reg_rtx (word_mode);
+      emit_move_insn (reg, gen_rtx_SIGN_EXTEND (word_mode, compare));
+      compare = reg;
+    }
+
+  emit_move_insn (operands[0], gen_rtx_EQ (SImode, compare, const0_rtx));
+  DONE;
+})
+
+(define_expand "atomic_cas_value_strong<mode>"
+  [(match_operand:SHORT 0 "register_operand")	;; val output
+   (match_operand:SHORT 1 "memory_operand")	;; memory
+   (match_operand:SHORT 2 "reg_or_0_operand")	;; expected value
+   (match_operand:SHORT 3 "reg_or_0_operand")	;; desired value
+   (match_operand:SI 4 "const_int_operand")	;; mod_s
+   (match_operand:SI 5 "const_int_operand")	;; mod_f
+   (match_scratch:SHORT 6)]
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+{
+  /* We have no QImode/HImode atomics, so form a mask, then use
+     subword_atomic_cas_strong to implement a LR/SC version of the
+     operation.  */
+
+  /* Logic duplicated in gcc/libgcc/config/riscv/atomic.c for use when inlining
+     is disabled */
+
+  rtx old = gen_reg_rtx (SImode);
+  rtx mem = operands[1];
+  rtx aligned_mem = gen_reg_rtx (SImode);
+  rtx shift = gen_reg_rtx (SImode);
+  rtx mask = gen_reg_rtx (SImode);
+  rtx not_mask = gen_reg_rtx (SImode);
+
+  riscv_subword_address (mem, &aligned_mem, &shift, &mask, &not_mask);
+
+  rtx o = operands[2];
+  rtx n = operands[3];
+  rtx shifted_o = gen_reg_rtx (SImode);
+  rtx shifted_n = gen_reg_rtx (SImode);
+
+  riscv_lshift_subword (<MODE>mode, o, shift, &shifted_o);
+  riscv_lshift_subword (<MODE>mode, n, shift, &shifted_n);
+
+  emit_move_insn (shifted_o, gen_rtx_AND (SImode, shifted_o, mask));
+  emit_move_insn (shifted_n, gen_rtx_AND (SImode, shifted_n, mask));
+
+  emit_insn (gen_subword_atomic_cas_strong (old, aligned_mem,
+					    shifted_o, shifted_n,
+					    mask, not_mask));
+
+  emit_move_insn (old, gen_rtx_ASHIFTRT (SImode, old,
+					 gen_lowpart (QImode, shift)));
+
+  emit_move_insn (operands[0], gen_lowpart (<MODE>mode, old));
+
+  DONE;
+})
+
+(define_insn "subword_atomic_cas_strong"
+  [(set (match_operand:SI 0 "register_operand" "=&r")		       ;; old value at mem
+	(match_operand:SI 1 "memory_operand" "+A"))		       ;; mem location
+   (set (match_dup 1)
+	(unspec_volatile:SI [(match_operand:SI 2 "reg_or_0_operand" "rJ") ;; expected value
+			     (match_operand:SI 3 "reg_or_0_operand" "rJ")] ;; desired value
+	 UNSPEC_COMPARE_AND_SWAP_SUBWORD))
+    (match_operand:SI 4 "register_operand" "rI")		       ;; mask
+    (match_operand:SI 5 "register_operand" "rI")		       ;; not_mask
+    (clobber (match_scratch:SI 6 "=&r"))]			       ;; tmp_1
+  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
+  {
+    return "1:\;"
+	   "lr.w.aq\t%0, %1\;"
+	   "and\t%6, %0, %4\;"
+	   "bne\t%6, %z2, 1f\;"
+	   "and\t%6, %0, %5\;"
+	   "or\t%6, %6, %3\;"
+	   "sc.w.rl\t%6, %6, %1\;"
+	   "bnez\t%6, 1b\;"
+	   "1:";
+  }
+  [(set (attr "length") (const_int 28))])
+
 (define_expand "atomic_test_and_set"
   [(match_operand:QI 0 "register_operand" "")     ;; bool output
    (match_operand:QI 1 "memory_operand" "+A")     ;; memory
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index e5ee2d536fc..2f40c58b21c 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -1227,7 +1227,8 @@ See RS/6000 and PowerPC Options.
 -mbig-endian  -mlittle-endian
 -mstack-protector-guard=@var{guard}  -mstack-protector-guard-reg=@var{reg}
 -mstack-protector-guard-offset=@var{offset}
--mcsr-check -mno-csr-check}
+-mcsr-check -mno-csr-check
+-minline-atomics  -mno-inline-atomics}

 @emph{RL78 Options}
 @gccoptlist{-msim -mmul=none -mmul=g13 -mmul=g14 -mallregs
@@ -29024,6 +29025,13 @@ Do or don't use smaller but slower prologue and epilogue code that uses
 library function calls.  The default is to use fast inline prologues and
 epilogues.

+@opindex minline-atomics
+@item -minline-atomics
+@itemx -mno-inline-atomics
+Do or don't use smaller but slower subword atomic emulation code that uses
+libatomic function calls.  The default is to use fast inline subword atomics
+that do not require libatomic.
+
 @opindex mshorten-memrefs
 @item -mshorten-memrefs
 @itemx -mno-shorten-memrefs
diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-1.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-1.c
new file mode 100644
index 00000000000..5c5623d9b2f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-1.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-mno-inline-atomics" } */
+/* { dg-message "note: '__sync_fetch_and_nand' changed semantics in GCC 4.4" "fetch_and_nand" { target *-*-* } 0 } */
+/* { dg-final { scan-assembler "\tcall\t__sync_fetch_and_add_1" } } */
+/* { dg-final { scan-assembler "\tcall\t__sync_fetch_and_nand_1" } } */
+/* { dg-final { scan-assembler "\tcall\t__sync_bool_compare_and_swap_1" } } */
+
+char foo;
+char bar;
+char baz;
+
+int
+main ()
+{
+  __sync_fetch_and_add(&foo, 1);
+  __sync_fetch_and_nand(&bar, 1);
+  __sync_bool_compare_and_swap (&baz, 1, 2);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-2.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-2.c
new file mode 100644
index 00000000000..01b43908692
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-2.c
@@ -0,0 +1,9 @@
+/* { dg-do compile } */
+/* Verify that subword atomics do not generate calls.  */
+/* { dg-options "-minline-atomics" } */
+/* { dg-message "note: '__sync_fetch_and_nand' changed semantics in GCC 4.4" "fetch_and_nand" { target *-*-* } 0 } */
+/* { dg-final { scan-assembler-not "\tcall\t__sync_fetch_and_add_1" } } */
+/* { dg-final { scan-assembler-not "\tcall\t__sync_fetch_and_nand_1" } } */
+/* { dg-final { scan-assembler-not "\tcall\t__sync_bool_compare_and_swap_1" } } */
+
+#include "inline-atomics-1.c"
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-3.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-3.c
new file mode 100644
index 00000000000..709f3734377
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-3.c
@@ -0,0 +1,569 @@
+/* Check all char alignments.  */
+/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-op-1.c */
+/* Test __atomic routines for existence and proper execution on 1 byte
+   values with each valid memory model.  */
+/* { dg-do run } */
+/* { dg-options "-minline-atomics -Wno-address-of-packed-member" } */
+
+/* Test the execution of the __atomic_*OP builtin routines for a char.  */
+
+extern void abort(void);
+
+char count, res;
+const char init = ~0;
+
+struct A
+{
+  char a;
+  char b;
+  char c;
+  char d;
+} __attribute__ ((packed)) A;
+
+/* The fetch_op routines return the original value before the operation.
*/ + +void +test_fetch_add (char* v) +{ + *v = 0; + count = 1; + + if (__atomic_fetch_add (v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub (char* v) +{ + *v = res = 20; + count = 0; + + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +void +test_fetch_and (char* v) +{ + *v = init; + + if (__atomic_fetch_and (v, 0, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_fetch_and (v, init, __ATOMIC_CONSUME) != 0) + abort (); + + if (__atomic_fetch_and (v, 0, __ATOMIC_ACQUIRE) != 0) + abort (); + + *v = ~*v; + if (__atomic_fetch_and (v, init, __ATOMIC_RELEASE) != init) + abort (); + + if (__atomic_fetch_and (v, 0, __ATOMIC_ACQ_REL) != init) + abort (); + + if (__atomic_fetch_and (v, 0, __ATOMIC_SEQ_CST) != 0) + abort (); +} + +void +test_fetch_nand (char* v) +{ + *v = init; + + if (__atomic_fetch_nand (v, 0, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_fetch_nand (v, init, __ATOMIC_CONSUME) != init) + abort (); + + if (__atomic_fetch_nand (v, 0, __ATOMIC_ACQUIRE) != 0 ) + abort (); + + if (__atomic_fetch_nand (v, init, __ATOMIC_RELEASE) != init) + abort (); + + if (__atomic_fetch_nand (v, init, __ATOMIC_ACQ_REL) != 0) + abort (); + + if (__atomic_fetch_nand (v, 0, __ATOMIC_SEQ_CST) != init) + abort (); +} + +void +test_fetch_xor (char* v) +{ + *v = init; + count = 0; + + if (__atomic_fetch_xor (v, count, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_fetch_xor (v, ~count, __ATOMIC_CONSUME) != init) + abort (); + + if (__atomic_fetch_xor (v, 0, __ATOMIC_ACQUIRE) != 0) + abort (); + + if (__atomic_fetch_xor (v, ~count, __ATOMIC_RELEASE) != 0) + abort (); + + if (__atomic_fetch_xor (v, 0, __ATOMIC_ACQ_REL) != init) + abort (); + + if (__atomic_fetch_xor (v, ~count, __ATOMIC_SEQ_CST) != init) + abort (); +} + +void +test_fetch_or (char* v) +{ + *v = 0; + count = 1; + + if (__atomic_fetch_or (v, count, __ATOMIC_RELAXED) != 0) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, 2, __ATOMIC_CONSUME) != 1) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, 8, __ATOMIC_RELEASE) != 7) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, count, __ATOMIC_ACQ_REL) != 15) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, count, __ATOMIC_SEQ_CST) != 31) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch (char* v) +{ + *v = 0; + count = 1; + + if (__atomic_add_fetch (v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch (char* v) +{ + *v = res = 20; + count = 0; + + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +void +test_and_fetch (char* v) +{ + *v = init; + + if (__atomic_and_fetch (v, 0, __ATOMIC_RELAXED) != 0) + abort (); + + *v = init; + if (__atomic_and_fetch (v, init, __ATOMIC_CONSUME) != init) + abort (); + + if (__atomic_and_fetch (v, 0, __ATOMIC_ACQUIRE) != 0) + abort (); + + *v = ~*v; + if (__atomic_and_fetch (v, init, __ATOMIC_RELEASE) != init) + abort (); + + if (__atomic_and_fetch (v, 0, __ATOMIC_ACQ_REL) != 0) + abort (); + + *v = ~*v; + if (__atomic_and_fetch (v, 0, __ATOMIC_SEQ_CST) != 0) + abort (); +} + +void +test_nand_fetch (char* v) +{ + *v = init; + + if (__atomic_nand_fetch (v, 0, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_nand_fetch (v, init, __ATOMIC_CONSUME) != 0) + abort (); + + if (__atomic_nand_fetch (v, 0, __ATOMIC_ACQUIRE) != init) + abort (); + + if (__atomic_nand_fetch (v, init, __ATOMIC_RELEASE) != 0) + abort (); + + if (__atomic_nand_fetch (v, init, __ATOMIC_ACQ_REL) != init) + abort (); + + if (__atomic_nand_fetch (v, 0, __ATOMIC_SEQ_CST) != init) + abort (); +} + + + +void +test_xor_fetch (char* v) +{ + *v = init; + count = 0; + + if (__atomic_xor_fetch (v, count, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_xor_fetch (v, ~count, __ATOMIC_CONSUME) != 0) + abort (); + + if (__atomic_xor_fetch (v, 0, __ATOMIC_ACQUIRE) != 0) + abort (); + + if (__atomic_xor_fetch (v, ~count, __ATOMIC_RELEASE) != init) + abort (); + + if (__atomic_xor_fetch (v, 0, __ATOMIC_ACQ_REL) != init) + abort (); + + if (__atomic_xor_fetch (v, ~count, __ATOMIC_SEQ_CST) != 0) + abort (); +} + +void +test_or_fetch (char* v) +{ + *v = 0; + count = 1; + + if (__atomic_or_fetch (v, count, __ATOMIC_RELAXED) != 1) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, 2, __ATOMIC_CONSUME) != 3) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, count, __ATOMIC_ACQUIRE) != 7) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, 8, __ATOMIC_RELEASE) != 15) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, count, __ATOMIC_ACQ_REL) != 31) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, count, __ATOMIC_SEQ_CST) != 63) + abort (); +} + + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add (char* v) +{ + *v = 0; + count = 1; + + __atomic_add_fetch (v, count, __ATOMIC_RELAXED); + if (*v != 1) + abort (); + + __atomic_fetch_add (v, count, __ATOMIC_CONSUME); + if (*v != 2) + abort (); + + __atomic_add_fetch (v, 1 , __ATOMIC_ACQUIRE); + if (*v != 3) + abort (); + + __atomic_fetch_add (v, 1, __ATOMIC_RELEASE); + if (*v != 4) + abort (); + + __atomic_add_fetch (v, count, __ATOMIC_ACQ_REL); + if (*v != 5) + abort (); + + __atomic_fetch_add (v, count, __ATOMIC_SEQ_CST); + if (*v != 6) + abort (); +} + + +void +test_sub (char* v) +{ + *v = res = 20; + count = 0; + + __atomic_sub_fetch (v, count + 1, __ATOMIC_RELAXED); + if (*v != --res) + abort (); + + __atomic_fetch_sub (v, count + 1, __ATOMIC_CONSUME); + if (*v != --res) + abort (); + + __atomic_sub_fetch (v, 1, __ATOMIC_ACQUIRE); + if (*v != --res) + abort (); + + __atomic_fetch_sub (v, 1, __ATOMIC_RELEASE); + if (*v != --res) + abort (); + + __atomic_sub_fetch (v, count + 1, __ATOMIC_ACQ_REL); + if (*v != --res) + abort (); + + __atomic_fetch_sub (v, count + 1, __ATOMIC_SEQ_CST); + if (*v != --res) + abort (); +} + +void +test_and (char* v) +{ + *v = init; + + __atomic_and_fetch (v, 0, __ATOMIC_RELAXED); + if (*v != 0) + abort (); + + *v = init; + __atomic_fetch_and (v, init, __ATOMIC_CONSUME); + if (*v != init) + abort (); + + __atomic_and_fetch (v, 0, __ATOMIC_ACQUIRE); + if (*v != 0) + abort (); + + *v = ~*v; + __atomic_fetch_and (v, init, __ATOMIC_RELEASE); + if (*v != init) + abort (); + + __atomic_and_fetch (v, 0, __ATOMIC_ACQ_REL); + if (*v != 0) + abort (); + + *v = ~*v; + __atomic_fetch_and (v, 0, __ATOMIC_SEQ_CST); + if (*v != 0) + abort (); +} + +void +test_nand (char* v) +{ + *v = init; + + __atomic_fetch_nand (v, 0, __ATOMIC_RELAXED); + if (*v != init) + abort (); + + __atomic_fetch_nand (v, init, __ATOMIC_CONSUME); + if (*v != 0) + abort (); + + __atomic_nand_fetch (v, 0, __ATOMIC_ACQUIRE); + if (*v != init) + abort (); + + __atomic_nand_fetch (v, init, __ATOMIC_RELEASE); + if (*v != 0) + abort (); + + __atomic_fetch_nand (v, init, __ATOMIC_ACQ_REL); + if (*v != init) + abort (); + + __atomic_nand_fetch (v, 0, __ATOMIC_SEQ_CST); + if (*v != init) + abort (); +} + + + +void +test_xor (char* v) +{ + *v = init; + count = 0; + + __atomic_xor_fetch (v, count, __ATOMIC_RELAXED); + if (*v != init) + abort (); + + __atomic_fetch_xor (v, ~count, __ATOMIC_CONSUME); + if (*v != 0) + abort (); + + __atomic_xor_fetch (v, 0, __ATOMIC_ACQUIRE); + if (*v != 0) + abort (); + + __atomic_fetch_xor (v, ~count, __ATOMIC_RELEASE); + if (*v != init) + abort (); + + __atomic_fetch_xor (v, 0, __ATOMIC_ACQ_REL); + if (*v != init) + abort (); + + __atomic_xor_fetch (v, ~count, __ATOMIC_SEQ_CST); + if (*v != 0) + abort (); +} + +void +test_or (char* v) +{ + *v = 0; + count = 1; + + __atomic_or_fetch (v, count, __ATOMIC_RELAXED); + if (*v != 1) + abort (); + + count *= 2; + __atomic_fetch_or (v, count, __ATOMIC_CONSUME); + if (*v != 3) + abort (); + + count *= 2; + __atomic_or_fetch (v, 4, __ATOMIC_ACQUIRE); + if (*v != 7) + abort (); + + count *= 2; + __atomic_fetch_or (v, 8, __ATOMIC_RELEASE); + if (*v != 15) + abort (); + + count *= 2; + __atomic_or_fetch (v, count, __ATOMIC_ACQ_REL); + if (*v != 31) + abort (); + + count *= 2; + __atomic_fetch_or (v, count, __ATOMIC_SEQ_CST); + if (*v != 63) + abort (); +} + +int +main () +{ + char* V[] = {&A.a, &A.b, &A.c, &A.d}; + + for (int i = 0; i < 4; i++) { + test_fetch_add (V[i]); + test_fetch_sub (V[i]); + test_fetch_and (V[i]); + test_fetch_nand (V[i]); + test_fetch_xor 
(V[i]); + test_fetch_or (V[i]); + + test_add_fetch (V[i]); + test_sub_fetch (V[i]); + test_and_fetch (V[i]); + test_nand_fetch (V[i]); + test_xor_fetch (V[i]); + test_or_fetch (V[i]); + + test_add (V[i]); + test_sub (V[i]); + test_and (V[i]); + test_nand (V[i]); + test_xor (V[i]); + test_or (V[i]); + } + + return 0; +} diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-4.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-4.c new file mode 100644 index 00000000000..eecfaae5cc6 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-4.c @@ -0,0 +1,566 @@ +/* Check all short alignments. */ +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-op-2.c */ +/* Test __atomic routines for existence and proper execution on 2 byte + values with each valid memory model. */ +/* { dg-do run } */ +/* { dg-options "-minline-atomics -Wno-address-of-packed-member" } */ + +/* Test the execution of the __atomic_*OP builtin routines for a short. */ + +extern void abort(void); + +short count, res; +const short init = ~0; + +struct A +{ + short a; + short b; +} __attribute__ ((packed)) A; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add (short* v) +{ + *v = 0; + count = 1; + + if (__atomic_fetch_add (v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub (short* v) +{ + *v = res = 20; + count = 0; + + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +void +test_fetch_and (short* v) +{ + *v = init; + + if (__atomic_fetch_and (v, 0, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_fetch_and (v, init, __ATOMIC_CONSUME) != 0) + abort (); + + if (__atomic_fetch_and (v, 0, __ATOMIC_ACQUIRE) != 0) + abort (); + + *v = ~*v; + if (__atomic_fetch_and (v, init, __ATOMIC_RELEASE) != init) + abort (); + + if (__atomic_fetch_and (v, 0, __ATOMIC_ACQ_REL) != init) + abort (); + + if (__atomic_fetch_and (v, 0, __ATOMIC_SEQ_CST) != 0) + abort (); +} + +void +test_fetch_nand (short* v) +{ + *v = init; + + if (__atomic_fetch_nand (v, 0, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_fetch_nand (v, init, __ATOMIC_CONSUME) != init) + abort (); + + if (__atomic_fetch_nand (v, 0, __ATOMIC_ACQUIRE) != 0 ) + abort (); + + if (__atomic_fetch_nand (v, init, __ATOMIC_RELEASE) != init) + abort (); + + if (__atomic_fetch_nand (v, init, __ATOMIC_ACQ_REL) != 0) + abort (); + + if (__atomic_fetch_nand (v, 0, __ATOMIC_SEQ_CST) != init) + abort (); +} + +void +test_fetch_xor (short* v) +{ + *v = init; + count = 0; + + if (__atomic_fetch_xor (v, count, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_fetch_xor (v, ~count, __ATOMIC_CONSUME) != init) + abort (); + + if (__atomic_fetch_xor (v, 0, __ATOMIC_ACQUIRE) != 0) + abort (); + + if (__atomic_fetch_xor (v, 
~count, __ATOMIC_RELEASE) != 0) + abort (); + + if (__atomic_fetch_xor (v, 0, __ATOMIC_ACQ_REL) != init) + abort (); + + if (__atomic_fetch_xor (v, ~count, __ATOMIC_SEQ_CST) != init) + abort (); +} + +void +test_fetch_or (short* v) +{ + *v = 0; + count = 1; + + if (__atomic_fetch_or (v, count, __ATOMIC_RELAXED) != 0) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, 2, __ATOMIC_CONSUME) != 1) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, 8, __ATOMIC_RELEASE) != 7) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, count, __ATOMIC_ACQ_REL) != 15) + abort (); + + count *= 2; + if (__atomic_fetch_or (v, count, __ATOMIC_SEQ_CST) != 31) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. */ + +void +test_add_fetch (short* v) +{ + *v = 0; + count = 1; + + if (__atomic_add_fetch (v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch (short* v) +{ + *v = res = 20; + count = 0; + + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +void +test_and_fetch (short* v) +{ + *v = init; + + if (__atomic_and_fetch (v, 0, __ATOMIC_RELAXED) != 0) + abort (); + + *v = init; + if (__atomic_and_fetch (v, init, __ATOMIC_CONSUME) != init) + abort (); + + if (__atomic_and_fetch (v, 0, __ATOMIC_ACQUIRE) != 0) + abort (); + + *v = ~*v; + if (__atomic_and_fetch (v, init, __ATOMIC_RELEASE) != init) + abort (); + + if (__atomic_and_fetch (v, 0, __ATOMIC_ACQ_REL) != 0) + abort (); + + *v = ~*v; + if (__atomic_and_fetch (v, 0, __ATOMIC_SEQ_CST) != 0) + abort (); +} + +void +test_nand_fetch (short* v) +{ + *v = init; + + if (__atomic_nand_fetch (v, 0, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_nand_fetch (v, init, __ATOMIC_CONSUME) != 0) + abort (); + + if (__atomic_nand_fetch (v, 0, __ATOMIC_ACQUIRE) != init) + abort (); + + if (__atomic_nand_fetch (v, init, __ATOMIC_RELEASE) != 0) + abort (); + + if (__atomic_nand_fetch (v, init, __ATOMIC_ACQ_REL) != init) + abort (); + + if (__atomic_nand_fetch (v, 0, __ATOMIC_SEQ_CST) != init) + abort (); +} + + + +void +test_xor_fetch (short* v) +{ + *v = init; + count = 0; + + if (__atomic_xor_fetch (v, count, __ATOMIC_RELAXED) != init) + abort (); + + if (__atomic_xor_fetch (v, ~count, __ATOMIC_CONSUME) != 0) + abort (); + + if (__atomic_xor_fetch (v, 0, __ATOMIC_ACQUIRE) != 0) + abort (); + + if (__atomic_xor_fetch (v, ~count, __ATOMIC_RELEASE) != init) + abort (); + + if (__atomic_xor_fetch (v, 0, __ATOMIC_ACQ_REL) != init) + abort (); + + if (__atomic_xor_fetch (v, ~count, __ATOMIC_SEQ_CST) != 0) + abort (); +} + +void +test_or_fetch (short* v) +{ + *v = 0; + count = 1; + + if (__atomic_or_fetch (v, count, __ATOMIC_RELAXED) != 1) + 
abort (); + + count *= 2; + if (__atomic_or_fetch (v, 2, __ATOMIC_CONSUME) != 3) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, count, __ATOMIC_ACQUIRE) != 7) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, 8, __ATOMIC_RELEASE) != 15) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, count, __ATOMIC_ACQ_REL) != 31) + abort (); + + count *= 2; + if (__atomic_or_fetch (v, count, __ATOMIC_SEQ_CST) != 63) + abort (); +} + + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. */ + +void +test_add (short* v) +{ + *v = 0; + count = 1; + + __atomic_add_fetch (v, count, __ATOMIC_RELAXED); + if (*v != 1) + abort (); + + __atomic_fetch_add (v, count, __ATOMIC_CONSUME); + if (*v != 2) + abort (); + + __atomic_add_fetch (v, 1 , __ATOMIC_ACQUIRE); + if (*v != 3) + abort (); + + __atomic_fetch_add (v, 1, __ATOMIC_RELEASE); + if (*v != 4) + abort (); + + __atomic_add_fetch (v, count, __ATOMIC_ACQ_REL); + if (*v != 5) + abort (); + + __atomic_fetch_add (v, count, __ATOMIC_SEQ_CST); + if (*v != 6) + abort (); +} + + +void +test_sub (short* v) +{ + *v = res = 20; + count = 0; + + __atomic_sub_fetch (v, count + 1, __ATOMIC_RELAXED); + if (*v != --res) + abort (); + + __atomic_fetch_sub (v, count + 1, __ATOMIC_CONSUME); + if (*v != --res) + abort (); + + __atomic_sub_fetch (v, 1, __ATOMIC_ACQUIRE); + if (*v != --res) + abort (); + + __atomic_fetch_sub (v, 1, __ATOMIC_RELEASE); + if (*v != --res) + abort (); + + __atomic_sub_fetch (v, count + 1, __ATOMIC_ACQ_REL); + if (*v != --res) + abort (); + + __atomic_fetch_sub (v, count + 1, __ATOMIC_SEQ_CST); + if (*v != --res) + abort (); +} + +void +test_and (short* v) +{ + *v = init; + + __atomic_and_fetch (v, 0, __ATOMIC_RELAXED); + if (*v != 0) + abort (); + + *v = init; + __atomic_fetch_and (v, init, __ATOMIC_CONSUME); + if (*v != init) + abort (); + + __atomic_and_fetch (v, 0, __ATOMIC_ACQUIRE); + if (*v != 0) + abort (); + + *v = ~*v; + __atomic_fetch_and (v, init, __ATOMIC_RELEASE); + if (*v != init) + abort (); + + __atomic_and_fetch (v, 0, __ATOMIC_ACQ_REL); + if (*v != 0) + abort (); + + *v = ~*v; + __atomic_fetch_and (v, 0, __ATOMIC_SEQ_CST); + if (*v != 0) + abort (); +} + +void +test_nand (short* v) +{ + *v = init; + + __atomic_fetch_nand (v, 0, __ATOMIC_RELAXED); + if (*v != init) + abort (); + + __atomic_fetch_nand (v, init, __ATOMIC_CONSUME); + if (*v != 0) + abort (); + + __atomic_nand_fetch (v, 0, __ATOMIC_ACQUIRE); + if (*v != init) + abort (); + + __atomic_nand_fetch (v, init, __ATOMIC_RELEASE); + if (*v != 0) + abort (); + + __atomic_fetch_nand (v, init, __ATOMIC_ACQ_REL); + if (*v != init) + abort (); + + __atomic_nand_fetch (v, 0, __ATOMIC_SEQ_CST); + if (*v != init) + abort (); +} + + + +void +test_xor (short* v) +{ + *v = init; + count = 0; + + __atomic_xor_fetch (v, count, __ATOMIC_RELAXED); + if (*v != init) + abort (); + + __atomic_fetch_xor (v, ~count, __ATOMIC_CONSUME); + if (*v != 0) + abort (); + + __atomic_xor_fetch (v, 0, __ATOMIC_ACQUIRE); + if (*v != 0) + abort (); + + __atomic_fetch_xor (v, ~count, __ATOMIC_RELEASE); + if (*v != init) + abort (); + + __atomic_fetch_xor (v, 0, __ATOMIC_ACQ_REL); + if (*v != init) + abort (); + + __atomic_xor_fetch (v, ~count, __ATOMIC_SEQ_CST); + if (*v != 0) + abort (); +} + +void +test_or (short* v) +{ + *v = 0; + count = 1; + + __atomic_or_fetch (v, count, __ATOMIC_RELAXED); + if (*v != 1) + abort (); + + count *= 2; + __atomic_fetch_or (v, count, __ATOMIC_CONSUME); + if (*v != 3) + abort (); + + count *= 2; + 
__atomic_or_fetch (v, 4, __ATOMIC_ACQUIRE); + if (*v != 7) + abort (); + + count *= 2; + __atomic_fetch_or (v, 8, __ATOMIC_RELEASE); + if (*v != 15) + abort (); + + count *= 2; + __atomic_or_fetch (v, count, __ATOMIC_ACQ_REL); + if (*v != 31) + abort (); + + count *= 2; + __atomic_fetch_or (v, count, __ATOMIC_SEQ_CST); + if (*v != 63) + abort (); +} + +int +main () { + short* V[] = {&A.a, &A.b}; + + for (int i = 0; i < 2; i++) { + test_fetch_add (V[i]); + test_fetch_sub (V[i]); + test_fetch_and (V[i]); + test_fetch_nand (V[i]); + test_fetch_xor (V[i]); + test_fetch_or (V[i]); + + test_add_fetch (V[i]); + test_sub_fetch (V[i]); + test_and_fetch (V[i]); + test_nand_fetch (V[i]); + test_xor_fetch (V[i]); + test_or_fetch (V[i]); + + test_add (V[i]); + test_sub (V[i]); + test_and (V[i]); + test_nand (V[i]); + test_xor (V[i]); + test_or (V[i]); + } + + return 0; +} diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-5.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-5.c new file mode 100644 index 00000000000..52093894a79 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-5.c @@ -0,0 +1,87 @@ +/* Test __atomic routines for existence and proper execution on 1 byte + values with each valid memory model. */ +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-compare-exchange-1.c */ +/* { dg-do run } */ +/* { dg-options "-minline-atomics" } */ + +/* Test the execution of the __atomic_compare_exchange_n builtin for a char. */ + +extern void abort(void); + +char v = 0; +char expected = 0; +char max = ~0; +char desired = ~0; +char zero = 0; + +#define STRONG 0 +#define WEAK 1 + +int +main () +{ + + if (!__atomic_compare_exchange_n (&v, &expected, max, STRONG , __ATOMIC_RELAXED, __ATOMIC_RELAXED)) + abort (); + if (expected != 0) + abort (); + + if (__atomic_compare_exchange_n (&v, &expected, 0, STRONG , __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) + abort (); + if (expected != max) + abort (); + + if (!__atomic_compare_exchange_n (&v, &expected, 0, STRONG , __ATOMIC_RELEASE, __ATOMIC_ACQUIRE)) + abort (); + if (expected != max) + abort (); + if (v != 0) + abort (); + + if (__atomic_compare_exchange_n (&v, &expected, desired, WEAK, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) + abort (); + if (expected != 0) + abort (); + + if (!__atomic_compare_exchange_n (&v, &expected, desired, STRONG , __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) + abort (); + if (expected != 0) + abort (); + if (v != max) + abort (); + + /* Now test the generic version. 
*/ + + v = 0; + + if (!__atomic_compare_exchange (&v, &expected, &max, STRONG, __ATOMIC_RELAXED, __ATOMIC_RELAXED)) + abort (); + if (expected != 0) + abort (); + + if (__atomic_compare_exchange (&v, &expected, &zero, STRONG , __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) + abort (); + if (expected != max) + abort (); + + if (!__atomic_compare_exchange (&v, &expected, &zero, STRONG , __ATOMIC_RELEASE, __ATOMIC_ACQUIRE)) + abort (); + if (expected != max) + abort (); + if (v != 0) + abort (); + + if (__atomic_compare_exchange (&v, &expected, &desired, WEAK, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) + abort (); + if (expected != 0) + abort (); + + if (!__atomic_compare_exchange (&v, &expected, &desired, STRONG , __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) + abort (); + if (expected != 0) + abort (); + if (v != max) + abort (); + + return 0; +} diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-6.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-6.c new file mode 100644 index 00000000000..8fee8c44811 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-6.c @@ -0,0 +1,87 @@ +/* Test __atomic routines for existence and proper execution on 2 byte + values with each valid memory model. */ +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-compare-exchange-2.c */ +/* { dg-do run } */ +/* { dg-options "-minline-atomics" } */ + +/* Test the execution of the __atomic_compare_exchange_n builtin for a short. */ + +extern void abort(void); + +short v = 0; +short expected = 0; +short max = ~0; +short desired = ~0; +short zero = 0; + +#define STRONG 0 +#define WEAK 1 + +int +main () +{ + + if (!__atomic_compare_exchange_n (&v, &expected, max, STRONG , __ATOMIC_RELAXED, __ATOMIC_RELAXED)) + abort (); + if (expected != 0) + abort (); + + if (__atomic_compare_exchange_n (&v, &expected, 0, STRONG , __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) + abort (); + if (expected != max) + abort (); + + if (!__atomic_compare_exchange_n (&v, &expected, 0, STRONG , __ATOMIC_RELEASE, __ATOMIC_ACQUIRE)) + abort (); + if (expected != max) + abort (); + if (v != 0) + abort (); + + if (__atomic_compare_exchange_n (&v, &expected, desired, WEAK, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) + abort (); + if (expected != 0) + abort (); + + if (!__atomic_compare_exchange_n (&v, &expected, desired, STRONG , __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) + abort (); + if (expected != 0) + abort (); + if (v != max) + abort (); + + /* Now test the generic version. 
*/ + + v = 0; + + if (!__atomic_compare_exchange (&v, &expected, &max, STRONG, __ATOMIC_RELAXED, __ATOMIC_RELAXED)) + abort (); + if (expected != 0) + abort (); + + if (__atomic_compare_exchange (&v, &expected, &zero, STRONG , __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) + abort (); + if (expected != max) + abort (); + + if (!__atomic_compare_exchange (&v, &expected, &zero, STRONG , __ATOMIC_RELEASE, __ATOMIC_ACQUIRE)) + abort (); + if (expected != max) + abort (); + if (v != 0) + abort (); + + if (__atomic_compare_exchange (&v, &expected, &desired, WEAK, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) + abort (); + if (expected != 0) + abort (); + + if (!__atomic_compare_exchange (&v, &expected, &desired, STRONG , __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) + abort (); + if (expected != 0) + abort (); + if (v != max) + abort (); + + return 0; +} diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-7.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-7.c new file mode 100644 index 00000000000..24c344c0ce3 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-7.c @@ -0,0 +1,69 @@ +/* Test __atomic routines for existence and proper execution on 1 byte + values with each valid memory model. */ +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-exchange-1.c */ +/* { dg-do run } */ +/* { dg-options "-minline-atomics" } */ + +/* Test the execution of the __atomic_exchange_n builtin for a char. */ + +extern void abort(void); + +char v, count, ret; + +int +main () +{ + v = 0; + count = 0; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_RELAXED) != count) + abort (); + count++; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_ACQUIRE) != count) + abort (); + count++; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_RELEASE) != count) + abort (); + count++; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_ACQ_REL) != count) + abort (); + count++; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_SEQ_CST) != count) + abort (); + count++; + + /* Now test the generic version. */ + + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_RELAXED); + if (ret != count - 1 || v != count) + abort (); + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_ACQUIRE); + if (ret != count - 1 || v != count) + abort (); + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_RELEASE); + if (ret != count - 1 || v != count) + abort (); + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_ACQ_REL); + if (ret != count - 1 || v != count) + abort (); + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_SEQ_CST); + if (ret != count - 1 || v != count) + abort (); + count++; + + return 0; +} diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-8.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-8.c new file mode 100644 index 00000000000..edc212df04e --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-8.c @@ -0,0 +1,69 @@ +/* Test __atomic routines for existence and proper execution on 2 byte + values with each valid memory model. */ +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-exchange-2.c */ +/* { dg-do run } */ +/* { dg-options "-minline-atomics" } */ + +/* Test the execution of the __atomic_X builtin for a short. 
*/ + +extern void abort(void); + +short v, count, ret; + +int +main () +{ + v = 0; + count = 0; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_RELAXED) != count) + abort (); + count++; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_ACQUIRE) != count) + abort (); + count++; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_RELEASE) != count) + abort (); + count++; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_ACQ_REL) != count) + abort (); + count++; + + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_SEQ_CST) != count) + abort (); + count++; + + /* Now test the generic version. */ + + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_RELAXED); + if (ret != count - 1 || v != count) + abort (); + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_ACQUIRE); + if (ret != count - 1 || v != count) + abort (); + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_RELEASE); + if (ret != count - 1 || v != count) + abort (); + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_ACQ_REL); + if (ret != count - 1 || v != count) + abort (); + count++; + + __atomic_exchange (&v, &count, &ret, __ATOMIC_SEQ_CST); + if (ret != count - 1 || v != count) + abort (); + count++; + + return 0; +} diff --git a/libgcc/config/riscv/atomic.c b/libgcc/config/riscv/atomic.c index 69f53623509..573d163ea04 100644 --- a/libgcc/config/riscv/atomic.c +++ b/libgcc/config/riscv/atomic.c @@ -30,6 +30,8 @@ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see #define INVERT "not %[tmp1], %[tmp1]\n\t" #define DONT_INVERT "" +/* Logic duplicated in gcc/gcc/config/riscv/sync.md for use when inlining is enabled */ + #define GENERATE_FETCH_AND_OP(type, size, opname, insn, invert, cop) \ type __sync_fetch_and_ ## opname ## _ ## size (type *p, type v) \ { \