From patchwork Mon Jun  5 07:00:58 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 103106
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 01/27] locking/atomic: arm: fix sync ops
Date: Mon, 5 Jun 2023 08:00:58 +0100
Message-Id: <20230605070124.3741859-2-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.

Fix this by defining sync ops with the required barriers.

Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.

Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
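An illustrative note, not part of the patch: the sketch below shows the
shape a fully-ordered sync op must take. The rmw_relaxed callback is
hypothetical; the raw __smp_mb__*_atomic() helpers are the same ones
arch_sync_cmpxchg() uses in the diff below.

/*
 * Illustration only. On !CONFIG_SMP kernels, smp_mb() degrades to
 * barrier(), a pure compiler barrier that emits no DMB instruction,
 * so "fully ordered" RMW ops built on it provide no hardware ordering
 * on UP. That is fatal once a hypervisor or another guest runs on a
 * different physical CPU. The sync_*() ops therefore need
 * unconditional barriers:
 */
static inline int sync_rmw_sketch(volatile unsigned long *p,
				  int (*rmw_relaxed)(volatile unsigned long *))
{
	int old;

	__smp_mb__before_atomic();	/* a real DMB, even on UP kernels */
	old = rmw_relaxed(p);		/* hypothetical relaxed RMW op */
	__smp_mb__after_atomic();	/* a real DMB, even on UP kernels */

	return old;
}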
 arch/arm/include/asm/assembler.h   | 17 +++++++++++++++++
 arch/arm/include/asm/sync_bitops.h | 29 +++++++++++++++++++++++++----
 arch/arm/lib/bitops.h              | 14 +++++++++++---
 arch/arm/lib/testchangebit.S       |  4 ++++
 arch/arm/lib/testclearbit.S        |  4 ++++
 arch/arm/lib/testsetbit.S          |  4 ++++
 6 files changed, 65 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 505a306e0271a..aebe2c8f6a686 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -394,6 +394,23 @@ ALT_UP_B(.L0_\@)
 #endif
 	.endm
 
+/*
+ * Raw SMP data memory barrier
+ */
+	.macro	__smp_dmb mode
+#if __LINUX_ARM_ARCH__ >= 7
+	.ifeqs "\mode","arm"
+	dmb	ish
+	.else
+	W(dmb)	ish
+	.endif
+#elif __LINUX_ARM_ARCH__ == 6
+	mcr	p15, 0, r0, c7, c10, 5	@ dmb
+#else
+	.error "Incompatible SMP platform"
+#endif
+	.endm
+
 #if defined(CONFIG_CPU_V7M)
 /*
  * setmode is used to assert to be in svc mode during boot. For v7-M
diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
index 6f5d627c44a3c..f46b3c570f92e 100644
--- a/arch/arm/include/asm/sync_bitops.h
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -14,14 +14,35 @@
  * ops which are SMP safe even on a UP kernel.
  */
 
+/*
+ * Unordered
+ */
+
 #define sync_set_bit(nr, p)		_set_bit(nr, p)
 #define sync_clear_bit(nr, p)		_clear_bit(nr, p)
 #define sync_change_bit(nr, p)		_change_bit(nr, p)
-#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
-#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
-#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
 #define sync_test_bit(nr, addr)		test_bit(nr, addr)
-#define arch_sync_cmpxchg		arch_cmpxchg
 
+/*
+ * Fully ordered
+ */
+
+int _sync_test_and_set_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_set_bit(nr, p)	_sync_test_and_set_bit(nr, p)
+
+int _sync_test_and_clear_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_clear_bit(nr, p)	_sync_test_and_clear_bit(nr, p)
+
+int _sync_test_and_change_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_change_bit(nr, p)	_sync_test_and_change_bit(nr, p)
+
+#define arch_sync_cmpxchg(ptr, old, new)				\
+({									\
+	__typeof__(*(ptr)) __ret;					\
+	__smp_mb__before_atomic();					\
+	__ret = arch_cmpxchg_relaxed((ptr), (old), (new));		\
+	__smp_mb__after_atomic();					\
+	__ret;								\
+})
 
 #endif
diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
index 95bd359912889..f069d1b2318e6 100644
--- a/arch/arm/lib/bitops.h
+++ b/arch/arm/lib/bitops.h
@@ -28,7 +28,7 @@ UNWIND(	.fnend		)
 ENDPROC(\name		)
 	.endm
 
-	.macro	testop, name, instr, store
+	.macro	__testop, name, instr, store, barrier
 ENTRY(	\name		)
 UNWIND(	.fnstart	)
 	ands	ip, r1, #3
@@ -38,7 +38,7 @@ UNWIND(	.fnstart	)
 	mov	r0, r0, lsr #5
 	add	r1, r1, r0, lsl #2	@ Get word offset
 	mov	r3, r2, lsl r3		@ create mask
-	smp_dmb
+	\barrier
 #if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
 	.arch_extension	mp
 	ALT_SMP(W(pldw)	[r1])
@@ -50,13 +50,21 @@ UNWIND(	.fnstart	)
 	strex	ip, r2, [r1]
 	cmp	ip, #0
 	bne	1b
-	smp_dmb
+	\barrier
 	cmp	r0, #0
 	movne	r0, #1
 2:	bx	lr
 UNWIND(	.fnend		)
 ENDPROC(\name		)
 	.endm
+
+	.macro	testop, name, instr, store
+	__testop	\name, \instr, \store, smp_dmb
+	.endm
+
+	.macro	sync_testop, name, instr, store
+	__testop	\name, \instr, \store, __smp_dmb
+	.endm
 #else
 	.macro	bitop, name, instr
 ENTRY(	\name		)
diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
index 4ebecc67e6e04..f13fe9bc2399a 100644
--- a/arch/arm/lib/testchangebit.S
+++ b/arch/arm/lib/testchangebit.S
@@ -10,3 +10,7 @@
                 .text
 
 testop	_test_and_change_bit, eor, str
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_change_bit, eor, str
+#endif
diff --git a/arch/arm/lib/testclearbit.S b/arch/arm/lib/testclearbit.S
index 009afa0f5b4a7..4d2c5ca620ebf 100644
--- a/arch/arm/lib/testclearbit.S
+++ b/arch/arm/lib/testclearbit.S
@@ -10,3 +10,7 @@
                 .text
 
 testop	_test_and_clear_bit, bicne, strne
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_clear_bit, bicne, strne
+#endif
diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
index f3192e55acc87..649dbab65d8d0 100644
--- a/arch/arm/lib/testsetbit.S
+++ b/arch/arm/lib/testsetbit.S
@@ -10,3 +10,7 @@
                 .text
 
 testop	_test_and_set_bit, orreq, streq
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_set_bit, orreq, streq
+#endif
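For context, a hedged sketch of the kind of cross-CPU signalling these
semantics exist to support. Only sync_test_and_set_bit() is part of the
API fixed by this patch; the shared-page layout and the surrounding
names are hypothetical.

#include <linux/types.h>
#include <asm/sync_bitops.h>

/* Hypothetical layout of a page shared with a peer domain. */
struct event_page_sketch {
	unsigned long payload;
	unsigned long pending[4];	/* bitmap of pending event ports */
};

/*
 * Post an event from a (possibly UP) guest to a peer that may run on
 * another physical CPU. The leading DMB inside sync_test_and_set_bit()
 * guarantees the payload write is visible before the peer can observe
 * the pending bit; the plain test_and_set_bit() would give no such
 * guarantee on a UP kernel.
 */
static bool post_event_sketch(struct event_page_sketch *ep, int port,
			      unsigned long data)
{
	ep->payload = data;

	/* Returns the old bit value; 0 means we raised a fresh event. */
	return sync_test_and_set_bit(port, &ep->pending[0]) == 0;
}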