From patchwork Fri Mar 1 13:05:31 2024
X-Patchwork-Submitter: Huacai Chen
X-Patchwork-Id: 208829
From: Huacai Chen
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Arnd Bergmann
Cc: Huacai Chen, Waiman Long, Boqun Feng, Guo Ren, Rui Wang, WANG Xuerui, Jiaxun Yang, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] mmiowb: Rename mmiowb_spin_{lock, unlock}() to mmiowb_in_{lock, unlock}()
Date: Fri, 1 Mar 2024 21:05:31 +0800
Message-ID: <20240301130532.3953167-1-chenhuacai@loongson.cn>
X-Mailer: git-send-email 2.43.0
X-Mailing-List: linux-kernel@vger.kernel.org

We are extending the mmiowb tracking system from spinlocks to mutexes, so rename mmiowb_spin_{lock, unlock}() to mmiowb_in_{lock, unlock}() to reflect that. No functional changes.
Signed-off-by: Huacai Chen
---
 include/asm-generic/mmiowb.h    | 8 ++++----
 include/linux/spinlock.h        | 6 +++---
 kernel/locking/spinlock_debug.c | 6 +++---
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
index 5698fca3bf56..eb2335f9f35e 100644
--- a/include/asm-generic/mmiowb.h
+++ b/include/asm-generic/mmiowb.h
@@ -40,13 +40,13 @@ static inline void mmiowb_set_pending(void)
 		ms->mmiowb_pending = ms->nesting_count;
 }
 
-static inline void mmiowb_spin_lock(void)
+static inline void mmiowb_in_lock(void)
 {
 	struct mmiowb_state *ms = __mmiowb_state();
 	ms->nesting_count++;
 }
 
-static inline void mmiowb_spin_unlock(void)
+static inline void mmiowb_in_unlock(void)
 {
 	struct mmiowb_state *ms = __mmiowb_state();
 
@@ -59,7 +59,7 @@ static inline void mmiowb_spin_unlock(void)
 }
 #else
 #define mmiowb_set_pending()		do { } while (0)
-#define mmiowb_spin_lock()		do { } while (0)
-#define mmiowb_spin_unlock()		do { } while (0)
+#define mmiowb_in_lock()		do { } while (0)
+#define mmiowb_in_unlock()		do { } while (0)
 #endif	/* CONFIG_MMIOWB */
 #endif	/* __ASM_GENERIC_MMIOWB_H */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 3fcd20de6ca8..60eda70cddd0 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -185,7 +185,7 @@ static inline void do_raw_spin_lock(raw_spinlock_t *lock) __acquires(lock)
 {
 	__acquire(lock);
 	arch_spin_lock(&lock->raw_lock);
-	mmiowb_spin_lock();
+	mmiowb_in_lock();
 }
 
 static inline int do_raw_spin_trylock(raw_spinlock_t *lock)
@@ -193,14 +193,14 @@ static inline int do_raw_spin_trylock(raw_spinlock_t *lock)
 	int ret = arch_spin_trylock(&(lock)->raw_lock);
 
 	if (ret)
-		mmiowb_spin_lock();
+		mmiowb_in_lock();
 
 	return ret;
 }
 
 static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 {
-	mmiowb_spin_unlock();
+	mmiowb_in_unlock();
 	arch_spin_unlock(&lock->raw_lock);
 	__release(lock);
 }
diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 87b03d2e41db..632a88322433 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -114,7 +114,7 @@ void do_raw_spin_lock(raw_spinlock_t *lock)
 {
 	debug_spin_lock_before(lock);
 	arch_spin_lock(&lock->raw_lock);
-	mmiowb_spin_lock();
+	mmiowb_in_lock();
 	debug_spin_lock_after(lock);
 }
@@ -123,7 +123,7 @@ int do_raw_spin_trylock(raw_spinlock_t *lock)
 	int ret = arch_spin_trylock(&lock->raw_lock);
 
 	if (ret) {
-		mmiowb_spin_lock();
+		mmiowb_in_lock();
 		debug_spin_lock_after(lock);
 	}
 #ifndef CONFIG_SMP
@@ -137,7 +137,7 @@ int do_raw_spin_trylock(raw_spinlock_t *lock)
 
 void do_raw_spin_unlock(raw_spinlock_t *lock)
 {
-	mmiowb_spin_unlock();
+	mmiowb_in_unlock();
 	debug_spin_unlock(lock);
 	arch_spin_unlock(&lock->raw_lock);
 }

From patchwork Fri Mar 1 13:05:32 2024
X-Patchwork-Submitter: Huacai Chen
X-Patchwork-Id: 208828
From: Huacai Chen
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Arnd Bergmann
Cc: Huacai Chen, Waiman Long, Boqun Feng, Guo Ren, Rui Wang, WANG Xuerui, Jiaxun Yang, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] mmiowb: Hook up mmiowb helpers to mutexes as well as spinlocks
Date: Fri, 1 Mar 2024 21:05:32 +0800
Message-ID: <20240301130532.3953167-2-chenhuacai@loongson.cn>
In-Reply-To: <20240301130532.3953167-1-chenhuacai@loongson.cn>
References: <20240301130532.3953167-1-chenhuacai@loongson.cn>
X-Mailing-List: linux-kernel@vger.kernel.org

Commit fb24ea52f78e0d595852e ("drivers: Remove explicit invocations of mmiowb()") removed all mmiowb() calls from drivers, but it notes:

"NOTE: mmiowb() has only ever guaranteed ordering in conjunction with spin_unlock(). However, pairing each mmiowb() removal in this patch with the corresponding call to spin_unlock() is not at all trivial, so there is a small chance that this change may regress any drivers incorrectly relying on mmiowb() to order MMIO writes between CPUs using lock-free synchronisation."

The MMIO in radeon_ring_commit() is protected by a mutex rather than a spinlock, but in its fast path a mutex behaves much like a spinlock.
We could add mmiowb() calls back in the radeon driver, but the maintainer dislikes such a workaround, and radeon is not the only example of mutex-protected MMIO. So we extend the mmiowb tracking system from spinlocks to mutexes, hooking up the mmiowb helpers to mutexes as well as spinlocks. Without this, we get errors like the following when running 'glxgears' on weakly ordered architectures such as LoongArch:

radeon 0000:04:00.0: ring 0 stalled for more than 10324msec
radeon 0000:04:00.0: ring 3 stalled for more than 10240msec
radeon 0000:04:00.0: GPU lockup (current fence id 0x000000000001f412 last fence id 0x000000000001f414 on ring 3)
radeon 0000:04:00.0: GPU lockup (current fence id 0x000000000000f940 last fence id 0x000000000000f941 on ring 0)
radeon 0000:04:00.0: scheduling IB failed (-35).
[drm:radeon_gem_va_ioctl [radeon]] *ERROR* Couldn't update BO_VA (-35)
radeon 0000:04:00.0: scheduling IB failed (-35).
[drm:radeon_gem_va_ioctl [radeon]] *ERROR* Couldn't update BO_VA (-35)
radeon 0000:04:00.0: scheduling IB failed (-35).
[drm:radeon_gem_va_ioctl [radeon]] *ERROR* Couldn't update BO_VA (-35)
radeon 0000:04:00.0: scheduling IB failed (-35).
[drm:radeon_gem_va_ioctl [radeon]] *ERROR* Couldn't update BO_VA (-35)
radeon 0000:04:00.0: scheduling IB failed (-35).
[drm:radeon_gem_va_ioctl [radeon]] *ERROR* Couldn't update BO_VA (-35)
radeon 0000:04:00.0: scheduling IB failed (-35).
[drm:radeon_gem_va_ioctl [radeon]] *ERROR* Couldn't update BO_VA (-35)
radeon 0000:04:00.0: scheduling IB failed (-35).
[drm:radeon_gem_va_ioctl [radeon]] *ERROR* Couldn't update BO_VA (-35)

Link: https://lore.kernel.org/dri-devel/29df7e26-d7a8-4f67-b988-44353c4270ac@amd.com/T/#t
Signed-off-by: Huacai Chen
---
 kernel/locking/mutex.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index cbae8c0b89ab..f51d09aec643 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -127,8 +127,10 @@ static inline struct task_struct *__mutex_trylock_common(struct mutex *lock, boo
 		}
 
 		if (atomic_long_try_cmpxchg_acquire(&lock->owner, &owner, task | flags)) {
-			if (task == curr)
+			if (task == curr) {
+				mmiowb_in_lock();
 				return NULL;
+			}
 			break;
 		}
 	}
@@ -168,8 +170,10 @@ static __always_inline bool __mutex_trylock_fast(struct mutex *lock)
 	unsigned long curr = (unsigned long)current;
 	unsigned long zero = 0UL;
 
-	if (atomic_long_try_cmpxchg_acquire(&lock->owner, &zero, curr))
+	if (atomic_long_try_cmpxchg_acquire(&lock->owner, &zero, curr)) {
+		mmiowb_in_lock();
 		return true;
+	}
 
 	return false;
 }
@@ -178,6 +182,7 @@ static __always_inline bool __mutex_unlock_fast(struct mutex *lock)
 {
 	unsigned long curr = (unsigned long)current;
 
+	mmiowb_in_unlock();
 	return atomic_long_try_cmpxchg_release(&lock->owner, &curr, 0UL);
 }
 #endif
@@ -918,6 +923,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 	 * Except when HANDOFF, in that case we must not clear the owner field,
 	 * but instead set it to the top waiter.
 	 */
+	mmiowb_in_unlock();
 	owner = atomic_long_read(&lock->owner);
 	for (;;) {
 		MUTEX_WARN_ON(__owner_task(owner) != current);