atomics: Use atomic_try_cmpxchg_release in rcuref_put_slowpath()

Message ID 20230509150255.3691-1-ubizjak@gmail.com
State New
Series atomics: Use atomic_try_cmpxchg_release in rcuref_put_slowpath()

Commit Message

Uros Bizjak May 9, 2023, 3:02 p.m. UTC
  Use atomic_try_cmpxchg instead of atomic_cmpxchg (*ptr, old, new) == old
in rcuref_put_slowpath(). The x86 CMPXCHG instruction returns success in the
ZF flag, so this change saves a compare after cmpxchg.  Additionally,
the compiler reorders some code blocks to follow the likely/unlikely
annotations in the atomic_try_cmpxchg macro, improving the code from

  9a:	f0 0f b1 0b          	lock cmpxchg %ecx,(%rbx)
  9e:	83 f8 ff             	cmp    $0xffffffff,%eax
  a1:	74 04                	je     a7 <rcuref_put_slowpath+0x27>
  a3:	31 c0                	xor    %eax,%eax

to

  9a:	f0 0f b1 0b          	lock cmpxchg %ecx,(%rbx)
  9e:	75 4c                	jne    ec <rcuref_put_slowpath+0x6c>
  a0:	b0 01                	mov    $0x1,%al

No functional change intended.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
---
 lib/rcuref.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  

Comments

David Laight May 10, 2023, 11:35 a.m. UTC | #1
From: Uros Bizjak
> Sent: 09 May 2023 16:03
> 
> Use atomic_try_cmpxchg instead of atomic_cmpxchg (*ptr, old, new) == old
> in rcuref_put_slowpath(). The x86 CMPXCHG instruction returns success in the
> ZF flag, so this change saves a compare after cmpxchg.  Additionally,
> the compiler reorders some code blocks to follow the likely/unlikely
> annotations in the atomic_try_cmpxchg macro, improving the code from
> 
>   9a:	f0 0f b1 0b          	lock cmpxchg %ecx,(%rbx)
>   9e:	83 f8 ff             	cmp    $0xffffffff,%eax
>   a1:	74 04                	je     a7 <rcuref_put_slowpath+0x27>
>   a3:	31 c0                	xor    %eax,%eax
> 
> to
> 
>   9a:	f0 0f b1 0b          	lock cmpxchg %ecx,(%rbx)
>   9e:	75 4c                	jne    ec <rcuref_put_slowpath+0x6c>
>   a0:	b0 01                	mov    $0x1,%al
> 
> No functional change intended.

While I'm not against the change, I bet you can't detect
any actual difference. IIRC:
- The 'cmp+je' get merged into a single u-op.
- The 'lock cmpxchg' will take long enough that the instruction
  decoder won't be a bottleneck.
- Whether the je/jne is predicted taken is pretty much random.
  So you'll speculatively execute somewhere (could be anywhere)
  while the locked cycle completes.
So the only change is three fewer bytes of object code.
That will change the cache line alignment of later code.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
  

Patch

diff --git a/lib/rcuref.c b/lib/rcuref.c
index 5ec00a4a64d1..97f300eca927 100644
--- a/lib/rcuref.c
+++ b/lib/rcuref.c
@@ -248,7 +248,7 @@  bool rcuref_put_slowpath(rcuref_t *ref)
 		 * require a retry. If this fails the caller is not
 		 * allowed to deconstruct the object.
 		 */
-		if (atomic_cmpxchg_release(&ref->refcnt, RCUREF_NOREF, RCUREF_DEAD) != RCUREF_NOREF)
+		if (!atomic_try_cmpxchg_release(&ref->refcnt, &cnt, RCUREF_DEAD))
 			return false;
 
 		/*