locking/qspinlock: Optimize pending state waiting for unlock

Message ID 20221224120545.262989-1-guoren@kernel.org
State New
Series locking/qspinlock: Optimize pending state waiting for unlock

Commit Message

Guo Ren Dec. 24, 2022, 12:05 p.m. UTC
  From: Guo Ren <guoren@linux.alibaba.com>

When we're pending, we only care about the lock value. The xchg_tail
wouldn't affect the pending state. That means the hardware thread
could stay in a sleep state and leave the rest of the pipeline's
execution-unit resources to other hardware threads. This optimization
may only work for SMT scenarios, because the granularity between
cores is a cache block.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
---
 kernel/locking/qspinlock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
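
For context, the reason a pending waiter can spin on just the locked byte is
that, in the common NR_CPUS < 16K configuration, the lock word overlays
separate byte/halfword fields, so xchg_tail stores only to the tail halfword.
A paraphrased sketch of the little-endian layout (based on
include/asm-generic/qspinlock_types.h; config guards and the big-endian
variant are omitted):

typedef struct qspinlock {
	union {
		atomic_t val;				/* whole 32-bit lock word      */
		struct {
			u8	locked;			/* bits  0-7:  lock holder     */
			u8	pending;		/* bits  8-15: pending byte    */
		};
		struct {
			u16	locked_pending;		/* bits  0-15                  */
			u16	tail;			/* bits 16-31: MCS queue tail  */
		};
	};
} arch_spinlock_t;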
  

Comments

Waiman Long Dec. 25, 2022, 1:55 a.m. UTC | #1
On 12/24/22 07:05, guoren@kernel.org wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> When we're pending, we only care about lock value. The xchg_tail
> wouldn't affect the pending state. That means the hardware thread
> could stay in a sleep state and leaves the rest execution units'
> resources of pipeline to other hardware threads. This optimization
> may work only for SMT scenarios because the granularity between
> cores is cache-block.
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> ---
>   kernel/locking/qspinlock.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 2b23378775fe..ebe6b8ec7cb3 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -371,7 +371,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>   	/*
>   	 * We're pending, wait for the owner to go away.
>   	 *
> -	 * 0,1,1 -> 0,1,0
> +	 * 0,1,1 -> *,1,0
>   	 *
>   	 * this wait loop must be a load-acquire such that we match the
>   	 * store-release that clears the locked bit and create lock
Yes, we don't care about the tail.
> @@ -380,7 +380,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>   	 * barriers.
>   	 */
>   	if (val & _Q_LOCKED_MASK)
> -		atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
> +		smp_cond_load_acquire(&lock->locked, !VAL);
>   
>   	/*
>   	 * take ownership and clear the pending bit.

We may save an AND operation here, which may be a cycle or two.  I
remember that it may be more costly to load a byte instead of an integer
in some arches, so it doesn't seem like that much of an optimization
from my point of view. I know that arm64 will enter a low power state in
this *cond_load_acquire() loop, but I believe any change in the state of
the lock cacheline will wake it up. So it doesn't really matter whether
you are checking a byte or an int.
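
For reference, the generic fallback for this wait loop, paraphrased from
include/asm-generic/barrier.h (arm64 overrides it with a WFE-based variant,
and exact details vary by kernel version), just re-loads the location with
READ_ONCE() and calls cpu_relax(), so at the C level the byte vs. int
difference is only the width of the load plus the dropped AND:

/* Paraphrased generic fallbacks; architectures may provide their own. */
#define smp_cond_load_relaxed(ptr, cond_expr) ({		\
	typeof(ptr) __PTR = (ptr);				\
	__unqual_scalar_typeof(*ptr) VAL;			\
	for (;;) {						\
		VAL = READ_ONCE(*__PTR);			\
		if (cond_expr)					\
			break;					\
		cpu_relax();					\
	}							\
	(typeof(*ptr))VAL;					\
})

#define smp_cond_load_acquire(ptr, cond_expr) ({		\
	__unqual_scalar_typeof(*ptr) _val;			\
	_val = smp_cond_load_relaxed(ptr, cond_expr);		\
	smp_acquire__after_ctrl_dep();				\
	(typeof(*ptr))_val;					\
})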

Do you have any other data point to support your optimization claim?

Cheers,
Longman
  
Guo Ren Dec. 25, 2022, 2:57 a.m. UTC | #2
On Sun, Dec 25, 2022 at 9:55 AM Waiman Long <longman@redhat.com> wrote:
>
> On 12/24/22 07:05, guoren@kernel.org wrote:
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > When we're pending, we only care about lock value. The xchg_tail
> > wouldn't affect the pending state. That means the hardware thread
> > could stay in a sleep state and leaves the rest execution units'
> > resources of pipeline to other hardware threads. This optimization
> > may work only for SMT scenarios because the granularity between
> > cores is cache-block.
Please have a look at the comment I've written.

> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Cc: Waiman Long <longman@redhat.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > ---
> >   kernel/locking/qspinlock.c | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> > index 2b23378775fe..ebe6b8ec7cb3 100644
> > --- a/kernel/locking/qspinlock.c
> > +++ b/kernel/locking/qspinlock.c
> > @@ -371,7 +371,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >       /*
> >        * We're pending, wait for the owner to go away.
> >        *
> > -      * 0,1,1 -> 0,1,0
> > +      * 0,1,1 -> *,1,0
> >        *
> >        * this wait loop must be a load-acquire such that we match the
> >        * store-release that clears the locked bit and create lock
> Yes, we don't care about the tail.
> > @@ -380,7 +380,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >        * barriers.
> >        */
> >       if (val & _Q_LOCKED_MASK)
> > -             atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
> > +             smp_cond_load_acquire(&lock->locked, !VAL);
> >
> >       /*
> >        * take ownership and clear the pending bit.
>
> We may save an AND operation here which may be a cycle or two.  I
> remember that it may be more costly to load a byte instead of an integer
> in some arches. So it doesn't seem like that much of an optimization
> from my point of view.
The reason is, of course, not here. See my commit comment.

> I know that arm64 will enter a low power state in
> this *cond_load_acquire() loop, but I believe any change in the state of
> the the lock cacheline will wake it up. So it doesn't really matter if
> you are checking a byte or an int.
The situation is the SMT scenario within the same core, not entering a
low-power state. Of course, the granularity between cores is a
"cacheline", but the granularity between SMT hw threads of the same
core could be a "byte", which the internal LSU handles. For example, when a
hw-thread yields the core's resources to other hw-threads, this
patch could help the hw-thread stay in the sleep state and prevent it
from being woken up by other hw-threads' xchg_tail.

Finally, from the software semantics view, does the patch make it more
accurate? (We don't care about the tail here.)
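
To make the xchg_tail point concrete: in the _Q_PENDING_BITS == 8 case
(NR_CPUS < 16K), the tail is exchanged with a 16-bit xchg on &lock->tail,
which never stores to the locked byte that
smp_cond_load_acquire(&lock->locked, !VAL) is spinning on. A sketch
paraphrased from kernel/locking/qspinlock.c (details may differ across
kernel versions):

static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	/*
	 * Relaxed ordering is enough here: the caller ensures the MCS node
	 * is initialized before the tail is published. The store touches
	 * only the 16-bit tail field, not the locked byte.
	 */
	return (u32)xchg_relaxed(&lock->tail,
				 tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
}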

>
> Do you have any other data point to support your optimization claim?
>
> Cheers,
> Longman
>
  
Waiman Long Dec. 25, 2022, 3:29 a.m. UTC | #3
On 12/24/22 21:57, Guo Ren wrote:
> On Sun, Dec 25, 2022 at 9:55 AM Waiman Long <longman@redhat.com> wrote:
>> On 12/24/22 07:05, guoren@kernel.org wrote:
>>> From: Guo Ren <guoren@linux.alibaba.com>
>>>
>>> When we're pending, we only care about lock value. The xchg_tail
>>> wouldn't affect the pending state. That means the hardware thread
>>> could stay in a sleep state and leaves the rest execution units'
>>> resources of pipeline to other hardware threads. This optimization
>>> may work only for SMT scenarios because the granularity between
>>> cores is cache-block.
> Please have a look at the comment I've written.
>
>>> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
>>> Signed-off-by: Guo Ren <guoren@kernel.org>
>>> Cc: Waiman Long <longman@redhat.com>
>>> Cc: Peter Zijlstra <peterz@infradead.org>
>>> Cc: Boqun Feng <boqun.feng@gmail.com>
>>> Cc: Will Deacon <will@kernel.org>
>>> Cc: Ingo Molnar <mingo@redhat.com>
>>> ---
>>>    kernel/locking/qspinlock.c | 4 ++--
>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>>> index 2b23378775fe..ebe6b8ec7cb3 100644
>>> --- a/kernel/locking/qspinlock.c
>>> +++ b/kernel/locking/qspinlock.c
>>> @@ -371,7 +371,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>>        /*
>>>         * We're pending, wait for the owner to go away.
>>>         *
>>> -      * 0,1,1 -> 0,1,0
>>> +      * 0,1,1 -> *,1,0
>>>         *
>>>         * this wait loop must be a load-acquire such that we match the
>>>         * store-release that clears the locked bit and create lock
>> Yes, we don't care about the tail.
>>> @@ -380,7 +380,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>>         * barriers.
>>>         */
>>>        if (val & _Q_LOCKED_MASK)
>>> -             atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
>>> +             smp_cond_load_acquire(&lock->locked, !VAL);
>>>
>>>        /*
>>>         * take ownership and clear the pending bit.
>> We may save an AND operation here which may be a cycle or two.  I
>> remember that it may be more costly to load a byte instead of an integer
>> in some arches. So it doesn't seem like that much of an optimization
>> from my point of view.
> The reason is, of course, not here. See my commit comment.
>
>> I know that arm64 will enter a low power state in
>> this *cond_load_acquire() loop, but I believe any change in the state of
>> the the lock cacheline will wake it up. So it doesn't really matter if
>> you are checking a byte or an int.
> The situation is the SMT scenarios in the same core. Not an entering
> low-power state situation. Of course, the granularity between cores is
> "cacheline", but the granularity between SMT hw threads of the same
> core could be "byte" which internal LSU handles. For example, when a
> hw-thread yields the resources of the core to other hw-threads, this
> patch could help the hw-thread stay in the sleep state and prevent it
> from being woken up by other hw-threads xchg_tail.
>
> Finally, from the software semantic view, does the patch make it more
> accurate? (We don't care about the tail here.)

Thanks for the clarification.

I am not arguing about the simplification part. I just want to clarify my
limited understanding of how the CPU hardware is actually dealing with
these conditions.

With that, I am fine with this patch. It would be nice if you could
elaborate a bit more in your commit log.

Acked-by: Waiman Long <longman@redhat.com>
  
Waiman Long Dec. 25, 2022, 3:30 a.m. UTC | #4
On 12/24/22 22:29, Waiman Long wrote:
> On 12/24/22 21:57, Guo Ren wrote:
>> On Sun, Dec 25, 2022 at 9:55 AM Waiman Long <longman@redhat.com> wrote:
>>> On 12/24/22 07:05, guoren@kernel.org wrote:
>>>> From: Guo Ren <guoren@linux.alibaba.com>
>>>>
>>>> When we're pending, we only care about lock value. The xchg_tail
>>>> wouldn't affect the pending state. That means the hardware thread
>>>> could stay in a sleep state and leaves the rest execution units'
>>>> resources of pipeline to other hardware threads. This optimization
>>>> may work only for SMT scenarios because the granularity between
>>>> cores is cache-block.
>> Please have a look at the comment I've written.
>>
>>>> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
>>>> Signed-off-by: Guo Ren <guoren@kernel.org>
>>>> Cc: Waiman Long <longman@redhat.com>
>>>> Cc: Peter Zijlstra <peterz@infradead.org>
>>>> Cc: Boqun Feng <boqun.feng@gmail.com>
>>>> Cc: Will Deacon <will@kernel.org>
>>>> Cc: Ingo Molnar <mingo@redhat.com>
>>>> ---
>>>>    kernel/locking/qspinlock.c | 4 ++--
>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>>>> index 2b23378775fe..ebe6b8ec7cb3 100644
>>>> --- a/kernel/locking/qspinlock.c
>>>> +++ b/kernel/locking/qspinlock.c
>>>> @@ -371,7 +371,7 @@ void __lockfunc 
>>>> queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>>>        /*
>>>>         * We're pending, wait for the owner to go away.
>>>>         *
>>>> -      * 0,1,1 -> 0,1,0
>>>> +      * 0,1,1 -> *,1,0
>>>>         *
>>>>         * this wait loop must be a load-acquire such that we match the
>>>>         * store-release that clears the locked bit and create lock
>>> Yes, we don't care about the tail.
>>>> @@ -380,7 +380,7 @@ void __lockfunc 
>>>> queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>>>         * barriers.
>>>>         */
>>>>        if (val & _Q_LOCKED_MASK)
>>>> -             atomic_cond_read_acquire(&lock->val, !(VAL & 
>>>> _Q_LOCKED_MASK));
>>>> +             smp_cond_load_acquire(&lock->locked, !VAL);
>>>>
>>>>        /*
>>>>         * take ownership and clear the pending bit.
>>> We may save an AND operation here which may be a cycle or two.  I
>>> remember that it may be more costly to load a byte instead of an 
>>> integer
>>> in some arches. So it doesn't seem like that much of an optimization
>>> from my point of view.
>> The reason is, of course, not here. See my commit comment.
>>
>>> I know that arm64 will enter a low power state in
>>> this *cond_load_acquire() loop, but I believe any change in the 
>>> state of
>>> the the lock cacheline will wake it up. So it doesn't really matter if
>>> you are checking a byte or an int.
>> The situation is the SMT scenarios in the same core. Not an entering
>> low-power state situation. Of course, the granularity between cores is
>> "cacheline", but the granularity between SMT hw threads of the same
>> core could be "byte" which internal LSU handles. For example, when a
>> hw-thread yields the resources of the core to other hw-threads, this
>> patch could help the hw-thread stay in the sleep state and prevent it
>> from being woken up by other hw-threads xchg_tail.
>>
>> Finally, from the software semantic view, does the patch make it more
>> accurate? (We don't care about the tail here.)
>
> Thanks for the clarification.
>
> I am not arguing for the simplification part. I just want to clarify 
> my limited understanding of how the CPU hardware are actually dealing 
> with these conditions.
>
> With that, I am fine with this patch. It would be nice if you can 
> elaborate a bit more in your commit log.
>
> Acked-by: Waiman Long <longman@redhat.com>
>
BTW, have you actually observed any performance improvement with this patch?

Cheers,
Longman
  
Guo Ren Dec. 25, 2022, 11:59 a.m. UTC | #5
On Sun, Dec 25, 2022 at 11:31 AM Waiman Long <longman@redhat.com> wrote:
>
> On 12/24/22 22:29, Waiman Long wrote:
> > On 12/24/22 21:57, Guo Ren wrote:
> >> On Sun, Dec 25, 2022 at 9:55 AM Waiman Long <longman@redhat.com> wrote:
> >>> On 12/24/22 07:05, guoren@kernel.org wrote:
> >>>> From: Guo Ren <guoren@linux.alibaba.com>
> >>>>
> >>>> When we're pending, we only care about lock value. The xchg_tail
> >>>> wouldn't affect the pending state. That means the hardware thread
> >>>> could stay in a sleep state and leaves the rest execution units'
> >>>> resources of pipeline to other hardware threads. This optimization
> >>>> may work only for SMT scenarios because the granularity between
> >>>> cores is cache-block.
> >> Please have a look at the comment I've written.
> >>
> >>>> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> >>>> Signed-off-by: Guo Ren <guoren@kernel.org>
> >>>> Cc: Waiman Long <longman@redhat.com>
> >>>> Cc: Peter Zijlstra <peterz@infradead.org>
> >>>> Cc: Boqun Feng <boqun.feng@gmail.com>
> >>>> Cc: Will Deacon <will@kernel.org>
> >>>> Cc: Ingo Molnar <mingo@redhat.com>
> >>>> ---
> >>>>    kernel/locking/qspinlock.c | 4 ++--
> >>>>    1 file changed, 2 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> >>>> index 2b23378775fe..ebe6b8ec7cb3 100644
> >>>> --- a/kernel/locking/qspinlock.c
> >>>> +++ b/kernel/locking/qspinlock.c
> >>>> @@ -371,7 +371,7 @@ void __lockfunc
> >>>> queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >>>>        /*
> >>>>         * We're pending, wait for the owner to go away.
> >>>>         *
> >>>> -      * 0,1,1 -> 0,1,0
> >>>> +      * 0,1,1 -> *,1,0
> >>>>         *
> >>>>         * this wait loop must be a load-acquire such that we match the
> >>>>         * store-release that clears the locked bit and create lock
> >>> Yes, we don't care about the tail.
> >>>> @@ -380,7 +380,7 @@ void __lockfunc
> >>>> queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >>>>         * barriers.
> >>>>         */
> >>>>        if (val & _Q_LOCKED_MASK)
> >>>> -             atomic_cond_read_acquire(&lock->val, !(VAL &
> >>>> _Q_LOCKED_MASK));
> >>>> +             smp_cond_load_acquire(&lock->locked, !VAL);
> >>>>
> >>>>        /*
> >>>>         * take ownership and clear the pending bit.
> >>> We may save an AND operation here which may be a cycle or two.  I
> >>> remember that it may be more costly to load a byte instead of an
> >>> integer
> >>> in some arches. So it doesn't seem like that much of an optimization
> >>> from my point of view.
> >> The reason is, of course, not here. See my commit comment.
> >>
> >>> I know that arm64 will enter a low power state in
> >>> this *cond_load_acquire() loop, but I believe any change in the
> >>> state of
> >>> the the lock cacheline will wake it up. So it doesn't really matter if
> >>> you are checking a byte or an int.
> >> The situation is the SMT scenarios in the same core. Not an entering
> >> low-power state situation. Of course, the granularity between cores is
> >> "cacheline", but the granularity between SMT hw threads of the same
> >> core could be "byte" which internal LSU handles. For example, when a
> >> hw-thread yields the resources of the core to other hw-threads, this
> >> patch could help the hw-thread stay in the sleep state and prevent it
> >> from being woken up by other hw-threads xchg_tail.
> >>
> >> Finally, from the software semantic view, does the patch make it more
> >> accurate? (We don't care about the tail here.)
> >
> > Thanks for the clarification.
> >
> > I am not arguing for the simplification part. I just want to clarify
> > my limited understanding of how the CPU hardware are actually dealing
> > with these conditions.
> >
> > With that, I am fine with this patch. It would be nice if you can
> > elaborate a bit more in your commit log.
> >
> > Acked-by: Waiman Long <longman@redhat.com>
> >
> BTW, have you actually observe any performance improvement with this patch?
Not yet. I'm researching how the hardware could serve qspinlock
better. Here are three points I've concluded:
 1. Atomic forward-progress guarantee: prevent unnecessary LL/SC
retries, which may cause expensive bus transactions when crossing
NUMA nodes.
 2. Sub-word atomic primitives: enable freedom from interference
between locked, pending, and tail.
 3. Load-cond primitive: prevent the processor from wasting loop
iterations on polling.

For points 2 & 3, I have a follow-up proposal to add new
atomic_read_cond_mask & smp_load_cond_mask Linux atomic primitives
[1] (a rough sketch of the idea follows below).

[1]: https://lore.kernel.org/lkml/20221225115529.490378-1-guoren@kernel.org/
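
For illustration only, a masked wait primitive along those lines might look
like the sketch below, modeled on the generic smp_cond_load_relaxed()
fallback. The smp_load_cond_mask name comes from the proposal in [1]; the
body here is an assumption about its shape, not the submitted implementation.

/*
 * Hypothetical sketch only -- not the code from [1]. The idea: spin until
 * the bits selected by @mask clear, so an architecture could avoid waking
 * the waiter when an SMT sibling stores to other bits (e.g. the tail) of
 * the same word. Acquire ordering is added here because the qspinlock
 * pending path needs it; the real proposal may split _relaxed/_acquire.
 */
#define smp_load_cond_mask(ptr, mask) ({			\
	typeof(ptr) __PTR = (ptr);				\
	typeof(*(ptr)) __VAL;					\
	for (;;) {						\
		__VAL = READ_ONCE(*__PTR);			\
		if (!(__VAL & (mask)))				\
			break;					\
		cpu_relax();					\
	}							\
	smp_acquire__after_ctrl_dep();				\
	__VAL;							\
})

The pending path could then, hypothetically, wait with something like
atomic_read_cond_mask(&lock->val, _Q_LOCKED_MASK), telling the architecture
which bits of the word actually matter for the wakeup.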



>
> Cheers,
> Longman
>


--
Best Regards
 Guo Ren
  
Ingo Molnar Jan. 4, 2023, 8:19 p.m. UTC | #6
* Guo Ren <guoren@kernel.org> wrote:

> > >> The situation is the SMT scenarios in the same core. Not an entering
> > >> low-power state situation. Of course, the granularity between cores is
> > >> "cacheline", but the granularity between SMT hw threads of the same
> > >> core could be "byte" which internal LSU handles. For example, when a
> > >> hw-thread yields the resources of the core to other hw-threads, this
> > >> patch could help the hw-thread stay in the sleep state and prevent it
> > >> from being woken up by other hw-threads xchg_tail.
> > >>
> > >> Finally, from the software semantic view, does the patch make it more
> > >> accurate? (We don't care about the tail here.)
> > >
> > > Thanks for the clarification.
> > >
> > > I am not arguing for the simplification part. I just want to clarify
> > > my limited understanding of how the CPU hardware are actually dealing
> > > with these conditions.
> > >
> > > With that, I am fine with this patch. It would be nice if you can
> > > elaborate a bit more in your commit log.
> > >
> > > Acked-by: Waiman Long <longman@redhat.com>
> > >
> > BTW, have you actually observe any performance improvement with this patch?
> Not yet. I'm researching how the hardware could satisfy qspinlock
> better. Here are three points I concluded:
>  1. Atomic forward progress guarantee: Prevent unnecessary LL/SC
> retry, which may cause expensive bus transactions when crossing the
> NUMA nodes.
>  2. Sub-word atomic primitive: Enable freedom from interference
> between locked, pending, and tail.
>  3. Load-cond primitive: Prevent processor from wasting loop
> operations for detection.

As to this patch, please send a -v2 version that has this
discussion & explanation included in the changelog, as requested by Waiman.

Thanks,

	Ingo
  
Guo Ren Jan. 5, 2023, 2:31 a.m. UTC | #7
On Thu, Jan 5, 2023 at 4:19 AM Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Guo Ren <guoren@kernel.org> wrote:
>
> > > >> The situation is the SMT scenarios in the same core. Not an entering
> > > >> low-power state situation. Of course, the granularity between cores is
> > > >> "cacheline", but the granularity between SMT hw threads of the same
> > > >> core could be "byte" which internal LSU handles. For example, when a
> > > >> hw-thread yields the resources of the core to other hw-threads, this
> > > >> patch could help the hw-thread stay in the sleep state and prevent it
> > > >> from being woken up by other hw-threads xchg_tail.
> > > >>
> > > >> Finally, from the software semantic view, does the patch make it more
> > > >> accurate? (We don't care about the tail here.)
> > > >
> > > > Thanks for the clarification.
> > > >
> > > > I am not arguing for the simplification part. I just want to clarify
> > > > my limited understanding of how the CPU hardware are actually dealing
> > > > with these conditions.
> > > >
> > > > With that, I am fine with this patch. It would be nice if you can
> > > > elaborate a bit more in your commit log.
> > > >
> > > > Acked-by: Waiman Long <longman@redhat.com>
> > > >
> > > BTW, have you actually observe any performance improvement with this patch?
> > Not yet. I'm researching how the hardware could satisfy qspinlock
> > better. Here are three points I concluded:
> >  1. Atomic forward progress guarantee: Prevent unnecessary LL/SC
> > retry, which may cause expensive bus transactions when crossing the
> > NUMA nodes.
> >  2. Sub-word atomic primitive: Enable freedom from interference
> > between locked, pending, and tail.
> >  3. Load-cond primitive: Prevent processor from wasting loop
> > operations for detection.
>
> As to this patch, please send a -v2 version of this patch that has this
> discussion & explanation included in the changelog, as requested by Waiman.
Done

https://lore.kernel.org/lkml/20230105021952.3090070-1-guoren@kernel.org/

>
> Thanks,
>
>         Ingo
  
Ingo Molnar Jan. 5, 2023, 10:03 a.m. UTC | #8
* Guo Ren <guoren@kernel.org> wrote:

> On Thu, Jan 5, 2023 at 4:19 AM Ingo Molnar <mingo@kernel.org> wrote:
> >
> >
> > * Guo Ren <guoren@kernel.org> wrote:
> >
> > > > >> The situation is the SMT scenarios in the same core. Not an entering
> > > > >> low-power state situation. Of course, the granularity between cores is
> > > > >> "cacheline", but the granularity between SMT hw threads of the same
> > > > >> core could be "byte" which internal LSU handles. For example, when a
> > > > >> hw-thread yields the resources of the core to other hw-threads, this
> > > > >> patch could help the hw-thread stay in the sleep state and prevent it
> > > > >> from being woken up by other hw-threads xchg_tail.
> > > > >>
> > > > >> Finally, from the software semantic view, does the patch make it more
> > > > >> accurate? (We don't care about the tail here.)
> > > > >
> > > > > Thanks for the clarification.
> > > > >
> > > > > I am not arguing for the simplification part. I just want to clarify
> > > > > my limited understanding of how the CPU hardware are actually dealing
> > > > > with these conditions.
> > > > >
> > > > > With that, I am fine with this patch. It would be nice if you can
> > > > > elaborate a bit more in your commit log.
> > > > >
> > > > > Acked-by: Waiman Long <longman@redhat.com>
> > > > >
> > > > BTW, have you actually observe any performance improvement with this patch?
> > > Not yet. I'm researching how the hardware could satisfy qspinlock
> > > better. Here are three points I concluded:
> > >  1. Atomic forward progress guarantee: Prevent unnecessary LL/SC
> > > retry, which may cause expensive bus transactions when crossing the
> > > NUMA nodes.
> > >  2. Sub-word atomic primitive: Enable freedom from interference
> > > between locked, pending, and tail.
> > >  3. Load-cond primitive: Prevent processor from wasting loop
> > > operations for detection.
> >
> > As to this patch, please send a -v2 version of this patch that has this
> > discussion & explanation included in the changelog, as requested by Waiman.
> Done
> 
> https://lore.kernel.org/lkml/20230105021952.3090070-1-guoren@kernel.org/

Applied to tip:locking/core for a v6.3 merge, thanks!

	Ingo
  

Patch

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 2b23378775fe..ebe6b8ec7cb3 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -371,7 +371,7 @@  void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	/*
 	 * We're pending, wait for the owner to go away.
 	 *
-	 * 0,1,1 -> 0,1,0
+	 * 0,1,1 -> *,1,0
 	 *
 	 * this wait loop must be a load-acquire such that we match the
 	 * store-release that clears the locked bit and create lock
@@ -380,7 +380,7 @@  void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * barriers.
 	 */
 	if (val & _Q_LOCKED_MASK)
-		atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
+		smp_cond_load_acquire(&lock->locked, !VAL);
 
 	/*
 	 * take ownership and clear the pending bit.
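
With the patch applied, the pending-waiter sequence in
queued_spin_lock_slowpath() reads roughly as follows (abridged and
paraphrased from kernel/locking/qspinlock.c; surrounding comments and the
lock-event accounting are omitted):

	/*
	 * We're pending, wait for the owner to go away.
	 *
	 * 0,1,1 -> *,1,0
	 */
	if (val & _Q_LOCKED_MASK)
		smp_cond_load_acquire(&lock->locked, !VAL);

	/*
	 * take ownership and clear the pending bit.
	 *
	 * 0,1,0 -> 0,0,1
	 */
	clear_pending_set_locked(lock);
	return;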