[v2,02/11] this_cpu_cmpxchg: ARM64: switch this_cpu_cmpxchg to locked, add _local function
Message ID | 20230209153204.683821550@redhat.com |
---|---|
State | New |
Headers |
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Christoph Lameter <cl@linux.com>
Cc: Aaron Tomlin <atomlin@atomlin.com>, Frederic Weisbecker <frederic@kernel.org>, Andrew Morton <akpm@linux-foundation.org>, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Date: Thu, 09 Feb 2023 12:01:52 -0300
|
Series | fold per-CPU vmstats remotely |
Commit Message
Marcelo Tosatti
Feb. 9, 2023, 3:01 p.m. UTC
The goal is for vmstat_shepherd to transfer per-CPU counters to global
counters remotely. For this, an atomic this_cpu_cmpxchg is necessary.

Following the kernel convention for cmpxchg/cmpxchg_local, change
arm64's this_cpu_cmpxchg_ helpers to be atomic, and add
this_cpu_cmpxchg_local_ helpers, which are not atomic.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
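To make the intent concrete, here is a minimal, hypothetical sketch (not code from this series; variable names and types are made up) of the pattern the commit message describes: the owning CPU updates its counter with the now-atomic this_cpu_cmpxchg(), while a shepherd thread folds the value into a global total from another CPU using a plain cmpxchg() on the same per-CPU location.

#include <linux/percpu.h>
#include <linux/atomic.h>

static DEFINE_PER_CPU(long, vm_diff);	/* hypothetical per-CPU delta */
static atomic_long_t vm_total;		/* hypothetical global counter */

/* Runs on the owning CPU: must now be atomic vs. the remote shepherd. */
static void local_account(long delta)
{
	long old;

	do {
		old = this_cpu_read(vm_diff);
	} while (this_cpu_cmpxchg(vm_diff, old, old + delta) != old);
}

/* Runs on the shepherd: steals @cpu's delta without interrupting it. */
static void fold_cpu_remotely(int cpu)
{
	long *p = per_cpu_ptr(&vm_diff, cpu);
	long val;

	do {
		val = READ_ONCE(*p);
		if (!val)
			return;
	} while (cmpxchg(p, val, 0) != val);	/* races with local_account() */

	atomic_long_add(val, &vm_total);
}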
Comments
On 09.02.23 16:01, Marcelo Tosatti wrote:
> Goal is to have vmstat_shepherd to transfer from
> per-CPU counters to global counters remotely. For this,
> an atomic this_cpu_cmpxchg is necessary.
>
> Following the kernel convention for cmpxchg/cmpxchg_local,
> change ARM's this_cpu_cmpxchg_ helpers to be atomic,
> and add this_cpu_cmpxchg_local_ helpers which are not atomic.
>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h
> ===================================================================
> --- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h
> +++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h
> @@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd)
>  	_pcp_protect_return(xchg_relaxed, pcp, val)
>
>  #define this_cpu_cmpxchg_1(pcp, o, n)	\
> -	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
> +	_pcp_protect_return(cmpxchg, pcp, o, n)
>  #define this_cpu_cmpxchg_2(pcp, o, n)	\
> -	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
> +	_pcp_protect_return(cmpxchg, pcp, o, n)
>  #define this_cpu_cmpxchg_4(pcp, o, n)	\
> -	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
> +	_pcp_protect_return(cmpxchg, pcp, o, n)
>  #define this_cpu_cmpxchg_8(pcp, o, n)	\
> +	_pcp_protect_return(cmpxchg, pcp, o, n)
> +
> +#define this_cpu_cmpxchg_local_1(pcp, o, n)	\
>  	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
> +#define this_cpu_cmpxchg_local_2(pcp, o, n)	\
> +	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
> +#define this_cpu_cmpxchg_local_4(pcp, o, n)	\
> +	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
> +#define this_cpu_cmpxchg_local_8(pcp, o, n)	\
> +	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
> +

Call me confused (not necessarily your fault :) ).

We have cmpxchg_local, cmpxchg_relaxed and cmpxchg.
this_cpu_cmpxchg_local_* now calls ... *drumroll* ... cmpxchg_relaxed.

IIUC, cmpxchg_local is only guaranteed to be atomic WRT the current CPU
(especially, protection against interrupts when the operation is
implemented using multiple instructions). We do have a generic
implementation that disables/enables interrupts.

IIUC, cmpxchg_relaxed is an atomic update without any memory ordering
guarantees (in contrast to cmpxchg, cmpxchg_acquire, cmpxchg_release).
We default to arch_cmpxchg if we don't have arch_cmpxchg_relaxed.
arch_cmpxchg defaults to arch_cmpxchg_local, if not supported.

Naturally I wonder:

(a) Should these new variants be rather called this_cpu_cmpxchg_relaxed_* ?

(b) Should these new variants rather call the "_local" variant?

Shedding some light on this would be great.
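For context on the generic fallback mentioned above: when an architecture provides no this_cpu_cmpxchg of its own, the asm-generic implementation protects a non-atomic compare-and-exchange by disabling interrupts. The sketch below paraphrases that idea; it is not a verbatim copy of include/asm-generic/percpu.h, and macro names differ across kernel versions.

/* Paraphrased sketch of the interrupt-protected generic fallback. */
#define this_cpu_generic_cmpxchg(pcp, oval, nval)			\
({									\
	typeof(pcp) __ret;						\
	unsigned long __flags;						\
	raw_local_irq_save(__flags);					\
	__ret = raw_cpu_read(pcp);					\
	if (__ret == (oval))						\
		raw_cpu_write(pcp, nval);				\
	raw_local_irq_restore(__flags);					\
	__ret;								\
})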
On 02.03.23 11:42, David Hildenbrand wrote: > On 09.02.23 16:01, Marcelo Tosatti wrote: >> Goal is to have vmstat_shepherd to transfer from >> per-CPU counters to global counters remotely. For this, >> an atomic this_cpu_cmpxchg is necessary. >> >> Following the kernel convention for cmpxchg/cmpxchg_local, >> change ARM's this_cpu_cmpxchg_ helpers to be atomic, >> and add this_cpu_cmpxchg_local_ helpers which are not atomic. >> >> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> >> >> Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h >> =================================================================== >> --- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h >> +++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h >> @@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd) >> _pcp_protect_return(xchg_relaxed, pcp, val) >> >> #define this_cpu_cmpxchg_1(pcp, o, n) \ >> - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) >> + _pcp_protect_return(cmpxchg, pcp, o, n) >> #define this_cpu_cmpxchg_2(pcp, o, n) \ >> - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) >> + _pcp_protect_return(cmpxchg, pcp, o, n) >> #define this_cpu_cmpxchg_4(pcp, o, n) \ >> - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) >> + _pcp_protect_return(cmpxchg, pcp, o, n) >> #define this_cpu_cmpxchg_8(pcp, o, n) \ >> + _pcp_protect_return(cmpxchg, pcp, o, n) >> + >> +#define this_cpu_cmpxchg_local_1(pcp, o, n) \ >> _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) >> +#define this_cpu_cmpxchg_local_2(pcp, o, n) \ >> + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) >> +#define this_cpu_cmpxchg_local_4(pcp, o, n) \ >> + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) >> +#define this_cpu_cmpxchg_local_8(pcp, o, n) \ >> + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) >> + > > Call me confused (not necessarily your fault :) ). > > We have cmpxchg_local, cmpxchg_relaxed and cmpxchg. > this_cpu_cmpxchg_local_* now calls ... *drumroll* ... cmpxchg_relaxed. > > IIUC, cmpxchg_local is only guaranteed to be atomic WRO the current CPU > (especially, protection against interrupts when the operation is > implemented using multiple instructions). We do have a generic > implementation that disables/enables interrupts. > > IIUC, cmpxchg_relaxed an atomic update without any memory ordering > guarantees (in contrast to cmpxchg, cmpxchg_acquire, cmpxchg_acquire). > We default to arch_cmpxchg if we don't have arch_cmpxchg_relaxed. > arch_cmpxchg defaults to arch_cmpxchg_local, if not supported. > > > Naturally I wonder: > > (a) Should these new variants be rather called > this_cpu_cmpxchg_relaxed_* ? > > (b) Should these new variants rather call the "_local" variant? > > > Shedding some light on this would be great. Nevermind, looking at the other patches I realized that this is arch-specific. Other archs that have _local variants call the _local variants. So I assume we really want the name this_cpu_cmpxchg_local_*, and using _relaxed here is just the aarch64 way of implementing _local via _relaxed. Confusing :)
On Thu, Mar 02, 2023 at 11:42:57AM +0100, David Hildenbrand wrote: > On 09.02.23 16:01, Marcelo Tosatti wrote: > > Goal is to have vmstat_shepherd to transfer from > > per-CPU counters to global counters remotely. For this, > > an atomic this_cpu_cmpxchg is necessary. > > > > Following the kernel convention for cmpxchg/cmpxchg_local, > > change ARM's this_cpu_cmpxchg_ helpers to be atomic, > > and add this_cpu_cmpxchg_local_ helpers which are not atomic. > > > > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> > > > > Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > =================================================================== > > --- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h > > +++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > @@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd) > > _pcp_protect_return(xchg_relaxed, pcp, val) > > #define this_cpu_cmpxchg_1(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_2(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_4(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_8(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > + > > +#define this_cpu_cmpxchg_local_1(pcp, o, n) \ > > _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_2(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_4(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_8(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + > > Call me confused (not necessarily your fault :) ). > > We have cmpxchg_local, cmpxchg_relaxed and cmpxchg. this_cpu_cmpxchg_local_* > now calls ... *drumroll* ... cmpxchg_relaxed. > IIUC, cmpxchg_local is only guaranteed to be atomic WRO the current CPU > (especially, protection against interrupts when the operation is implemented > using multiple instructions). We do have a generic implementation that > disables/enables interrupts. > > IIUC, cmpxchg_relaxed an atomic update without any memory ordering > guarantees (in contrast to cmpxchg, cmpxchg_acquire, cmpxchg_acquire). We > default to arch_cmpxchg if we don't have arch_cmpxchg_relaxed. arch_cmpxchg > defaults to arch_cmpxchg_local, if not supported. > > > Naturally I wonder: > > (a) Should these new variants be rather called > this_cpu_cmpxchg_relaxed_* ? No: it happens that on ARM-64 cmpxchg_local == cmpxchg_relaxed. See cf10b79a7d88edc689479af989b3a88e9adf07ff. > (b) Should these new variants rather call the "_local" variant? They probably should. But this patchset maintains the current behaviour of this_cpu_cmpxch (for this_cpu_cmpxch_local), which was: #define this_cpu_cmpxchg_1(pcp, o, n) \ - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) + _pcp_protect_return(cmpxchg, pcp, o, n) #define this_cpu_cmpxchg_2(pcp, o, n) \ - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) + _pcp_protect_return(cmpxchg, pcp, o, n) #define this_cpu_cmpxchg_4(pcp, o, n) \ - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) + _pcp_protect_return(cmpxchg, pcp, o, n) #define this_cpu_cmpxchg_8(pcp, o, n) \ + _pcp_protect_return(cmpxchg, pcp, o, n) Thanks.
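The equivalence cited above is worth spelling out: on arm64 the "_local" cmpxchg family is simply an alias for the "_relaxed" one, so backing this_cpu_cmpxchg_local_* with cmpxchg_relaxed is the same as backing it with cmpxchg_local. The lines below show, roughly and from memory, the shape of the defines in arch/arm64/include/asm/cmpxchg.h; exact macro names vary by kernel version.

/*
 * Roughly what arch/arm64/include/asm/cmpxchg.h does (exact names may
 * differ by version): the "local" form is just an alias for the
 * "relaxed" form, because a single LL/SC or LSE CAS instruction is
 * already atomic on the local CPU without extra ordering.
 */
#define arch_cmpxchg_local	arch_cmpxchg_relaxed
#define arch_cmpxchg64_local	arch_cmpxchg64_relaxed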
On Thu, Feb 09, 2023 at 12:01:52PM -0300, Marcelo Tosatti wrote: > Goal is to have vmstat_shepherd to transfer from > per-CPU counters to global counters remotely. For this, > an atomic this_cpu_cmpxchg is necessary. > > Following the kernel convention for cmpxchg/cmpxchg_local, > change ARM's this_cpu_cmpxchg_ helpers to be atomic, > and add this_cpu_cmpxchg_local_ helpers which are not atomic. I can follow on the necessity of having the _local version, however two questions below. > > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> > > Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h > =================================================================== > --- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h > +++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h > @@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd) > _pcp_protect_return(xchg_relaxed, pcp, val) > > #define this_cpu_cmpxchg_1(pcp, o, n) \ > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > + _pcp_protect_return(cmpxchg, pcp, o, n) > #define this_cpu_cmpxchg_2(pcp, o, n) \ > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > + _pcp_protect_return(cmpxchg, pcp, o, n) > #define this_cpu_cmpxchg_4(pcp, o, n) \ > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > + _pcp_protect_return(cmpxchg, pcp, o, n) > #define this_cpu_cmpxchg_8(pcp, o, n) \ > + _pcp_protect_return(cmpxchg, pcp, o, n) This makes this_cpu_cmpxchg_*() not only non-local, but also (especially for arm64) memory barrier implications since cmpxchg() has a strong memory barrier, while the old this_cpu_cmpxchg*() doesn't have, afaiu. Maybe it's not a big deal if the audience of this helper is still limited (e.g. we can add memory barriers if we don't want strict ordering implication), but just to check with you on whether it's intended, and if so whether it may worth some comments. > + > +#define this_cpu_cmpxchg_local_1(pcp, o, n) \ > _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > +#define this_cpu_cmpxchg_local_2(pcp, o, n) \ > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > +#define this_cpu_cmpxchg_local_4(pcp, o, n) \ > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > +#define this_cpu_cmpxchg_local_8(pcp, o, n) \ > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) I think cmpxchg_relaxed()==cmpxchg_local() here for aarch64, however should we still use cmpxchg_local() to pair with this_cpu_cmpxchg_local_*()? Nothing about your patch along since it was the same before, but I'm wondering whether this is a good time to switchover. The other thing is would it be good to copy arch-list for each arch patch? Maybe it'll help to extend the audience too. Thanks,
On Thu, Mar 02, 2023 at 03:53:12PM -0500, Peter Xu wrote: > On Thu, Feb 09, 2023 at 12:01:52PM -0300, Marcelo Tosatti wrote: > > Goal is to have vmstat_shepherd to transfer from > > per-CPU counters to global counters remotely. For this, > > an atomic this_cpu_cmpxchg is necessary. > > > > Following the kernel convention for cmpxchg/cmpxchg_local, > > change ARM's this_cpu_cmpxchg_ helpers to be atomic, > > and add this_cpu_cmpxchg_local_ helpers which are not atomic. > > I can follow on the necessity of having the _local version, however two > questions below. > > > > > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> > > > > Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > =================================================================== > > --- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h > > +++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > @@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd) > > _pcp_protect_return(xchg_relaxed, pcp, val) > > > > #define this_cpu_cmpxchg_1(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_2(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_4(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_8(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > This makes this_cpu_cmpxchg_*() not only non-local, but also (especially > for arm64) memory barrier implications since cmpxchg() has a strong memory > barrier, while the old this_cpu_cmpxchg*() doesn't have, afaiu. > > Maybe it's not a big deal if the audience of this helper is still limited > (e.g. we can add memory barriers if we don't want strict ordering > implication), but just to check with you on whether it's intended, and if > so whether it may worth some comments. It happens that on ARM-64 cmpxchg_local == cmpxchg_relaxed. See cf10b79a7d88edc689479af989b3a88e9adf07ff. This patchset maintains the current behaviour of this_cpu_cmpxch (for this_cpu_cmpxch_local), which was: #define this_cpu_cmpxchg_1(pcp, o, n) \ - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) + _pcp_protect_return(cmpxchg, pcp, o, n) #define this_cpu_cmpxchg_2(pcp, o, n) \ - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) + _pcp_protect_return(cmpxchg, pcp, o, n) #define this_cpu_cmpxchg_4(pcp, o, n) \ - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) + _pcp_protect_return(cmpxchg, pcp, o, n) #define this_cpu_cmpxchg_8(pcp, o, n) \ + _pcp_protect_return(cmpxchg, pcp, o, n) > > + > > +#define this_cpu_cmpxchg_local_1(pcp, o, n) \ > > _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_2(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_4(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_8(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > I think cmpxchg_relaxed()==cmpxchg_local() here for aarch64, however should > we still use cmpxchg_local() to pair with this_cpu_cmpxchg_local_*()? Since cmpxchg_local = cmpxchg_relaxed, seems like this is not necessary. > Nothing about your patch along since it was the same before, but I'm > wondering whether this is a good time to switchover. I would say that another patch is more appropriate to change this, if desired. 
> The other thing is would it be good to copy arch-list for each arch patch? > Maybe it'll help to extend the audience too. Yes, should have done that (or CC each individual maintainer). Will do on next version. Thanks.
On Thu, Mar 02, 2023 at 06:04:25PM -0300, Marcelo Tosatti wrote: > On Thu, Mar 02, 2023 at 03:53:12PM -0500, Peter Xu wrote: > > On Thu, Feb 09, 2023 at 12:01:52PM -0300, Marcelo Tosatti wrote: > > > Goal is to have vmstat_shepherd to transfer from > > > per-CPU counters to global counters remotely. For this, > > > an atomic this_cpu_cmpxchg is necessary. > > > > > > Following the kernel convention for cmpxchg/cmpxchg_local, > > > change ARM's this_cpu_cmpxchg_ helpers to be atomic, > > > and add this_cpu_cmpxchg_local_ helpers which are not atomic. > > > > I can follow on the necessity of having the _local version, however two > > questions below. > > > > > > > > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> > > > > > > Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > > =================================================================== > > > --- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h > > > +++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > > @@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd) > > > _pcp_protect_return(xchg_relaxed, pcp, val) > > > > > > #define this_cpu_cmpxchg_1(pcp, o, n) \ > > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > #define this_cpu_cmpxchg_2(pcp, o, n) \ > > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > #define this_cpu_cmpxchg_4(pcp, o, n) \ > > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > #define this_cpu_cmpxchg_8(pcp, o, n) \ > > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > > This makes this_cpu_cmpxchg_*() not only non-local, but also (especially > > for arm64) memory barrier implications since cmpxchg() has a strong memory > > barrier, while the old this_cpu_cmpxchg*() doesn't have, afaiu. > > > > Maybe it's not a big deal if the audience of this helper is still limited > > (e.g. we can add memory barriers if we don't want strict ordering > > implication), but just to check with you on whether it's intended, and if > > so whether it may worth some comments. > > It happens that on ARM-64 cmpxchg_local == cmpxchg_relaxed. > > See cf10b79a7d88edc689479af989b3a88e9adf07ff. This is more or less a comment in general, rather than for arm only. Fundamentally starting from this patch it's redefining this_cpu_cmpxchg(). What I meant is whether we should define it properly then implement the arch patches with what is defined. We're adding non-local semantics into it, which is obvious to me. We're (silently, in this patch for aarch64) adding memory barrier semantics too, this is not obvious to me on whether all archs should implement this api the same way. It will make a difference IMHO when the helpers are used in any other code clips, because IIUC proper definition of memory barrier implications will decide whether the callers need explicit barriers when ordering is required. 
> > This patchset maintains the current behaviour > of this_cpu_cmpxch (for this_cpu_cmpxch_local), which was: > > #define this_cpu_cmpxchg_1(pcp, o, n) \ > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > + _pcp_protect_return(cmpxchg, pcp, o, n) > #define this_cpu_cmpxchg_2(pcp, o, n) \ > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > + _pcp_protect_return(cmpxchg, pcp, o, n) > #define this_cpu_cmpxchg_4(pcp, o, n) \ > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > + _pcp_protect_return(cmpxchg, pcp, o, n) > #define this_cpu_cmpxchg_8(pcp, o, n) \ > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > > + > > > +#define this_cpu_cmpxchg_local_1(pcp, o, n) \ > > > _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > +#define this_cpu_cmpxchg_local_2(pcp, o, n) \ > > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > +#define this_cpu_cmpxchg_local_4(pcp, o, n) \ > > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > +#define this_cpu_cmpxchg_local_8(pcp, o, n) \ > > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > > I think cmpxchg_relaxed()==cmpxchg_local() here for aarch64, however should > > we still use cmpxchg_local() to pair with this_cpu_cmpxchg_local_*()? > > Since cmpxchg_local = cmpxchg_relaxed, seems like this is not necessary. > > > Nothing about your patch along since it was the same before, but I'm > > wondering whether this is a good time to switchover. > > I would say that another patch is more appropriate to change this, > if desired. Sure on this one. Thanks,
On Thu, Mar 02, 2023 at 04:25:08PM -0500, Peter Xu wrote: > On Thu, Mar 02, 2023 at 06:04:25PM -0300, Marcelo Tosatti wrote: > > On Thu, Mar 02, 2023 at 03:53:12PM -0500, Peter Xu wrote: > > > On Thu, Feb 09, 2023 at 12:01:52PM -0300, Marcelo Tosatti wrote: > > > > Goal is to have vmstat_shepherd to transfer from > > > > per-CPU counters to global counters remotely. For this, > > > > an atomic this_cpu_cmpxchg is necessary. > > > > > > > > Following the kernel convention for cmpxchg/cmpxchg_local, > > > > change ARM's this_cpu_cmpxchg_ helpers to be atomic, > > > > and add this_cpu_cmpxchg_local_ helpers which are not atomic. > > > > > > I can follow on the necessity of having the _local version, however two > > > questions below. > > > > > > > > > > > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> > > > > > > > > Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > > > =================================================================== > > > > --- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h > > > > +++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > > > @@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd) > > > > _pcp_protect_return(xchg_relaxed, pcp, val) > > > > > > > > #define this_cpu_cmpxchg_1(pcp, o, n) \ > > > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > > #define this_cpu_cmpxchg_2(pcp, o, n) \ > > > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > > #define this_cpu_cmpxchg_4(pcp, o, n) \ > > > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > > #define this_cpu_cmpxchg_8(pcp, o, n) \ > > > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > > > > This makes this_cpu_cmpxchg_*() not only non-local, but also (especially > > > for arm64) memory barrier implications since cmpxchg() has a strong memory > > > barrier, while the old this_cpu_cmpxchg*() doesn't have, afaiu. > > > > > > Maybe it's not a big deal if the audience of this helper is still limited > > > (e.g. we can add memory barriers if we don't want strict ordering > > > implication), but just to check with you on whether it's intended, and if > > > so whether it may worth some comments. > > > > It happens that on ARM-64 cmpxchg_local == cmpxchg_relaxed. > > > > See cf10b79a7d88edc689479af989b3a88e9adf07ff. > > This is more or less a comment in general, rather than for arm only. > > Fundamentally starting from this patch it's redefining this_cpu_cmpxchg(). > What I meant is whether we should define it properly then implement the > arch patches with what is defined. > > We're adding non-local semantics into it, which is obvious to me. Which match the cmpxchg() function semantics. > We're (silently, in this patch for aarch64) adding memory barrier semantics > too, this is not obvious to me on whether all archs should implement this > api the same way. Documentation/atomic_t.txt says that _relaxed means "no barriers". So i'd assume: cmpxchg_relaxed: no additional barriers cmpxchg_local: only guarantees atomicity to wrt local CPU. cmpxchg: atomic in SMP context. https://lore.kernel.org/linux-arm-kernel/20180505103550.s7xsnto7tgppkmle@gmail.com/#r There seems to be a lack of clarity in documentation. 
> It will make a difference IMHO when the helpers are used in any other code > clips, because IIUC proper definition of memory barrier implications will > decide whether the callers need explicit barriers when ordering is required. Trying to limit the scope of changes to solve the problem at hand. More specifically what this patch does is: 1) Add this_cpu_cmpxchg_local, uses arch cmpxchg_local implementation to back it. 2) Add this_cpu_cmpxchg, uses arch cmpxchg implementation to back it. Note that now becomes consistent with cmpxchg and cmpxchg_local semantics. > > This patchset maintains the current behaviour > > of this_cpu_cmpxch (for this_cpu_cmpxch_local), which was: > > > > #define this_cpu_cmpxchg_1(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_2(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_4(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_8(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > > > > > + > > > > +#define this_cpu_cmpxchg_local_1(pcp, o, n) \ > > > > _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > > +#define this_cpu_cmpxchg_local_2(pcp, o, n) \ > > > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > > +#define this_cpu_cmpxchg_local_4(pcp, o, n) \ > > > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > > +#define this_cpu_cmpxchg_local_8(pcp, o, n) \ > > > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > > > > > I think cmpxchg_relaxed()==cmpxchg_local() here for aarch64, however should > > > we still use cmpxchg_local() to pair with this_cpu_cmpxchg_local_*()? > > > > Since cmpxchg_local = cmpxchg_relaxed, seems like this is not necessary. > > > > > Nothing about your patch along since it was the same before, but I'm > > > wondering whether this is a good time to switchover. > > > > I would say that another patch is more appropriate to change this, > > if desired. > > Sure on this one. Thanks, > > -- > Peter Xu > >
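To illustrate the ordering point being debated (purely illustrative, not code from the series; all names are hypothetical): with the full cmpxchg()-backed this_cpu_cmpxchg(), a successful update is fully ordered, so a remote observer that reads the counter with acquire semantics and sees the new value can also rely on seeing stores the owning CPU made before the update. With the relaxed/_local variant no such guarantee exists.

static DEFINE_PER_CPU(long, seq);	/* hypothetical counter */
static long payload;			/* hypothetical data published with it */

/* Owning CPU: publish @val, then bump the per-CPU counter atomically. */
static void publish(long val)
{
	long old;

	WRITE_ONCE(payload, val);
	do {
		old = this_cpu_read(seq);
		/* Full cmpxchg(): orders the payload store before the bump. */
	} while (this_cpu_cmpxchg(seq, old, old + 1) != old);
}

/* Remote CPU: an acquire read of the counter pairs with the full cmpxchg. */
static long observe(int cpu)
{
	if (smp_load_acquire(per_cpu_ptr(&seq, cpu)) > 0)
		return READ_ONCE(payload);	/* sees publish()'s store */
	return -1;
}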
On Thu, Mar 02, 2023 at 03:53:12PM -0500, Peter Xu wrote: > On Thu, Feb 09, 2023 at 12:01:52PM -0300, Marcelo Tosatti wrote: > > Goal is to have vmstat_shepherd to transfer from > > per-CPU counters to global counters remotely. For this, > > an atomic this_cpu_cmpxchg is necessary. > > > > Following the kernel convention for cmpxchg/cmpxchg_local, > > change ARM's this_cpu_cmpxchg_ helpers to be atomic, > > and add this_cpu_cmpxchg_local_ helpers which are not atomic. > > I can follow on the necessity of having the _local version, however two > questions below. > > > > > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> > > > > Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > =================================================================== > > --- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h > > +++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h > > @@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd) > > _pcp_protect_return(xchg_relaxed, pcp, val) > > > > #define this_cpu_cmpxchg_1(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_2(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_4(pcp, o, n) \ > > - _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > #define this_cpu_cmpxchg_8(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg, pcp, o, n) > > This makes this_cpu_cmpxchg_*() not only non-local, but also (especially > for arm64) memory barrier implications since cmpxchg() has a strong memory > barrier, while the old this_cpu_cmpxchg*() doesn't have, afaiu. A later patch changes users of this_cpu_cmpxchg to this_cpu_cmpxchg_local, which maintains behaviour. > Maybe it's not a big deal if the audience of this helper is still limited > (e.g. we can add memory barriers if we don't want strict ordering > implication), but just to check with you on whether it's intended, and if > so whether it may worth some comments. > > > + > > +#define this_cpu_cmpxchg_local_1(pcp, o, n) \ > > _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_2(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_4(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > +#define this_cpu_cmpxchg_local_8(pcp, o, n) \ > > + _pcp_protect_return(cmpxchg_relaxed, pcp, o, n) > > I think cmpxchg_relaxed()==cmpxchg_local() here for aarch64, however should > we still use cmpxchg_local() to pair with this_cpu_cmpxchg_local_*()? > > Nothing about your patch along since it was the same before, but I'm > wondering whether this is a good time to switchover. > > The other thing is would it be good to copy arch-list for each arch patch? > Maybe it'll help to extend the audience too. > > Thanks, > > -- > Peter Xu > >
On Thu, 9 Feb 2023, Marcelo Tosatti wrote: > Goal is to have vmstat_shepherd to transfer from > per-CPU counters to global counters remotely. For this, > an atomic this_cpu_cmpxchg is necessary. The definition for this_cpu_functionality is that it is *not* incurring atomic overhead and it was introduced to *avoid* the overhead of atomic operations. This sabotages this_cpu functionality,
On Thu, Mar 16, 2023 at 12:56:20AM +0100, Christoph Lameter wrote: > On Thu, 9 Feb 2023, Marcelo Tosatti wrote: > > > Goal is to have vmstat_shepherd to transfer from > > per-CPU counters to global counters remotely. For this, > > an atomic this_cpu_cmpxchg is necessary. > > The definition for this_cpu_functionality is that it is *not* incurring > atomic overhead and it was introduced to *avoid* the overhead of atomic > operations. > > This sabotages this_cpu functionality, Christoph, Two points: 1) If you look at patch 6, users of this_cpu_cmpxchg are converted to this_cpu_cmpxchg_local (except per-CPU vmstat counters). Its up to the user of the interface, depending on its requirements, to decide whether or not atomic operations are necessary (atomic with reference to other processors). this_cpu_cmpxchg still has the benefits of use of segment registers: :Author: Christoph Lameter, August 4th, 2014 :Author: Pranith Kumar, Aug 2nd, 2014 this_cpu operations are a way of optimizing access to per cpu variables associated with the *currently* executing processor. This is done through the use of segment registers (or a dedicated register where the cpu permanently stored the beginning of the per cpu area for a specific processor). this_cpu operations add a per cpu variable offset to the processor specific per cpu base and encode that operation in the instruction operating on the per cpu variable. This means that there are no atomicity issues between the calculation of the offset and the operation on the data. Therefore it is not necessary to disable preemption or interrupts to ensure that the processor is not changed between the calculation of the address and the operation on the data. 2) The performance results seem to indicate that cache locking is effective on modern processors (on this particular case and others as well): 4b23a68f953628eb4e4b7fe1294ebf93d4b8ceee mm/page_alloc: protect PCP lists with a spinlock As preparation for dealing with both of those problems, protect the lists with a spinlock. The IRQ-unsafe version of the lock is used because IRQs are already disabled by local_lock_irqsave. spin_trylock is used in combination with local_lock_irqsave() but later will be replaced with a spin_trylock_irqsave when the local_lock is removed. The per_cpu_pages still fits within the same number of cache lines after this patch relative to before the series. struct per_cpu_pages { spinlock_t lock; /* 0 4 */ int count; /* 4 4 */ int high; /* 8 4 */ int batch; /* 12 4 */ short int free_factor; /* 16 2 */ short int expire; /* 18 2 */ /* XXX 4 bytes hole, try to pack */ struct list_head lists[13]; /* 24 208 */ /* size: 256, cachelines: 4, members: 7 */ /* sum members: 228, holes: 1, sum holes: 4 */ /* padding: 24 */ } __attribute__((__aligned__(64))); There is overhead in the fast path due to acquiring the spinlock even though the spinlock is per-cpu and uncontended in the common case. Page Fault Test (PFT) running on a 1-socket reported the following results on a 1 socket machine. 
                             5.19.0-rc3               5.19.0-rc3
                                vanilla      mm-pcpspinirq-v5r16
Hmean     faults/sec-1    869275.7381 (   0.00%)   874597.5167 *   0.61%*
Hmean     faults/sec-3   2370266.6681 (   0.00%)  2379802.0362 *   0.40%*
Hmean     faults/sec-5   2701099.7019 (   0.00%)  2664889.7003 *  -1.34%*
Hmean     faults/sec-7   3517170.9157 (   0.00%)  3491122.8242 *  -0.74%*
Hmean     faults/sec-8   3965729.6187 (   0.00%)  3939727.0243 *  -0.66%*

And for this case:

To test the performance difference, a page allocator microbenchmark:
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/bench/page_bench01.c
with loops=1000000 was used, on Intel Core i7-11850H @ 2.50GHz.

For the single_page_alloc_free test, which does

        /** Loop to measure **/
        for (i = 0; i < rec->loops; i++) {
                my_page = alloc_page(gfp_mask);
                if (unlikely(my_page == NULL))
                        return 0;
                __free_page(my_page);
        }

Unit is cycles.

        Vanilla         Patched         Diff
        115.25          117             1.4%

(to be honest, the results are in the noise as well, during the tests the
"LOCK cmpxchg" shows no significant difference to the "cmpxchg" version
for the page allocator benchmark).
Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h
===================================================================
--- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h
+++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h
@@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd)
 	_pcp_protect_return(xchg_relaxed, pcp, val)
 
 #define this_cpu_cmpxchg_1(pcp, o, n)	\
-	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+	_pcp_protect_return(cmpxchg, pcp, o, n)
 #define this_cpu_cmpxchg_2(pcp, o, n)	\
-	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+	_pcp_protect_return(cmpxchg, pcp, o, n)
 #define this_cpu_cmpxchg_4(pcp, o, n)	\
-	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+	_pcp_protect_return(cmpxchg, pcp, o, n)
 #define this_cpu_cmpxchg_8(pcp, o, n)	\
+	_pcp_protect_return(cmpxchg, pcp, o, n)
+
+#define this_cpu_cmpxchg_local_1(pcp, o, n)	\
 	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_local_2(pcp, o, n)	\
+	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_local_4(pcp, o, n)	\
+	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_local_8(pcp, o, n)	\
+	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+
 #ifdef __KVM_NVHE_HYPERVISOR__
 extern unsigned long __hyp_per_cpu_offset(unsigned int cpu);
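Finally, a hypothetical caller-side illustration (not from this patch) of how the two flavours would be chosen after this change: code that only needs atomicity against interrupts and preemption on the owning CPU keeps using the cheap _local form, while code whose per-CPU data may also be modified by a remote CPU (such as the vmstat folding described above) uses the full form.

static DEFINE_PER_CPU(long, stat);	/* hypothetical per-CPU statistic */

/* Only ever raced against by interrupts on this CPU: _local is enough. */
static void stat_add_local(long delta)
{
	long old;

	do {
		old = this_cpu_read(stat);
	} while (this_cpu_cmpxchg_local(stat, old, old + delta) != old);
}

/* May also be cleared by a remote shepherd: needs the atomic form. */
static void stat_add(long delta)
{
	long old;

	do {
		old = this_cpu_read(stat);
	} while (this_cpu_cmpxchg(stat, old, old + delta) != old);
}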