Message ID | 20231023112452.6290-1-wuqiang.matt@bytedance.com |
---|---|
State | New |
Headers |
From: "wuqiang.matt" <wuqiang.matt@bytedance.com>
To: linux-trace-kernel@vger.kernel.org, mhiramat@kernel.org, davem@davemloft.net,
    anil.s.keshavamurthy@intel.com, naveen.n.rao@linux.ibm.com, rostedt@goodmis.org,
    peterz@infradead.org, akpm@linux-foundation.org, sander@svanheule.net,
    ebiggers@google.com, dan.j.williams@intel.com, jpoimboe@kernel.org
Cc: linux-kernel@vger.kernel.org, lkp@intel.com, mattwu@163.com,
    "wuqiang.matt" <wuqiang.matt@bytedance.com>
Subject: [PATCH v1] lib,kprobes: using try_cmpxchg_local in objpool_push
Date: Mon, 23 Oct 2023 19:24:52 +0800
Message-Id: <20231023112452.6290-1-wuqiang.matt@bytedance.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
Series | [v1] lib,kprobes: using try_cmpxchg_local in objpool_push |
Commit Message
wuqiang.matt
Oct. 23, 2023, 11:24 a.m. UTC
objpool_push() can only happen on the local CPU, so only the local
CPU can touch slot->tail and slot->last, which ensures the correctness
of using cmpxchg without the lock prefix (try_cmpxchg_local instead
of try_cmpxchg_acquire).

Testing with IACA found that the lock version of a pop/push pair costs 16.46
cycles and the local-push version costs 15.63 cycles. Kretprobe throughput
improves to 1.019 times that of the lock version on x86_64 systems.
OS: Debian 10 X86_64, Linux 6.6rc6 with freelist
HW: XEON 8336C x 2, 64 cores/128 threads, DDR4 3200MT/s
            1T          2T          4T          8T         16T
lock:   29909085    59865637   119692073   239750369   478005250
local:  30297523    60532376   121147338   242598499   484620355

           32T         48T         64T         96T        128T
lock:   957553042  1435814086  1680872925  2043126796  2165424198
local:  968526317  1454991286  1861053557  2059530343  2171732306
Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
---
lib/objpool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
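The correctness argument hinges on the push side being strictly per-CPU: with interrupts disabled, nothing else can update slot->tail concurrently, so the tail reservation needs neither the lock prefix nor acquire ordering. As a rough illustration of that reserve-then-store pattern, here is a user-space analogue built on C11 atomics (a sketch only, not the kernel code: the struct layout and names are simplified, and a ring genuinely shared between producers would need stronger ordering than the relaxed operations used here):

	/* User-space sketch of the push-side retry loop: reserve the next
	 * tail index with a compare-and-swap, then store the object into
	 * the ring.  Simplified analogue of what objpool_push() does. */
	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	#define NR_OBJS 4			/* ring capacity, power of two */

	struct slot {
		_Atomic uint32_t head;		/* consumer index */
		_Atomic uint32_t tail;		/* producer index */
		uint32_t mask;			/* NR_OBJS - 1 */
		void *entries[NR_OBJS];
	};

	static int push(struct slot *s, void *obj)
	{
		uint32_t tail = atomic_load_explicit(&s->tail, memory_order_relaxed);
		uint32_t head;

		do {
			head = atomic_load_explicit(&s->head, memory_order_relaxed);
			if (tail - head > NR_OBJS)	/* overflow: must never happen */
				return -1;
			/* on failure the CAS reloads the current tail into "tail" */
		} while (!atomic_compare_exchange_weak_explicit(&s->tail, &tail, tail + 1,
								memory_order_relaxed,
								memory_order_relaxed));

		/* index "tail" is now reserved for this object */
		s->entries[tail & s->mask] = obj;
		return 0;
	}

	int main(void)
	{
		struct slot s = { .mask = NR_OBJS - 1 };
		int v = 42;

		if (push(&s, &v) == 0)
			printf("pushed, tail is now %u\n", (unsigned)atomic_load(&s.tail));
		return 0;
	}

The kernel version additionally wraps the loop in raw_local_irq_save()/raw_local_irq_restore() and publishes the object with smp_store_release(&slot->last, tail + 1), so that pop() only ever observes fully written entries.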
Comments
On Mon, 23 Oct 2023 19:24:52 +0800 "wuqiang.matt" <wuqiang.matt@bytedance.com> wrote:

> @@ -166,7 +166,7 @@ objpool_try_add_slot(void *obj, struct objpool_head *pool, int cpu)
>  		head = READ_ONCE(slot->head);
>  		/* fault caught: something must be wrong */
>  		WARN_ON_ONCE(tail - head > pool->nr_objs);
> -	} while (!try_cmpxchg_acquire(&slot->tail, &tail, tail + 1));
> +	} while (!try_cmpxchg_local(&slot->tail, &tail, tail + 1));
>
>  	/* now the tail position is reserved for the given obj */
>  	WRITE_ONCE(slot->entries[tail & slot->mask], obj);

I'm good with the change, but I don't like how "cpu" is passed to this
function. It currently is only used in one location, which does:

	rc = objpool_try_add_slot(obj, pool, raw_smp_processor_id());

Which makes this change fine. But there's nothing here to prevent someone
for some reason passing another CPU to that function.

If we are to make that change, I would be much more comfortable with
removing "int cpu" as a parameter to objpool_try_add_slot() and adding:

	int cpu = raw_smp_processor_id();

Which now shows that this function *only* deals with the current CPU.

-- Steve
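A minimal sketch of the signature change Steve is suggesting (an illustration only, not a patch posted in this thread; the body is abridged from the push path shown later in the discussion, and interrupt disabling is assumed to stay in the caller):

	/* Sketch: objpool_try_add_slot() without the "cpu" argument, so it
	 * can only ever operate on the CPU it is running on (illustrative,
	 * untested). */
	static inline int objpool_try_add_slot(void *obj, struct objpool_head *pool)
	{
		int cpu = raw_smp_processor_id();	/* always the current CPU */
		struct objpool_slot *slot = pool->cpu_slots[cpu];
		uint32_t head, tail;

		tail = READ_ONCE(slot->tail);
		do {
			head = READ_ONCE(slot->head);
			/* fault caught: something must be wrong */
			WARN_ON_ONCE(tail - head > pool->nr_objs);
		} while (!try_cmpxchg_local(&slot->tail, &tail, tail + 1));

		/* now the tail position is reserved for the given obj */
		WRITE_ONCE(slot->entries[tail & slot->mask], obj);
		/* update sequence to make this obj available for pop() */
		smp_store_release(&slot->last, tail + 1);

		return 0;
	}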
On Mon, 23 Oct 2023 19:24:52 +0800 "wuqiang.matt" <wuqiang.matt@bytedance.com> wrote:

> The objpool_push can only happen on local cpu node, so only the local
> cpu can touch slot->tail and slot->last, which ensures the correctness
> of using cmpxchg without lock prefix (using try_cmpxchg_local instead
> of try_cmpxchg_acquire).

Yeah, slot->tail is only used on the local CPU. This looks good to me.

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thanks!
On Mon, 23 Oct 2023 11:43:04 -0400 Steven Rostedt <rostedt@goodmis.org> wrote:

> I'm good with the change, but I don't like how "cpu" is passed to this
> function. It currently is only used in one location, which does:
>
> 	rc = objpool_try_add_slot(obj, pool, raw_smp_processor_id());
>
> Which makes this change fine. But there's nothing here to prevent someone
> for some reason passing another CPU to that function.
>
> If we are to make that change, I would be much more comfortable with
> removing "int cpu" as a parameter to objpool_try_add_slot() and adding:
>
> 	int cpu = raw_smp_processor_id();
>
> Which now shows that this function *only* deals with the current CPU.

Oh indeed. It used to search all CPUs to push the object, but
I asked him to stop that because there should be enough space to
push it in the local ring. This is a remnant of that time.

Wuqiang, can you make another patch to fix it?

Thank you,

> -- Steve
On 2023/10/24 09:01, Masami Hiramatsu (Google) wrote:

> Oh indeed. It used to search all CPUs to push the object, but
> I asked him to stop that because there should be enough space to
> push it in the local ring. This is a remnant of that time.

Yes, good catch. Thanks for the explanation.

> Wuqiang, can you make another patch to fix it?

I'm thinking of removing the inline function objpool_try_add_slot and merging
its functionality to objpool_push, like the followings:

/* reclaim an object to object pool */
int objpool_push(void *obj, struct objpool_head *pool)
{
	struct objpool_slot *slot;
	uint32_t head, tail;
	unsigned long flags;

	/* disable local irq to avoid preemption & interruption */
	raw_local_irq_save(flags);

	slot = pool->cpu_slots[raw_smp_processor_id()];

	/* loading tail and head as a local snapshot, tail first */
	tail = READ_ONCE(slot->tail);

	do {
		head = READ_ONCE(slot->head);
		/* fault caught: something must be wrong */
		WARN_ON_ONCE(tail - head > pool->nr_objs);
	} while (!try_cmpxchg_local(&slot->tail, &tail, tail + 1));

	/* now the tail position is reserved for the given obj */
	WRITE_ONCE(slot->entries[tail & slot->mask], obj);
	/* update sequence to make this obj available for pop() */
	smp_store_release(&slot->last, tail + 1);

	raw_local_irq_restore(flags);

	return 0;
}

I'll prepare a new patch for this improvement.

> Thank you,

Thanks for your time,
wuqiang
On Tue, 24 Oct 2023 09:57:17 +0800 "wuqiang.matt" <wuqiang.matt@bytedance.com> wrote:

> On 2023/10/24 09:01, Masami Hiramatsu (Google) wrote:
>> Wuqiang, can you make another patch to fix it?
>
> I'm thinking of removing the inline function objpool_try_add_slot and merging
> its functionality to objpool_push, like the followings:

Looks good.

> I'll prepare a new patch for this improvement.

Thanks!

> Thanks for your time,
> wuqiang
On Mon, Oct 23, 2023 at 07:24:52PM +0800, wuqiang.matt wrote:

> The objpool_push can only happen on local cpu node, so only the local
> cpu can touch slot->tail and slot->last, which ensures the correctness
> of using cmpxchg without lock prefix (using try_cmpxchg_local instead
> of try_cmpxchg_acquire).
>
> Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>

This patch results in

lib/objpool.c:169:12: error: implicit declaration of function 'arch_cmpxchg_local' is invalid in C99

or

lib/objpool.c: In function 'objpool_try_add_slot':
include/linux/atomic/atomic-arch-fallback.h:384:27: error: implicit declaration of function 'arch_cmpxchg_local'

for various architectures (I have seen it with arc, hexagon, and openrisc
so far).

As usual, my apologies for the noise if this has already been reported
and/or fixed.

Guenter
On 2023/10/30 01:05, Guenter Roeck wrote:

> This patch results in
>
> lib/objpool.c:169:12: error: implicit declaration of function 'arch_cmpxchg_local' is invalid in C99
>
> or
>
> lib/objpool.c: In function 'objpool_try_add_slot':
> include/linux/atomic/atomic-arch-fallback.h:384:27: error: implicit declaration of function 'arch_cmpxchg_local'

This patch was already reverted from probes/for-next by Masami Hiramatsu.
Then we will rework it after the arch_cmpxchg_local issue is resolved.

> for various architectures (I have seen it with arc, hexagon, and openrisc
> so far).
>
> As usual, my apologies for the noise if this has already been reported
> and/or fixed.

We are working on it and the fix is in discussion.

> Guenter

Regards,
wuqiang
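For context on why those builds break: the generic fallback behind try_cmpxchg_local() is written in terms of arch_cmpxchg_local(), so an architecture that never provides arch_cmpxchg_local() fails to compile as soon as lib/objpool.c starts using the local variant. The fallback has roughly this shape (an approximation of the pattern in include/linux/atomic/atomic-arch-fallback.h, not the exact kernel text):

	/* Approximate shape of the generic fallback (illustrative, not
	 * verbatim): when an architecture defines no arch_try_cmpxchg_local(),
	 * one is generated from arch_cmpxchg_local() -- which the error
	 * messages above show is also missing on arc, hexagon and openrisc. */
	#ifndef arch_try_cmpxchg_local
	#define arch_try_cmpxchg_local(_ptr, _oldp, _new)			\
	({									\
		typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r;		\
		___r = arch_cmpxchg_local((_ptr), ___o, (_new));		\
		if (unlikely(___r != ___o))					\
			*___op = ___r;						\
		likely(___r == ___o);						\
	})
	#endif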
diff --git a/lib/objpool.c b/lib/objpool.c
index ce0087f64400..a032701beccb 100644
--- a/lib/objpool.c
+++ b/lib/objpool.c
@@ -166,7 +166,7 @@ objpool_try_add_slot(void *obj, struct objpool_head *pool, int cpu)
 		head = READ_ONCE(slot->head);
 		/* fault caught: something must be wrong */
 		WARN_ON_ONCE(tail - head > pool->nr_objs);
-	} while (!try_cmpxchg_acquire(&slot->tail, &tail, tail + 1));
+	} while (!try_cmpxchg_local(&slot->tail, &tail, tail + 1));
 
 	/* now the tail position is reserved for the given obj */
 	WRITE_ONCE(slot->entries[tail & slot->mask], obj);