Message ID | 20221108142207.4079521-1-xukuohai@huaweicloud.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org>
From: Xu Kuohai <xukuohai@huaweicloud.com>
To: bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>, Daniel Borkmann <daniel@iogearbox.net>, Andrii Nakryiko <andrii@kernel.org>, Martin KaFai Lau <martin.lau@linux.dev>, Song Liu <song@kernel.org>, Yonghong Song <yhs@fb.com>, John Fastabend <john.fastabend@gmail.com>, KP Singh <kpsingh@kernel.org>, Stanislav Fomichev <sdf@google.com>, Hao Luo <haoluo@google.com>, Jiri Olsa <jolsa@kernel.org>
Subject: [PATCH bpf-next v2] bpf: Initialize same number of free nodes for each pcpu_freelist
Date: Tue, 8 Nov 2022 09:22:07 -0500
Message-Id: <20221108142207.4079521-1-xukuohai@huaweicloud.com>
X-Mailer: git-send-email 2.30.2
Series | [bpf-next,v2] bpf: Initialize same number of free nodes for each pcpu_freelist |
Commit Message
Xu Kuohai
Nov. 8, 2022, 2:22 p.m. UTC
From: Xu Kuohai <xukuohai@huawei.com>

pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
free nodes for some CPUs, then possibly one CPU with fewer nodes,
followed by the remaining CPUs with 0 nodes. For example, when
nr_elems == 256 and num_possible_cpus() == 32, if CPU 0 is the current
CPU, CPUs 0~27 each get 9 free nodes, CPU 28 gets 4 free nodes, and
CPUs 29~31 get 0 free nodes, while in fact each CPU should get 8 nodes
equally.

This patch first initializes nr_elems / num_possible_cpus() free nodes
for each CPU, then hands out the remaining free nodes one per CPU until
none are left.

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
v2: Update commit message and add Yonghong's ack
---
 kernel/bpf/percpu_freelist.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
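The counts in the example above can be sanity-checked with a small userspace model of the two schemes (plain C, independent of the kernel; `old_split` and `new_split` are illustrative names, not kernel functions):

```c
#include <assert.h>
#include <stdio.h>

/* Model of the pre-patch distribution: each CPU takes
 * nr_elems / ncpus + 1 nodes until the pool runs dry. */
void old_split(int nr_elems, int ncpus, int out[])
{
	int per = nr_elems / ncpus + 1;

	for (int cpu = 0; cpu < ncpus; cpu++) {
		int take = nr_elems < per ? nr_elems : per;

		out[cpu] = take;
		nr_elems -= take;
	}
}

/* Model of the patched distribution: nr_elems / ncpus nodes each,
 * with the remainder spread one node per CPU. */
void new_split(int nr_elems, int ncpus, int out[])
{
	int per = nr_elems / ncpus;
	int rem = nr_elems % ncpus;

	for (int cpu = 0; cpu < ncpus; cpu++)
		out[cpu] = per + (cpu < rem ? 1 : 0);
}
```

For nr_elems == 256 and 32 CPUs, `old_split` reproduces the skew from the commit message (9, 9, ..., 9, 4, 0, 0, 0) while `new_split` gives every CPU 8 nodes.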
Comments
On Tue, Nov 8, 2022 at 6:05 AM Xu Kuohai <xukuohai@huaweicloud.com> wrote:
>
> From: Xu Kuohai <xukuohai@huawei.com>
>
> pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
> free nodes for some CPUs, and then possibly one CPU with fewer nodes,
> followed by remaining cpus with 0 nodes. For example, when nr_elems == 256
> and num_possible_cpus() == 32, if CPU 0 is the current cpu, CPU 0~27
> each gets 9 free nodes, CPU 28 gets 4 free nodes, CPU 29~31 get 0 free
> nodes, while in fact each CPU should get 8 nodes equally.
>
> This patch initializes nr_elems / num_possible_cpus() free nodes for each
> CPU firstly, then allocates the remaining free nodes by one for each CPU
> until no free nodes left.
>
> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
> Acked-by: Yonghong Song <yhs@fb.com>
> ---
> v2: Update commit message and add Yonghong's ack
> ---
>  kernel/bpf/percpu_freelist.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
> index b6e7f5c5b9ab..89e84f7381cc 100644
> --- a/kernel/bpf/percpu_freelist.c
> +++ b/kernel/bpf/percpu_freelist.c
> @@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>  			    u32 nr_elems)
>  {
>  	struct pcpu_freelist_head *head;
> -	int i, cpu, pcpu_entries;
> +	int i, cpu, pcpu_entries, remain_entries;
> +
> +	pcpu_entries = nr_elems / num_possible_cpus();
> +	remain_entries = nr_elems % num_possible_cpus();
>
> -	pcpu_entries = nr_elems / num_possible_cpus() + 1;
>  	i = 0;
>
>  	for_each_possible_cpu(cpu) {
> +		int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
>  again:
>  		head = per_cpu_ptr(s->freelist, cpu);
>  		/* No locking required as this is not visible yet. */
> @@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>  		buf += elem_size;
>  		if (i == nr_elems)
>  			break;
> -		if (i % pcpu_entries)
> +		if (i < j)
>  			goto again;
>  	}

this loop's logic is quite hard to follow, if we are fixing it, can we
simplify it maybe? something like:

int cpu, cpu_idx, i, j, n, m;

n = nr_elems / num_possible_cpus();
m = nr_elems % num_possible_cpus();

for_each_possible_cpu(cpu) {
    i = n + (cpu_idx < m ? 1 : 0);
    for (j = 0; j < i; j++) {
        head = per_cpu_ptr(s->freelist, cpu);
        pcpu_freelist_push_node(head, buf);
        buf += elem_size;
    }
    cpu_idx++;
}

no gotos, no extra ifs: for each cpu we determine correct number of
elements to allocate, then just allocate them in a straightforward
loop

> }
> --
> 2.30.2
>
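Rendered as a self-contained userspace sketch, the suggested loop distributes the remainder one node per CPU. Here `counts[]` stands in for the per-CPU freelists, `NCPUS` for num_possible_cpus(), and `cpu_idx` is explicitly zero-initialized (which the snippet above leaves implicit):

```c
#include <assert.h>

#define NCPUS 32	/* stand-in for num_possible_cpus() */

/* Userspace model of the suggested loop: n nodes for every CPU,
 * plus one extra node for each of the first m CPUs. */
void populate(int nr_elems, int counts[NCPUS])
{
	int cpu_idx = 0;	/* left implicit in the mailing-list sketch */
	int n = nr_elems / NCPUS;
	int m = nr_elems % NCPUS;

	for (int cpu = 0; cpu < NCPUS; cpu++) {
		int take = n + (cpu_idx < m ? 1 : 0);

		for (int j = 0; j < take; j++)
			counts[cpu]++;	/* stands in for pcpu_freelist_push_node() */
		cpu_idx++;
	}
}
```

With nr_elems == 256 every CPU ends up with 8 nodes; with nr_elems == 100 the first 4 CPUs get 4 nodes and the rest get 3, so no CPU differs from another by more than one node.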
On 11/10/2022 7:56 AM, Andrii Nakryiko wrote:
> On Tue, Nov 8, 2022 at 6:05 AM Xu Kuohai <xukuohai@huaweicloud.com> wrote:
>>
>> From: Xu Kuohai <xukuohai@huawei.com>
>>
>> pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
>> free nodes for some CPUs, and then possibly one CPU with fewer nodes,
>> followed by remaining cpus with 0 nodes. For example, when nr_elems == 256
>> and num_possible_cpus() == 32, if CPU 0 is the current cpu, CPU 0~27
>> each gets 9 free nodes, CPU 28 gets 4 free nodes, CPU 29~31 get 0 free
>> nodes, while in fact each CPU should get 8 nodes equally.
>>
>> This patch initializes nr_elems / num_possible_cpus() free nodes for each
>> CPU firstly, then allocates the remaining free nodes by one for each CPU
>> until no free nodes left.
>>
>> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
>> Acked-by: Yonghong Song <yhs@fb.com>
>> ---
>> v2: Update commit message and add Yonghong's ack
>> ---
>>  kernel/bpf/percpu_freelist.c | 9 ++++++---
>>  1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
>> index b6e7f5c5b9ab..89e84f7381cc 100644
>> --- a/kernel/bpf/percpu_freelist.c
>> +++ b/kernel/bpf/percpu_freelist.c
>> @@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>>  			    u32 nr_elems)
>>  {
>>  	struct pcpu_freelist_head *head;
>> -	int i, cpu, pcpu_entries;
>> +	int i, cpu, pcpu_entries, remain_entries;
>> +
>> +	pcpu_entries = nr_elems / num_possible_cpus();
>> +	remain_entries = nr_elems % num_possible_cpus();
>>
>> -	pcpu_entries = nr_elems / num_possible_cpus() + 1;
>>  	i = 0;
>>
>>  	for_each_possible_cpu(cpu) {
>> +		int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
>>  again:
>>  		head = per_cpu_ptr(s->freelist, cpu);
>>  		/* No locking required as this is not visible yet. */
>> @@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>>  		buf += elem_size;
>>  		if (i == nr_elems)
>>  			break;
>> -		if (i % pcpu_entries)
>> +		if (i < j)
>>  			goto again;
>>  	}
>
> this loop's logic is quite hard to follow, if we are fixing it, can we
> simplify it maybe? something like:
>
> int cpu, cpu_idx, i, j, n, m;
>
> n = nr_elems / num_possible_cpus();
> m = nr_elems % num_possible_cpus();
>
> for_each_possible_cpu(cpu) {
>     i = n + (cpu_idx < m ? 1 : 0);
>     for (j = 0; j < i; j++) {
>         head = per_cpu_ptr(s->freelist, cpu);
>         pcpu_freelist_push_node(head, buf);
>         buf += elem_size;
>     }
>     cpu_idx++;
> }
>
> no gotos, no extra ifs: for each cpu we determine correct number of
> elements to allocate, then just allocate them in a straightforward
> loop
>

that's great, will update to:

int cpu, cpu_idx, i, j, n, m;

n = nr_elems / num_possible_cpus();
m = nr_elems % num_possible_cpus();

for_each_possible_cpu(cpu) {
    j = min(n + (cpu_idx < m ? 1 : 0), nr_elems);
    for (i = 0; i < j; i++) {
        head = per_cpu_ptr(s->freelist, cpu);
        pcpu_freelist_push_node(head, buf);
        buf += elem_size;
    }
    nr_elems -= j;
    cpu_idx++;
}

>> }
>> --
>> 2.30.2
>>
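The same userspace modeling can be applied to this final version: the min() clamp plus the `nr_elems -= j` bookkeeping keeps the total at exactly nr_elems even when there are fewer elements than possible CPUs. A hypothetical standalone check (not kernel code; `populate_v3` and `min_int` are illustrative names, and `cpu_idx` is zero-initialized here, which the snippet above leaves implicit):

```c
#include <assert.h>

#define NCPUS 32	/* stand-in for num_possible_cpus() */

int min_int(int a, int b)
{
	return a < b ? a : b;
}

/* Userspace model of the final loop; returns the total number of
 * nodes handed out, counts[] stands in for the per-CPU freelists. */
int populate_v3(int nr_elems, int counts[NCPUS])
{
	int cpu_idx = 0, total = 0;
	int n = nr_elems / NCPUS;
	int m = nr_elems % NCPUS;

	for (int cpu = 0; cpu < NCPUS; cpu++) {
		/* clamp so we never hand out more nodes than remain */
		int j = min_int(n + (cpu_idx < m ? 1 : 0), nr_elems);

		for (int i = 0; i < j; i++) {
			counts[cpu]++;	/* stands in for pcpu_freelist_push_node() */
			total++;
		}
		nr_elems -= j;
		cpu_idx++;
	}
	return total;
}
```

For nr_elems == 5 on 32 CPUs (n == 0, m == 5), the first 5 CPUs each get one node and the rest get none, for a total of exactly 5.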
diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
index b6e7f5c5b9ab..89e84f7381cc 100644
--- a/kernel/bpf/percpu_freelist.c
+++ b/kernel/bpf/percpu_freelist.c
@@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
 			    u32 nr_elems)
 {
 	struct pcpu_freelist_head *head;
-	int i, cpu, pcpu_entries;
+	int i, cpu, pcpu_entries, remain_entries;
+
+	pcpu_entries = nr_elems / num_possible_cpus();
+	remain_entries = nr_elems % num_possible_cpus();
 
-	pcpu_entries = nr_elems / num_possible_cpus() + 1;
 	i = 0;
 
 	for_each_possible_cpu(cpu) {
+		int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
 again:
 		head = per_cpu_ptr(s->freelist, cpu);
 		/* No locking required as this is not visible yet. */
@@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
 		buf += elem_size;
 		if (i == nr_elems)
 			break;
-		if (i % pcpu_entries)
+		if (i < j)
 			goto again;
 	}
 }