Message ID | 20230404014206.3752945-3-yebin@huaweicloud.com |
---|---|
State | New |
Headers |
From: Ye Bin <yebin@huaweicloud.com>
To: dennis@kernel.org, tj@kernel.org, cl@linux.com, linux-mm@kvack.org, yury.norov@gmail.com, andriy.shevchenko@linux.intel.com, linux@rasmusvillemoes.dk
Cc: linux-kernel@vger.kernel.org, dchinner@redhat.com, yebin10@huawei.com
Subject: [PATCH 2/2] lib/percpu_counter: fix dying cpu compare race
Date: Tue, 4 Apr 2023 09:42:06 +0800
Message-Id: <20230404014206.3752945-3-yebin@huaweicloud.com>
In-Reply-To: <20230404014206.3752945-1-yebin@huaweicloud.com>
References: <20230404014206.3752945-1-yebin@huaweicloud.com> |
Series | fix dying cpu compare race |
Commit Message
Ye Bin
April 4, 2023, 1:42 a.m. UTC
From: Ye Bin <yebin10@huawei.com>

In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
condition between a cpu dying and percpu_counter_sum() iterating online
CPUs was identified. Actually, the same race condition exists between a
cpu dying and __percpu_counter_compare(), which uses num_online_cpus()
for its quick judgment. Because num_online_cpus() is decreased before
percpu_counter_cpu_dead() is called, the quick judgment may return an
incorrect result. To solve the above issue, we also need to count dying
CPUs when making the quick judgment in __percpu_counter_compare().

Signed-off-by: Ye Bin <yebin10@huawei.com>
---
 lib/percpu_counter.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
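To make the window concrete, here is a minimal userspace sketch of the claimed race (an illustration only, not kernel code: the four-CPU layout, the BATCH value, and all numbers are invented). A dying CPU has already been dropped from the online count, but percpu_counter_cpu_dead() has not yet folded its per-CPU delta into the global count, so the fast-path error bound batch * num_online_cpus() is too small and the comparison can go the wrong way:

```c
#include <stdio.h>
#include <stdlib.h>

#define BATCH 32	/* stand-in for the percpu_counter batch size */

int main(void)
{
	long count = 100;			/* global fbc->count */
	long pcpu_delta[4] = { 31, 31, 31, 31 };	/* unfolded per-CPU deltas, each < BATCH */
	long rhs = 200;				/* threshold being compared against */
	int num_online = 3;			/* CPU 3 is dying: already dropped from the
						 * online count, delta not yet folded */

	/* Fast path: trusts that the true sum is within BATCH * num_online of count. */
	if (labs(count - rhs) > (long)BATCH * num_online)
		printf("fast path: concludes count (%ld) < rhs (%ld)\n", count, rhs);

	/* The true sum still includes the dying CPU's unfolded delta. */
	long sum = count;
	for (int i = 0; i < 4; i++)
		sum += pcpu_delta[i];
	printf("true sum = %ld > rhs, so the fast-path verdict was wrong\n", sum);
	return 0;
}
```

With four CPUs counted the bound would be 128 >= 100, and the same inputs would fall through to the accurate percpu_counter_sum() slow path instead.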
Comments
On Tue, Apr 04, 2023 at 09:42:06AM +0800, Ye Bin wrote:
> From: Ye Bin <yebin10@huawei.com>
>
> In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
> condition between a cpu dying and percpu_counter_sum() iterating online
> CPUs was identified. Actually, the same race condition exists between a
> cpu dying and __percpu_counter_compare(), which uses num_online_cpus()
> for its quick judgment. Because num_online_cpus() is decreased before
> percpu_counter_cpu_dead() is called, the quick judgment may return an
> incorrect result. To solve the above issue, we also need to count dying
> CPUs when making the quick judgment in __percpu_counter_compare().

Not sure I completely understood the race you are describing. All CPU
accounting is protected with percpu_counters_lock. Is it a real race
that you've faced, or hypothetical? If it's real, can you share stack
traces?

> Signed-off-by: Ye Bin <yebin10@huawei.com>
> ---
>  lib/percpu_counter.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> index 5004463c4f9f..399840cb0012 100644
> --- a/lib/percpu_counter.c
> +++ b/lib/percpu_counter.c
> @@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
>  	return 0;
>  }
>
> +static __always_inline unsigned int num_count_cpus(void)

This doesn't look like a good name. Maybe num_offline_cpus?

> +{
> +#ifdef CONFIG_HOTPLUG_CPU
> +	return (num_online_cpus() + num_dying_cpus());
          ^                                          ^
'return' is not a function. Braces are not needed.

Generally speaking, a sequence of atomic operations is not an atomic
operation, so the above doesn't look correct. I don't think that it
would be possible to implement raceless accounting based on 2 separate
counters. Most probably, you'd have to use the same approach as in
8b57b11cca88:

	lock();
	for_each_cpu_or(cpu, cpu_online_mask, cpu_dying_mask)
		cnt++;
	unlock();

And if so, I'd suggest to implement cpumask_weight_or() for that.

> +#else
> +	return num_online_cpus();
> +#endif
> +}
> +
>  /*
>   * Compare counter against given value.
>   * Return 1 if greater, 0 if equal and -1 if less
> @@ -237,7 +246,7 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
>
>  	count = percpu_counter_read(fbc);
>  	/* Check to see if rough count will be sufficient for comparison */
> -	if (abs(count - rhs) > (batch * num_online_cpus())) {
> +	if (abs(count - rhs) > (batch * num_count_cpus())) {
>  		if (count > rhs)
>  			return 1;
>  		else
> --
> 2.31.1
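For reference, a sketch of what the suggested helper might look like. This is a guess at shape only: it mirrors the existing cpumask_weight_and(), and the bitmap_weight_or() primitive it calls is assumed here, not an existing kernel API, so it would have to be added alongside.

```c
/*
 * Sketch only: mirrors cpumask_weight_and(). bitmap_weight_or() is an
 * assumed companion primitive and would need to be implemented too.
 */
static inline unsigned int cpumask_weight_or(const struct cpumask *srcp1,
					     const struct cpumask *srcp2)
{
	return bitmap_weight_or(cpumask_bits(srcp1), cpumask_bits(srcp2),
				nr_cpumask_bits);
}
```

A caller could then compute the bound with cpumask_weight_or(cpu_online_mask, cpu_dying_mask), walking both masks in a single pass rather than adding two counters that are read at different times.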
On Tue, Apr 04, 2023 at 09:42:06AM +0800, Ye Bin wrote:
> From: Ye Bin <yebin10@huawei.com>
>
> In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
> condition between a cpu dying and percpu_counter_sum() iterating online
> CPUs was identified. Actually, the same race condition exists between a
> cpu dying and __percpu_counter_compare(), which uses num_online_cpus()
> for its quick judgment. Because num_online_cpus() is decreased before
> percpu_counter_cpu_dead() is called, the quick judgment may return an
> incorrect result. To solve the above issue, we also need to count dying
> CPUs when making the quick judgment in __percpu_counter_compare().
>
> Signed-off-by: Ye Bin <yebin10@huawei.com>
> ---
>  lib/percpu_counter.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> index 5004463c4f9f..399840cb0012 100644
> --- a/lib/percpu_counter.c
> +++ b/lib/percpu_counter.c
> @@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
>  	return 0;
>  }
>
> +static __always_inline unsigned int num_count_cpus(void)
> +{
> +#ifdef CONFIG_HOTPLUG_CPU
> +	return (num_online_cpus() + num_dying_cpus());
> +#else
> +	return num_online_cpus();
> +#endif
> +}
> +
>  /*
>   * Compare counter against given value.
>   * Return 1 if greater, 0 if equal and -1 if less
> @@ -237,7 +246,7 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
>
>  	count = percpu_counter_read(fbc);
>  	/* Check to see if rough count will be sufficient for comparison */
> -	if (abs(count - rhs) > (batch * num_online_cpus())) {
> +	if (abs(count - rhs) > (batch * num_count_cpus())) {

What problem is this actually fixing? You haven't explained how the
problem you are fixing manifests in the commit message or the cover
letter.

We generally don't care about the accuracy of the comparison here
because we've used percpu_counter_read() which is completely racy
against on-going updates. e.g. we can get preempted between
percpu_counter_read() and the check and so the value can be completely
wrong by the time we actually check it. Hence checking online vs
online+dying really doesn't fix any of the common race conditions that
occur here.

Even if we fall through to using percpu_counter_sum() for the
comparison value, that is still not accurate in the face of racing
updates to the counter because percpu_counter_sum only prevents the
percpu counter from being folded back into the global sum while it is
running. The comparison is still not precise or accurate.

IOWs, the result of this whole function is not guaranteed to be precise
or accurate; percpu counters cannot ever be relied on for exact
threshold detection unless there is some form of external global
counter synchronisation being used for those comparisons (e.g. a global
spinlock held around all the percpu_counter_add() modifications as well
as the __percpu_counter_compare() call).

That's always been the issue with unsynchronised percpu counters - cpus
dying just don't matter here because there are many other more common
race conditions that prevent accurate, race free comparison of per-cpu
counters.

Cheers,

Dave.
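As an illustration of the external synchronisation Dave describes, here is a hedged sketch. The lock and the wrapper names are invented for this example; it simply wraps the real percpu_counter_add() and __percpu_counter_compare() APIs in a caller-owned global spinlock:

```c
#include <linux/percpu_counter.h>
#include <linux/spinlock.h>

/* Invented for illustration: one global lock serialising every update
 * against every comparison, per Dave's example of external sync. */
static DEFINE_SPINLOCK(threshold_lock);

static void synced_counter_add(struct percpu_counter *fbc, s64 amount)
{
	spin_lock(&threshold_lock);
	percpu_counter_add(fbc, amount);
	spin_unlock(&threshold_lock);
}

static int synced_counter_compare(struct percpu_counter *fbc, s64 rhs)
{
	int ret;

	spin_lock(&threshold_lock);
	/* No add can race with the read/sum inside the compare now. */
	ret = __percpu_counter_compare(fbc, rhs, percpu_counter_batch);
	spin_unlock(&threshold_lock);
	return ret;
}
```

Without a scheme like this, the fast-path bound is advisory at best, which is why online vs online+dying is lost in the noise of ordinary update races.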
On Tue, Apr 04, 2023 at 02:54:25PM +0800, yebin (H) wrote:
>
> On 2023/4/4 10:50, Yury Norov wrote:
> > On Tue, Apr 04, 2023 at 09:42:06AM +0800, Ye Bin wrote:
> > > From: Ye Bin <yebin10@huawei.com>
> > >
> > > In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
> > > condition between a cpu dying and percpu_counter_sum() iterating online
> > > CPUs was identified. Actually, the same race condition exists between a
> > > cpu dying and __percpu_counter_compare(), which uses num_online_cpus()
> > > for its quick judgment. Because num_online_cpus() is decreased before
> > > percpu_counter_cpu_dead() is called, the quick judgment may return an
> > > incorrect result. To solve the above issue, we also need to count dying
> > > CPUs when making the quick judgment in __percpu_counter_compare().
> >
> > Not sure I completely understood the race you are describing. All CPU
> > accounting is protected with percpu_counters_lock. Is it a real race
> > that you've faced, or hypothetical? If it's real, can you share stack
> > traces?
> >
> > > Signed-off-by: Ye Bin <yebin10@huawei.com>
> > > ---
> > >  lib/percpu_counter.c | 11 ++++++++++-
> > >  1 file changed, 10 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> > > index 5004463c4f9f..399840cb0012 100644
> > > --- a/lib/percpu_counter.c
> > > +++ b/lib/percpu_counter.c
> > > @@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
> > >  	return 0;
> > >  }
> > >
> > > +static __always_inline unsigned int num_count_cpus(void)
> >
> > This doesn't look like a good name. Maybe num_offline_cpus?
> >
> > > +{
> > > +#ifdef CONFIG_HOTPLUG_CPU
> > > +	return (num_online_cpus() + num_dying_cpus());
> >
> > 'return' is not a function. Braces are not needed.
> >
> > Generally speaking, a sequence of atomic operations is not an atomic
> > operation, so the above doesn't look correct. I don't think that it
> > would be possible to implement raceless accounting based on 2 separate
> > counters.
>
> Yes, there is indeed a concurrency issue with doing so here. But I saw
> that the teardown process first sets up the dying mask and then reduces
> the number of online CPUs, so the total may be larger than the actual
> value and we may fall back to the slow path. But this won't cause any
> problems.

This sounds like an implementation detail. If it will change in future,
your accounting will get broken. If you think it's a consistent behavior
and will be preserved in future, then it must be properly commented in
your patch.

Thanks,
Yury
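A sketch of the kind of comment Yury is asking for, folded into the patch's helper. The wording is ours, based on Ye Bin's description of the teardown ordering above, not text from the thread:

```c
static __always_inline unsigned int num_count_cpus(void)
{
#ifdef CONFIG_HOTPLUG_CPU
	/*
	 * CPU teardown sets cpu_dying_mask before num_online_cpus() is
	 * decremented, so online + dying can transiently overcount. An
	 * overcount only widens the fast-path bound and pushes callers
	 * to the accurate percpu_counter_sum() slow path, which is safe.
	 * If the hotplug ordering ever changes, this reasoning breaks.
	 */
	return num_online_cpus() + num_dying_cpus();
#else
	return num_online_cpus();
#endif
}
```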
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 5004463c4f9f..399840cb0012 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
 	return 0;
 }
 
+static __always_inline unsigned int num_count_cpus(void)
+{
+#ifdef CONFIG_HOTPLUG_CPU
+	return (num_online_cpus() + num_dying_cpus());
+#else
+	return num_online_cpus();
+#endif
+}
+
 /*
  * Compare counter against given value.
  * Return 1 if greater, 0 if equal and -1 if less
@@ -237,7 +246,7 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
 
 	count = percpu_counter_read(fbc);
 	/* Check to see if rough count will be sufficient for comparison */
-	if (abs(count - rhs) > (batch * num_online_cpus())) {
+	if (abs(count - rhs) > (batch * num_count_cpus())) {
 		if (count > rhs)
 			return 1;
 		else