Message ID | 20231212042108.682072-4-yury.norov@gmail.com
State      | New
Headers    |
From: Yury Norov <yury.norov@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>, Thomas Gleixner <tglx@linutronix.de>, Ming Lei <ming.lei@redhat.com>, linux-kernel@vger.kernel.org
Cc: Yury Norov <yury.norov@gmail.com>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Rasmus Villemoes <linux@rasmusvillemoes.dk>
Subject: [PATCH v3 3/7] lib/group_cpus: relax atomicity requirement in grp_spread_init_one()
Date: Mon, 11 Dec 2023 20:21:03 -0800
Message-Id: <20231212042108.682072-4-yury.norov@gmail.com>
In-Reply-To: <20231212042108.682072-1-yury.norov@gmail.com>
References: <20231212042108.682072-1-yury.norov@gmail.com>
MIME-Version: 1.0
Series | lib/group_cpus: rework grp_spread_init_one() and make it O(1)
Commit Message
Yury Norov
Dec. 12, 2023, 4:21 a.m. UTC
Because nmsk and irqmsk are stable, extra atomicity is not required.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
lib/group_cpus.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
Comments
On Mon, Dec 11, 2023 at 08:21:03PM -0800, Yury Norov wrote:
> Because nmsk and irqmsk are stable, extra atomicity is not required.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> ---
>  lib/group_cpus.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/lib/group_cpus.c b/lib/group_cpus.c
> index 10dead3ab0e0..7ac94664230f 100644
> --- a/lib/group_cpus.c
> +++ b/lib/group_cpus.c
> @@ -24,8 +24,8 @@ static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
>  	if (cpu >= nr_cpu_ids)
>  		return;
>
> -	cpumask_clear_cpu(cpu, nmsk);
> -	cpumask_set_cpu(cpu, irqmsk);
> +	__cpumask_clear_cpu(cpu, nmsk);
> +	__cpumask_set_cpu(cpu, irqmsk);
>  	cpus_per_grp--;
>
>  	/* If the cpu has siblings, use them first */
> @@ -33,8 +33,8 @@ static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
>  		sibl = cpu + 1;
>
>  	for_each_cpu_and_from(sibl, siblmsk, nmsk) {
> -		cpumask_clear_cpu(sibl, nmsk);
> -		cpumask_set_cpu(sibl, irqmsk);
> +		__cpumask_clear_cpu(sibl, nmsk);
> +		__cpumask_set_cpu(sibl, irqmsk);

I think this kind of change should be avoided: this code is clearly in a
slow path, and here we care about code cleanness and readability much
more than the cycles saved by dropping atomicity.

Thanks,
Ming
On Tue, Dec 12, 2023 at 05:50:04PM +0800, Ming Lei wrote:
> On Mon, Dec 11, 2023 at 08:21:03PM -0800, Yury Norov wrote:
> > Because nmsk and irqmsk are stable, extra atomicity is not required.
> >
> > Signed-off-by: Yury Norov <yury.norov@gmail.com>

[...]

> I think this kind of change should be avoided: this code is clearly in a
> slow path, and here we care about code cleanness and readability much
> more than the cycles saved by dropping atomicity.

Atomic ops have special meaning and special function. This 'atomic' way
of moving a bit from one bitmap to another looks completely non-trivial
and puzzling to me.

A sequence of atomic ops is not atomic itself. Normally that's a sign of
a bug. But in this case both masks are stable, and we don't need
atomicity at all.

It's not about performance, it's about readability.

Thanks,
Yury
On Tue, Dec 12, 2023 at 08:52:14AM -0800, Yury Norov wrote:
> On Tue, Dec 12, 2023 at 05:50:04PM +0800, Ming Lei wrote:
> > I think this kind of change should be avoided: this code is clearly
> > in a slow path, and here we care about code cleanness and readability
> > much more than the cycles saved by dropping atomicity.
>
> Atomic ops have special meaning and special function. This 'atomic' way
> of moving a bit from one bitmap to another looks completely non-trivial
> and puzzling to me.
>
> A sequence of atomic ops is not atomic itself. Normally that's a sign of
> a bug. But in this case both masks are stable, and we don't need
> atomicity at all.

Here we don't care about the atomicity.

> It's not about performance, it's about readability.

__cpumask_clear_cpu() and __cpumask_set_cpu() read more like private
helpers, and are harder to follow:

[@linux]$ git grep -n -w -E "cpumask_clear_cpu|cpumask_set_cpu" ./ | wc
    674    2055   53954
[@linux]$ git grep -n -w -E "__cpumask_clear_cpu|__cpumask_set_cpu" ./ | wc
     21      74    1580

I don't object to commenting the current usage, but NAK for this change.

Thanks,
Ming
On Wed, Dec 13, 2023 at 08:14:45AM +0800, Ming Lei wrote:
> On Tue, Dec 12, 2023 at 08:52:14AM -0800, Yury Norov wrote:
> > It's not about performance, it's about readability.
>
> __cpumask_clear_cpu() and __cpumask_set_cpu() read more like private
> helpers, and are harder to follow.

No, that's not true. The non-atomic version of the function is of course
not a private helper.

> [@linux]$ git grep -n -w -E "cpumask_clear_cpu|cpumask_set_cpu" ./ | wc
>     674    2055   53954
> [@linux]$ git grep -n -w -E "__cpumask_clear_cpu|__cpumask_set_cpu" ./ | wc
>      21      74    1580
>
> I don't object to commenting the current usage, but NAK for this change.

No problem, I'll add your NAK.
On Wed, Dec 13, 2023 at 09:03:17AM -0800, Yury Norov wrote:
> On Wed, Dec 13, 2023 at 08:14:45AM +0800, Ming Lei wrote:
> > I don't object to commenting the current usage, but NAK for this change.
>
> No problem, I'll add your NAK.

You can add the following words as well:

__cpumask_clear_cpu() and __cpumask_set_cpu() were added in commit
6c8557bdb28d ("smp, cpumask: Use non-atomic cpumask_{set,clear}_cpu()")
for a fast code path (smp_call_function_many()).

We have ~670 users of cpumask_clear_cpu() and cpumask_set_cpu(), and lots
of them fall into the same category as group_cpus.c: they don't care
about atomicity and are not on a fast code path, so they needn't change
to __cpumask_clear_cpu() and __cpumask_set_cpu(). Otherwise, this change
may encourage updating the others to the __cpumask_* versions too.

Thanks,
Ming
diff --git a/lib/group_cpus.c b/lib/group_cpus.c
index 10dead3ab0e0..7ac94664230f 100644
--- a/lib/group_cpus.c
+++ b/lib/group_cpus.c
@@ -24,8 +24,8 @@ static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
 	if (cpu >= nr_cpu_ids)
 		return;
 
-	cpumask_clear_cpu(cpu, nmsk);
-	cpumask_set_cpu(cpu, irqmsk);
+	__cpumask_clear_cpu(cpu, nmsk);
+	__cpumask_set_cpu(cpu, irqmsk);
 	cpus_per_grp--;
 
 	/* If the cpu has siblings, use them first */
@@ -33,8 +33,8 @@ static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
 		sibl = cpu + 1;
 
 	for_each_cpu_and_from(sibl, siblmsk, nmsk) {
-		cpumask_clear_cpu(sibl, nmsk);
-		cpumask_set_cpu(sibl, irqmsk);
+		__cpumask_clear_cpu(sibl, nmsk);
+		__cpumask_set_cpu(sibl, irqmsk);
 		if (cpus_per_grp-- == 0)
 			return;
 	}