Message ID | 20230925020528.777578-2-yury.norov@gmail.com |
---|---|
State | New |
Headers |
From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Yury Norov <yury.norov@gmail.com>, Tariq Toukan <ttoukan.linux@gmail.com>, Valentin Schneider <vschneid@redhat.com>, Maher Sanalla <msanalla@nvidia.com>, Ingo Molnar <mingo@kernel.org>, Mel Gorman <mgorman@suse.de>, Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, "David S. Miller" <davem@davemloft.net>, Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>, Peter Zijlstra <peterz@infradead.org>, Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, Dietmar Eggemann <dietmar.eggemann@arm.com>, Steven Rostedt <rostedt@goodmis.org>, Ben Segall <bsegall@google.com>, Daniel Bristot de Oliveira <bristot@redhat.com>, Pawel Chmielewski <pawel.chmielewski@intel.com>, Jacob Keller <jacob.e.keller@intel.com>, Yury Norov <ynorov@nvidia.com>
Subject: [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread()
Date: Sun, 24 Sep 2023 19:05:25 -0700
Message-Id: <20230925020528.777578-2-yury.norov@gmail.com>
In-Reply-To: <20230925020528.777578-1-yury.norov@gmail.com>
References: <20230925020528.777578-1-yury.norov@gmail.com> |
Series | sched: drop for_each_numa_hop_mask() |
Commit Message
Yury Norov
Sept. 25, 2023, 2:05 a.m. UTC
The function duplicates the existing cpumask_local_spread(), and it's O(N), while the cpumask_local_spread() implementation is based on bsearch and is thus O(log n). Drop mlx5_cpumask_default_spread() and use the generic cpumask_local_spread() instead.

Signed-off-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Yury Norov <ynorov@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++------------------
 1 file changed, 2 insertions(+), 26 deletions(-)
Comments
On 9/24/2023 7:05 PM, Yury Norov wrote:
> The function duplicates existing cpumask_local_spread(), and it's O(N),
> while cpumask_local_spread() implementation is based on bsearch, and
> thus is O(log n), so drop mlx5_cpumask_default_spread() and use generic
> cpumask_local_spread().
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Yury Norov <ynorov@nvidia.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++------------------
>  1 file changed, 2 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> index ea0405e0a43f..bd9f857cc52d 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> @@ -828,30 +828,6 @@ static void comp_irq_release_pci(struct mlx5_core_dev *dev, u16 vecidx)
>  	mlx5_irq_release_vector(irq);
>  }
>

Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>

> -static int mlx5_cpumask_default_spread(int numa_node, int index)
> -{
> -	const struct cpumask *prev = cpu_none_mask;
> -	const struct cpumask *mask;
> -	int found_cpu = 0;
> -	int i = 0;
> -	int cpu;
> -
> -	rcu_read_lock();
> -	for_each_numa_hop_mask(mask, numa_node) {
> -		for_each_cpu_andnot(cpu, mask, prev) {
> -			if (i++ == index) {
> -				found_cpu = cpu;
> -				goto spread_done;
> -			}
> -		}
> -		prev = mask;
> -	}
> -
> -spread_done:
> -	rcu_read_unlock();
> -	return found_cpu;
> -}
> -
>  static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
>  {
>  #ifdef CONFIG_RFS_ACCEL
> @@ -873,7 +849,7 @@ static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
>  	int cpu;
>
>  	rmap = mlx5_eq_table_get_pci_rmap(dev);
> -	cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
> +	cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);
>  	irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
>  	if (IS_ERR(irq))
>  		return PTR_ERR(irq);
> @@ -1125,7 +1101,7 @@ int mlx5_comp_vector_get_cpu(struct mlx5_core_dev *dev, int vector)
>  	if (mask)
>  		cpu = cpumask_first(mask);
>  	else
> -		cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vector);
> +		cpu = cpumask_local_spread(vector, dev->priv.numa_node);
>
>  	return cpu;
>  }
On Sun, 2023-09-24 at 19:05 -0700, Yury Norov wrote:
> The function duplicates existing cpumask_local_spread(), and it's O(N),
> while cpumask_local_spread() implementation is based on bsearch, and
> thus is O(log n), so drop mlx5_cpumask_default_spread() and use generic
> cpumask_local_spread().
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Yury Norov <ynorov@nvidia.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++------------------
>  1 file changed, 2 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> index ea0405e0a43f..bd9f857cc52d 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> @@ -828,30 +828,6 @@ static void comp_irq_release_pci(struct mlx5_core_dev *dev, u16 vecidx)
>  	mlx5_irq_release_vector(irq);
>  }
>
> -static int mlx5_cpumask_default_spread(int numa_node, int index)
> -{
> -	const struct cpumask *prev = cpu_none_mask;
> -	const struct cpumask *mask;
> -	int found_cpu = 0;
> -	int i = 0;
> -	int cpu;
> -
> -	rcu_read_lock();
> -	for_each_numa_hop_mask(mask, numa_node) {
> -		for_each_cpu_andnot(cpu, mask, prev) {
> -			if (i++ == index) {
> -				found_cpu = cpu;
> -				goto spread_done;
> -			}
> -		}
> -		prev = mask;
> -	}
> -
> -spread_done:
> -	rcu_read_unlock();
> -	return found_cpu;
> -}
> -
>  static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
>  {
>  #ifdef CONFIG_RFS_ACCEL
> @@ -873,7 +849,7 @@ static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
>  	int cpu;
>
>  	rmap = mlx5_eq_table_get_pci_rmap(dev);
> -	cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
> +	cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);
>  	irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
>  	if (IS_ERR(irq))
>  		return PTR_ERR(irq);
> @@ -1125,7 +1101,7 @@ int mlx5_comp_vector_get_cpu(struct mlx5_core_dev *dev, int vector)
>  	if (mask)
>  		cpu = cpumask_first(mask);
>  	else
> -		cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vector);
> +		cpu = cpumask_local_spread(vector, dev->priv.numa_node);
>
>  	return cpu;
>  }

It looks like this series is going to cause some later conflicts
regardless of the target tree. I think the whole series could go via
the net-next tree, am I missing any relevant point?

Thanks!

Paolo
On Tue, Oct 03, 2023 at 12:04:01PM +0200, Paolo Abeni wrote:
> On Sun, 2023-09-24 at 19:05 -0700, Yury Norov wrote:
> > The function duplicates existing cpumask_local_spread(), and it's O(N),
> > while cpumask_local_spread() implementation is based on bsearch, and
> > thus is O(log n), so drop mlx5_cpumask_default_spread() and use generic
> > cpumask_local_spread().
> >
> > Signed-off-by: Yury Norov <yury.norov@gmail.com>
> > Signed-off-by: Yury Norov <ynorov@nvidia.com>
> > ---
> >  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++------------------
> >  1 file changed, 2 insertions(+), 26 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > index ea0405e0a43f..bd9f857cc52d 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > @@ -828,30 +828,6 @@ static void comp_irq_release_pci(struct mlx5_core_dev *dev, u16 vecidx)
> >  	mlx5_irq_release_vector(irq);
> >  }
> >
> > -static int mlx5_cpumask_default_spread(int numa_node, int index)
> > -{
> > -	const struct cpumask *prev = cpu_none_mask;
> > -	const struct cpumask *mask;
> > -	int found_cpu = 0;
> > -	int i = 0;
> > -	int cpu;
> > -
> > -	rcu_read_lock();
> > -	for_each_numa_hop_mask(mask, numa_node) {
> > -		for_each_cpu_andnot(cpu, mask, prev) {
> > -			if (i++ == index) {
> > -				found_cpu = cpu;
> > -				goto spread_done;
> > -			}
> > -		}
> > -		prev = mask;
> > -	}
> > -
> > -spread_done:
> > -	rcu_read_unlock();
> > -	return found_cpu;
> > -}
> > -
> >  static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
> >  {
> >  #ifdef CONFIG_RFS_ACCEL
> > @@ -873,7 +849,7 @@ static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
> >  	int cpu;
> >
> >  	rmap = mlx5_eq_table_get_pci_rmap(dev);
> > -	cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
> > +	cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);
> >  	irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
> >  	if (IS_ERR(irq))
> >  		return PTR_ERR(irq);
> > @@ -1125,7 +1101,7 @@ int mlx5_comp_vector_get_cpu(struct mlx5_core_dev *dev, int vector)
> >  	if (mask)
> >  		cpu = cpumask_first(mask);
> >  	else
> > -		cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vector);
> > +		cpu = cpumask_local_spread(vector, dev->priv.numa_node);
> >
> >  	return cpu;
> >  }
>
> It looks like this series is going to cause some later conflicts
> regardless of the target tree. I think the whole series could go via
> the net-next tree, am I missing any relevant point?

Hi Paolo,

Can you elaborate on the conflicts you see? For me it applies cleanly
on current master, and with some 3-way merging on latest -next...

Thanks,
Yury
On Tue, 3 Oct 2023 06:46:17 -0700 Yury Norov wrote:
> Can you elaborate on the conflicts you see? For me it applies cleanly
> on current master, and with some 3-way merging on latest -next...

We're halfway through the release cycle; conflicts can still come in.

There's no dependency for the first patch. The most normal way to
handle this would be to send patch 1 to the networking tree and send
the rest in the subsequent merge window.
On Tue, Oct 03, 2023 at 03:20:30PM -0700, Jakub Kicinski wrote:
> On Tue, 3 Oct 2023 06:46:17 -0700 Yury Norov wrote:
> > Can you elaborate on the conflicts you see? For me it applies cleanly
> > on current master, and with some 3-way merging on latest -next...
>
> We're half way thru the release cycle the conflicts can still come in.
>
> There's no dependency for the first patch. The most normal way to
> handle this would be to send patch 1 to the networking tree and send
> the rest in the subsequent merge window.

Ah, I understand now. I didn't plan to merge it in the current merge
window. In fact, I'd be more comfortable keeping it in -next for longer
and merging it in v6.7.

But it's up to you. If you think it's better, I can resend the 1st patch
separately.

Thanks,
Yury
On Tue, 3 Oct 2023 18:31:28 -0700 Yury Norov wrote:
> > We're half way thru the release cycle the conflicts can still come in.
> >
> > There's no dependency for the first patch. The most normal way to
> > handle this would be to send patch 1 to the networking tree and send
> > the rest in the subsequent merge window.
>
> Ah, I understand now. I didn't plan to move it in current merge
> window. In fact, I'll be more comfortable to keep it in -next for
> longer and merge it in v6.7.
>
> But it's up to you. If you think it's better, I can resend 1st patch
> separately.

Let's see if Saeed can help us.

Saeed, could you pick up patch 1 from this series for the mlx5 tree?
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index ea0405e0a43f..bd9f857cc52d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -828,30 +828,6 @@ static void comp_irq_release_pci(struct mlx5_core_dev *dev, u16 vecidx)
 	mlx5_irq_release_vector(irq);
 }
 
-static int mlx5_cpumask_default_spread(int numa_node, int index)
-{
-	const struct cpumask *prev = cpu_none_mask;
-	const struct cpumask *mask;
-	int found_cpu = 0;
-	int i = 0;
-	int cpu;
-
-	rcu_read_lock();
-	for_each_numa_hop_mask(mask, numa_node) {
-		for_each_cpu_andnot(cpu, mask, prev) {
-			if (i++ == index) {
-				found_cpu = cpu;
-				goto spread_done;
-			}
-		}
-		prev = mask;
-	}
-
-spread_done:
-	rcu_read_unlock();
-	return found_cpu;
-}
-
 static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
 {
 #ifdef CONFIG_RFS_ACCEL
@@ -873,7 +849,7 @@ static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
 	int cpu;
 
 	rmap = mlx5_eq_table_get_pci_rmap(dev);
-	cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
+	cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);
 	irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
 	if (IS_ERR(irq))
 		return PTR_ERR(irq);
@@ -1125,7 +1101,7 @@ int mlx5_comp_vector_get_cpu(struct mlx5_core_dev *dev, int vector)
 	if (mask)
 		cpu = cpumask_first(mask);
 	else
-		cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vector);
+		cpu = cpumask_local_spread(vector, dev->priv.numa_node);
 
 	return cpu;
 }