Message ID | 20221112190946.728270-4-yury.norov@gmail.com |
---|---|
State | New |
Series | cpumask: improve on cpumask_local_spread() locality |
Headers |
From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, "David S. Miller" <davem@davemloft.net>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Barry Song <baohua@kernel.org>, Ben Segall <bsegall@google.com>, Daniel Bristot de Oliveira <bristot@redhat.com>, Dietmar Eggemann <dietmar.eggemann@arm.com>, Gal Pressman <gal@nvidia.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Heiko Carstens <hca@linux.ibm.com>, Ingo Molnar <mingo@redhat.com>, Jakub Kicinski <kuba@kernel.org>, Jason Gunthorpe <jgg@nvidia.com>, Jesse Brandeburg <jesse.brandeburg@intel.com>, Jonathan Cameron <Jonathan.Cameron@huawei.com>, Juri Lelli <juri.lelli@redhat.com>, Leon Romanovsky <leonro@nvidia.com>, Mel Gorman <mgorman@suse.de>, Peter Zijlstra <peterz@infradead.org>, Rasmus Villemoes <linux@rasmusvillemoes.dk>, Saeed Mahameed <saeedm@nvidia.com>, Steven Rostedt <rostedt@goodmis.org>, Tariq Toukan <tariqt@nvidia.com>, Tariq Toukan <ttoukan.linux@gmail.com>, Tony Luck <tony.luck@intel.com>, Valentin Schneider <vschneid@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>
Cc: Yury Norov <yury.norov@gmail.com>, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: [PATCH v2 3/4] sched: add sched_numa_find_nth_cpu()
Date: Sat, 12 Nov 2022 11:09:45 -0800
Message-Id: <20221112190946.728270-4-yury.norov@gmail.com>
In-Reply-To: <20221112190946.728270-1-yury.norov@gmail.com>
References: <20221112190946.728270-1-yury.norov@gmail.com>
Commit Message
Yury Norov
Nov. 12, 2022, 7:09 p.m. UTC
The function finds the Nth set CPU in a given cpumask, starting from a given
node.

Leveraging the fact that each hop in sched_domains_numa_masks includes the
same or greater number of CPUs than the previous one, we can use binary
search on hops instead of a linear walk, which makes the overall complexity
O(log n) in terms of the number of cpumask_weight() calls.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
include/linux/topology.h | 8 ++++++
kernel/sched/topology.c | 55 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 63 insertions(+)
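To see the idea concretely, here is a small stand-alone model in plain C. The names (find_nth_cpu, hop_masks, weight_and, NR_CPUS as a macro) are made up for this sketch and it is not the kernel implementation: it only models the fact that each hop mask is a superset of the previous one, so the intersection weight with the requested cpumask grows monotonically with the hop index, a binary search can locate the hop that contains the Nth CPU, and the remainder is picked from the CPUs that hop adds.

#include <stdio.h>
#include <stdint.h>

#define NR_CPUS 64

static int weight_and(uint64_t a, uint64_t b)
{
	return __builtin_popcountll(a & b);
}

/* Find the n-th (0-based) CPU of "cpus", ordered by increasing hop distance. */
static int find_nth_cpu(uint64_t cpus, int n, const uint64_t *hop_masks, int nr_hops)
{
	int lo = 0, hi = nr_hops - 1, hop = 0;
	uint64_t pool;

	/*
	 * Binary search for the first hop whose intersection with "cpus"
	 * has more than n bits set; nesting makes the weights monotonic.
	 */
	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;

		if (weight_and(cpus, hop_masks[mid]) > n) {
			hop = mid;
			hi = mid - 1;
		} else {
			lo = mid + 1;
		}
	}

	if (weight_and(cpus, hop_masks[hop]) <= n)
		return NR_CPUS;		/* nothing found, like nr_cpu_ids */

	/* Skip the CPUs already covered by the previous (closer) hop ... */
	if (hop)
		n -= weight_and(cpus, hop_masks[hop - 1]);

	/* ... and pick the n-th CPU among those this hop adds. */
	pool = cpus & hop_masks[hop];
	if (hop)
		pool &= ~hop_masks[hop - 1];

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (((pool >> cpu) & 1) && n-- == 0)
			return cpu;

	return NR_CPUS;
}

int main(void)
{
	/* Hypothetical hop masks for one node: CPUs 0-3, then 0-7, then 0-15. */
	const uint64_t hops[] = { 0x000f, 0x00ff, 0xffff };
	uint64_t cpus = 0xaaaa;		/* request only the odd CPUs */

	for (int n = 0; n < 9; n++)
		printf("n=%d -> cpu %d\n", n, find_nth_cpu(cpus, n, hops, 3));

	return 0;
}

Running it prints the odd CPUs in hop order (1 and 3 from the local hop, then 5 and 7, then 9, 11, 13, 15); an out-of-range n falls back to NR_CPUS, mirroring the nr_cpu_ids convention used by the patch.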
Comments
On Sat, Nov 12, 2022 at 11:09:45AM -0800, Yury Norov wrote:
> The function finds Nth set CPU in a given cpumask starting from a given
> node.
>
> Leveraging the fact that each hop in sched_domains_numa_masks includes the
> same or greater number of CPUs than the previous one, we can use binary
> search on hops instead of linear walk, which makes the overall complexity
> of O(log n) in terms of number of cpumask_weight() calls.

...

> +struct __cmp_key {
> +	const struct cpumask *cpus;
> +	struct cpumask ***masks;
> +	int node;
> +	int cpu;
> +	int w;
> +};
> +
> +static int cmp(const void *a, const void *b)

Calling them key and pivot (as in the caller), would make more sense.

> +{

What about

	const (?) struct cpumask ***masks = (...)pivot;

> +	struct cpumask **prev_hop = *((struct cpumask ***)b - 1);

	= masks[-1];

> +	struct cpumask **cur_hop = *(struct cpumask ***)b;

	= masks[0];

?

> +	struct __cmp_key *k = (struct __cmp_key *)a;

> +	if (cpumask_weight_and(k->cpus, cur_hop[k->node]) <= k->cpu)
> +		return 1;

> +	k->w = (b == k->masks) ? 0 : cpumask_weight_and(k->cpus, prev_hop[k->node]);
> +	if (k->w <= k->cpu)
> +		return 0;

Can k->cpu be negative? If no, we can rewrite above as

	k->w = 0;
	if (b == k->masks)
		return 0;

	k->w = cpumask_weight_and(k->cpus, prev_hop[k->node]);

> +	return -1;
> +}

...

> +int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> +{
> +	struct __cmp_key k = { cpus, NULL, node, cpu, 0 };

You can drop NULL and 0 while using C99 assignments.

> +	int hop, ret = nr_cpu_ids;

> +	rcu_read_lock();

+ Blank line?

> +	k.masks = rcu_dereference(sched_domains_numa_masks);
> +	if (!k.masks)
> +		goto unlock;

> +	hop = (struct cpumask ***)
> +	      bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), cmp) - k.masks;

Strange indentation. I would rather see the split on parameters and
maybe '-' operator.

sizeof(*k.masks) is a bit shorter, right?

Also we may go with

	struct cpumask ***masks;
	struct __cmp_key k = { .cpus = cpus, .node = node, .cpu = cpu };

> +	ret = hop ?
> +		cpumask_nth_and_andnot(cpu - k.w, cpus, k.masks[hop][node], k.masks[hop-1][node]) :
> +		cpumask_nth_and(cpu - k.w, cpus, k.masks[0][node]);

> +unlock:

out_unlock: shows the intention more clearly, no?

> +	rcu_read_unlock();
> +	return ret;
> +}
On Mon, Nov 14, 2022 at 04:32:10PM +0200, Andy Shevchenko wrote:

...

Below is a diff I have got on top of your patch, only compile tested:

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 024f1da0e941..e04262578b52 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2070,26 +2070,28 @@ int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
 }
 
 struct __cmp_key {
-	const struct cpumask *cpus;
 	struct cpumask ***masks;
+	const struct cpumask *cpus;
 	int node;
 	int cpu;
 	int w;
 };
 
-static int cmp(const void *a, const void *b)
+static int cmp(const void *key, const void *pivot)
 {
-	struct cpumask **prev_hop = *((struct cpumask ***)b - 1);
-	struct cpumask **cur_hop = *(struct cpumask ***)b;
-	struct __cmp_key *k = (struct __cmp_key *)a;
+	struct __cmp_key *k = container_of(key, struct __cmp_key, masks);
+	const struct cpumask ***masks = (const struct cpumask ***)pivot;
+	const struct cpumask **prev = masks[-1];
+	const struct cpumask **cur = masks[0];
 
-	if (cpumask_weight_and(k->cpus, cur_hop[k->node]) <= k->cpu)
+	if (cpumask_weight_and(k->cpus, cur[k->node]) <= k->cpu)
 		return 1;
 
-	k->w = (b == k->masks) ? 0 : cpumask_weight_and(k->cpus, prev_hop[k->node]);
-	if (k->w <= k->cpu)
+	k->w = 0;
+	if (masks == (const struct cpumask ***)k->masks)
 		return 0;
+	k->w = cpumask_weight_and(k->cpus, prev[k->node]);
 
 	return -1;
 }
@@ -2103,17 +2105,17 @@ static int cmp(const void *a, const void *b)
  */
 int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
 {
-	struct __cmp_key k = { cpus, NULL, node, cpu, 0 };
+	struct __cmp_key k = { .cpus = cpus, .node = node, .cpu = cpu };
 	int hop, ret = nr_cpu_ids;
+	struct cpumask ***masks;
 
 	rcu_read_lock();
 	k.masks = rcu_dereference(sched_domains_numa_masks);
 	if (!k.masks)
 		goto unlock;
 
-	hop = (struct cpumask ***)
-	      bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), cmp) - k.masks;
-
+	masks = bsearch(&k.masks, k.masks, sched_domains_numa_levels, sizeof(*k.masks), cmp);
+	hop = masks - k.masks;
 	ret = hop ?
 		cpumask_nth_and_andnot(cpu - k.w, cpus, k.masks[hop][node], k.masks[hop-1][node]) :
 		cpumask_nth_and(cpu - k.w, cpus, k.masks[0][node]);
On 12/11/22 11:09, Yury Norov wrote:
> The function finds Nth set CPU in a given cpumask starting from a given
> node.
>
> Leveraging the fact that each hop in sched_domains_numa_masks includes the
> same or greater number of CPUs than the previous one, we can use binary
> search on hops instead of linear walk, which makes the overall complexity
> of O(log n) in terms of number of cpumask_weight() calls.
>

So one thing regarding the bsearch and NUMA levels; until not so long ago we
couldn't even support 3 hops [1], and this only got detected when such
machines started showing up.

Your bsearch here operates on NUMA levels, which represent hops, and so far
we know of systems that have up to 4 levels. I'd be surprised (and also
appalled) if we even doubled that in the next decade, so with that in mind,
a linear walk might not be so horrible.

[1]: https://lore.kernel.org/all/20210224030944.15232-1-song.bao.hua@hisilicon.com/

> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> ---
> +int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> +{
> +	struct __cmp_key k = { cpus, NULL, node, cpu, 0 };
> +	int hop, ret = nr_cpu_ids;
> +
> +	rcu_read_lock();
> +	k.masks = rcu_dereference(sched_domains_numa_masks);
> +	if (!k.masks)
> +		goto unlock;
> +
> +	hop = (struct cpumask ***)
> +	      bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), cmp) - k.masks;
> +
> +	ret = hop ?
> +		cpumask_nth_and_andnot(cpu - k.w, cpus, k.masks[hop][node], k.masks[hop-1][node]) :
> +		cpumask_nth_and(cpu - k.w, cpus, k.masks[0][node]);
                                      ^^^ wouldn't this always be 0 here?
> +unlock:
> +	rcu_read_unlock();
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
>  #endif /* CONFIG_NUMA */
>
>  static int __sdt_alloc(const struct cpumask *cpu_map)
> --
> 2.34.1
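For comparison, the linear walk suggested above would, in the same toy user-space model as the earlier sketch, be only a few lines (again a sketch with made-up names, not kernel code); with at most four NUMA levels seen in practice its cost is effectively constant:

#include <stdint.h>

/*
 * Linear-walk variant of the hop lookup: return the first hop whose
 * intersection with "cpus" already covers index n, or -1 if n is out of
 * range. With ~4 levels at most, O(levels) is effectively O(1).
 */
static int find_hop_linear(uint64_t cpus, int n, const uint64_t *hop_masks, int nr_hops)
{
	for (int hop = 0; hop < nr_hops; hop++)
		if (__builtin_popcountll(cpus & hop_masks[hop]) > n)
			return hop;

	return -1;
}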
On Mon, Nov 14, 2022 at 04:32:09PM +0200, Andy Shevchenko wrote:
> On Sat, Nov 12, 2022 at 11:09:45AM -0800, Yury Norov wrote:
> > The function finds Nth set CPU in a given cpumask starting from a given
> > node.
> >
> > Leveraging the fact that each hop in sched_domains_numa_masks includes the
> > same or greater number of CPUs than the previous one, we can use binary
> > search on hops instead of linear walk, which makes the overall complexity
> > of O(log n) in terms of number of cpumask_weight() calls.
>
> ...
>
> > +struct __cmp_key {
> > +	const struct cpumask *cpus;
> > +	struct cpumask ***masks;
> > +	int node;
> > +	int cpu;
> > +	int w;
> > +};
> > +
> > +static int cmp(const void *a, const void *b)
>
> Calling them key and pivot (as in the caller), would make more sense.

I think they are named opaque intentionally, so that user (me) would cast
them to proper data structures and give meaningful names. So I did.

> > +{
>
> What about
>
> 	const (?) struct cpumask ***masks = (...)pivot;
>
> > +	struct cpumask **prev_hop = *((struct cpumask ***)b - 1);
>
> 	= masks[-1];
>
> > +	struct cpumask **cur_hop = *(struct cpumask ***)b;
>
> 	= masks[0];
>
> ?

It would work as well. Not better neither worse.

> > +	struct __cmp_key *k = (struct __cmp_key *)a;
>
> > +	if (cpumask_weight_and(k->cpus, cur_hop[k->node]) <= k->cpu)
> > +		return 1;
>
> > +	k->w = (b == k->masks) ? 0 : cpumask_weight_and(k->cpus, prev_hop[k->node]);
> > +	if (k->w <= k->cpu)
> > +		return 0;
>
> Can k->cpu be negative?

User may pass negative value. Currently cpumask_local_spread() will return
nr_cpu_ids. After rework, bsearch() will return hop #0, After that
cpumask_nth_and() will cast negative cpu to unsigned long, and because it's
a too big number, again will return nr_cpu_ids.

> If no, we can rewrite above as
>
> 	k->w = 0;
> 	if (b == k->masks)
> 		return 0;
>
> 	k->w = cpumask_weight_and(k->cpus, prev_hop[k->node]);

Here we still need to compare weight of prev_hop against k->cpu. Returning
-1 unconditionally is wrong.

> > +	return -1;
> > +}
>
> ...
>
> > +int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> > +{
> > +	struct __cmp_key k = { cpus, NULL, node, cpu, 0 };
>
> You can drop NULL and 0 while using C99 assignments.
>
> > +	int hop, ret = nr_cpu_ids;
>
> > +	rcu_read_lock();
>
> + Blank line?
> > +	k.masks = rcu_dereference(sched_domains_numa_masks);
> > +	if (!k.masks)
> > +		goto unlock;
>
> > +	hop = (struct cpumask ***)
> > +	      bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), cmp) - k.masks;
>
> Strange indentation. I would rather see the split on parameters and
> maybe '-' operator.
>
> sizeof(*k.masks) is a bit shorter, right?
>
> Also we may go with
>
> 	struct cpumask ***masks;
> 	struct __cmp_key k = { .cpus = cpus, .node = node, .cpu = cpu };
>
> > +	ret = hop ?
> > +		cpumask_nth_and_andnot(cpu - k.w, cpus, k.masks[hop][node], k.masks[hop-1][node]) :
> > +		cpumask_nth_and(cpu - k.w, cpus, k.masks[0][node]);
>
> > +unlock:
>
> out_unlock: shows the intention more clearly, no?

No

> > +	rcu_read_unlock();
> > +	return ret;
> > +}
>
> --
> With Best Regards,
> Andy Shevchenko
>
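A tiny stand-alone illustration of the point about negative values (plain C with a hypothetical variable name, not kernel code): once converted to an unsigned type, a negative index becomes a huge number, so a bounded nth-bit search such as cpumask_nth_and() simply finds nothing and the caller ends up with nr_cpu_ids.

#include <stdio.h>

int main(void)
{
	int cpu = -1;
	/* The implicit int -> unsigned long conversion wraps modulo 2^64. */
	unsigned long n = (unsigned long)cpu;

	printf("%d -> %lu\n", cpu, n);	/* e.g. -1 -> 18446744073709551615 */
	return 0;
}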
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 4564faafd0e1..b2e87728caea 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -245,5 +245,13 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
 	return cpumask_of_node(cpu_to_node(cpu));
 }
 
+#ifdef CONFIG_NUMA
+int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
+#else
+static inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
+{
+	return cpumask_nth(cpu, cpus);
+}
+#endif /* CONFIG_NUMA */
 
 #endif /* _LINUX_TOPOLOGY_H */
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8739c2a5a54e..024f1da0e941 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1764,6 +1764,8 @@ bool find_numa_distance(int distance)
  * there is an intermediary node C, which is < N hops away from both
  * nodes A and B, the system is a glueless mesh.
  */
+#include <linux/bsearch.h>
+
 static void init_numa_topology_type(int offline_node)
 {
 	int a, b, c, n;
@@ -2067,6 +2069,59 @@ int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
 	return found;
 }
 
+struct __cmp_key {
+	const struct cpumask *cpus;
+	struct cpumask ***masks;
+	int node;
+	int cpu;
+	int w;
+};
+
+static int cmp(const void *a, const void *b)
+{
+	struct cpumask **prev_hop = *((struct cpumask ***)b - 1);
+	struct cpumask **cur_hop = *(struct cpumask ***)b;
+	struct __cmp_key *k = (struct __cmp_key *)a;
+
+	if (cpumask_weight_and(k->cpus, cur_hop[k->node]) <= k->cpu)
+		return 1;
+
+	k->w = (b == k->masks) ? 0 : cpumask_weight_and(k->cpus, prev_hop[k->node]);
+	if (k->w <= k->cpu)
+		return 0;
+
+	return -1;
+}
+
+/*
+ * sched_numa_find_nth_cpu() - given the NUMA topology, find the Nth next cpu
+ * closest to @cpu from @cpumask.
+ * cpumask: cpumask to find a cpu from
+ * cpu: Nth cpu to find
+ *
+ * returns: cpu, or nr_cpu_ids when nothing found.
+ */
+int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
+{
+	struct __cmp_key k = { cpus, NULL, node, cpu, 0 };
+	int hop, ret = nr_cpu_ids;
+
+	rcu_read_lock();
+	k.masks = rcu_dereference(sched_domains_numa_masks);
+	if (!k.masks)
+		goto unlock;
+
+	hop = (struct cpumask ***)
+	      bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), cmp) - k.masks;
+
+	ret = hop ?
+		cpumask_nth_and_andnot(cpu - k.w, cpus, k.masks[hop][node], k.masks[hop-1][node]) :
+		cpumask_nth_and(cpu - k.w, cpus, k.masks[0][node]);
+unlock:
+	rcu_read_unlock();
+	return ret;
+}
+EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
 #endif /* CONFIG_NUMA */
 
 static int __sdt_alloc(const struct cpumask *cpu_map)
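To show how the helper is meant to be consumed, here is a hypothetical caller written in kernel style. It is an illustration only and not part of this patch; the function name example_local_spread() is invented, and the actual cpumask_local_spread() rework belongs to the rest of the series.

/*
 * Hypothetical caller, for illustration only: pick the i-th CPU closest
 * to "node", wrapping over the number of online CPUs.
 */
static unsigned int example_local_spread(unsigned int i, int node)
{
	unsigned int cpu;

	/* Wrap so that we always have a valid index to look up. */
	i %= num_online_cpus();

	if (node == NUMA_NO_NODE)
		return cpumask_nth(i, cpu_online_mask);

	cpu = sched_numa_find_nth_cpu(cpu_online_mask, i, node);

	/* The helper returns nr_cpu_ids when nothing is found. */
	return cpu < nr_cpu_ids ? cpu : cpumask_first(cpu_online_mask);
}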