Message ID: 20231118155105.25678-5-yury.norov@gmail.com
State: New
Headers:
From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, Yury Norov <yury.norov@gmail.com>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Rasmus Villemoes <linux@rasmusvillemoes.dk>, Ingo Molnar <mingo@redhat.com>, Peter Zijlstra <peterz@infradead.org>, Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, Dietmar Eggemann <dietmar.eggemann@arm.com>, Steven Rostedt <rostedt@goodmis.org>, Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>, Valentin Schneider <vschneid@redhat.com>
Cc: Jan Kara <jack@suse.cz>, Mirsad Todorovac <mirsad.todorovac@alu.unizg.hr>, Matthew Wilcox <willy@infradead.org>, Maxim Kuvyrkov <maxim.kuvyrkov@linaro.org>, Alexey Klimov <klimov.linux@gmail.com>
Subject: [PATCH 04/34] sched: add cpumask_find_and_set() and use it in __mm_cid_get()
Date: Sat, 18 Nov 2023 07:50:35 -0800
Message-Id: <20231118155105.25678-5-yury.norov@gmail.com>
In-Reply-To: <20231118155105.25678-1-yury.norov@gmail.com>
References: <20231118155105.25678-1-yury.norov@gmail.com>
Series: bitops: add atomic find_bit() operations
Commit Message
Yury Norov
Nov. 18, 2023, 3:50 p.m. UTC
__mm_cid_get() uses the __mm_cid_try_get() helper to atomically acquire a
bit in the mm cid mask. Now that we have an atomic find_and_set_bit(), we
can easily extend it to cpumasks and use it in the scheduler code.

__mm_cid_try_get() has an infinite loop, which may delay forward progress
of __mm_cid_get() when the mask is dense. cpumask_find_and_set() doesn't
poll the mask indefinitely; it returns as soon as nothing is found after
the first iteration, allowing the caller to acquire the lock and set
use_cid_lock faster, if needed.

cpumask_find_and_set() treats the cid mask as a volatile region of memory,
which it actually is in this case. So, if the mask is changed while the
search is in progress, KCSAN won't fire a warning on it.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
include/linux/cpumask.h | 12 ++++++++++
kernel/sched/sched.h | 52 ++++++++++++-----------------------------
2 files changed, 27 insertions(+), 37 deletions(-)
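For illustration, the calling pattern this helper is meant to support looks roughly like the sketch below. This is a hypothetical caller, not part of the patch; alloc_slot() and its lock parameter are made-up names, while nr_cpu_ids, raw_spin_lock() and cpu_relax() are existing kernel symbols:

/*
 * Hypothetical caller (illustration only): grab a free slot lock-free,
 * fall back to a lock only when the mask looks full.
 */
static int alloc_slot(struct cpumask *mask, raw_spinlock_t *lock)
{
	unsigned int slot;

	/* One atomic find-and-set pass; no infinite polling if the mask is full. */
	slot = cpumask_find_and_set(mask);
	if (slot < nr_cpu_ids)
		return slot;

	/* Mask looked full: serialize retries to guarantee forward progress. */
	raw_spin_lock(lock);
	do {
		slot = cpumask_find_and_set(mask);
		cpu_relax();
	} while (slot >= nr_cpu_ids);
	raw_spin_unlock(lock);

	return slot;
}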
Comments
On Sat, Nov 18, 2023 at 07:50:35AM -0800, Yury Norov wrote: > __mm_cid_get() uses a __mm_cid_try_get() helper to atomically acquire a > bit in mm cid mask. Now that we have atomic find_and_set_bit(), we can > easily extend it to cpumasks and use in the scheduler code. > > __mm_cid_try_get() has an infinite loop, which may delay forward > progress of __mm_cid_get() when the mask is dense. The > cpumask_find_and_set() doesn't poll the mask infinitely, and returns as > soon as nothing has found after the first iteration, allowing to acquire > the lock, and set use_cid_lock faster, if needed. Methieu, I forgot again, but the comment delete seems to suggest you did this on purpose... > cpumask_find_and_set() considers cid mask as a volatile region of memory, > as it actually is in this case. So, if it's changed while search is in > progress, KCSAN wouldn't fire warning on it. > > Signed-off-by: Yury Norov <yury.norov@gmail.com> > --- > include/linux/cpumask.h | 12 ++++++++++ > kernel/sched/sched.h | 52 ++++++++++++----------------------------- > 2 files changed, 27 insertions(+), 37 deletions(-) > > diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h > index cfb545841a2c..c2acced8be4e 100644 > --- a/include/linux/cpumask.h > +++ b/include/linux/cpumask.h > @@ -271,6 +271,18 @@ unsigned int cpumask_next_and(int n, const struct cpumask *src1p, > small_cpumask_bits, n + 1); > } > > +/** > + * cpumask_find_and_set - find the first unset cpu in a cpumask and > + * set it atomically > + * @srcp: the cpumask pointer > + * > + * Return: >= nr_cpu_ids if nothing is found. > + */ > +static inline unsigned int cpumask_find_and_set(volatile struct cpumask *srcp) > +{ > + return find_and_set_bit(cpumask_bits(srcp), small_cpumask_bits); > +} > + > /** > * for_each_cpu - iterate over every cpu in a mask > * @cpu: the (optionally unsigned) integer iterator > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h > index 2e5a95486a42..b2f095a9fc40 100644 > --- a/kernel/sched/sched.h > +++ b/kernel/sched/sched.h > @@ -3345,28 +3345,6 @@ static inline void mm_cid_put(struct mm_struct *mm) > __mm_cid_put(mm, mm_cid_clear_lazy_put(cid)); > } > > -static inline int __mm_cid_try_get(struct mm_struct *mm) > -{ > - struct cpumask *cpumask; > - int cid; > - > - cpumask = mm_cidmask(mm); > - /* > - * Retry finding first zero bit if the mask is temporarily > - * filled. This only happens during concurrent remote-clear > - * which owns a cid without holding a rq lock. > - */ > - for (;;) { > - cid = cpumask_first_zero(cpumask); > - if (cid < nr_cpu_ids) > - break; > - cpu_relax(); > - } > - if (cpumask_test_and_set_cpu(cid, cpumask)) > - return -1; > - return cid; > -} > - > /* > * Save a snapshot of the current runqueue time of this cpu > * with the per-cpu cid value, allowing to estimate how recently it was used. > @@ -3381,25 +3359,25 @@ static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm) > > static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm) > { > + struct cpumask *cpumask = mm_cidmask(mm); > int cid; > > - /* > - * All allocations (even those using the cid_lock) are lock-free. If > - * use_cid_lock is set, hold the cid_lock to perform cid allocation to > - * guarantee forward progress. > - */ > + /* All allocations (even those using the cid_lock) are lock-free. 
*/ > if (!READ_ONCE(use_cid_lock)) { > - cid = __mm_cid_try_get(mm); > - if (cid >= 0) > + cid = cpumask_find_and_set(cpumask); > + if (cid < nr_cpu_ids) > goto end; > - raw_spin_lock(&cid_lock); > - } else { > - raw_spin_lock(&cid_lock); > - cid = __mm_cid_try_get(mm); > - if (cid >= 0) > - goto unlock; > } > > + /* > + * If use_cid_lock is set, hold the cid_lock to perform cid > + * allocation to guarantee forward progress. > + */ > + raw_spin_lock(&cid_lock); > + cid = cpumask_find_and_set(cpumask); > + if (cid < nr_cpu_ids) > + goto unlock; > + > /* > * cid concurrently allocated. Retry while forcing following > * allocations to use the cid_lock to ensure forward progress. > @@ -3415,9 +3393,9 @@ static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm) > * all newcoming allocations observe the use_cid_lock flag set. > */ > do { > - cid = __mm_cid_try_get(mm); > + cid = cpumask_find_and_set(cpumask); > cpu_relax(); > - } while (cid < 0); > + } while (cid >= nr_cpu_ids); > /* > * Allocate before clearing use_cid_lock. Only care about > * program order because this is for forward progress. > -- > 2.39.2 >
On 2023-11-20 06:31, Peter Zijlstra wrote: > On Sat, Nov 18, 2023 at 07:50:35AM -0800, Yury Norov wrote: >> __mm_cid_get() uses a __mm_cid_try_get() helper to atomically acquire a >> bit in mm cid mask. Now that we have atomic find_and_set_bit(), we can >> easily extend it to cpumasks and use in the scheduler code. >> >> __mm_cid_try_get() has an infinite loop, which may delay forward >> progress of __mm_cid_get() when the mask is dense. The >> cpumask_find_and_set() doesn't poll the mask infinitely, and returns as >> soon as nothing has found after the first iteration, allowing to acquire >> the lock, and set use_cid_lock faster, if needed. > > Methieu, I forgot again, but the comment delete seems to suggest you did > this on purpose... See comments below. > >> cpumask_find_and_set() considers cid mask as a volatile region of memory, >> as it actually is in this case. So, if it's changed while search is in >> progress, KCSAN wouldn't fire warning on it. >> >> Signed-off-by: Yury Norov <yury.norov@gmail.com> >> --- >> include/linux/cpumask.h | 12 ++++++++++ >> kernel/sched/sched.h | 52 ++++++++++++----------------------------- >> 2 files changed, 27 insertions(+), 37 deletions(-) >> >> diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h >> index cfb545841a2c..c2acced8be4e 100644 >> --- a/include/linux/cpumask.h >> +++ b/include/linux/cpumask.h >> @@ -271,6 +271,18 @@ unsigned int cpumask_next_and(int n, const struct cpumask *src1p, >> small_cpumask_bits, n + 1); >> } >> >> +/** >> + * cpumask_find_and_set - find the first unset cpu in a cpumask and >> + * set it atomically >> + * @srcp: the cpumask pointer >> + * >> + * Return: >= nr_cpu_ids if nothing is found. >> + */ >> +static inline unsigned int cpumask_find_and_set(volatile struct cpumask *srcp) >> +{ >> + return find_and_set_bit(cpumask_bits(srcp), small_cpumask_bits); >> +} >> + >> /** >> * for_each_cpu - iterate over every cpu in a mask >> * @cpu: the (optionally unsigned) integer iterator >> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h >> index 2e5a95486a42..b2f095a9fc40 100644 >> --- a/kernel/sched/sched.h >> +++ b/kernel/sched/sched.h >> @@ -3345,28 +3345,6 @@ static inline void mm_cid_put(struct mm_struct *mm) >> __mm_cid_put(mm, mm_cid_clear_lazy_put(cid)); >> } >> >> -static inline int __mm_cid_try_get(struct mm_struct *mm) >> -{ >> - struct cpumask *cpumask; >> - int cid; >> - >> - cpumask = mm_cidmask(mm); >> - /* >> - * Retry finding first zero bit if the mask is temporarily >> - * filled. This only happens during concurrent remote-clear >> - * which owns a cid without holding a rq lock. >> - */ >> - for (;;) { >> - cid = cpumask_first_zero(cpumask); >> - if (cid < nr_cpu_ids) >> - break; >> - cpu_relax(); >> - } >> - if (cpumask_test_and_set_cpu(cid, cpumask)) >> - return -1; This was split in find / test_and_set on purpose because following patches I have (implementing numa-aware mm_cid) have a scan which needs to scan sets of two cpumasks in parallel (with "and" and and_not" operators). Moreover, the "mask full" scenario only happens while a concurrent remote-clear temporarily owns a cid without rq lock. See sched_mm_cid_remote_clear(): /* * The cid is unused, so it can be unset. * Disable interrupts to keep the window of cid ownership without rq * lock small. 
*/ local_irq_save(flags); if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET)) __mm_cid_put(mm, cid); local_irq_restore(flags); The proposed patch here turns this scenario into something heavier (setting the use_cid_lock) rather than just retrying. I guess the question to ask here is whether it is theoretically possible to cause __mm_cid_try_get() to fail to have forward progress if we have a high rate of sched_mm_cid_remote_clear. If we decide that this is indeed a possible progress-failure scenario, then it makes sense to fallback to use_cid_lock as soon as a full mask is encountered. However, removing the __mm_cid_try_get() helper will make it harder to integrate the following numa-awareness patches I have on top. I am not against using cpumask_find_and_set, but can we keep the __mm_cid_try_get() helper to facilitate integration of future work ? We just have to make it use cpumask_find_and_set, which should be easy. >> - return cid; >> -} >> - >> /* >> * Save a snapshot of the current runqueue time of this cpu >> * with the per-cpu cid value, allowing to estimate how recently it was used. >> @@ -3381,25 +3359,25 @@ static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm) >> >> static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm) >> { >> + struct cpumask *cpumask = mm_cidmask(mm); >> int cid; >> >> - /* >> - * All allocations (even those using the cid_lock) are lock-free. If >> - * use_cid_lock is set, hold the cid_lock to perform cid allocation to >> - * guarantee forward progress. >> - */ >> + /* All allocations (even those using the cid_lock) are lock-free. */ >> if (!READ_ONCE(use_cid_lock)) { >> - cid = __mm_cid_try_get(mm); >> - if (cid >= 0) >> + cid = cpumask_find_and_set(cpumask); >> + if (cid < nr_cpu_ids) >> goto end; >> - raw_spin_lock(&cid_lock); >> - } else { >> - raw_spin_lock(&cid_lock); >> - cid = __mm_cid_try_get(mm); >> - if (cid >= 0) >> - goto unlock; >> } >> >> + /* >> + * If use_cid_lock is set, hold the cid_lock to perform cid >> + * allocation to guarantee forward progress. >> + */ >> + raw_spin_lock(&cid_lock); >> + cid = cpumask_find_and_set(cpumask); >> + if (cid < nr_cpu_ids) >> + goto unlock; In the !use_cid_lock case where we already failed a lookup above, this change ends up doing another attempt at lookup before setting the use_cid_lock and attempting again until success. I am not sure what is the motivation for changing the code flow here ? General comment about the rest of the series: please review code comments for typos. Thanks, Mathieu >> + >> /* >> * cid concurrently allocated. Retry while forcing following >> * allocations to use the cid_lock to ensure forward progress. >> @@ -3415,9 +3393,9 @@ static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm) >> * all newcoming allocations observe the use_cid_lock flag set. >> */ >> do { >> - cid = __mm_cid_try_get(mm); >> + cid = cpumask_find_and_set(cpumask); >> cpu_relax(); >> - } while (cid < 0); >> + } while (cid >= nr_cpu_ids); >> /* >> * Allocate before clearing use_cid_lock. Only care about >> * program order because this is for forward progress. >> -- >> 2.39.2 >>
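For reference, keeping __mm_cid_try_get() as a thin wrapper around the new helper, as suggested above, could look roughly like this sketch. It is an assumption about how the follow-up might be written, not code from this series:

static inline int __mm_cid_try_get(struct mm_struct *mm)
{
	/* Single atomic pass over the cid mask; -1 keeps the old calling convention. */
	unsigned int cid = cpumask_find_and_set(mm_cidmask(mm));

	return cid < nr_cpu_ids ? cid : -1;
}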
On Mon, Nov 20, 2023 at 11:17:32AM -0500, Mathieu Desnoyers wrote:

...

> > > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > > index 2e5a95486a42..b2f095a9fc40 100644
> > > --- a/kernel/sched/sched.h
> > > +++ b/kernel/sched/sched.h
> > > @@ -3345,28 +3345,6 @@ static inline void mm_cid_put(struct mm_struct *mm)
> > >  	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
> > >  }
> > > -static inline int __mm_cid_try_get(struct mm_struct *mm)
> > > -{
> > > -	struct cpumask *cpumask;
> > > -	int cid;
> > > -
> > > -	cpumask = mm_cidmask(mm);
> > > -	/*
> > > -	 * Retry finding first zero bit if the mask is temporarily
> > > -	 * filled. This only happens during concurrent remote-clear
> > > -	 * which owns a cid without holding a rq lock.
> > > -	 */
> > > -	for (;;) {
> > > -		cid = cpumask_first_zero(cpumask);
> > > -		if (cid < nr_cpu_ids)
> > > -			break;
> > > -		cpu_relax();
> > > -	}
> > > -	if (cpumask_test_and_set_cpu(cid, cpumask))
> > > -		return -1;
>
> This was split in find / test_and_set on purpose because following
> patches I have (implementing numa-aware mm_cid) have a scan which
> needs to scan sets of two cpumasks in parallel (with "and" and
> "and_not" operators).
>
> Moreover, the "mask full" scenario only happens while a concurrent
> remote-clear temporarily owns a cid without rq lock. See
> sched_mm_cid_remote_clear():
>
> 	/*
> 	 * The cid is unused, so it can be unset.
> 	 * Disable interrupts to keep the window of cid ownership without rq
> 	 * lock small.
> 	 */
> 	local_irq_save(flags);
> 	if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
> 		__mm_cid_put(mm, cid);
> 	local_irq_restore(flags);
>
> The proposed patch here turns this scenario into something heavier
> (setting the use_cid_lock) rather than just retrying. I guess the
> question to ask here is whether it is theoretically possible to cause
> __mm_cid_try_get() to fail to have forward progress if we have a high
> rate of sched_mm_cid_remote_clear. If we decide that this is indeed
> a possible progress-failure scenario, then it makes sense to fallback
> to use_cid_lock as soon as a full mask is encountered.
>
> However, removing the __mm_cid_try_get() helper will make it harder to
> integrate the following numa-awareness patches I have on top.
>
> I am not against using cpumask_find_and_set, but can we keep the
> __mm_cid_try_get() helper to facilitate integration of future work ?
> We just have to make it use cpumask_find_and_set, which should be
> easy.

Sure, I can. Can you point me to the work you mention here?
On 2023-11-21 08:31, Yury Norov wrote:
> On Mon, Nov 20, 2023 at 11:17:32AM -0500, Mathieu Desnoyers wrote:
>
[...]
>
> Sure, I can. Can you point me to the work you mention here?

It would have to be updated now, but here is the last version that was posted:

https://lore.kernel.org/lkml/20221122203932.231377-1-mathieu.desnoyers@efficios.com/

Especially those patches:

2022-11-22 20:39 ` [PATCH 22/30] lib: Implement find_{first,next,nth}_notandnot_bit, find_first_andnot_bit Mathieu Desnoyers
2022-11-22 20:39 ` [PATCH 23/30] cpumask: Implement cpumask_{first,next}_{not,}andnot Mathieu Desnoyers
2022-11-22 20:39 ` [PATCH 24/30] sched: NUMA-aware per-memory-map concurrency ID Mathieu Desnoyers
2022-11-22 20:39 ` [PATCH 25/30] rseq: Extend struct rseq with per-memory-map NUMA-aware Concurrency ID Mathieu Desnoyers
2022-11-22 20:39 ` [PATCH 26/30] selftests/rseq: x86: Implement rseq_load_u32_u32 Mathieu Desnoyers
2022-11-22 20:39 ` [PATCH 27/30] selftests/rseq: Implement mm_numa_cid accessors in headers Mathieu Desnoyers
2022-11-22 20:39 ` [PATCH 28/30] selftests/rseq: Implement numa node id vs mm_numa_cid invariant test Mathieu Desnoyers
2022-11-22 20:39 ` [PATCH 29/30] selftests/rseq: Implement mm_numa_cid tests Mathieu Desnoyers
2022-11-22 20:39 ` [PATCH 30/30] tracing/rseq: Add mm_numa_cid field to rseq_update Mathieu Desnoyers

Thanks,
Mathieu
On Tue, Nov 21, 2023 at 08:44:17AM -0500, Mathieu Desnoyers wrote:
> On 2023-11-21 08:31, Yury Norov wrote:
> > On Mon, Nov 20, 2023 at 11:17:32AM -0500, Mathieu Desnoyers wrote:
> >
> [...]
> >
> > Sure, I can. Can you point me to the work you mention here?
>
> It would have to be updated now, but here is the last version that was posted:
>
> https://lore.kernel.org/lkml/20221122203932.231377-1-mathieu.desnoyers@efficios.com/
>
> Especially those patches:
>
> 2022-11-22 20:39 ` [PATCH 22/30] lib: Implement find_{first,next,nth}_notandnot_bit, find_first_andnot_bit Mathieu Desnoyers
> 2022-11-22 20:39 ` [PATCH 23/30] cpumask: Implement cpumask_{first,next}_{not,}andnot Mathieu Desnoyers
> 2022-11-22 20:39 ` [PATCH 24/30] sched: NUMA-aware per-memory-map concurrency ID Mathieu Desnoyers
> 2022-11-22 20:39 ` [PATCH 25/30] rseq: Extend struct rseq with per-memory-map NUMA-aware Concurrency ID Mathieu Desnoyers
> 2022-11-22 20:39 ` [PATCH 26/30] selftests/rseq: x86: Implement rseq_load_u32_u32 Mathieu Desnoyers
> 2022-11-22 20:39 ` [PATCH 27/30] selftests/rseq: Implement mm_numa_cid accessors in headers Mathieu Desnoyers
> 2022-11-22 20:39 ` [PATCH 28/30] selftests/rseq: Implement numa node id vs mm_numa_cid invariant test Mathieu Desnoyers
> 2022-11-22 20:39 ` [PATCH 29/30] selftests/rseq: Implement mm_numa_cid tests Mathieu Desnoyers
> 2022-11-22 20:39 ` [PATCH 30/30] tracing/rseq: Add mm_numa_cid field to rseq_update Mathieu Desnoyers

OK, I'll take a look.

Thanks,
Yury
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index cfb545841a2c..c2acced8be4e 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -271,6 +271,18 @@ unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
 					      small_cpumask_bits, n + 1);
 }
 
+/**
+ * cpumask_find_and_set - find the first unset cpu in a cpumask and
+ *			  set it atomically
+ * @srcp: the cpumask pointer
+ *
+ * Return: >= nr_cpu_ids if nothing is found.
+ */
+static inline unsigned int cpumask_find_and_set(volatile struct cpumask *srcp)
+{
+	return find_and_set_bit(cpumask_bits(srcp), small_cpumask_bits);
+}
+
 /**
  * for_each_cpu - iterate over every cpu in a mask
  * @cpu: the (optionally unsigned) integer iterator
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2e5a95486a42..b2f095a9fc40 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3345,28 +3345,6 @@ static inline void mm_cid_put(struct mm_struct *mm)
 	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
 }
 
-static inline int __mm_cid_try_get(struct mm_struct *mm)
-{
-	struct cpumask *cpumask;
-	int cid;
-
-	cpumask = mm_cidmask(mm);
-	/*
-	 * Retry finding first zero bit if the mask is temporarily
-	 * filled. This only happens during concurrent remote-clear
-	 * which owns a cid without holding a rq lock.
-	 */
-	for (;;) {
-		cid = cpumask_first_zero(cpumask);
-		if (cid < nr_cpu_ids)
-			break;
-		cpu_relax();
-	}
-	if (cpumask_test_and_set_cpu(cid, cpumask))
-		return -1;
-	return cid;
-}
-
 /*
  * Save a snapshot of the current runqueue time of this cpu
  * with the per-cpu cid value, allowing to estimate how recently it was used.
@@ -3381,25 +3359,25 @@ static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
 
 static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
 {
+	struct cpumask *cpumask = mm_cidmask(mm);
 	int cid;
 
-	/*
-	 * All allocations (even those using the cid_lock) are lock-free. If
-	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
-	 * guarantee forward progress.
-	 */
+	/* All allocations (even those using the cid_lock) are lock-free. */
 	if (!READ_ONCE(use_cid_lock)) {
-		cid = __mm_cid_try_get(mm);
-		if (cid >= 0)
+		cid = cpumask_find_and_set(cpumask);
+		if (cid < nr_cpu_ids)
 			goto end;
-		raw_spin_lock(&cid_lock);
-	} else {
-		raw_spin_lock(&cid_lock);
-		cid = __mm_cid_try_get(mm);
-		if (cid >= 0)
-			goto unlock;
 	}
 
+	/*
+	 * If use_cid_lock is set, hold the cid_lock to perform cid
+	 * allocation to guarantee forward progress.
+	 */
+	raw_spin_lock(&cid_lock);
+	cid = cpumask_find_and_set(cpumask);
+	if (cid < nr_cpu_ids)
+		goto unlock;
+
 	/*
 	 * cid concurrently allocated. Retry while forcing following
 	 * allocations to use the cid_lock to ensure forward progress.
@@ -3415,9 +3393,9 @@ static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
 	 * all newcoming allocations observe the use_cid_lock flag set.
 	 */
 	do {
-		cid = __mm_cid_try_get(mm);
+		cid = cpumask_find_and_set(cpumask);
 		cpu_relax();
-	} while (cid < 0);
+	} while (cid >= nr_cpu_ids);
 	/*
 	 * Allocate before clearing use_cid_lock. Only care about
 	 * program order because this is for forward progress.
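For context, the find_and_set_bit() primitive that cpumask_find_and_set() relies on is introduced earlier in this series and is not part of this patch. A rough sketch of the semantics it is expected to provide (an assumption for illustration, not the series' actual implementation) is:

/*
 * Sketch of the semantics only: atomically claim the first clear bit,
 * or return nbits if the bitmap is (currently) full.
 */
static inline unsigned long find_and_set_bit_sketch(unsigned long *addr, unsigned long nbits)
{
	unsigned long bit;

	do {
		bit = find_first_zero_bit(addr, nbits);
		if (bit >= nbits)
			return nbits;
	} while (test_and_set_bit(bit, addr));	/* retry if another CPU won the race */

	return bit;
}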