From patchwork Tue Oct 10 20:04:42 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 150940
Date: Tue, 10 Oct 2023 22:04:42 +0200
From: Peter Zijlstra
To: Kuyo Chang (張建文)
Cc: dietmar.eggemann@arm.com, linux-kernel@vger.kernel.org,
 linux-mediatek@lists.infradead.org, rostedt@goodmis.org, wsd_upstream,
 vschneid@redhat.com, bristot@redhat.com, juri.lelli@redhat.com,
 mingo@redhat.com, linux-arm-kernel@lists.infradead.org, bsegall@google.com,
 mgorman@suse.de, matthias.bgg@gmail.com, vincent.guittot@linaro.org,
 angelogioacchino.delregno@collabora.com
Subject: [PATCH] sched: Fix stop_one_cpu_nowait() vs hotplug
Message-ID: <20231010200442.GA16515@noisy.programming.kicks-ass.net>
References: <20230927033431.12406-1-kuyo.chang@mediatek.com>
 <20230927080850.GB21824@noisy.programming.kicks-ass.net>
 <20230929102135.GD6282@noisy.programming.kicks-ass.net>
 <8ad1b617a1040ce4cc56a5d04e8219b5313a9a6e.camel@mediatek.com>
 <20231010145747.GQ377@noisy.programming.kicks-ass.net>
Content-Disposition: inline
In-Reply-To: <20231010145747.GQ377@noisy.programming.kicks-ass.net>

On Tue, Oct 10, 2023 at 04:57:47PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 10, 2023 at 02:40:22PM +0000, Kuyo Chang (張建文) wrote:
>
> > It is running good so far(more than a week)on hotplug/set affinity
> > stress test. I will keep it testing and report back if it happens
> > again.
>
> OK, I suppose I should look at writing a coherent Changelog for this
> then...

Something like the below... ?

---
Subject: sched: Fix stop_one_cpu_nowait() vs hotplug
From: Peter Zijlstra
Date: Tue Oct 10 20:57:39 CEST 2023

Kuyo reported sporadic failures on a sched_setaffinity() vs CPU hotplug
stress-test -- notably affine_move_task() remains stuck in
wait_for_completion(), leading to a hung-task detector warning.

Specifically, it was reported that stop_one_cpu_nowait(.fn =
migration_cpu_stop) returns false -- this stopper is responsible for
the matching complete().

The race scenario is:

	CPU0					CPU1

						// doing _cpu_down()

  __set_cpus_allowed_ptr()
    task_rq_lock();
						takedown_cpu()
						  stop_machine_cpuslocked(take_cpu_down..)

    ack_state()
						  MULTI_STOP_RUN
						    take_cpu_down()
						      __cpu_disable();
						      stop_machine_park();
						        stopper->enabled = false;
						 />
   />
	stop_one_cpu_nowait(.fn = migration_cpu_stop);
	  if (stopper->enabled) // false!!!

That is, by doing stop_one_cpu_nowait() after dropping rq-lock, the
stopper thread gets a chance to preempt and allows the cpu-down for the
target CPU to complete.

OTOH, since stop_one_cpu_nowait() / cpu_stop_queue_work() needs to
issue a wakeup, it must not be run under the scheduler locks.

Solve this apparent contradiction by keeping preemption disabled over
the unlock + queue_stopper combination:

	preempt_disable();
	task_rq_unlock(...);
	if (!stop_pending)
		stop_one_cpu_nowait(...)
	preempt_enable();

This respects the lock ordering constraints while still avoiding the
above race. That is, if we find the CPU is online under rq-lock, the
targeted stop_one_cpu_nowait() must succeed.

Apply this pattern to all similar stop_one_cpu_nowait() invocations.

Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
Reported-by: "Kuyo Chang (張建文)"
Signed-off-by: Peter Zijlstra (Intel)
Tested-by: "Kuyo Chang (張建文)"
---
 kernel/sched/core.c     |   10 ++++++++--
 kernel/sched/deadline.c |    2 ++
 kernel/sched/fair.c     |    4 +++-
 3 files changed, 13 insertions(+), 3 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2645,9 +2645,11 @@ static int migration_cpu_stop(void *data
 		 * it.
 		 */
 		WARN_ON_ONCE(!pending->stop_pending);
+		preempt_disable();
 		task_rq_unlock(rq, p, &rf);
 		stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
 				    &pending->arg, &pending->stop_work);
+		preempt_enable();
 		return 0;
 	}
 out:
@@ -2967,12 +2969,13 @@ static int affine_move_task(struct rq *r
 			complete = true;
 		}
 
+		preempt_disable();
 		task_rq_unlock(rq, p, rf);
-
 		if (push_task) {
 			stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
 					    p, &rq->push_work);
 		}
+		preempt_enable();
 
 		if (complete)
 			complete_all(&pending->done);
@@ -3038,12 +3041,13 @@ static int affine_move_task(struct rq *r
 		if (flags & SCA_MIGRATE_ENABLE)
 			p->migration_flags &= ~MDF_PUSH;
 
+		preempt_disable();
 		task_rq_unlock(rq, p, rf);
-
 		if (!stop_pending) {
 			stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
 					    &pending->arg, &pending->stop_work);
 		}
+		preempt_enable();
 
 		if (flags & SCA_MIGRATE_ENABLE)
 			return 0;
@@ -9459,9 +9463,11 @@ static void balance_push(struct rq *rq)
 	 * Temporarily drop rq->lock such that we can wake-up the stop task.
 	 * Both preemption and IRQs are still disabled.
 	 */
+	preempt_disable();
 	raw_spin_rq_unlock(rq);
 	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
 			    this_cpu_ptr(&push_work));
+	preempt_enable();
 	/*
 	 * At this point need_resched() is true and we'll take the loop in
 	 * schedule(). The next pick is obviously going to be the stop task
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2420,9 +2420,11 @@ static void pull_dl_task(struct rq *this
 		double_unlock_balance(this_rq, src_rq);
 
 		if (push_task) {
+			preempt_disable();
 			raw_spin_rq_unlock(this_rq);
 			stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
 					    push_task, &src_rq->push_work);
+			preempt_enable();
 			raw_spin_rq_lock(this_rq);
 		}
 	}
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11299,13 +11299,15 @@ static int load_balance(int this_cpu, st
 				busiest->push_cpu = this_cpu;
 				active_balance = 1;
 			}
-			raw_spin_rq_unlock_irqrestore(busiest, flags);
 
+			preempt_disable();
+			raw_spin_rq_unlock_irqrestore(busiest, flags);
 			if (active_balance) {
 				stop_one_cpu_nowait(cpu_of(busiest),
 					active_load_balance_cpu_stop, busiest,
 					&busiest->active_balance_work);
 			}
+			preempt_enable();
 		}
 	} else {
 		sd->nr_balance_failed = 0;
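
Purely as an illustration of the ordering argument above -- not part of the
patch -- below is a tiny userspace C model of the window being closed. Every
name in it (model_rq_lock(), stopper_enabled, model_maybe_park_stopper(), ...)
is a made-up stand-in, not a kernel API, and the rule that parking the stopper
needs every CPU to schedule its stopper thread is reduced to a single check:

/* cc -o window window.c && ./window */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool rq_locked;               /* stands in for task_rq_lock()/unlock() */
static bool preempt_disabled;        /* stands in for preempt_disable()/enable() */
static bool stopper_enabled = true;  /* stands in for stopper->enabled */

static void model_rq_lock(void)         { rq_locked = true; }
static void model_rq_unlock(void)       { rq_locked = false; }
static void model_preempt_disable(void) { preempt_disabled = true; }
static void model_preempt_enable(void)  { preempt_disabled = false; }

/* stands in for cpu_stop_queue_work(): only queues while the stopper is enabled */
static bool model_queue_stop_work(void)
{
	return stopper_enabled;
}

/*
 * Stands in for stop_machine_park() reached from take_cpu_down().  The real
 * multi-stop handshake can only get this far once every CPU has scheduled its
 * stopper thread, which cannot happen on a CPU that holds its rq lock or runs
 * with preemption disabled; simplified here to "do nothing in that case".
 */
static void model_maybe_park_stopper(void)
{
	if (!rq_locked && !preempt_disabled)
		stopper_enabled = false;
}

/* one migration attempt; 'fixed' selects the ordering introduced by the patch */
static bool model_migrate(bool fixed)
{
	bool queued;

	stopper_enabled = true;           /* fresh run of the scenario */

	model_rq_lock();
	assert(stopper_enabled);          /* the CPU looks online under the rq lock */
	if (fixed)
		model_preempt_disable();
	model_rq_unlock();

	model_maybe_park_stopper();       /* hotplug completing right in the window */

	queued = model_queue_stop_work(); /* stop_one_cpu_nowait() */
	if (fixed)
		model_preempt_enable();

	model_maybe_park_stopper();       /* after the window the park is harmless */
	return queued;
}

int main(void)
{
	printf("old ordering:   stop work queued = %d (completion lost)\n",
	       model_migrate(false));
	printf("fixed ordering: stop work queued = %d\n",
	       model_migrate(true));
	return 0;
}

With the old ordering the park lands between the unlock and the queue, the
queue attempt sees a disabled stopper and the completion is lost; once the
queueing is covered by the preempt-disabled section the park cannot land in
that window, mirroring the claim that a CPU seen online under rq-lock must
still accept the stop work.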