From patchwork Thu Apr 27 11:19:34 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 88207
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: Ben Segall, Boqun Feng, Crystal Wood, Daniel Bristot de Oliveira,
    Dietmar Eggemann, Ingo Molnar, John Stultz, Juri Lelli, Mel Gorman,
    Peter Zijlstra, Steven Rostedt, Thomas Gleixner, Valentin Schneider,
    Vincent Guittot, Waiman Long, Will Deacon, Sebastian Andrzej Siewior
Subject: [PATCH v2 1/4] sched/core: Provide sched_rtmutex() and expose sched work helpers
Date: Thu, 27 Apr 2023 13:19:34 +0200
Message-Id: <20230427111937.2745231-2-bigeasy@linutronix.de>
In-Reply-To: <20230427111937.2745231-1-bigeasy@linutronix.de>
References: <20230427111937.2745231-1-bigeasy@linutronix.de>
From: Thomas Gleixner

schedule() invokes sched_submit_work() before scheduling and
sched_update_worker() afterwards to ensure that queued block requests are
flushed and the (IO)worker machineries can instantiate new workers if
required. This avoids deadlocks and starvation.

With rt_mutexes this can lead to a subtle problem: When a task blocks on an
rtmutex, current::pi_blocked_on points to the rtmutex it blocks on. When one
of the functions in sched_submit/resume_work() contends on an rtmutex-based
lock, that would corrupt current::pi_blocked_on.

Make it possible to let rtmutex issue the calls outside of the slowpath,
i.e. when it is guaranteed that current::pi_blocked_on is NULL, by:

 - Exposing sched_submit_work() and moving the task_is_running() condition
   into schedule()

 - Renaming sched_update_worker() to sched_resume_work() and exposing it
   too.

 - Providing schedule_rtmutex() which just does the inner loop of
   scheduling until need_resched() is no longer set.

Split out the loop so this does not create yet another copy.

Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/sched.h |  5 +++++
 kernel/sched/core.c   | 40 ++++++++++++++++++++++------------------
 2 files changed, 27 insertions(+), 18 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 675298d6eb362..ff1ce66d8b6e3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -304,6 +304,11 @@ extern long schedule_timeout_idle(long timeout);
 asmlinkage void schedule(void);
 extern void schedule_preempt_disabled(void);
 asmlinkage void preempt_schedule_irq(void);
+
+extern void sched_submit_work(void);
+extern void sched_resume_work(void);
+extern void schedule_rtmutex(void);
+
 #ifdef CONFIG_PREEMPT_RT
 extern void schedule_rtlock(void);
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c415418b0b847..7c5cfae086c78 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6690,14 +6690,11 @@ void __noreturn do_task_dead(void)
 		cpu_relax();
 }
 
-static inline void sched_submit_work(struct task_struct *tsk)
+void sched_submit_work(void)
 {
-	unsigned int task_flags;
+	struct task_struct *tsk = current;
+	unsigned int task_flags = tsk->flags;
 
-	if (task_is_running(tsk))
-		return;
-
-	task_flags = tsk->flags;
 	/*
 	 * If a worker goes to sleep, notify and ask workqueue whether it
 	 * wants to wake up a task to maintain concurrency.
@@ -6723,8 +6720,10 @@ static inline void sched_submit_work(struct task_struct *tsk)
 	blk_flush_plug(tsk->plug, true);
 }
 
-static void sched_update_worker(struct task_struct *tsk)
+void sched_resume_work(void)
 {
+	struct task_struct *tsk = current;
+
 	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
 		if (tsk->flags & PF_WQ_WORKER)
 			wq_worker_running(tsk);
@@ -6733,20 +6732,29 @@ static void sched_update_worker(struct task_struct *tsk)
 	}
 }
 
-asmlinkage __visible void __sched schedule(void)
+static void schedule_loop(unsigned int sched_mode)
 {
-	struct task_struct *tsk = current;
-
-	sched_submit_work(tsk);
 	do {
 		preempt_disable();
-		__schedule(SM_NONE);
+		__schedule(sched_mode);
 		sched_preempt_enable_no_resched();
 	} while (need_resched());
-	sched_update_worker(tsk);
+}
+
+asmlinkage __visible void __sched schedule(void)
+{
+	if (!task_is_running(current))
+		sched_submit_work();
+	schedule_loop(SM_NONE);
+	sched_resume_work();
 }
 EXPORT_SYMBOL(schedule);
 
+void schedule_rtmutex(void)
+{
+	schedule_loop(SM_NONE);
+}
+
 /*
  * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
  * state (have scheduled out non-voluntarily) by making sure that all
@@ -6806,11 +6814,7 @@ void __sched schedule_preempt_disabled(void)
 #ifdef CONFIG_PREEMPT_RT
 void __sched notrace schedule_rtlock(void)
 {
-	do {
-		preempt_disable();
-		__schedule(SM_RTLOCK_WAIT);
-		sched_preempt_enable_no_resched();
-	} while (need_resched());
+	schedule_loop(SM_RTLOCK_WAIT);
 }
 NOKPROBE_SYMBOL(schedule_rtlock);
 #endif
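
The hazard this changelog describes is easiest to see in isolation. Below is
a minimal userspace sketch, not kernel code: the toy_* names and the single
pi_blocked_on slot are hypothetical stand-ins for task_struct::pi_blocked_on
and the work-submission hook, showing how the slot gets silently overwritten
when the hook itself blocks on an rtmutex-based lock:

#include <stdio.h>

struct toy_rtmutex { const char *name; };

struct toy_task {
	struct toy_rtmutex *pi_blocked_on;	/* one slot per task */
};

static struct toy_task current_task;

static void toy_block_on(struct toy_rtmutex *lock)
{
	/* The slowpath records which lock the task blocks on ... */
	current_task.pi_blocked_on = lock;
	/* ... and would end up in schedule() here. */
}

static void toy_submit_work(void)
{
	static struct toy_rtmutex inner = { "inner lock taken by submit work" };

	/* If this hook contends on an rtmutex-based lock ... */
	toy_block_on(&inner);
}

int main(void)
{
	struct toy_rtmutex outer = { "outer rtmutex" };

	toy_block_on(&outer);	/* pi_blocked_on = &outer */
	toy_submit_work();	/* ... the slot is silently overwritten */

	printf("pi_blocked_on now points to: %s\n",
	       current_task.pi_blocked_on->name);
	return 0;
}

Running this prints the inner lock's name: the record of the outer rtmutex is
gone, which is exactly the corruption the series arranges to avoid by calling
the hooks only while pi_blocked_on is guaranteed NULL.
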
From patchwork Thu Apr 27 11:19:35 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 88223
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: Ben Segall, Boqun Feng, Crystal Wood, Daniel Bristot de Oliveira,
    Dietmar Eggemann, Ingo Molnar, John Stultz, Juri Lelli, Mel Gorman,
    Peter Zijlstra, Steven Rostedt, Thomas Gleixner, Valentin Schneider,
    Vincent Guittot, Waiman Long, Will Deacon, Sebastian Andrzej Siewior
Subject: [PATCH v2 2/4] locking/rtmutex: Submit/resume work explicitly before/after blocking
Date: Thu, 27 Apr 2023 13:19:35 +0200
Message-Id: <20230427111937.2745231-3-bigeasy@linutronix.de>
In-Reply-To: <20230427111937.2745231-1-bigeasy@linutronix.de>
References: <20230427111937.2745231-1-bigeasy@linutronix.de>
schedule() invokes sched_submit_work() before scheduling and
sched_resume_work() afterwards to ensure that queued block requests are
flushed and the (IO)worker machineries can instantiate new workers if
required. This avoids deadlocks and starvation.

With rt_mutexes this can lead to a subtle problem: When a task blocks on an
rtmutex, current::pi_blocked_on points to the rtmutex it blocks on. When one
of the functions in sched_submit/resume_work() contends on an rtmutex-based
lock, that would corrupt current::pi_blocked_on.

Let rtmutex and the RT lock variants which are based on it invoke
sched_submit/resume_work() explicitly before and after the slowpath so it's
guaranteed that current::pi_blocked_on cannot be corrupted by blocking on
two locks.

This does not apply to the PREEMPT_RT variants of spinlock_t and rwlock_t as
their scheduling slowpath is separate and cannot invoke the work-related
functions due to potential deadlocks anyway.

[ tglx: Make it explicit and symmetric. Massage changelog ]

Fixes: e17ba59b7e8e1 ("locking/rtmutex: Guard regular sleeping locks specific functions")
Reported-by: Crystal Wood
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Link: https://lore.kernel.org/4b4ab374d3e24e6ea8df5cadc4297619a6d945af.camel@redhat.com
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/locking/rtmutex.c     | 11 +++++++++--
 kernel/locking/rwbase_rt.c   | 18 ++++++++++++++++--
 kernel/locking/rwsem.c       |  6 ++++++
 kernel/locking/spinlock_rt.c |  3 +++
 4 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 728f434de2bbf..aa66a3c5950a7 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1555,7 +1555,7 @@ static int __sched rt_mutex_slowlock_block(struct rt_mutex_base *lock,
 		raw_spin_unlock_irq(&lock->wait_lock);
 
 		if (!owner || !rtmutex_spin_on_owner(lock, waiter, owner))
-			schedule();
+			schedule_rtmutex();
 
 		raw_spin_lock_irq(&lock->wait_lock);
 		set_current_state(state);
@@ -1584,7 +1584,7 @@ static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
 	WARN(1, "rtmutex deadlock detected\n");
 	while (1) {
 		set_current_state(TASK_INTERRUPTIBLE);
-		schedule();
+		schedule_rtmutex();
 	}
 }
 
@@ -1679,6 +1679,12 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 	unsigned long flags;
 	int ret;
 
+	/*
+	 * The task is about to sleep. Invoke sched_submit_work() before
+	 * blocking as that might take locks and corrupt tsk::pi_blocked_on.
+	 */
+	sched_submit_work();
+
 	/*
 	 * Technically we could use raw_spin_[un]lock_irq() here, but this can
 	 * be called in early boot if the cmpxchg() fast path is disabled
@@ -1691,6 +1697,7 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state);
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
+	sched_resume_work();
 	return ret;
 }
 
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 25ec0239477c2..945d474f5d27f 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -131,10 +131,21 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
 static __always_inline int rwbase_read_lock(struct rwbase_rt *rwb,
 					    unsigned int state)
 {
+	int ret;
+
 	if (rwbase_read_trylock(rwb))
 		return 0;
 
-	return __rwbase_read_lock(rwb, state);
+	/*
+	 * The task is about to sleep. For rwsems this submits work as that
+	 * might take locks and corrupt tsk::pi_blocked_on. Must be
+	 * explicit here because __rwbase_read_lock() cannot invoke
+	 * rt_mutex_slowlock(). NOP for rwlocks.
+	 */
+	rwbase_sched_submit_work();
+	ret = __rwbase_read_lock(rwb, state);
+	rwbase_sched_resume_work();
+	return ret;
 }
 
 static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
@@ -230,7 +241,10 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 	struct rt_mutex_base *rtm = &rwb->rtmutex;
 	unsigned long flags;
 
-	/* Take the rtmutex as a first step */
+	/*
+	 * Take the rtmutex as a first step. For rwsem this will also
+	 * invoke sched_submit_work() to flush IO and workers.
+	 */
 	if (rwbase_rtmutex_lock_state(rtm, state))
 		return -EINTR;
 
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acb5a50309a18..aca266006ad47 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1415,6 +1415,12 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
 #define rwbase_rtmutex_lock_state(rtm, state)		\
 	__rt_mutex_lock(rtm, state)
 
+#define rwbase_sched_submit_work()			\
+	sched_submit_work()
+
+#define rwbase_sched_resume_work()			\
+	sched_resume_work()
+
 #define rwbase_rtmutex_slowlock_locked(rtm, state)	\
 	__rt_mutex_slowlock_locked(rtm, NULL, state)
 
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index 48a19ed8486d8..62c4a6866087a 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -159,6 +159,9 @@ rwbase_rtmutex_lock_state(struct rt_mutex_base *rtm, unsigned int state)
 	return 0;
 }
 
+static __always_inline void rwbase_sched_submit_work(void) { }
+static __always_inline void rwbase_sched_resume_work(void) { }
+
 static __always_inline int
 rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state)
 {
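
To make the new calling convention concrete, here is a minimal sketch of the
bracketing pattern this patch establishes, as plain compilable C with
hypothetical *_model stand-ins for the real functions: the work hooks run
before and after the slowpath, while current::pi_blocked_on is still
guaranteed to be NULL, and are never re-entered from inside it:

#include <stdio.h>

/* Hypothetical stand-ins for the real scheduler hooks. */
static void sched_submit_work_model(void)
{
	puts("flush block plug, notify workqueue core");
}

static void sched_resume_work_model(void)
{
	puts("notify workqueue core: worker runs again");
}

static int slowpath_model(void)
{
	/*
	 * From here on pi_blocked_on may be set: the hooks above must
	 * not be invoked again from this region.
	 */
	puts("block on the rtmutex");
	return 0;
}

/* The bracketing pattern: submit before, resume after the slowpath. */
static int toy_slowlock(void)
{
	int ret;

	sched_submit_work_model();
	ret = slowpath_model();
	sched_resume_work_model();
	return ret;
}

int main(void)
{
	return toy_slowlock();
}
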
From patchwork Thu Apr 27 11:19:36 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 88221
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: Ben Segall, Boqun Feng, Crystal Wood, Daniel Bristot de Oliveira,
    Dietmar Eggemann, Ingo Molnar, John Stultz, Juri Lelli, Mel Gorman,
    Peter Zijlstra, Steven Rostedt, Thomas Gleixner, Valentin Schneider,
    Vincent Guittot, Waiman Long, Will Deacon, Sebastian Andrzej Siewior
Subject: [PATCH v2 3/4] locking/rtmutex: Avoid pointless blk_flush_plug() invocations
Date: Thu, 27 Apr 2023 13:19:36 +0200
Message-Id: <20230427111937.2745231-4-bigeasy@linutronix.de>
In-Reply-To: <20230427111937.2745231-1-bigeasy@linutronix.de>
References: <20230427111937.2745231-1-bigeasy@linutronix.de>
With DEBUG_RT_MUTEXES enabled the fast-path rt_mutex_cmpxchg_acquire()
always fails and all lock operations take the slow path, which leads to the
invocation of blk_flush_plug() even if the lock is not contended. This is
unnecessary and defeats the batch processing of requests.

Provide a new inline helper rt_mutex_try_acquire() which maps to
rt_mutex_cmpxchg_acquire() in the non-debug case. For the debug case it
invokes rt_mutex_slowtrylock() which can acquire a non-contended rtmutex
under full debug coverage.

Replace the rt_mutex_cmpxchg_acquire() invocations in __rt_mutex_lock() and
__ww_rt_mutex_lock() with the new helper function, which avoids the
blk_flush_plug() invocation for the non-contended case and preserves the
debug mechanism.
[ tglx: Created a new helper and massaged changelog ]

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/locking/rtmutex.c     | 25 ++++++++++++++++++++++++-
 kernel/locking/ww_rt_mutex.c |  2 +-
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index aa66a3c5950a7..dd76c1b9b7d21 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -218,6 +218,11 @@ static __always_inline bool rt_mutex_cmpxchg_acquire(struct rt_mutex_base *lock,
 	return try_cmpxchg_acquire(&lock->owner, &old, new);
 }
 
+static __always_inline bool rt_mutex_try_acquire(struct rt_mutex_base *lock)
+{
+	return rt_mutex_cmpxchg_acquire(lock, NULL, current);
+}
+
 static __always_inline bool rt_mutex_cmpxchg_release(struct rt_mutex_base *lock,
 						     struct task_struct *old,
 						     struct task_struct *new)
@@ -297,6 +302,24 @@ static __always_inline bool rt_mutex_cmpxchg_acquire(struct rt_mutex_base *lock,
 
 }
 
+static int __sched rt_mutex_slowtrylock(struct rt_mutex_base *lock);
+
+static __always_inline bool rt_mutex_try_acquire(struct rt_mutex_base *lock)
+{
+	/*
+	 * With debug enabled rt_mutex_cmpxchg trylock() will always fail,
+	 * which will unconditionally invoke sched_submit/resume_work() in
+	 * the slow path of __rt_mutex_lock() and __ww_rt_mutex_lock() even
+	 * in the non-contended case.
+	 *
+	 * Avoid that by using rt_mutex_slowtrylock() which is covered by
+	 * the debug code and can acquire a non-contended rtmutex. On
+	 * success the callsite avoids the sched_submit/resume_work()
+	 * dance.
+	 */
+	return rt_mutex_slowtrylock(lock);
+}
+
 static __always_inline bool rt_mutex_cmpxchg_release(struct rt_mutex_base *lock,
 						     struct task_struct *old,
 						     struct task_struct *new)
@@ -1704,7 +1727,7 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 static __always_inline int __rt_mutex_lock(struct rt_mutex_base *lock,
 					   unsigned int state)
 {
-	if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
+	if (likely(rt_mutex_try_acquire(lock)))
 		return 0;
 
 	return rt_mutex_slowlock(lock, NULL, state);
diff --git a/kernel/locking/ww_rt_mutex.c b/kernel/locking/ww_rt_mutex.c
index d1473c624105c..c7196de838edc 100644
--- a/kernel/locking/ww_rt_mutex.c
+++ b/kernel/locking/ww_rt_mutex.c
@@ -62,7 +62,7 @@ __ww_rt_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx,
 	}
 	mutex_acquire_nest(&rtm->dep_map, 0, 0, nest_lock, ip);
 
-	if (likely(rt_mutex_cmpxchg_acquire(&rtm->rtmutex, NULL, current))) {
+	if (likely(rt_mutex_try_acquire(&rtm->rtmutex))) {
 		if (ww_ctx)
 			ww_mutex_set_context_fastpath(lock, ww_ctx);
 		return 0;
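
A condensed model of the helper's two flavors, as plain compilable C: the
toy_* names and the TOY_DEBUG_RT_MUTEXES define are hypothetical stand-ins
for rt_mutex_try_acquire() and CONFIG_DEBUG_RT_MUTEXES, not kernel API:

#include <stdbool.h>
#include <stdio.h>

#ifdef TOY_DEBUG_RT_MUTEXES
/*
 * Debug flavor: the cmpxchg fast path is permanently disabled, so go
 * through a trylock that is covered by the debug checks but can still
 * take a non-contended lock.
 */
static bool toy_try_acquire(void)
{
	puts("slowtrylock: debug-covered, still succeeds when uncontended");
	return true;
}
#else
/* Non-debug flavor: plain fast-path acquisition attempt. */
static bool toy_try_acquire(void)
{
	return true;	/* pretend the cmpxchg succeeded */
}
#endif

int main(void)
{
	if (toy_try_acquire())
		puts("fast acquire: no submit/resume work, no plug flush");
	return 0;
}

Built without the define the fast path decides; built with
-DTOY_DEBUG_RT_MUTEXES the trylock still succeeds on a non-contended lock,
so the caller skips the submit/resume dance, which is the point of the
patch.
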
From patchwork Thu Apr 27 11:19:37 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 88218
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: Ben Segall, Boqun Feng, Crystal Wood, Daniel Bristot de Oliveira,
    Dietmar Eggemann, Ingo Molnar, John Stultz, Juri Lelli, Mel Gorman,
    Peter Zijlstra, Steven Rostedt, Thomas Gleixner, Valentin Schneider,
    Vincent Guittot, Waiman Long, Will Deacon, Sebastian Andrzej Siewior
Subject: [PATCH v2 4/4] locking/rtmutex: Add a lockdep assert to catch potential nested blocking
Date: Thu, 27 Apr 2023 13:19:37 +0200
Message-Id: <20230427111937.2745231-5-bigeasy@linutronix.de>
In-Reply-To: <20230427111937.2745231-1-bigeasy@linutronix.de>
References: <20230427111937.2745231-1-bigeasy@linutronix.de>
From: Thomas Gleixner

There used to be a BUG_ON(current->pi_blocked_on) in the lock acquisition
functions, but that vanished in one of the rtmutex overhauls.

Bring it back in the form of a lockdep assert to catch code paths which take
rtmutex-based locks with current::pi_blocked_on != NULL.

Reported-by: Crystal Wood
Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/locking/rtmutex.c     | 2 ++
 kernel/locking/rwbase_rt.c   | 2 ++
 kernel/locking/spinlock_rt.c | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index dd76c1b9b7d21..479a9487edcc2 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1727,6 +1727,8 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 static __always_inline int __rt_mutex_lock(struct rt_mutex_base *lock,
 					   unsigned int state)
 {
+	lockdep_assert(!current->pi_blocked_on);
+
 	if (likely(rt_mutex_try_acquire(lock)))
 		return 0;
 
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 945d474f5d27f..5be92ca5afabc 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -133,6 +133,8 @@ static __always_inline int rwbase_read_lock(struct rwbase_rt *rwb,
 {
 	int ret;
 
+	lockdep_assert(!current->pi_blocked_on);
+
 	if (rwbase_read_trylock(rwb))
 		return 0;
 
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index 62c4a6866087a..9fe282cd145d9 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -37,6 +37,8 @@
 
 static __always_inline void rtlock_lock(struct rt_mutex_base *rtm)
 {
+	lockdep_assert(!current->pi_blocked_on);
+
 	if (unlikely(!rt_mutex_cmpxchg_acquire(rtm, NULL, current)))
 		rtlock_slowlock(rtm);
 }
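
The invariant the assert enforces can be modeled in a few lines of userspace
C, with assert() standing in for lockdep_assert() and a toy struct for
task_struct; all names here are hypothetical:

#include <assert.h>
#include <stddef.h>

struct toy_task {
	void *pi_blocked_on;
};

static struct toy_task current_task;

static void toy_lock(void *lock)
{
	/*
	 * The new assert: starting another sleeping-lock acquisition
	 * while already recorded as blocked on an rtmutex is a bug.
	 */
	assert(current_task.pi_blocked_on == NULL);

	current_task.pi_blocked_on = lock;
	/* ... block, acquire ... */
	current_task.pi_blocked_on = NULL;
}

int main(void)
{
	int a, b;

	toy_lock(&a);
	toy_lock(&b);	/* fine: the first acquisition completed */
	return 0;
}

Sequential acquisitions pass; calling toy_lock() from inside the blocked
region would trip the assert, just as a nested rtmutex-based lock taken with
pi_blocked_on set now trips lockdep.
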