From patchwork Thu Apr 27 11:19:36 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 88221
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: Ben Segall, Boqun Feng, Crystal Wood, Daniel Bristot de Oliveira,
    Dietmar Eggemann, Ingo Molnar, John Stultz, Juri Lelli, Mel Gorman,
    Peter Zijlstra, Steven Rostedt, Thomas Gleixner, Valentin Schneider,
    Vincent Guittot, Waiman Long, Will Deacon, Sebastian Andrzej Siewior
Subject: [PATCH v2 3/4] locking/rtmutex: Avoid pointless blk_flush_plug() invocations
Date: Thu, 27 Apr 2023 13:19:36 +0200
Message-Id: <20230427111937.2745231-4-bigeasy@linutronix.de>
In-Reply-To: <20230427111937.2745231-1-bigeasy@linutronix.de>
References: <20230427111937.2745231-1-bigeasy@linutronix.de>

With DEBUG_RT_MUTEXES enabled the fast-path rt_mutex_cmpxchg_acquire() always fails and all lock
operations take the slow path. This leads to the invocation of blk_flush_plug() even when the lock is not contended, which is unnecessary and prevents batch processing of block requests.

Provide a new inline helper rt_mutex_try_acquire() which maps to rt_mutex_cmpxchg_acquire() in the non-debug case. For the debug case it invokes rt_mutex_slowtrylock(), which can acquire a non-contended rtmutex under full debug coverage.

Replace the rt_mutex_cmpxchg_acquire() invocations in __rt_mutex_lock() and __ww_rt_mutex_lock() with the new helper function, which avoids blk_flush_plug() in the non-contended case and preserves the debug mechanism.

[ tglx: Created a new helper and massaged changelog ]

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/locking/rtmutex.c     | 25 ++++++++++++++++++++++++-
 kernel/locking/ww_rt_mutex.c |  2 +-
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index aa66a3c5950a7..dd76c1b9b7d21 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -218,6 +218,11 @@ static __always_inline bool rt_mutex_cmpxchg_acquire(struct rt_mutex_base *lock,
 	return try_cmpxchg_acquire(&lock->owner, &old, new);
 }
 
+static __always_inline bool rt_mutex_try_acquire(struct rt_mutex_base *lock)
+{
+	return rt_mutex_cmpxchg_acquire(lock, NULL, current);
+}
+
 static __always_inline bool rt_mutex_cmpxchg_release(struct rt_mutex_base *lock,
 						     struct task_struct *old,
 						     struct task_struct *new)
@@ -297,6 +302,24 @@ static __always_inline bool rt_mutex_cmpxchg_acquire(struct rt_mutex_base *lock,
 }
 
+static int __sched rt_mutex_slowtrylock(struct rt_mutex_base *lock);
+
+static __always_inline bool rt_mutex_try_acquire(struct rt_mutex_base *lock)
+{
+	/*
+	 * With debug enabled rt_mutex_cmpxchg trylock() will always fail,
+	 * which will unconditionally invoke sched_submit/resume_work() in
+	 * the slow path of __rt_mutex_lock() and __ww_rt_mutex_lock() even
+	 * in the non-contended case.
+	 *
+	 * Avoid that by using rt_mutex_slowtrylock() which is covered by
+	 * the debug code and can acquire a non-contended rtmutex. On
+	 * success the callsite avoids the sched_submit/resume_work()
+	 * dance.
+	 */
+	return rt_mutex_slowtrylock(lock);
+}
+
 static __always_inline bool rt_mutex_cmpxchg_release(struct rt_mutex_base *lock,
 						     struct task_struct *old,
 						     struct task_struct *new)
@@ -1704,7 +1727,7 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 static __always_inline int __rt_mutex_lock(struct rt_mutex_base *lock,
 					   unsigned int state)
 {
-	if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
+	if (likely(rt_mutex_try_acquire(lock)))
 		return 0;
 
 	return rt_mutex_slowlock(lock, NULL, state);
diff --git a/kernel/locking/ww_rt_mutex.c b/kernel/locking/ww_rt_mutex.c
index d1473c624105c..c7196de838edc 100644
--- a/kernel/locking/ww_rt_mutex.c
+++ b/kernel/locking/ww_rt_mutex.c
@@ -62,7 +62,7 @@ __ww_rt_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx,
 	}
 	mutex_acquire_nest(&rtm->dep_map, 0, 0, nest_lock, ip);
 
-	if (likely(rt_mutex_cmpxchg_acquire(&rtm->rtmutex, NULL, current))) {
+	if (likely(rt_mutex_try_acquire(&rtm->rtmutex))) {
 		if (ww_ctx)
 			ww_mutex_set_context_fastpath(lock, ww_ctx);
 		return 0;