From patchwork Fri Nov 10 20:41:16 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 163983
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org, Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org, Mateusz Guzik
Wong" , linux-xfs@vger.kernel.org, Mateusz Guzik Subject: [PATCH v3 1/4] locking: Add rwsem_assert_held() and rwsem_assert_held_write() Date: Fri, 10 Nov 2023 20:41:16 +0000 Message-Id: <20231110204119.3692023-2-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231110204119.3692023-1-willy@infradead.org> References: <20231110204119.3692023-1-willy@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-0.9 required=5.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on groat.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (groat.vger.email [0.0.0.0]); Fri, 10 Nov 2023 12:43:45 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1782211188314239792 X-GMAIL-MSGID: 1782211188314239792 Modelled after lockdep_assert_held() and lockdep_assert_held_write(), but are always active, even when lockdep is disabled. Of course, they don't test that _this_ thread is the owner, but it's sufficient to catch many bugs and doesn't incur the same performance penalty as lockdep. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/rwbase_rt.h | 9 ++++++-- include/linux/rwsem.h | 46 ++++++++++++++++++++++++++++++++++----- 2 files changed, 48 insertions(+), 7 deletions(-) diff --git a/include/linux/rwbase_rt.h b/include/linux/rwbase_rt.h index 1d264dd08625..a04acd85705b 100644 --- a/include/linux/rwbase_rt.h +++ b/include/linux/rwbase_rt.h @@ -26,12 +26,17 @@ struct rwbase_rt { } while (0) -static __always_inline bool rw_base_is_locked(struct rwbase_rt *rwb) +static __always_inline bool rw_base_is_locked(const struct rwbase_rt *rwb) { return atomic_read(&rwb->readers) != READER_BIAS; } -static __always_inline bool rw_base_is_contended(struct rwbase_rt *rwb) +static inline void rw_base_assert_held_write(const struct rwbase_rt *rwb) +{ + BUG_ON(atomic_read(&rwb->readers) != WRITER_BIAS); +} + +static __always_inline bool rw_base_is_contended(const struct rwbase_rt *rwb) { return atomic_read(&rwb->readers) > 0; } diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h index 1dd530ce8b45..b5b34cca86f3 100644 --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -66,14 +66,24 @@ struct rw_semaphore { #endif }; -/* In all implementations count != 0 means locked */ +#define RWSEM_UNLOCKED_VALUE 0UL +#define RWSEM_WRITER_LOCKED (1UL << 0) +#define __RWSEM_COUNT_INIT(name) .count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE) + static inline int rwsem_is_locked(struct rw_semaphore *sem) { - return atomic_long_read(&sem->count) != 0; + return atomic_long_read(&sem->count) != RWSEM_UNLOCKED_VALUE; } -#define RWSEM_UNLOCKED_VALUE 0L -#define __RWSEM_COUNT_INIT(name) .count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE) +static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem) +{ + WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE); +} + +static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem) +{ + WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED)); +} /* Common initializer macros and functions */ @@ -152,11 +162,21 @@ do { \ __init_rwsem((sem), #sem, &__key); \ } while (0) -static __always_inline int rwsem_is_locked(struct rw_semaphore *sem) +static __always_inline int rwsem_is_locked(const struct rw_semaphore 
From patchwork Fri Nov 10 20:41:17 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 163982
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org, Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org, Mateusz Guzik
Subject: [PATCH v3 2/4] mm: Use rwsem assertion macros for mmap_lock
Date: Fri, 10 Nov 2023 20:41:17 +0000
Message-Id: <20231110204119.3692023-3-willy@infradead.org>
In-Reply-To: <20231110204119.3692023-1-willy@infradead.org>
References: <20231110204119.3692023-1-willy@infradead.org>

This slightly strengthens our write assertion when lockdep is disabled.
It also downgrades us from BUG_ON to WARN_ON, but I think that's an
improvement.  I don't think dumping the mm_struct was all that valuable;
the call chain is what's important.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mmap_lock.h | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 8d38dcb6d044..de9dc20b01ba 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -60,16 +60,14 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)

 #endif /* CONFIG_TRACING */

-static inline void mmap_assert_locked(struct mm_struct *mm)
+static inline void mmap_assert_locked(const struct mm_struct *mm)
 {
-	lockdep_assert_held(&mm->mmap_lock);
-	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
+	rwsem_assert_held(&mm->mmap_lock);
 }

-static inline void mmap_assert_write_locked(struct mm_struct *mm)
+static inline void mmap_assert_write_locked(const struct mm_struct *mm)
 {
-	lockdep_assert_held_write(&mm->mmap_lock);
-	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
+	rwsem_assert_held_write(&mm->mmap_lock);
 }

 #ifdef CONFIG_PER_VMA_LOCK

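[Editor's note: for context, a hedged sketch of what a caller looks like
after this change; the helper below is hypothetical and not part of the
patch. Any mm code whose contract is "mmap_lock must be held for write"
now gets a WARN_ON-backed check even on !CONFIG_LOCKDEP builds.]

#include <linux/mm.h>
#include <linux/mmap_lock.h>

/* Hypothetical helper: set a VM flag on every VMA of a process. */
static void my_mark_all_vmas(struct mm_struct *mm, vm_flags_t flag)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, 0);

	/* Always-on now: warns if mmap_lock is not write-locked. */
	mmap_assert_write_locked(mm);
	for_each_vma(vmi, vma)
		vm_flags_set(vma, flag);
}

Note that the assertion helpers also take a const struct mm_struct * after
this patch, so they can be called from code that only has a const pointer
to the mm.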
From patchwork Fri Nov 10 20:41:18 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 163985
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org, Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org, Mateusz Guzik
Wong" , linux-xfs@vger.kernel.org, Mateusz Guzik Subject: [PATCH v3 3/4] xfs: Replace xfs_isilocked with xfs_assert_ilocked Date: Fri, 10 Nov 2023 20:41:18 +0000 Message-Id: <20231110204119.3692023-4-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231110204119.3692023-1-willy@infradead.org> References: <20231110204119.3692023-1-willy@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-0.9 required=5.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on groat.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (groat.vger.email [0.0.0.0]); Fri, 10 Nov 2023 12:43:54 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1782211195964140667 X-GMAIL-MSGID: 1782211195964140667 To use the new rwsem_assert_held()/rwsem_assert_held_write(), we can't use the existing ASSERT macro. Add a new xfs_assert_ilocked() and convert all the callers. Fix an apparent bug in xfs_isilocked(): If the caller specifies XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL, xfs_assert_ilocked() will check both the IOLOCK and the ILOCK are held for write. xfs_isilocked() only checked that the ILOCK was held for write. xfs_assert_ilocked() is always on, even if DEBUG or XFS_WARN aren't defined. It's a cheap check, so I don't think it's worth defining it away. Signed-off-by: Matthew Wilcox (Oracle) --- fs/xfs/libxfs/xfs_attr.c | 2 +- fs/xfs/libxfs/xfs_attr_remote.c | 2 +- fs/xfs/libxfs/xfs_bmap.c | 19 +++++---- fs/xfs/libxfs/xfs_defer.c | 2 +- fs/xfs/libxfs/xfs_inode_fork.c | 2 +- fs/xfs/libxfs/xfs_rtbitmap.c | 2 +- fs/xfs/libxfs/xfs_trans_inode.c | 6 +-- fs/xfs/scrub/readdir.c | 4 +- fs/xfs/xfs_attr_list.c | 2 +- fs/xfs/xfs_bmap_util.c | 10 ++--- fs/xfs/xfs_dir2_readdir.c | 2 +- fs/xfs/xfs_dquot.c | 4 +- fs/xfs/xfs_file.c | 4 +- fs/xfs/xfs_inode.c | 72 +++++++++++---------------------- fs/xfs/xfs_inode.h | 2 +- fs/xfs/xfs_inode_item.c | 4 +- fs/xfs/xfs_iops.c | 3 +- fs/xfs/xfs_qm.c | 10 ++--- fs/xfs/xfs_reflink.c | 2 +- fs/xfs/xfs_rtalloc.c | 4 +- fs/xfs/xfs_symlink.c | 2 +- fs/xfs/xfs_trans_dquot.c | 2 +- 22 files changed, 66 insertions(+), 96 deletions(-) diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c index e28d93d232de..ebf0722d8963 100644 --- a/fs/xfs/libxfs/xfs_attr.c +++ b/fs/xfs/libxfs/xfs_attr.c @@ -224,7 +224,7 @@ int xfs_attr_get_ilocked( struct xfs_da_args *args) { - ASSERT(xfs_isilocked(args->dp, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); + xfs_assert_ilocked(args->dp, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL); if (!xfs_inode_hasattr(args->dp)) return -ENOATTR; diff --git a/fs/xfs/libxfs/xfs_attr_remote.c b/fs/xfs/libxfs/xfs_attr_remote.c index d440393b40eb..1c007ebf153a 100644 --- a/fs/xfs/libxfs/xfs_attr_remote.c +++ b/fs/xfs/libxfs/xfs_attr_remote.c @@ -545,7 +545,7 @@ xfs_attr_rmtval_stale( struct xfs_buf *bp; int error; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); if (XFS_IS_CORRUPT(mp, map->br_startblock == DELAYSTARTBLOCK) || XFS_IS_CORRUPT(mp, map->br_startblock == HOLESTARTBLOCK)) diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index be62acffad6c..eaa1dbe9a6b3 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -1189,7 +1189,7 @@ xfs_iread_extents( if (!xfs_need_iread_extents(ifp)) return 0; - 
ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); ir.loaded = 0; xfs_iext_first(ifp, &ir.icur); @@ -3883,7 +3883,7 @@ xfs_bmapi_read( ASSERT(*nmap >= 1); ASSERT(!(flags & ~(XFS_BMAPI_ATTRFORK | XFS_BMAPI_ENTIRE))); - ASSERT(xfs_isilocked(ip, XFS_ILOCK_SHARED|XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_SHARED|XFS_ILOCK_EXCL); if (WARN_ON_ONCE(!ifp)) return -EFSCORRUPTED; @@ -4354,7 +4354,7 @@ xfs_bmapi_write( ASSERT(tp != NULL); ASSERT(len > 0); ASSERT(ifp->if_format != XFS_DINODE_FMT_LOCAL); - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); ASSERT(!(flags & XFS_BMAPI_REMAP)); /* zeroing is for currently only for data extents, not metadata */ @@ -4651,7 +4651,7 @@ xfs_bmapi_remap( ifp = xfs_ifork_ptr(ip, whichfork); ASSERT(len > 0); ASSERT(len <= (xfs_filblks_t)XFS_MAX_BMBT_EXTLEN); - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); ASSERT(!(flags & ~(XFS_BMAPI_ATTRFORK | XFS_BMAPI_PREALLOC | XFS_BMAPI_NORMAP))); ASSERT((flags & (XFS_BMAPI_ATTRFORK | XFS_BMAPI_PREALLOC)) != @@ -5287,7 +5287,7 @@ __xfs_bunmapi( if (xfs_is_shutdown(mp)) return -EIO; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); ASSERT(len > 0); ASSERT(nexts >= 0); @@ -5631,8 +5631,7 @@ xfs_bmse_merge( blockcount = left->br_blockcount + got->br_blockcount; - ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL); ASSERT(xfs_bmse_can_merge(left, got, shift)); new = *left; @@ -5760,7 +5759,7 @@ xfs_bmap_collapse_extents( if (xfs_is_shutdown(mp)) return -EIO; - ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL); error = xfs_iread_extents(tp, ip, whichfork); if (error) @@ -5833,7 +5832,7 @@ xfs_bmap_can_insert_extents( int is_empty; int error = 0; - ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL); if (xfs_is_shutdown(ip->i_mount)) return -EIO; @@ -5875,7 +5874,7 @@ xfs_bmap_insert_extents( if (xfs_is_shutdown(mp)) return -EIO; - ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL); error = xfs_iread_extents(tp, ip, whichfork); if (error) diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c index bcfb6a4203cd..7927a721dc86 100644 --- a/fs/xfs/libxfs/xfs_defer.c +++ b/fs/xfs/libxfs/xfs_defer.c @@ -744,7 +744,7 @@ xfs_defer_ops_capture( * transaction. 
*/ for (i = 0; i < dfc->dfc_held.dr_inos; i++) { - ASSERT(xfs_isilocked(dfc->dfc_held.dr_ip[i], XFS_ILOCK_EXCL)); + xfs_assert_ilocked(dfc->dfc_held.dr_ip[i], XFS_ILOCK_EXCL); ihold(VFS_I(dfc->dfc_held.dr_ip[i])); } diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c index 5a2e7ddfa76d..14193e044f3d 100644 --- a/fs/xfs/libxfs/xfs_inode_fork.c +++ b/fs/xfs/libxfs/xfs_inode_fork.c @@ -563,7 +563,7 @@ xfs_iextents_copy( struct xfs_bmbt_irec rec; int64_t copied = 0; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL | XFS_ILOCK_SHARED)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL | XFS_ILOCK_SHARED); ASSERT(ifp->if_bytes > 0); for_each_xfs_iext(ifp, &icur, &rec) { diff --git a/fs/xfs/libxfs/xfs_rtbitmap.c b/fs/xfs/libxfs/xfs_rtbitmap.c index c269d704314d..0b094a44c6e2 100644 --- a/fs/xfs/libxfs/xfs_rtbitmap.c +++ b/fs/xfs/libxfs/xfs_rtbitmap.c @@ -946,7 +946,7 @@ xfs_rtfree_extent( struct timespec64 atime; ASSERT(mp->m_rbmip->i_itemp != NULL); - ASSERT(xfs_isilocked(mp->m_rbmip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(mp->m_rbmip, XFS_ILOCK_EXCL); error = xfs_rtcheck_alloc_range(&args, start, len); if (error) diff --git a/fs/xfs/libxfs/xfs_trans_inode.c b/fs/xfs/libxfs/xfs_trans_inode.c index 70e97ea6eee7..69fc5b981352 100644 --- a/fs/xfs/libxfs/xfs_trans_inode.c +++ b/fs/xfs/libxfs/xfs_trans_inode.c @@ -31,7 +31,7 @@ xfs_trans_ijoin( { struct xfs_inode_log_item *iip; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); if (ip->i_itemp == NULL) xfs_inode_item_init(ip, ip->i_mount); iip = ip->i_itemp; @@ -60,7 +60,7 @@ xfs_trans_ichgtime( struct timespec64 tv; ASSERT(tp); - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); tv = current_time(inode); @@ -90,7 +90,7 @@ xfs_trans_log_inode( struct inode *inode = VFS_I(ip); ASSERT(iip); - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); ASSERT(!xfs_iflags_test(ip, XFS_ISTALE)); tp->t_flags |= XFS_TRANS_DIRTY; diff --git a/fs/xfs/scrub/readdir.c b/fs/xfs/scrub/readdir.c index e51c1544be63..0b200bab04c8 100644 --- a/fs/xfs/scrub/readdir.c +++ b/fs/xfs/scrub/readdir.c @@ -283,7 +283,7 @@ xchk_dir_walk( return -EIO; ASSERT(S_ISDIR(VFS_I(dp)->i_mode)); - ASSERT(xfs_isilocked(dp, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); + xfs_assert_ilocked(dp, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL); if (dp->i_df.if_format == XFS_DINODE_FMT_LOCAL) return xchk_dir_walk_sf(sc, dp, dirent_fn, priv); @@ -334,7 +334,7 @@ xchk_dir_lookup( return -EIO; ASSERT(S_ISDIR(VFS_I(dp)->i_mode)); - ASSERT(xfs_isilocked(dp, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); + xfs_assert_ilocked(dp, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL); if (dp->i_df.if_format == XFS_DINODE_FMT_LOCAL) { error = xfs_dir2_sf_lookup(&args); diff --git a/fs/xfs/xfs_attr_list.c b/fs/xfs/xfs_attr_list.c index 99bbbe1a0e44..235dd125c92f 100644 --- a/fs/xfs/xfs_attr_list.c +++ b/fs/xfs/xfs_attr_list.c @@ -505,7 +505,7 @@ xfs_attr_list_ilocked( { struct xfs_inode *dp = context->dp; - ASSERT(xfs_isilocked(dp, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); + xfs_assert_ilocked(dp, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL); /* * Decide on what work routines to call based on the inode size. diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c index 731260a5af6d..0887fb37d84f 100644 --- a/fs/xfs/xfs_bmap_util.c +++ b/fs/xfs/xfs_bmap_util.c @@ -649,8 +649,8 @@ xfs_can_free_eofblocks( * Caller must either hold the exclusive io lock; or be inactivating * the inode, which guarantees there are no other users of the inode. 
*/ - ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL) || - (VFS_I(ip)->i_state & I_FREEING)); + if (!(VFS_I(ip)->i_state & I_FREEING)) + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL); /* prealloc/delalloc exists only on regular files */ if (!S_ISREG(VFS_I(ip)->i_mode)) @@ -1106,8 +1106,7 @@ xfs_collapse_file_space( xfs_fileoff_t shift_fsb = XFS_B_TO_FSB(mp, len); bool done = false; - ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); - ASSERT(xfs_isilocked(ip, XFS_MMAPLOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL); trace_xfs_collapse_file_space(ip); @@ -1176,8 +1175,7 @@ xfs_insert_file_space( xfs_fileoff_t shift_fsb = XFS_B_TO_FSB(mp, len); bool done = false; - ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); - ASSERT(xfs_isilocked(ip, XFS_MMAPLOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL); trace_xfs_insert_file_space(ip); diff --git a/fs/xfs/xfs_dir2_readdir.c b/fs/xfs/xfs_dir2_readdir.c index 9f3ceb461515..95cd8b9cf3dc 100644 --- a/fs/xfs/xfs_dir2_readdir.c +++ b/fs/xfs/xfs_dir2_readdir.c @@ -521,7 +521,7 @@ xfs_readdir( return -EIO; ASSERT(S_ISDIR(VFS_I(dp)->i_mode)); - ASSERT(xfs_isilocked(dp, XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); + xfs_assert_ilocked(dp, XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL); XFS_STATS_INC(dp->i_mount, xs_dir_getdents); args.dp = dp; diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index ac6ba646624d..4b2f1b82badc 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -949,7 +949,7 @@ xfs_qm_dqget_inode( if (error) return error; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); ASSERT(xfs_inode_dquot(ip, type) == NULL); id = xfs_qm_id_for_quotatype(ip, type); @@ -1006,7 +1006,7 @@ xfs_qm_dqget_inode( } dqret: - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); trace_xfs_dqget_miss(dqp); *O_dqpp = dqp; return 0; diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index e33e5e13b95f..632653e00906 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -879,7 +879,7 @@ xfs_break_dax_layouts( { struct page *page; - ASSERT(xfs_isilocked(XFS_I(inode), XFS_MMAPLOCK_EXCL)); + xfs_assert_ilocked(XFS_I(inode), XFS_MMAPLOCK_EXCL); page = dax_layout_busy_page(inode->i_mapping); if (!page) @@ -900,7 +900,7 @@ xfs_break_layouts( bool retry; int error; - ASSERT(xfs_isilocked(XFS_I(inode), XFS_IOLOCK_SHARED|XFS_IOLOCK_EXCL)); + xfs_assert_ilocked(XFS_I(inode), XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL); do { retry = false; diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index c0f1c89786c2..1ed6bed19bec 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -333,52 +333,26 @@ xfs_ilock_demote( trace_xfs_ilock_demote(ip, lock_flags, _RET_IP_); } -#if defined(DEBUG) || defined(XFS_WARN) -static inline bool -__xfs_rwsem_islocked( - struct rw_semaphore *rwsem, - bool shared) -{ - if (!debug_locks) - return rwsem_is_locked(rwsem); - - if (!shared) - return lockdep_is_held_type(rwsem, 0); - - /* - * We are checking that the lock is held at least in shared - * mode but don't care that it might be held exclusively - * (i.e. shared | excl). Hence we check if the lock is held - * in any mode rather than an explicit shared mode. 
- */ - return lockdep_is_held_type(rwsem, -1); -} - -bool -xfs_isilocked( +void +xfs_assert_ilocked( struct xfs_inode *ip, uint lock_flags) { - if (lock_flags & (XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)) { - if (!(lock_flags & XFS_ILOCK_SHARED)) - return !!ip->i_lock.mr_writer; - return rwsem_is_locked(&ip->i_lock.mr_lock); - } - - if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) { - return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock, - (lock_flags & XFS_MMAPLOCK_SHARED)); - } - - if (lock_flags & (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED)) { - return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem, - (lock_flags & XFS_IOLOCK_SHARED)); - } - - ASSERT(0); - return false; + if (lock_flags & XFS_ILOCK_SHARED) + rwsem_assert_held(&ip->i_lock.mr_lock); + else if (lock_flags & XFS_ILOCK_EXCL) + ASSERT(ip->i_lock.mr_writer); + + if (lock_flags & XFS_MMAPLOCK_SHARED) + rwsem_assert_held(&VFS_I(ip)->i_mapping->invalidate_lock); + else if (lock_flags & XFS_MMAPLOCK_EXCL) + rwsem_assert_held_write(&VFS_I(ip)->i_mapping->invalidate_lock); + + if (lock_flags & XFS_IOLOCK_SHARED) + rwsem_assert_held(&VFS_I(ip)->i_rwsem); + else if (lock_flags & XFS_IOLOCK_EXCL) + rwsem_assert_held_write(&VFS_I(ip)->i_rwsem); } -#endif /* * xfs_lockdep_subclass_ok() is only used in an ASSERT, so is only called when @@ -1342,9 +1316,9 @@ xfs_itruncate_extents_flags( xfs_filblks_t unmap_len; int error = 0; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); - ASSERT(!atomic_read(&VFS_I(ip)->i_count) || - xfs_isilocked(ip, XFS_IOLOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); + if (atomic_read(&VFS_I(ip)->i_count)) + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL); ASSERT(new_size <= XFS_ISIZE(ip)); ASSERT(tp->t_flags & XFS_TRANS_PERM_LOG_RES); ASSERT(ip->i_itemp != NULL); @@ -1605,7 +1579,7 @@ xfs_inactive_ifree( xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL); error = xfs_ifree(tp, ip); - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); if (error) { /* * If we fail to free the inode, shut down. 
The cancel @@ -2359,7 +2333,7 @@ xfs_ifree( struct xfs_inode_log_item *iip = ip->i_itemp; int error; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); ASSERT(VFS_I(ip)->i_nlink == 0); ASSERT(ip->i_df.if_nextents == 0); ASSERT(ip->i_disk_size == 0 || !S_ISREG(VFS_I(ip)->i_mode)); @@ -2428,7 +2402,7 @@ static void xfs_iunpin( struct xfs_inode *ip) { - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL | XFS_ILOCK_SHARED); trace_xfs_inode_unpin_nowait(ip, _RET_IP_); @@ -3189,7 +3163,7 @@ xfs_iflush( struct xfs_mount *mp = ip->i_mount; int error; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED); ASSERT(xfs_iflags_test(ip, XFS_IFLUSHING)); ASSERT(ip->i_df.if_format != XFS_DINODE_FMT_BTREE || ip->i_df.if_nextents > XFS_IFORK_MAXEXT(ip, XFS_DATA_FORK)); diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h index 3dc47937da5d..dd8b8339ba1b 100644 --- a/fs/xfs/xfs_inode.h +++ b/fs/xfs/xfs_inode.h @@ -523,7 +523,7 @@ void xfs_ilock(xfs_inode_t *, uint); int xfs_ilock_nowait(xfs_inode_t *, uint); void xfs_iunlock(xfs_inode_t *, uint); void xfs_ilock_demote(xfs_inode_t *, uint); -bool xfs_isilocked(struct xfs_inode *, uint); +void xfs_assert_ilocked(struct xfs_inode *, uint); uint xfs_ilock_data_map_shared(struct xfs_inode *); uint xfs_ilock_attr_map_shared(struct xfs_inode *); diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c index cd7803fda8b1..3e125be356c1 100644 --- a/fs/xfs/xfs_inode_item.c +++ b/fs/xfs/xfs_inode_item.c @@ -649,7 +649,7 @@ xfs_inode_item_pin( { struct xfs_inode *ip = INODE_ITEM(lip)->ili_inode; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); ASSERT(lip->li_buf); trace_xfs_inode_pin(ip, _RET_IP_); @@ -755,7 +755,7 @@ xfs_inode_item_release( unsigned short lock_flags; ASSERT(ip->i_itemp != NULL); - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); lock_flags = iip->ili_lock_flags; iip->ili_lock_flags = 0; diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c index fdfda4fba12b..e711668d601b 100644 --- a/fs/xfs/xfs_iops.c +++ b/fs/xfs/xfs_iops.c @@ -796,8 +796,7 @@ xfs_setattr_size( uint lock_flags = 0; bool did_zeroing = false; - ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); - ASSERT(xfs_isilocked(ip, XFS_MMAPLOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL); ASSERT(S_ISREG(inode->i_mode)); ASSERT((iattr->ia_valid & (ATTR_UID|ATTR_GID|ATTR_ATIME|ATTR_ATIME_SET| ATTR_MTIME_SET|ATTR_TIMES_SET)) == 0); diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 94a7932ac570..45cf9a557eb9 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -254,7 +254,7 @@ xfs_qm_dqattach_one( struct xfs_dquot *dqp; int error; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); error = 0; /* @@ -322,7 +322,7 @@ xfs_qm_dqattach_locked( if (!xfs_qm_need_dqattach(ip)) return 0; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); + xfs_assert_ilocked(ip, XFS_ILOCK_EXCL); if (XFS_IS_UQUOTA_ON(mp) && !ip->i_udquot) { error = xfs_qm_dqattach_one(ip, XFS_DQTYPE_USER, @@ -353,7 +353,7 @@ xfs_qm_dqattach_locked( * Don't worry about the dquots that we may have attached before any * error - they'll get detached later if it has not already been done. 
 	 */
-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+	xfs_assert_ilocked(ip, XFS_ILOCK_EXCL);
 	return error;
 }

@@ -1809,7 +1809,7 @@ xfs_qm_vop_chown(
 			XFS_TRANS_DQ_RTBCOUNT : XFS_TRANS_DQ_BCOUNT;

-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+	xfs_assert_ilocked(ip, XFS_ILOCK_EXCL);
 	ASSERT(XFS_IS_QUOTA_ON(ip->i_mount));

 	/* old dquot */
@@ -1897,7 +1897,7 @@ xfs_qm_vop_create_dqattach(
 	if (!XFS_IS_QUOTA_ON(mp))
 		return;

-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+	xfs_assert_ilocked(ip, XFS_ILOCK_EXCL);

 	if (udqp && XFS_IS_UQUOTA_ON(mp)) {
 		ASSERT(ip->i_udquot == NULL);
diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
index 658edee8381d..d10eba9548dc 100644
--- a/fs/xfs/xfs_reflink.c
+++ b/fs/xfs/xfs_reflink.c
@@ -527,7 +527,7 @@ xfs_reflink_allocate_cow(
 	int			error;
 	bool			found;

-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+	xfs_assert_ilocked(ip, XFS_ILOCK_EXCL);
 	if (!ip->i_cowfp) {
 		ASSERT(!xfs_is_reflink_inode(ip));
 		xfs_ifork_init_cow(ip);
diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
index 88c48de5c9c8..5e5eedff71c9 100644
--- a/fs/xfs/xfs_rtalloc.c
+++ b/fs/xfs/xfs_rtalloc.c
@@ -1182,7 +1182,7 @@ xfs_rtallocate_extent(
 	int			error;	/* error value */
 	xfs_rtxnum_t		r;	/* result allocated rtext */

-	ASSERT(xfs_isilocked(args.mp->m_rbmip, XFS_ILOCK_EXCL));
+	xfs_assert_ilocked(args.mp->m_rbmip, XFS_ILOCK_EXCL);
 	ASSERT(minlen > 0 && minlen <= maxlen);

 	/*
@@ -1425,7 +1425,7 @@ xfs_rtpick_extent(
 	uint64_t		seq;	/* sequence number of file creation */
 	struct timespec64	ts;	/* timespec in inode */

-	ASSERT(xfs_isilocked(mp->m_rbmip, XFS_ILOCK_EXCL));
+	xfs_assert_ilocked(mp->m_rbmip, XFS_ILOCK_EXCL);

 	ts = inode_get_atime(VFS_I(mp->m_rbmip));
 	if (!(mp->m_rbmip->i_diflags & XFS_DIFLAG_NEWRTBM)) {
diff --git a/fs/xfs/xfs_symlink.c b/fs/xfs/xfs_symlink.c
index 85e433df6a3f..2ca157de4a73 100644
--- a/fs/xfs/xfs_symlink.c
+++ b/fs/xfs/xfs_symlink.c
@@ -43,7 +43,7 @@ xfs_readlink_bmap_ilocked(
 	int			fsblocks = 0;
 	int			offset;

-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
+	xfs_assert_ilocked(ip, XFS_ILOCK_SHARED | XFS_ILOCK_EXCL);

 	fsblocks = xfs_symlink_blocks(mp, pathlen);
 	error = xfs_bmapi_read(ip, 0, fsblocks, mval, &nmaps, 0);
diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
index aa00cf67ad72..9c159d016ecf 100644
--- a/fs/xfs/xfs_trans_dquot.c
+++ b/fs/xfs/xfs_trans_dquot.c
@@ -796,7 +796,7 @@ xfs_trans_reserve_quota_nblks(
 		return 0;

 	ASSERT(!xfs_is_quota_inode(&mp->m_sb, ip->i_ino));
-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+	xfs_assert_ilocked(ip, XFS_ILOCK_EXCL);

 	if (force)
 		qflags |= XFS_QMOPT_FORCE_RES;

From patchwork Fri Nov 10 20:41:19 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 163986
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org, Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org, Mateusz Guzik
Subject: [PATCH v3 4/4] xfs: Remove mrlock wrapper
Date: Fri, 10 Nov 2023 20:41:19 +0000
Message-Id: <20231110204119.3692023-5-willy@infradead.org>
In-Reply-To: <20231110204119.3692023-1-willy@infradead.org>
References: <20231110204119.3692023-1-willy@infradead.org>

mrlock was an rwsem wrapper that also recorded whether the lock was
held for read or write.  Now that we can ask the generic code whether
the lock is held for read or write, we can remove this wrapper and use
an rwsem directly.

As the comment says, we can't use lockdep to assert that the ILOCK is
held for write, because we might be in a workqueue, and we aren't able
to tell lockdep that we do in fact own the lock.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/xfs/mrlock.h    | 78 ----------------------------------------------
 fs/xfs/xfs_inode.c | 22 +++++++------
 fs/xfs/xfs_inode.h |  2 +-
 fs/xfs/xfs_iops.c  |  4 +--
 fs/xfs/xfs_linux.h |  2 +-
 fs/xfs/xfs_super.c |  4 +--
 6 files changed, 18 insertions(+), 94 deletions(-)
 delete mode 100644 fs/xfs/mrlock.h

diff --git a/fs/xfs/mrlock.h b/fs/xfs/mrlock.h
deleted file mode 100644
index 79155eec341b..000000000000
--- a/fs/xfs/mrlock.h
+++ /dev/null
@@ -1,78 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2000-2006 Silicon Graphics, Inc.
- * All Rights Reserved.
- */ -#ifndef __XFS_SUPPORT_MRLOCK_H__ -#define __XFS_SUPPORT_MRLOCK_H__ - -#include - -typedef struct { - struct rw_semaphore mr_lock; -#if defined(DEBUG) || defined(XFS_WARN) - int mr_writer; -#endif -} mrlock_t; - -#if defined(DEBUG) || defined(XFS_WARN) -#define mrinit(mrp, name) \ - do { (mrp)->mr_writer = 0; init_rwsem(&(mrp)->mr_lock); } while (0) -#else -#define mrinit(mrp, name) \ - do { init_rwsem(&(mrp)->mr_lock); } while (0) -#endif - -#define mrlock_init(mrp, t,n,s) mrinit(mrp, n) -#define mrfree(mrp) do { } while (0) - -static inline void mraccess_nested(mrlock_t *mrp, int subclass) -{ - down_read_nested(&mrp->mr_lock, subclass); -} - -static inline void mrupdate_nested(mrlock_t *mrp, int subclass) -{ - down_write_nested(&mrp->mr_lock, subclass); -#if defined(DEBUG) || defined(XFS_WARN) - mrp->mr_writer = 1; -#endif -} - -static inline int mrtryaccess(mrlock_t *mrp) -{ - return down_read_trylock(&mrp->mr_lock); -} - -static inline int mrtryupdate(mrlock_t *mrp) -{ - if (!down_write_trylock(&mrp->mr_lock)) - return 0; -#if defined(DEBUG) || defined(XFS_WARN) - mrp->mr_writer = 1; -#endif - return 1; -} - -static inline void mrunlock_excl(mrlock_t *mrp) -{ -#if defined(DEBUG) || defined(XFS_WARN) - mrp->mr_writer = 0; -#endif - up_write(&mrp->mr_lock); -} - -static inline void mrunlock_shared(mrlock_t *mrp) -{ - up_read(&mrp->mr_lock); -} - -static inline void mrdemote(mrlock_t *mrp) -{ -#if defined(DEBUG) || defined(XFS_WARN) - mrp->mr_writer = 0; -#endif - downgrade_write(&mrp->mr_lock); -} - -#endif /* __XFS_SUPPORT_MRLOCK_H__ */ diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 1ed6bed19bec..7661084ca568 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -208,9 +208,9 @@ xfs_ilock( } if (lock_flags & XFS_ILOCK_EXCL) - mrupdate_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags)); + down_write_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags)); else if (lock_flags & XFS_ILOCK_SHARED) - mraccess_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags)); + down_read_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags)); } /* @@ -251,10 +251,10 @@ xfs_ilock_nowait( } if (lock_flags & XFS_ILOCK_EXCL) { - if (!mrtryupdate(&ip->i_lock)) + if (!down_write_trylock(&ip->i_lock)) goto out_undo_mmaplock; } else if (lock_flags & XFS_ILOCK_SHARED) { - if (!mrtryaccess(&ip->i_lock)) + if (!down_read_trylock(&ip->i_lock)) goto out_undo_mmaplock; } return 1; @@ -303,9 +303,9 @@ xfs_iunlock( up_read(&VFS_I(ip)->i_mapping->invalidate_lock); if (lock_flags & XFS_ILOCK_EXCL) - mrunlock_excl(&ip->i_lock); + up_write(&ip->i_lock); else if (lock_flags & XFS_ILOCK_SHARED) - mrunlock_shared(&ip->i_lock); + up_read(&ip->i_lock); trace_xfs_iunlock(ip, lock_flags, _RET_IP_); } @@ -324,7 +324,7 @@ xfs_ilock_demote( ~(XFS_IOLOCK_EXCL|XFS_MMAPLOCK_EXCL|XFS_ILOCK_EXCL)) == 0); if (lock_flags & XFS_ILOCK_EXCL) - mrdemote(&ip->i_lock); + downgrade_write(&ip->i_lock); if (lock_flags & XFS_MMAPLOCK_EXCL) downgrade_write(&VFS_I(ip)->i_mapping->invalidate_lock); if (lock_flags & XFS_IOLOCK_EXCL) @@ -338,10 +338,14 @@ xfs_assert_ilocked( struct xfs_inode *ip, uint lock_flags) { + /* + * Sometimes we assert the ILOCK is held exclusively, but we're in + * a workqueue, so lockdep doesn't know we're the owner. 
+ */ if (lock_flags & XFS_ILOCK_SHARED) - rwsem_assert_held(&ip->i_lock.mr_lock); + rwsem_assert_held(&ip->i_lock); else if (lock_flags & XFS_ILOCK_EXCL) - ASSERT(ip->i_lock.mr_writer); + rwsem_assert_held_write_nolockdep(&ip->i_lock); if (lock_flags & XFS_MMAPLOCK_SHARED) rwsem_assert_held(&VFS_I(ip)->i_mapping->invalidate_lock); diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h index dd8b8339ba1b..3cade5903fda 100644 --- a/fs/xfs/xfs_inode.h +++ b/fs/xfs/xfs_inode.h @@ -39,7 +39,7 @@ typedef struct xfs_inode { /* Transaction and locking information. */ struct xfs_inode_log_item *i_itemp; /* logging information */ - mrlock_t i_lock; /* inode lock */ + struct rw_semaphore i_lock; /* inode lock */ atomic_t i_pincount; /* inode pin count */ struct llist_node i_gclist; /* deferred inactivation list */ diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c index e711668d601b..79010d60ee21 100644 --- a/fs/xfs/xfs_iops.c +++ b/fs/xfs/xfs_iops.c @@ -1284,9 +1284,9 @@ xfs_setup_inode( */ lockdep_set_class(&inode->i_rwsem, &inode->i_sb->s_type->i_mutex_dir_key); - lockdep_set_class(&ip->i_lock.mr_lock, &xfs_dir_ilock_class); + lockdep_set_class(&ip->i_lock, &xfs_dir_ilock_class); } else { - lockdep_set_class(&ip->i_lock.mr_lock, &xfs_nondir_ilock_class); + lockdep_set_class(&ip->i_lock, &xfs_nondir_ilock_class); } /* diff --git a/fs/xfs/xfs_linux.h b/fs/xfs/xfs_linux.h index d7873e0360f0..ec3c6c138a63 100644 --- a/fs/xfs/xfs_linux.h +++ b/fs/xfs/xfs_linux.h @@ -22,7 +22,6 @@ typedef __u32 xfs_nlink_t; #include "xfs_types.h" #include "kmem.h" -#include "mrlock.h" #include #include @@ -51,6 +50,7 @@ typedef __u32 xfs_nlink_t; #include #include #include +#include #include #include #include diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 764304595e8b..cd0200ed51f0 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -724,9 +724,7 @@ xfs_fs_inode_init_once( /* xfs inode */ atomic_set(&ip->i_pincount, 0); spin_lock_init(&ip->i_flags_lock); - - mrlock_init(&ip->i_lock, MRLOCK_ALLOW_EQUAL_PRI|MRLOCK_BARRIER, - "xfsino", ip->i_ino); + init_rwsem(&ip->i_lock); } /*
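
[Editor's note: a small, hypothetical sketch, not part of the series, of
the situation the comment in xfs_assert_ilocked() describes: work running
in a workqueue while the rwsem was taken for write by the submitting task.
lockdep_assert_held_write() would complain there because the worker thread
is not the lockdep-recorded owner, which is why the exclusive ILOCK case
uses rwsem_assert_held_write_nolockdep().  The structure and function
names below are invented for illustration.]

#include <linux/rwsem.h>
#include <linux/workqueue.h>

struct defer_ctx {
	struct work_struct work;
	struct rw_semaphore *lock;	/* held for write by the submitter */
	int *object;
};

static void defer_worker(struct work_struct *work)
{
	struct defer_ctx *ctx = container_of(work, struct defer_ctx, work);

	/*
	 * The submitting task took ctx->lock for write and releases it
	 * only after this work completes.  lockdep considers *that* task
	 * the owner, not this kworker, so lockdep_assert_held_write()
	 * would splat here.  The _nolockdep variant only inspects the
	 * rwsem state, giving an always-on, owner-agnostic check.
	 */
	rwsem_assert_held_write_nolockdep(ctx->lock);
	*ctx->object += 1;
}

In this sketch the submitter would INIT_WORK() the item, down_write() the
rwsem, queue and flush the work, and only then up_write(), mirroring how
the XFS ILOCK can be held across deferred work.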