[dm-devel,v3] dm thin: Fix ABBA deadlock between shrink_slab and dm_pool_abort_metadata
Message ID: 20221130133134.2870646-1-chengzhihao1@huawei.com
State: New
Series: [dm-devel,v3] dm thin: Fix ABBA deadlock between shrink_slab and dm_pool_abort_metadata
Commit Message
Zhihao Cheng
Nov. 30, 2022, 1:31 p.m. UTC
The following processes run concurrently:

P1(drop cache)                          P2(kworker)
drop_caches_sysctl_handler
 drop_slab
  shrink_slab
   down_read(&shrinker_rwsem) - LOCK A
   do_shrink_slab
    super_cache_scan
     prune_icache_sb
      dispose_list
       evict
        ext4_evict_inode
         ext4_clear_inode
          ext4_discard_preallocations
           ext4_mb_load_buddy_gfp
            ext4_mb_init_cache
             ext4_read_block_bitmap_nowait
              ext4_read_bh_nowait
               submit_bh
                dm_submit_bio
                                        do_worker
                                         process_deferred_bios
                                          commit
                                           metadata_operation_failed
                                            dm_pool_abort_metadata
                                             down_write(&pmd->root_lock) - LOCK B
                                             __destroy_persistent_data_objects
                                              dm_block_manager_destroy
                                               dm_bufio_client_destroy
                                                unregister_shrinker
                                                 down_write(&shrinker_rwsem)
                thin_map                                      |
                 dm_thin_find_block                           ↓
                  down_read(&pmd->root_lock)  --> ABBA deadlock
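Reduced to its essentials, this is the classic ABBA pattern on two rwsems. A minimal illustrative sketch follows; lock_a, lock_b, and the two path functions are hypothetical stand-ins for the locks and call chains above, not code from the driver:

#include <linux/rwsem.h>

static DECLARE_RWSEM(lock_a);	/* stands in for shrinker_rwsem */
static DECLARE_RWSEM(lock_b);	/* stands in for pmd->root_lock */

static void p1_path(void)	/* drop-cache side */
{
	down_read(&lock_a);	/* shrink_slab() takes A */
	down_read(&lock_b);	/* dm_thin_find_block() waits for B, held by P2 */
	up_read(&lock_b);
	up_read(&lock_a);
}

static void p2_path(void)	/* dm-thin worker side */
{
	down_write(&lock_b);	/* dm_pool_abort_metadata() takes B */
	down_write(&lock_a);	/* unregister_shrinker() waits for A, held by P1 */
	up_write(&lock_a);
	up_write(&lock_b);
}

Once P1 holds A and P2 holds B, each waits forever for the lock the other holds.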
The inversion triggers hung-task reports:
[ 76.974820] INFO: task kworker/u4:3:63 blocked for more than 15 seconds.
[ 76.976019] Not tainted 6.1.0-rc4-00011-g8f17dd350364-dirty #910
[ 76.978521] task:kworker/u4:3 state:D stack:0 pid:63 ppid:2
[ 76.978534] Workqueue: dm-thin do_worker
[ 76.978552] Call Trace:
[ 76.978564] __schedule+0x6ba/0x10f0
[ 76.978582] schedule+0x9d/0x1e0
[ 76.978588] rwsem_down_write_slowpath+0x587/0xdf0
[ 76.978600] down_write+0xec/0x110
[ 76.978607] unregister_shrinker+0x2c/0xf0
[ 76.978616] dm_bufio_client_destroy+0x116/0x3d0
[ 76.978625] dm_block_manager_destroy+0x19/0x40
[ 76.978629] __destroy_persistent_data_objects+0x5e/0x70
[ 76.978636] dm_pool_abort_metadata+0x8e/0x100
[ 76.978643] metadata_operation_failed+0x86/0x110
[ 76.978649] commit+0x6a/0x230
[ 76.978655] do_worker+0xc6e/0xd90
[ 76.978702] process_one_work+0x269/0x630
[ 76.978714] worker_thread+0x266/0x630
[ 76.978730] kthread+0x151/0x1b0
[ 76.978772] INFO: task test.sh:2646 blocked for more than 15 seconds.
[ 76.979756] Not tainted 6.1.0-rc4-00011-g8f17dd350364-dirty #910
[ 76.982111] task:test.sh state:D stack:0 pid:2646 ppid:2459
[ 76.982128] Call Trace:
[ 76.982139] __schedule+0x6ba/0x10f0
[ 76.982155] schedule+0x9d/0x1e0
[ 76.982159] rwsem_down_read_slowpath+0x4f4/0x910
[ 76.982173] down_read+0x84/0x170
[ 76.982177] dm_thin_find_block+0x4c/0xd0
[ 76.982183] thin_map+0x201/0x3d0
[ 76.982188] __map_bio+0x5b/0x350
[ 76.982195] dm_submit_bio+0x2b6/0x930
[ 76.982202] __submit_bio+0x123/0x2d0
[ 76.982209] submit_bio_noacct_nocheck+0x101/0x3e0
[ 76.982222] submit_bio_noacct+0x389/0x770
[ 76.982227] submit_bio+0x50/0xc0
[ 76.982232] submit_bh_wbc+0x15e/0x230
[ 76.982238] submit_bh+0x14/0x20
[ 76.982241] ext4_read_bh_nowait+0xc5/0x130
[ 76.982247] ext4_read_block_bitmap_nowait+0x340/0xc60
[ 76.982254] ext4_mb_init_cache+0x1ce/0xdc0
[ 76.982259] ext4_mb_load_buddy_gfp+0x987/0xfa0
[ 76.982263] ext4_discard_preallocations+0x45d/0x830
[ 76.982274] ext4_clear_inode+0x48/0xf0
[ 76.982280] ext4_evict_inode+0xcf/0xc70
[ 76.982285] evict+0x119/0x2b0
[ 76.982290] dispose_list+0x43/0xa0
[ 76.982294] prune_icache_sb+0x64/0x90
[ 76.982298] super_cache_scan+0x155/0x210
[ 76.982303] do_shrink_slab+0x19e/0x4e0
[ 76.982310] shrink_slab+0x2bd/0x450
[ 76.982317] drop_slab+0xcc/0x1a0
[ 76.982323] drop_caches_sysctl_handler+0xb7/0xe0
[ 76.982327] proc_sys_call_handler+0x1bc/0x300
[ 76.982331] proc_sys_write+0x17/0x20
[ 76.982334] vfs_write+0x3d3/0x570
[ 76.982342] ksys_write+0x73/0x160
[ 76.982347] __x64_sys_write+0x1e/0x30
[ 76.982352] do_syscall_64+0x35/0x80
[ 76.982357] entry_SYSCALL_64_after_hwframe+0x63/0xcd
metadata_operation_failed() is called when an operation on the dm pool
metadata fails; the pool then destroys and recreates its metadata
objects. As part of that, the dm-bufio shrinker is unregistered and
re-registered, which takes shrinker_rwsem for write while pmd->root_lock
is already held via pmd_write_lock().

Fix it by allocating the new dm_block_manager before taking
pmd->root_lock and destroying the old dm_block_manager after releasing
it; only the swap from the old to the new dm_block_manager happens under
pmd->root_lock. The shrinker register/unregister is thereby done without
holding pmd->root_lock.
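In sketch form, the reordered dm_pool_abort_metadata() flow looks like
this (a simplified fragment, not a complete function; error handling and
declarations elided, see the diff below for the real code):

	struct dm_block_manager *old_bm, *new_bm;

	/* 1) No locks held: dm_block_manager_create() registers a bufio
	 *    shrinker, which takes shrinker_rwsem for write. */
	new_bm = dm_block_manager_create(pmd->bdev,
			THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
			THIN_MAX_CONCURRENT_LOCKS);

	/* 2) Under pmd->root_lock: pointer swap only, no shrinker
	 *    (un)registration. */
	pmd_write_lock(pmd);
	__destroy_persistent_data_objects(pmd);	/* pmd->bm stays alive */
	old_bm = pmd->bm;
	pmd->bm = new_bm;
	r = __open_or_format_metadata(pmd, false);
	pmd_write_unlock(pmd);

	/* 3) No locks held: destroying the old manager unregisters its
	 *    shrinker, taking shrinker_rwsem again. */
	dm_block_manager_destroy(old_bm);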
A reproducer is available at [Link].
Link: https://bugzilla.kernel.org/show_bug.cgi?id=216676
Fixes: e49e582965b3 ("dm thin: add read only and fail io modes")
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
---
v1->v2: Update fix strategy: allocate/destroy the dm_block_manager
outside of pmd->root_lock.
v2->v3: Move the fail_io setting inside pmd->root_lock.
drivers/md/dm-thin-metadata.c | 40 +++++++++++++++++++++++++++++------
1 file changed, 34 insertions(+), 6 deletions(-)
Comments
On Wed, Nov 30 2022 at 8:31P -0500,
Zhihao Cheng <chengzhihao1@huawei.com> wrote:

> Following concurrent processes:
>
> P1(drop cache)                P2(kworker)
> [...]
> Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
> [...]

I've picked your patch up and folded in the following changes:

diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
index 7729372519b8..1a62226ac34e 100644
--- a/drivers/md/dm-thin-metadata.c
+++ b/drivers/md/dm-thin-metadata.c
@@ -776,12 +776,15 @@ static int __create_persistent_data_objects(struct dm_pool_metadata *pmd, bool f
 	return r;
 }
 
-static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd)
+static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd,
+					       bool destroy_bm)
 {
 	dm_sm_destroy(pmd->data_sm);
 	dm_sm_destroy(pmd->metadata_sm);
 	dm_tm_destroy(pmd->nb_tm);
 	dm_tm_destroy(pmd->tm);
+	if (destroy_bm)
+		dm_block_manager_destroy(pmd->bm);
 }
 
 static int __begin_transaction(struct dm_pool_metadata *pmd)
@@ -987,10 +990,8 @@ int dm_pool_metadata_close(struct dm_pool_metadata *pmd)
 			       __func__, r);
 	}
 	pmd_write_unlock(pmd);
-	if (!pmd->fail_io) {
-		__destroy_persistent_data_objects(pmd);
-		dm_block_manager_destroy(pmd->bm);
-	}
+	if (!pmd->fail_io)
+		__destroy_persistent_data_objects(pmd, true);
 
 	kfree(pmd);
 	return 0;
@@ -1863,9 +1864,16 @@ int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
 	int r = -EINVAL;
 	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
 
-	if (pmd->fail_io)
-		return -EINVAL;
+	/* fail_io is double-checked with pmd->root_lock held below */
+	if (unlikely(pmd->fail_io))
+		return r;
 
+	/*
+	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
+	 * pmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
+	 * shrinker associated with the block manager's bufio client vs pmd root_lock).
+	 * - must take shrinker_rwsem without holding pmd->root_lock
+	 */
 	new_bm = dm_block_manager_create(pmd->bdev, THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
 					 THIN_MAX_CONCURRENT_LOCKS);
 
@@ -1876,11 +1884,10 @@ int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
 	}
 
 	__set_abort_with_changes_flags(pmd);
-	__destroy_persistent_data_objects(pmd);
+	__destroy_persistent_data_objects(pmd, false);
 	old_bm = pmd->bm;
 	if (IS_ERR(new_bm)) {
-		/* Return back if creating dm_block_manager failed. */
-		DMERR("could not create block manager");
+		DMERR("could not create block manager during abort");
 		pmd->bm = NULL;
 		r = PTR_ERR(new_bm);
 		goto out_unlock;
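The folded-in fail_io handling is a standard unlocked-fast-path-plus-recheck:
the first test may race with a concurrent abort, so it is repeated once the
lock is held. Reduced to its shape (an illustrative excerpt mirroring the
diff above, not new code):

	if (unlikely(pmd->fail_io))	/* unlocked fast path: may race... */
		return r;
	/* ... */
	pmd_write_lock(pmd);
	if (pmd->fail_io) {		/* ...so re-check under pmd->root_lock */
		pmd_write_unlock(pmd);
		goto out;
	}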
> On Wed, Nov 30 2022 at 8:31P -0500,
> Zhihao Cheng <chengzhihao1@huawei.com> wrote:
>
>> Following concurrent processes:
>>
>> P1(drop cache)                P2(kworker)
>> [...]
>> 2.31.1
>
> I've picked your patch up and folded in the following changes:
>
> [...]

Hi, Mike

This version is great, thanks for reviewing and modifying.
diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
index a27395c8621f..7729372519b8 100644
--- a/drivers/md/dm-thin-metadata.c
+++ b/drivers/md/dm-thin-metadata.c
@@ -782,7 +782,6 @@ static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd)
 	dm_sm_destroy(pmd->metadata_sm);
 	dm_tm_destroy(pmd->nb_tm);
 	dm_tm_destroy(pmd->tm);
-	dm_block_manager_destroy(pmd->bm);
 }
 
 static int __begin_transaction(struct dm_pool_metadata *pmd)
@@ -988,8 +987,10 @@ int dm_pool_metadata_close(struct dm_pool_metadata *pmd)
 			       __func__, r);
 	}
 	pmd_write_unlock(pmd);
-	if (!pmd->fail_io)
+	if (!pmd->fail_io) {
 		__destroy_persistent_data_objects(pmd);
+		dm_block_manager_destroy(pmd->bm);
+	}
 
 	kfree(pmd);
 	return 0;
@@ -1860,19 +1861,46 @@ static void __set_abort_with_changes_flags(struct dm_pool_metadata *pmd)
 int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
 {
 	int r = -EINVAL;
+	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
 
-	pmd_write_lock(pmd);
 	if (pmd->fail_io)
+		return -EINVAL;
+
+	new_bm = dm_block_manager_create(pmd->bdev, THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
+					 THIN_MAX_CONCURRENT_LOCKS);
+
+	pmd_write_lock(pmd);
+	if (pmd->fail_io) {
+		pmd_write_unlock(pmd);
 		goto out;
+	}
 
 	__set_abort_with_changes_flags(pmd);
 	__destroy_persistent_data_objects(pmd);
-	r = __create_persistent_data_objects(pmd, false);
+	old_bm = pmd->bm;
+	if (IS_ERR(new_bm)) {
+		/* Return back if creating dm_block_manager failed. */
+		DMERR("could not create block manager");
+		pmd->bm = NULL;
+		r = PTR_ERR(new_bm);
+		goto out_unlock;
+	}
+
+	pmd->bm = new_bm;
+	r = __open_or_format_metadata(pmd, false);
+	if (r) {
+		pmd->bm = NULL;
+		goto out_unlock;
+	}
+	new_bm = NULL;
+out_unlock:
 	if (r)
 		pmd->fail_io = true;
-
-out:
 	pmd_write_unlock(pmd);
+	dm_block_manager_destroy(old_bm);
+out:
+	if (new_bm && !IS_ERR(new_bm))
+		dm_block_manager_destroy(new_bm);
 
 	return r;
 }