Message ID: <20221115171709.3774614-2-chenxiaosong2@huawei.com>
State: New
Headers:
From: ChenXiaoSong <chenxiaosong2@huawei.com>
To: clm@fb.com, josef@toxicpanda.com, dsterba@suse.com
Cc: linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org, chenxiaosong2@huawei.com, zhangxiaoxu5@huawei.com, yanaijie@huawei.com, quwenruo.btrfs@gmx.com, wqu@suse.com
Subject: [PATCH v4 1/3] btrfs: add might_sleep() to some places in update_qgroup_limit_item()
Date: Wed, 16 Nov 2022 01:17:07 +0800
Message-ID: <20221115171709.3774614-2-chenxiaosong2@huawei.com>
In-Reply-To: <20221115171709.3774614-1-chenxiaosong2@huawei.com>
References: <20221115171709.3774614-1-chenxiaosong2@huawei.com>
Series: btrfs: fix sleep from invalid context bug in update_qgroup_limit_item()
Commit Message
ChenXiaoSong
Nov. 15, 2022, 5:17 p.m. UTC
As potential sleeping under a spin lock is hard to spot, we should add
might_sleep() to some places.
Signed-off-by: ChenXiaoSong <chenxiaosong2@huawei.com>
---
fs/btrfs/ctree.c  | 2 ++
fs/btrfs/qgroup.c | 2 ++
2 files changed, 4 insertions(+)
Comments
On Wed, Nov 16, 2022 at 01:17:07AM +0800, ChenXiaoSong wrote:
> As the potential sleeping under spin lock is hard to spot, we should add
> might_sleep() to some places.
>
> Signed-off-by: ChenXiaoSong <chenxiaosong2@huawei.com>
> ---
>  fs/btrfs/ctree.c  | 2 ++
>  fs/btrfs/qgroup.c | 2 ++
>  2 files changed, 4 insertions(+)
>
> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> index a9543f01184c..809053e9cfde 100644
> --- a/fs/btrfs/ctree.c
> +++ b/fs/btrfs/ctree.c
> @@ -1934,6 +1934,8 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
>  	int min_write_lock_level;
>  	int prev_cmp;
>
> +	might_sleep();

This needs some explanation in the changelog; the reason was mentioned in
a past patch iteration: it's due to potential IO if the blocks are not
cached.

> +
>  	lowest_level = p->lowest_level;
>  	WARN_ON(lowest_level && ins_len > 0);
>  	WARN_ON(p->nodes[0] != NULL);
> diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
> index 9334c3157c22..d0480b9c6c86 100644
> --- a/fs/btrfs/qgroup.c
> +++ b/fs/btrfs/qgroup.c
> @@ -779,6 +779,8 @@ static int update_qgroup_limit_item(struct btrfs_trans_handle *trans,
>  	int ret;
>  	int slot;
>
> +	might_sleep();

This one is redundant, no? There's a call to btrfs_search_slot a few
lines below.

> +
>  	key.objectid = 0;
>  	key.type = BTRFS_QGROUP_LIMIT_KEY;
>  	key.offset = qgroup->qgroupid;
> --
> 2.31.1
On 2022/11/16 01:17, ChenXiaoSong wrote:
> As the potential sleeping under spin lock is hard to spot, we should add
> might_sleep() to some places.
>
> Signed-off-by: ChenXiaoSong <chenxiaosong2@huawei.com>

Looks good.

We may want to add more in other locations, but this is really a good
start.

Reviewed-by: Qu Wenruo <wqu@suse.com>

Thanks,
Qu
On 2022/11/16 6:48, Qu Wenruo wrote:
> Looks good.
>
> We may want to add more in other locations, but this is really a good
> start.
>
> Reviewed-by: Qu Wenruo <wqu@suse.com>
>
> Thanks,
> Qu

If I just add might_sleep() in btrfs_alloc_path() and
btrfs_search_slot(), is it reasonable?

Or just add might_sleep() to one place in update_qgroup_limit_item()?
On 2022/11/16 16:09, ChenXiaoSong wrote:
> If I just add might_sleep() in btrfs_alloc_path() and
> btrfs_search_slot(), is it reasonable?

Adding it to btrfs_search_slot() is definitely correct.

But why for btrfs_alloc_path()? Wouldn't kmem_cache_zalloc() itself
already do the might_sleep_if() somewhere?

I just looked at the call chain, and indeed it is doing the check already:

btrfs_alloc_path()
|- kmem_cache_zalloc()
   |- kmem_cache_alloc()
      |- __kmem_cache_alloc_lru()
         |- slab_alloc()
            |- slab_alloc_node()
               |- slab_pre_alloc_hook()
                  |- might_alloc()
                     |- might_sleep_if()

Thanks,
Qu

> Or just add might_sleep() to one place in update_qgroup_limit_item()?
On Wed, Nov 16, 2022 at 04:43:50PM +0800, Qu Wenruo wrote:
> On 2022/11/16 16:09, ChenXiaoSong wrote:
> > If I just add might_sleep() in btrfs_alloc_path() and
> > btrfs_search_slot(), is it reasonable?
>
> Adding it to btrfs_search_slot() is definitely correct.
>
> But why for btrfs_alloc_path()? Wouldn't kmem_cache_zalloc() itself
> already do the might_sleep_if() somewhere?
>
> I just looked at the call chain, and indeed it is doing the check already:
>
> btrfs_alloc_path()
> |- kmem_cache_zalloc()
>    |- kmem_cache_alloc()
>       |- __kmem_cache_alloc_lru()
>          |- slab_alloc()
>             |- slab_alloc_node()
>                |- slab_pre_alloc_hook()
>                   |- might_alloc()
>                      |- might_sleep_if()

The call chain is unconditional so the check will always happen, but the
condition itself in might_sleep_if does not recognize GFP_NOFS:

  34 static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
  35 {
  36         return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
  37 }

  #define GFP_NOFS (__GFP_RECLAIM | __GFP_IO)

And I think the qgroup limit was exactly a spin lock over btrfs_path_alloc,
so it did not help. A might_sleep() inside btrfs_path_alloc() is a very
minimal but reliable check we could add; the paths are used in many places
so it would increase the coverage.
On 2022/11/16 20:24, David Sterba wrote:
> The call chain is unconditional so the check will always happen, but the
> condition itself in might_sleep_if does not recognize GFP_NOFS:
>
> 34 static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
> 35 {
> 36         return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
> 37 }
>
> #define GFP_NOFS (__GFP_RECLAIM | __GFP_IO)
>
> And I think the qgroup limit was exactly a spin lock over btrfs_path_alloc,
> so it did not help. A might_sleep() inside btrfs_path_alloc() is a very
> minimal but reliable check we could add; the paths are used in many places
> so it would increase the coverage.

OK, then it makes sense now for btrfs_alloc_path().

But I still believe this looks like a bug in gfpflags_allow_blocking()...

Thanks,
Qu
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index a9543f01184c..809053e9cfde 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -1934,6 +1934,8 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
 	int min_write_lock_level;
 	int prev_cmp;
 
+	might_sleep();
+
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
 	WARN_ON(p->nodes[0] != NULL);
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 9334c3157c22..d0480b9c6c86 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -779,6 +779,8 @@ static int update_qgroup_limit_item(struct btrfs_trans_handle *trans,
 	int ret;
 	int slot;
 
+	might_sleep();
+
 	key.objectid = 0;
 	key.type = BTRFS_QGROUP_LIMIT_KEY;
 	key.offset = qgroup->qgroupid;