Message ID: 20221214025101.1268437-1-ming.lei@redhat.com
Headers
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Zhong Jinghua <zhongjinghua@huawei.com>, Yu Kuai <yukuai3@huawei.com>, Dennis Zhou <dennis@kernel.org>, Ming Lei <ming.lei@redhat.com>
Subject: [PATCH 0/3] lib/percpu-refcount: fix use-after-free by late ->release
Date: Wed, 14 Dec 2022 10:50:58 +0800
Message-Id: <20221214025101.1268437-1-ming.lei@redhat.com>
Series
lib/percpu-refcount: fix use-after-free by late ->release
Message
Ming Lei
Dec. 14, 2022, 2:50 a.m. UTC
Hi,

The pattern of wait_event(percpu_ref_is_zero()) may cause percpu_ref_exit()
to be called before ->release() is done, so a use-after-free may be caused.
Fix the issue by draining ->release() in percpu_ref_exit().

Ming Lei (3):
  lib/percpu-refcount: support to exit refcount automatically during
    releasing
  lib/percpu-refcount: apply PERCPU_REF_AUTO_EXIT
  lib/percpu-refcount: drain ->release() in percpu_ref_exit()

 drivers/infiniband/ulp/rtrs/rtrs-srv.c |  4 +--
 include/linux/percpu-refcount.h        | 36 ++++++++++++++++++++++++--
 lib/percpu-refcount.c                  | 31 +++++++++++++++++++---
 mm/memcontrol.c                        |  5 ++--
 4 files changed, 66 insertions(+), 10 deletions(-)
Comments
On Wed, Dec 14, 2022 at 04:16:51PM +0800, Hillf Danton wrote:
> On 14 Dec 2022 10:51:01 +0800 Ming Lei <ming.lei@redhat.com>
> > The pattern of wait_event(percpu_ref_is_zero()) has been used in several
>
> For example?

blk_mq_freeze_queue_wait() and target_wait_for_sess_cmds().

> > kernel components, and this way actually has the following risk:
> >
> > - percpu_ref_is_zero() can be returned just between
> >   atomic_long_sub_and_test() and ref->data->release(ref)
> >
> > - given the refcount is found as zero, percpu_ref_exit() could
> >   be called, and the host data structure is freed
> >
> > - then use-after-free is triggered in ->release() when the user host
> >   data structure is freed after percpu_ref_exit() returns
>
> The race between exit and the release callback should be considered at the
> corresponding callsite, given the comment below, and closed for instance
> by synchronizing rcu.
>
> /**
>  * percpu_ref_put_many - decrement a percpu refcount
>  * @ref: percpu_ref to put
>  * @nr: number of references to put
>  *
>  * Decrement the refcount, and if 0, call the release function (which was passed
>  * to percpu_ref_init())
>  *
>  * This function is safe to call as long as @ref is between init and exit.
>  */

Not sure if the above comment implies that the callsite should cover the
race.
But blk-mq can really avoid the trouble by using the existing call_rcu():

diff --git a/block/blk-core.c b/block/blk-core.c
index 3866b6c4cd88..9321767470dc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -254,14 +254,15 @@ EXPORT_SYMBOL_GPL(blk_clear_pm_only);
 
 static void blk_free_queue_rcu(struct rcu_head *rcu_head)
 {
-	kmem_cache_free(blk_requestq_cachep,
-			container_of(rcu_head, struct request_queue, rcu_head));
+	struct request_queue *q = container_of(rcu_head,
+			struct request_queue, rcu_head);
+
+	percpu_ref_exit(&q->q_usage_counter);
+	kmem_cache_free(blk_requestq_cachep, q);
 }
 
 static void blk_free_queue(struct request_queue *q)
 {
-	percpu_ref_exit(&q->q_usage_counter);
-
 	if (q->poll_stat)
 		blk_stat_remove_callback(q, q->poll_cb);
 	blk_stat_free_callback(q->poll_cb);

Thanks,
Ming
Hello,

On Wed, Dec 14, 2022 at 09:30:08PM +0800, Ming Lei wrote:
> On Wed, Dec 14, 2022 at 04:16:51PM +0800, Hillf Danton wrote:
> > On 14 Dec 2022 10:51:01 +0800 Ming Lei <ming.lei@redhat.com>
> > > The pattern of wait_event(percpu_ref_is_zero()) has been used in several
> >
> > For example?
>
> blk_mq_freeze_queue_wait() and target_wait_for_sess_cmds().
>
> > > kernel components, and this way actually has the following risk:
> > >
> > > - percpu_ref_is_zero() can be returned just between
> > >   atomic_long_sub_and_test() and ref->data->release(ref)
> > >
> > > - given the refcount is found as zero, percpu_ref_exit() could
> > >   be called, and the host data structure is freed
> > >
> > > - then use-after-free is triggered in ->release() when the user host
> > >   data structure is freed after percpu_ref_exit() returns
> >
> > The race between exit and the release callback should be considered at the
> > corresponding callsite, given the comment below, and closed for instance
> > by synchronizing rcu.
> >
> > /**
> >  * percpu_ref_put_many - decrement a percpu refcount
> >  * @ref: percpu_ref to put
> >  * @nr: number of references to put
> >  *
> >  * Decrement the refcount, and if 0, call the release function (which was passed
> >  * to percpu_ref_init())
> >  *
> >  * This function is safe to call as long as @ref is between init and exit.
> >  */
>
> Not sure if the above comment implies that the callsite should cover the
> race.
>
> But blk-mq can really avoid the trouble by using the existing call_rcu():
>

I struggle with the dependency on release(). release() itself should not
block, but a common pattern would be to throw a call_rcu() in and
schedule additional work - see block/blk-cgroup.c, blkg_release().

I think the dependency really is the completion of release() and the
work scheduled on its behalf rather than strictly starting the
release() callback. This series doesn't preclude that from happening.
/**
 * percpu_ref_exit - undo percpu_ref_init()
 * @ref: percpu_ref to exit
 *
 * This function exits @ref. The caller is responsible for ensuring that
 * @ref is no longer in active use. The usual places to invoke this
 * function from are the @ref->release() callback or in init failure path
 * where percpu_ref_init() succeeded but other parts of the initialization
 * of the embedding object failed.
 */

I think the percpu_ref_exit() comment explains the more common use case
approach to percpu refcounts. release() triggering percpu_ref_exit() is
the ideal case.

Thanks,
Dennis
On Wed, Dec 14, 2022 at 08:07:28AM -0800, Dennis Zhou wrote:
> Hello,
>
> On Wed, Dec 14, 2022 at 09:30:08PM +0800, Ming Lei wrote:
> > On Wed, Dec 14, 2022 at 04:16:51PM +0800, Hillf Danton wrote:
> > > On 14 Dec 2022 10:51:01 +0800 Ming Lei <ming.lei@redhat.com>
> > > > The pattern of wait_event(percpu_ref_is_zero()) has been used in several
> > >
> > > For example?
> >
> > blk_mq_freeze_queue_wait() and target_wait_for_sess_cmds().
> >
> > > > kernel components, and this way actually has the following risk:
> > > >
> > > > - percpu_ref_is_zero() can be returned just between
> > > >   atomic_long_sub_and_test() and ref->data->release(ref)
> > > >
> > > > - given the refcount is found as zero, percpu_ref_exit() could
> > > >   be called, and the host data structure is freed
> > > >
> > > > - then use-after-free is triggered in ->release() when the user host
> > > >   data structure is freed after percpu_ref_exit() returns
> > >
> > > The race between exit and the release callback should be considered at the
> > > corresponding callsite, given the comment below, and closed for instance
> > > by synchronizing rcu.
> >
> > Not sure if the above comment implies that the callsite should cover the
> > race.
> >
> > But blk-mq can really avoid the trouble by using the existing call_rcu():
> >
>
> I struggle with the dependency on release(). release() itself should not
> block, but a common pattern would be to throw a call_rcu() in and

Yes, release() is called with the rcu read lock held, and I guess the
trouble may originate from the fact that release() may do nothing
related to the actual data releasing.
> schedule additional work - see block/blk-cgroup.c, blkg_release().

I believe the pattern is user specific, and the motivation for using
call_rcu can't be just to avoid such a potential race between release()
and percpu_ref_exit().

>
> I think the dependency really is the completion of release() and the
> work scheduled on its behalf rather than strictly starting the
> release() callback. This series doesn't preclude that from happening.

Yeah. For any additional work scheduled in release(), only the caller
can guarantee it is drained before percpu_ref_exit(), so I agree now it
is better for the caller to avoid the race.

>
> /**
>  * percpu_ref_exit - undo percpu_ref_init()
>  * @ref: percpu_ref to exit
>  *
>  * This function exits @ref. The caller is responsible for ensuring that
>  * @ref is no longer in active use. The usual places to invoke this
>  * function from are the @ref->release() callback or in init failure path
>  * where percpu_ref_init() succeeded but other parts of the initialization
>  * of the embedding object failed.
>  */
>
> I think the percpu_ref_exit() comment explains the more common use case
> approach to percpu refcounts. release() triggering percpu_ref_exit() is
> the ideal case.

But most of the callers don't actually use it in this way.

Thanks,
Ming