Message ID: <20221019083708.27138-6-nstange@suse.de>
State: New
From: Nicolai Stange <nstange@suse.de>
To: Steffen Klassert <steffen.klassert@secunet.com>,
    Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>, Martin Doucha <mdoucha@suse.cz>,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
    Nicolai Stange <nstange@suse.de>
Subject: [PATCH 5/5] padata: avoid potential UAFs to the padata_shell from padata_reorder()
Date: Wed, 19 Oct 2022 10:37:08 +0200
In-Reply-To: <20221019083708.27138-1-nstange@suse.de>
References: <20221019083708.27138-1-nstange@suse.de>
Series: padata: fix lifetime issues after ->serial() has completed
Commit Message
Nicolai Stange
Oct. 19, 2022, 8:37 a.m. UTC
Even though the parallel_data "pd" instance passed to padata_reorder() is
guaranteed to exist as per the reference held by its callers, the same is
not true for the associated padata_shell, pd->ps. More specifically, once
the last padata_priv request has been completed, either at entry from
padata_reorder() or concurrently with it, the padata API users are well
within their rights to free the padata_shell instance.
Note that this is a purely theoretical issue; it has not actually been
observed. It is still worth fixing for the sake of robustness.
Exploit the fact that as long as there are any not yet completed
padata_priv's around on any of the percpu reorder queues, pd->ps is
guaranteed to exist. Make padata_reorder() load from pd->ps only when
it is known that there is at least one in-flight padata_priv object to
reorder. Note that this involves moving the pd->ps accesses under the
reorder->lock as appropriate, so that the found padata_priv object won't
get dequeued and completed concurrently from a different context.
Fixes: bbefa1dd6a6d ("crypto: pcrypt - Avoid deadlock by using per-instance padata queues")
Signed-off-by: Nicolai Stange <nstange@suse.de>
---
kernel/padata.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
Comments
On Wed, Oct 19, 2022 at 10:37:08AM +0200, Nicolai Stange wrote:
> Even though the parallel_data "pd" instance passed to padata_reorder() is
> guaranteed to exist as per the reference held by its callers, the same is
> not true for the associated padata_shell, pd->ps. More specifically, once
> the last padata_priv request has been completed, either at entry from
> padata_reorder() or concurrently to it, the padata API users are well
> within their right to free the padata_shell instance.

The synchronize_rcu change seems to make padata_reorder safe from freed
ps's with the exception of a straggler reorder_work.  For that, I think
something like this hybrid of your code and mine is enough to plug the
hole.  It's on top of 1-2 and my hunk from 3.  It has to take an extra
ref on pd, but only in the rare case where the reorder work is used.
Thoughts?

diff --git a/kernel/padata.c b/kernel/padata.c
index cd6740ae6629..f14c256a0ee3 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -277,7 +277,7 @@ static struct padata_priv *padata_find_next(struct parallel_data *pd,
 
 static void padata_reorder(struct parallel_data *pd)
 {
-	struct padata_instance *pinst = pd->ps->pinst;
+	struct padata_instance *pinst;
 	int cb_cpu;
 	struct padata_priv *padata;
 	struct padata_serial_queue *squeue;
@@ -314,7 +314,7 @@ static void padata_reorder(struct parallel_data *pd)
 		list_add_tail(&padata->list, &squeue->serial.list);
 		spin_unlock(&squeue->serial.lock);
 
-		queue_work_on(cb_cpu, pinst->serial_wq, &squeue->work);
+		queue_work_on(cb_cpu, pd->ps->pinst->serial_wq, &squeue->work);
 	}
 
 	spin_unlock_bh(&pd->lock);
@@ -330,8 +330,10 @@ static void padata_reorder(struct parallel_data *pd)
 	smp_mb();
 
 	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
-	if (!list_empty(&reorder->list) && padata_find_next(pd, false))
-		queue_work(pinst->serial_wq, &pd->reorder_work);
+	if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
+		if (queue_work(pd->ps->pinst->serial_wq, &pd->reorder_work))
+			padata_get_pd(pd);
+	}
 }
 
 static void invoke_padata_reorder(struct work_struct *work)
@@ -342,6 +344,7 @@ static void invoke_padata_reorder(struct work_struct *work)
 	pd = container_of(work, struct parallel_data, reorder_work);
 	padata_reorder(pd);
 	local_bh_enable();
+	padata_put_pd(pd);
 }
 
 static void padata_serial_worker(struct work_struct *serial_work)
Daniel Jordan <daniel.m.jordan@oracle.com> writes:

> On Wed, Oct 19, 2022 at 10:37:08AM +0200, Nicolai Stange wrote:
>> Even though the parallel_data "pd" instance passed to padata_reorder() is
>> guaranteed to exist as per the reference held by its callers, the same is
>> not true for the associated padata_shell, pd->ps. More specifically, once
>> the last padata_priv request has been completed, either at entry from
>> padata_reorder() or concurrently to it, the padata API users are well
>> within their right to free the padata_shell instance.
>
> The synchronize_rcu change seems to make padata_reorder safe from freed
> ps's with the exception of a straggler reorder_work.  For that, I think
> something like this hybrid of your code and mine is enough to plug the
> hole.  It's on top of 1-2 and my hunk from 3.  It has to take an extra
> ref on pd, but only in the rare case where the reorder work is used.
> Thoughts?
>
> [...]
>
> @@ -330,8 +330,10 @@ static void padata_reorder(struct parallel_data *pd)
>  	smp_mb();
>
>  	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
> -	if (!list_empty(&reorder->list) && padata_find_next(pd, false))
> -		queue_work(pinst->serial_wq, &pd->reorder_work);
> +	if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
> +		if (queue_work(pd->ps->pinst->serial_wq, &pd->reorder_work))
> +			padata_get_pd(pd);

As the reorder_work can start running immediately after having been
submitted, wouldn't it be more correct to do something like

	padata_get_pd(pd);
	if (!queue_work(pd->ps->pinst->serial_wq, &pd->reorder_work))
		padata_put_pd(pd);

?

Otherwise the patch looks good to me.

Thanks!

Nicolai

> +	}
> }
>
> static void invoke_padata_reorder(struct work_struct *work)
> @@ -342,6 +344,7 @@ static void invoke_padata_reorder(struct work_struct *work)
>  	pd = container_of(work, struct parallel_data, reorder_work);
>  	padata_reorder(pd);
>  	local_bh_enable();
> +	padata_put_pd(pd);
>  }
>
>  static void padata_serial_worker(struct work_struct *serial_work)
On Wed, Nov 09, 2022 at 02:03:29PM +0100, Nicolai Stange wrote:
> Daniel Jordan <daniel.m.jordan@oracle.com> writes:
>
> [...]
>
> > -	if (!list_empty(&reorder->list) && padata_find_next(pd, false))
> > -		queue_work(pinst->serial_wq, &pd->reorder_work);
> > +	if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
> > +		if (queue_work(pd->ps->pinst->serial_wq, &pd->reorder_work))
> > +			padata_get_pd(pd);
>
> As the reorder_work can start running immediately after having been
> submitted, wouldn't it be more correct to do something like
>
> 	padata_get_pd(pd);
> 	if (!queue_work(pd->ps->pinst->serial_wq, &pd->reorder_work))
> 		padata_put_pd(pd);
>
> ?

Yes, that's better, and all the above can go in your next version too.
diff --git a/kernel/padata.c b/kernel/padata.c
index e9eab3e94cfc..fa4818b81eca 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -286,7 +286,6 @@ static struct padata_priv *padata_dequeue_next(struct parallel_data *pd)
 
 static bool padata_reorder(struct parallel_data *pd)
 {
-	struct padata_instance *pinst = pd->ps->pinst;
 	int cb_cpu;
 	struct padata_priv *padata;
 	struct padata_serial_queue *squeue;
@@ -323,7 +322,11 @@ static bool padata_reorder(struct parallel_data *pd)
 		list_add_tail(&padata->list, &squeue->serial.list);
 		spin_unlock(&squeue->serial.lock);
 
-		queue_work_on(cb_cpu, pinst->serial_wq, &squeue->work);
+		/*
+		 * Note: as long as there are requests in-flight,
+		 * pd->ps is guaranteed to exist.
+		 */
+		queue_work_on(cb_cpu, pd->ps->pinst->serial_wq, &squeue->work);
 	}
 
 	spin_unlock_bh(&pd->lock);
@@ -340,14 +343,23 @@ static bool padata_reorder(struct parallel_data *pd)
 
 	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
 	if (!list_empty(&reorder->list)) {
+		bool reenqueued;
+
 		spin_lock(&reorder->lock);
 		if (!__padata_find_next(pd, reorder)) {
 			spin_unlock(&reorder->lock);
 			return false;
 		}
+
+		/*
+		 * Note: as long as there are requests in-flight,
+		 * pd->ps is guaranteed to exist.
+		 */
+		reenqueued = queue_work(pd->ps->pinst->serial_wq,
+					&pd->reorder_work);
 		spin_unlock(&reorder->lock);
 
-		return queue_work(pinst->serial_wq, &pd->reorder_work);
+		return reenqueued;
 	}
 
 	return false;