From patchwork Wed Jan 31 08:51:04 2024
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 194603
From: Daniel Wagner
To: James Smart
Cc: Keith Busch, Christoph Hellwig, Hannes Reinecke,
    linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
    Daniel Wagner
Subject: [PATCH v5 04/12] nvmet-fc: defer cleanup using RCU properly
Date: Wed, 31 Jan 2024 09:51:04 +0100
Message-ID: <20240131085112.21668-5-dwagner@suse.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240131085112.21668-1-dwagner@suse.de>
References: <20240131085112.21668-1-dwagner@suse.de>
When the target executes a disconnect and the host triggers a reconnect
immediately, the reconnect command still finds an existing association. The
reconnect crashes later on because nvmet_fc_delete_target_assoc blindly
removes resources while the reconnect code still wants to use them.

To address this, nvmet_fc_find_target_assoc should not be able to look up an
association which is being removed. The association list is already under RCU
lifetime management, so let's use it properly: remove the association from
the list and wait for an RCU grace period before cleaning everything up. This
also means we can drop the RCU management on the queues, because it is now
handled via the association itself.

A second step splits the execution context so that the initial disconnect
command can complete without running the reconnect code in the same context.
As usual, this is done by deferring the ->done callback to a workqueue.

Reviewed-by: Christoph Hellwig
Signed-off-by: Daniel Wagner
Reviewed-by: Hannes Reinecke
---
 drivers/nvme/target/fc.c | 83 ++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 46 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 3d53d9dc1099..11ecfef41bd1 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -166,7 +166,7 @@ struct nvmet_fc_tgt_assoc {
 	struct nvmet_fc_hostport	*hostport;
 	struct nvmet_fc_ls_iod		*rcv_disconn;
 	struct list_head		a_list;
-	struct nvmet_fc_tgt_queue __rcu	*queues[NVMET_NR_QUEUES + 1];
+	struct nvmet_fc_tgt_queue	*queues[NVMET_NR_QUEUES + 1];
 	struct kref			ref;
 	struct work_struct		del_work;
 	struct rcu_head			rcu;
@@ -803,14 +803,11 @@ nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *assoc,
 	if (!queue)
 		return NULL;
 
-	if (!nvmet_fc_tgt_a_get(assoc))
-		goto out_free_queue;
-
 	queue->work_q = alloc_workqueue("ntfc%d.%d.%d", 0, 0,
 				assoc->tgtport->fc_target_port.port_num,
 				assoc->a_id, qid);
 	if (!queue->work_q)
-		goto out_a_put;
+		goto out_free_queue;
 
 	queue->qid = qid;
 	queue->sqsize = sqsize;
@@ -832,15 +829,13 @@ nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *assoc,
 		goto out_fail_iodlist;
 
 	WARN_ON(assoc->queues[qid]);
-	rcu_assign_pointer(assoc->queues[qid], queue);
+	assoc->queues[qid] = queue;
 
 	return queue;
 
 out_fail_iodlist:
 	nvmet_fc_destroy_fcp_iodlist(assoc->tgtport, queue);
 	destroy_workqueue(queue->work_q);
-out_a_put:
-	nvmet_fc_tgt_a_put(assoc);
 out_free_queue:
 	kfree(queue);
 	return NULL;
@@ -853,12 +848,8 @@ nvmet_fc_tgt_queue_free(struct kref *ref)
 	struct nvmet_fc_tgt_queue *queue =
 		container_of(ref, struct nvmet_fc_tgt_queue, ref);
 
-	rcu_assign_pointer(queue->assoc->queues[queue->qid], NULL);
-
 	nvmet_fc_destroy_fcp_iodlist(queue->assoc->tgtport, queue);
 
-	nvmet_fc_tgt_a_put(queue->assoc);
-
 	destroy_workqueue(queue->work_q);
 
 	kfree_rcu(queue, rcu);
@@ -970,7 +961,7 @@ nvmet_fc_find_target_queue(struct nvmet_fc_tgtport *tgtport,
 	rcu_read_lock();
 	list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
 		if (association_id == assoc->association_id) {
-			queue = rcu_dereference(assoc->queues[qid]);
+			queue = assoc->queues[qid];
 			if (queue &&
 			    (!atomic_read(&queue->connected) ||
 			     !nvmet_fc_tgt_q_get(queue)))
@@ -1173,13 +1164,18 @@ nvmet_fc_target_assoc_free(struct kref *ref)
 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
 	struct nvmet_fc_ls_iod	*oldls;
 	unsigned long flags;
+	int i;
+
+	for (i = NVMET_NR_QUEUES; i >= 0; i--) {
+		if (assoc->queues[i])
+			nvmet_fc_delete_target_queue(assoc->queues[i]);
+	}
 
 	/* Send Disconnect now that all i/o has completed */
 	nvmet_fc_xmt_disconnect_assoc(assoc);
 
 	nvmet_fc_free_hostport(assoc->hostport);
 	spin_lock_irqsave(&tgtport->lock, flags);
-	list_del_rcu(&assoc->a_list);
 	oldls = assoc->rcv_disconn;
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 	/* if pending Rcv Disconnect Association LS, send rsp now */
@@ -1209,7 +1205,7 @@ static void
 nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
 {
 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
-	struct nvmet_fc_tgt_queue *queue;
+	unsigned long flags;
 	int i, terminating;
 
 	terminating = atomic_xchg(&assoc->terminating, 1);
@@ -1218,29 +1214,21 @@ nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
 	if (terminating)
 		return;
 
+	spin_lock_irqsave(&tgtport->lock, flags);
+	list_del_rcu(&assoc->a_list);
+	spin_unlock_irqrestore(&tgtport->lock, flags);
 
-	for (i = NVMET_NR_QUEUES; i >= 0; i--) {
-		rcu_read_lock();
-		queue = rcu_dereference(assoc->queues[i]);
-		if (!queue) {
-			rcu_read_unlock();
-			continue;
-		}
+	synchronize_rcu();
 
-		if (!nvmet_fc_tgt_q_get(queue)) {
-			rcu_read_unlock();
-			continue;
-		}
-		rcu_read_unlock();
-		nvmet_fc_delete_target_queue(queue);
-		nvmet_fc_tgt_q_put(queue);
+	/* ensure all in-flight I/Os have been processed */
+	for (i = NVMET_NR_QUEUES; i >= 0; i--) {
+		if (assoc->queues[i])
+			flush_workqueue(assoc->queues[i]->work_q);
 	}
 
 	dev_info(tgtport->dev,
 		"{%d:%d} Association deleted\n",
 		tgtport->fc_target_port.port_num, assoc->a_id);
-
-	nvmet_fc_tgt_a_put(assoc);
 }
 
 static struct nvmet_fc_tgt_assoc *
@@ -1493,9 +1481,8 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
 	list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
 		if (!nvmet_fc_tgt_a_get(assoc))
 			continue;
-		if (!queue_work(nvmet_wq, &assoc->del_work))
-			/* already deleting - release local reference */
-			nvmet_fc_tgt_a_put(assoc);
+		queue_work(nvmet_wq, &assoc->del_work);
+		nvmet_fc_tgt_a_put(assoc);
 	}
 	rcu_read_unlock();
 }
@@ -1548,9 +1535,8 @@ nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
 			continue;
 		assoc->hostport->invalid = 1;
 		noassoc = false;
-		if (!queue_work(nvmet_wq, &assoc->del_work))
-			/* already deleting - release local reference */
-			nvmet_fc_tgt_a_put(assoc);
+		queue_work(nvmet_wq, &assoc->del_work);
+		nvmet_fc_tgt_a_put(assoc);
 	}
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 
@@ -1582,7 +1568,7 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
 
 		rcu_read_lock();
 		list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
-			queue = rcu_dereference(assoc->queues[0]);
+			queue = assoc->queues[0];
 			if (queue && queue->nvme_sq.ctrl == ctrl) {
 				if (nvmet_fc_tgt_a_get(assoc))
 					found_ctrl = true;
@@ -1594,9 +1580,8 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
 		nvmet_fc_tgtport_put(tgtport);
 
 		if (found_ctrl) {
-			if (!queue_work(nvmet_wq, &assoc->del_work))
-				/* already deleting - release local reference */
-				nvmet_fc_tgt_a_put(assoc);
+			queue_work(nvmet_wq, &assoc->del_work);
+			nvmet_fc_tgt_a_put(assoc);
 			return;
 		}
 
@@ -1626,6 +1611,8 @@ nvmet_fc_unregister_targetport(struct nvmet_fc_target_port *target_port)
 	/* terminate any outstanding associations */
 	__nvmet_fc_free_assocs(tgtport);
 
+	flush_workqueue(nvmet_wq);
+
 	/*
 	 * should terminate LS's as well. However, LS's will be generated
 	 * at the tail end of association termination, so they likely don't
@@ -1871,9 +1858,6 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 				sizeof(struct fcnvme_ls_disconnect_assoc_acc)),
 			FCNVME_LS_DISCONNECT_ASSOC);
 
-	/* release get taken in nvmet_fc_find_target_assoc */
-	nvmet_fc_tgt_a_put(assoc);
-
 	/*
 	 * The rules for LS response says the response cannot
 	 * go back until ABTS's have been sent for all outstanding
@@ -1888,8 +1872,6 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 	assoc->rcv_disconn = iod;
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 
-	nvmet_fc_delete_target_assoc(assoc);
-
 	if (oldls) {
 		dev_info(tgtport->dev,
 			"{%d:%d} Multiple Disconnect Association LS's "
@@ -1905,6 +1887,9 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 		nvmet_fc_xmt_ls_rsp(tgtport, oldls);
 	}
 
+	queue_work(nvmet_wq, &assoc->del_work);
+	nvmet_fc_tgt_a_put(assoc);
+
 	return false;
 }
 
@@ -2903,6 +2888,9 @@ nvmet_fc_remove_port(struct nvmet_port *port)
 
 	nvmet_fc_portentry_unbind(pe);
 
+	/* terminate any outstanding associations */
+	__nvmet_fc_free_assocs(pe->tgtport);
+
 	kfree(pe);
 }
 
@@ -2934,6 +2922,9 @@ static int __init nvmet_fc_init_module(void)
 
 static void __exit nvmet_fc_exit_module(void)
 {
+	/* ensure any shutdown operation, e.g. delete ctrls have finished */
+	flush_workqueue(nvmet_wq);
+
 	/* sanity check - all lports should be removed */
 	if (!list_empty(&nvmet_fc_target_list))
 		pr_warn("%s: targetport list not empty\n", __func__);
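
For readers less familiar with the pattern the commit message relies on, below
is a minimal kernel-style sketch (not part of the patch; all demo_* names are
made up) of unpublishing an entry from an RCU-protected list and waiting for a
grace period before tearing it down, mirroring what nvmet_fc_delete_target_assoc()
does above with list_del_rcu() and synchronize_rcu():

/* Illustrative sketch only -- not from the patch. The demo_* names are
 * invented; the RCU/list APIs are the ones the patch itself uses.
 */
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_assoc {
	struct list_head	a_list;
	u64			association_id;
};

static LIST_HEAD(demo_assoc_list);
static DEFINE_SPINLOCK(demo_lock);

/* Reader side: lockless lookup; an unpublished association is invisible. */
static bool demo_assoc_exists(u64 id)
{
	struct demo_assoc *assoc;
	bool found = false;

	rcu_read_lock();
	list_for_each_entry_rcu(assoc, &demo_assoc_list, a_list) {
		if (assoc->association_id == id) {
			found = true;
			break;
		}
	}
	rcu_read_unlock();

	return found;
}

/* Writer side: unpublish first, wait for a grace period, then tear down. */
static void demo_assoc_delete(struct demo_assoc *assoc)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	list_del_rcu(&assoc->a_list);
	spin_unlock_irqrestore(&demo_lock, flags);

	/* any demo_assoc_exists() call that could still see assoc has finished */
	synchronize_rcu();

	kfree(assoc);
}

The patch additionally defers the actual teardown to nvmet_wq via
queue_work(&assoc->del_work) so the Disconnect Association LS ->done handling
returns promptly instead of running the reconnect path in the same context.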