Message ID | 20230419084757.24846-3-dwagner@suse.de
---|---
State | New
Series | nvme_trtype=fc fixes
From | Daniel Wagner <dwagner@suse.de>
To | linux-nvme@lists.infradead.org
Cc | linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Chaitanya Kulkarni <kch@nvidia.com>, Shin'ichiro Kawasaki <shinichiro@fastmail.com>
Subject | [PATCH blktests v2 2/2] nvme-rc: Cleanup fc resource before module unloading
Date | Wed, 19 Apr 2023 10:47:57 +0200
In-Reply-To | 20230419084757.24846-1-dwagner@suse.de
Commit Message
Daniel Wagner
April 19, 2023, 8:47 a.m. UTC
Before we unload the modules we should clean up the fc resources first;
basically, reorder the shutdown sequence to be the reverse of the setup
path.

Also unload nvme-fcloop after use.

While at it, also move the rdma stop_soft_rdma call before the module
unloading, for the same reason.
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
tests/nvme/rc | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
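The principle the commit message applies ("tear down in the reverse order of the setup path") can be sketched as a small LIFO cleanup stack in shell. This is an illustrative sketch only, not blktests code; every name in it is hypothetical and the module operations are stubbed out as echoes:

```shell
#!/bin/bash
# Sketch: each setup step registers its inverse; teardown runs the
# registered steps in reverse (LIFO) order, mirroring the setup path.
declare -a _teardown_steps=()

_register_teardown() {
	_teardown_steps+=("$1")
}

_run_teardown() {
	local i
	for ((i = ${#_teardown_steps[@]} - 1; i >= 0; i--)); do
		eval "${_teardown_steps[$i]}"
	done
}

# Setup path (stubbed): load target core, load transport, set up fcloop.
echo "load nvmet";    _register_teardown 'echo "unload nvmet"'
echo "load nvmet-fc"; _register_teardown 'echo "unload nvmet-fc"'
echo "setup fcloop";  _register_teardown 'echo "cleanup fcloop"'

# Teardown: fcloop first, then the transport, then the core.
_run_teardown
```

With this structure a later setup step can never be torn down after one it depends on, which is exactly the property the patch restores for fc.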
Comments
On 4/19/23 01:47, Daniel Wagner wrote:
> Before we unload the module we should cleanup the fc resources first,
> basically reorder the shutdown sequence to be in reverse order of the
> setup path.
>
> Also unload the nvme-fcloop after usage.
>
> While at it also update the rdma stop_soft_rdma before the module
> unloading for the same reasoning.
>
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>  tests/nvme/rc | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
[...]

were you able to test this with RDMA? just want to make sure we are not
breaking anything since we are changing the order of module unload and
stop_soft_rdma() in this patch ...

-ck
> Before we unload the module we should cleanup the fc resources first,
> basically reorder the shutdown sequence to be in reverse order of the
> setup path.

If this triggers a bug, then I think it is a good idea to have a
dedicated test that reproduces it if we are changing the default
behavior.

> Also unload the nvme-fcloop after usage.
>
> While at it also update the rdma stop_soft_rdma before the module
> unloading for the same reasoning.

Why? it creates the wrong reverse ordering.

1. setup soft-rdma
2. setup nvme-rdma

2. teardown nvme-rdma
1. teardown soft-rdma

I don't think we need this change. I mean it is a good test to have
that the rdma device goes away underneath nvme-rdma but it is good for
a dedicated test.

> Signed-off-by: Daniel Wagner <dwagner@suse.de>
[...]
On Wed, Apr 19, 2023 at 09:41:28AM +0000, Chaitanya Kulkarni wrote:
> were you able to test this with RDMA ?

Yes, I've tested it with all transports (loop, tcp, rdma, fc)

> just want to make sure we are not breaking anything since we are changing
> the order of module unload and stop_soft_rdma() in this patch ...

Sure thing
On Wed, Apr 19, 2023 at 12:44:42PM +0300, Sagi Grimberg wrote:
> > Before we unload the module we should cleanup the fc resources first,
> > basically reorder the shutdown sequence to be in reverse order of the
> > setup path.
>
> If this triggers a bug, then I think it is a good idea to have a
> dedicated test that reproduces it if we are changing the default
> behavior.

Right, though I would like to tackle one problem after the other; first
get fc working with the 'correct' order.

> > While at it also update the rdma stop_soft_rdma before the module
> > unloading for the same reasoning.
>
> Why? it creates the wrong reverse ordering.
>
> 1. setup soft-rdma
> 2. setup nvme-rdma
>
> 2. teardown nvme-rdma
> 1. teardown soft-rdma
>
> I don't think we need this change. I mean it is a good test
> to have that the rdma device goes away underneath nvme-rdma
> but it is good for a dedicated test.

I was worried about this setup sequence here:

    modprobe -q nvme-"${nvme_trtype}"
    if [[ "${nvme_trtype}" == "rdma" ]]; then
            start_soft_rdma

The module is loaded before start_soft_rdma is run, thus I thought we
should do the reverse: first call stop_soft_rdma and then unload the
module.
>>> Before we unload the module we should cleanup the fc resources first,
>>> basically reorder the shutdown sequence to be in reverse order of the
>>> setup path.
>>
>> If this triggers a bug, then I think it is a good idea to have a
>> dedicated test that reproduces it if we are changing the default
>> behavior.
>
> Right, though I would like to tackle one problem after the other; first
> get fc working with the 'correct' order.
>
>>> While at it also update the rdma stop_soft_rdma before the module
>>> unloading for the same reasoning.
>>
>> Why? it creates the wrong reverse ordering.
>>
>> 1. setup soft-rdma
>> 2. setup nvme-rdma
>>
>> 2. teardown nvme-rdma
>> 1. teardown soft-rdma
>>
>> I don't think we need this change. I mean it is a good test
>> to have that the rdma device goes away underneath nvme-rdma
>> but it is good for a dedicated test.
>
> I was worried about this setup sequence here:
>
>     modprobe -q nvme-"${nvme_trtype}"
>     if [[ "${nvme_trtype}" == "rdma" ]]; then
>             start_soft_rdma
>
> The module is loaded before start_soft_rdma is run, thus I thought we
> should do the reverse: first call stop_soft_rdma and then unload the
> module.

They should be unrelated. the safe route is to first remove the uld and
then the device.
On 4/19/23 02:44, Sagi Grimberg wrote:
>> Before we unload the module we should cleanup the fc resources first,
>> basically reorder the shutdown sequence to be in reverse order of the
>> setup path.
>
> If this triggers a bug, then I think it is a good idea to have a
> dedicated test that reproduces it if we are changing the default
> behavior.

+1

-ck
On Apr 19, 2023 / 21:15, Chaitanya Kulkarni wrote:
> On 4/19/23 02:44, Sagi Grimberg wrote:
> >
> >> Before we unload the module we should cleanup the fc resources first,
> >> basically reorder the shutdown sequence to be in reverse order of the
> >> setup path.
> >
> > If this triggers a bug, then I think it is a good idea to have a
> > dedicated test that reproduces it if we are changing the default
> > behavior.
>
> +1

Agreed. A patch posting for the new test case will be appreciated. So
as not to forget this work, I will open a github issue later.
On Apr 19, 2023 / 12:36, Daniel Wagner wrote:
> On Wed, Apr 19, 2023 at 09:41:28AM +0000, Chaitanya Kulkarni wrote:
> > were you able to test this with RDMA ?
>
> Yes, I've tested it with all transports (loop, tcp, rdma, fc)
>
> > just want to make sure we are not breaking anything since we are changing
> > the order of module unload and stop_soft_rdma() in this patch ...
>
> Sure thing

I also tested, and observed no result change from these two patches.
The only failure I observed was nvme/003 due to a lockdep WARN, but it
happens regardless of the patches.
On Apr 19, 2023 / 13:45, Sagi Grimberg wrote:
[...]
> > > I don't think we need this change. I mean it is a good test
> > > to have that the rdma device goes away underneath nvme-rdma
> > > but it is good for a dedicated test.

I agree that the new test case is good.

> > I was worried about this setup sequence here:
> >
> >     modprobe -q nvme-"${nvme_trtype}"
> >     if [[ "${nvme_trtype}" == "rdma" ]]; then
> >             start_soft_rdma
> >
> > The module is loaded before start_soft_rdma is run, thus I thought we
> > should do the reverse: first call stop_soft_rdma and then unload the
> > module.
>
> They should be unrelated. the safe route is to first remove the uld and
> then the device.

Sagi, this comment above was not clear to me. Is Daniel's patch ok for
you?

IMO, it is reasonable to take "do clean-up in reverse order of setup"
as a general guide. It will reduce the chance of module related
failures when the test cases do not expect such failures. Instead, we
can have dedicated test cases for the module load/unload order related
failures. start_soft_rdma and stop_soft_rdma do module load and unload,
so I think the guide is good for those helper functions as well.
On 4/30/23 13:34, Shinichiro Kawasaki wrote:
[...]
> Sagi, this comment above was not clear to me. Is Daniel's patch ok for
> you?
>
> IMO, it is reasonable to take "do clean-up in reverse order of setup"
> as a general guide. It will reduce the chance of module related
> failures when the test cases do not expect such failures. Instead, we
> can have dedicated test cases for the module load/unload order related
> failures. start_soft_rdma and stop_soft_rdma do module load and unload,
> so I think the guide is good for those helper functions as well.

As I mentioned here, this change exercises a code path in the driver
that is a surprise unplug of the rdma device. It is equivalent to
triggering a surprise removal of the pci device normally during
nvme-pci test teardown. While this is worth testing, I'm not sure we
want the default behavior to do that, but rather add dedicated tests
for it.

Hence, my suggestion was to leave nvme-rdma as is.
On May 01, 2023 / 17:10, Sagi Grimberg wrote:
> On 4/30/23 13:34, Shinichiro Kawasaki wrote:
[...]
> As I mentioned here, this change exercises a code path in the driver
> that is a surprise unplug of the rdma device. It is equivalent to
> triggering a surprise removal of the pci device normally during
> nvme-pci test teardown. While this is worth testing, I'm not sure we
> want the default behavior to do that, but rather add dedicated tests
> for it.
>
> Hence, my suggestion was to leave nvme-rdma as is.

Thanks for the clarification. I assume that stop_soft_rdma is the
"surprise unplug of the rdma device". If I understand it correctly, the
change for nvme-fc will be like this:

diff --git a/tests/nvme/rc b/tests/nvme/rc
index ec0cc2d..24803af 100644
--- a/tests/nvme/rc
+++ b/tests/nvme/rc
@@ -260,6 +260,11 @@ _cleanup_nvmet() {
 	shopt -u nullglob
 	trap SIGINT
 
+	if [[ "${nvme_trtype}" == "fc" ]]; then
+		_cleanup_fcloop "${def_local_wwnn}" "${def_local_wwpn}" \
+				"${def_remote_wwnn}" "${def_remote_wwpn}"
+		modprobe -rq nvme-fcloop
+	fi
 	modprobe -rq nvme-"${nvme_trtype}" 2>/dev/null
 	if [[ "${nvme_trtype}" != "loop" ]]; then
 		modprobe -rq nvmet-"${nvme_trtype}" 2>/dev/null
@@ -268,10 +273,6 @@ _cleanup_nvmet() {
 	if [[ "${nvme_trtype}" == "rdma" ]]; then
 		stop_soft_rdma
 	fi
-	if [[ "${nvme_trtype}" == "fc" ]]; then
-		_cleanup_fcloop "${def_local_wwnn}" "${def_local_wwpn}" \
-				"${def_remote_wwnn}" "${def_remote_wwpn}"
-	fi
 }
 
 _setup_nvmet() {
diff --git a/tests/nvme/rc b/tests/nvme/rc
index ec0cc2d8d8cc..41f196b037d6 100644
--- a/tests/nvme/rc
+++ b/tests/nvme/rc
@@ -260,18 +260,20 @@ _cleanup_nvmet() {
 	shopt -u nullglob
 	trap SIGINT
 
-	modprobe -rq nvme-"${nvme_trtype}" 2>/dev/null
-	if [[ "${nvme_trtype}" != "loop" ]]; then
-		modprobe -rq nvmet-"${nvme_trtype}" 2>/dev/null
-	fi
-	modprobe -rq nvmet 2>/dev/null
 	if [[ "${nvme_trtype}" == "rdma" ]]; then
 		stop_soft_rdma
 	fi
 	if [[ "${nvme_trtype}" == "fc" ]]; then
 		_cleanup_fcloop "${def_local_wwnn}" "${def_local_wwpn}" \
 				"${def_remote_wwnn}" "${def_remote_wwpn}"
+		modprobe -rq nvme-fcloop
 	fi
+
+	modprobe -rq nvme-"${nvme_trtype}" 2>/dev/null
+	if [[ "${nvme_trtype}" != "loop" ]]; then
+		modprobe -rq nvmet-"${nvme_trtype}" 2>/dev/null
+	fi
+	modprobe -rq nvmet 2>/dev/null
 }
 
 _setup_nvmet() {
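Stubbing the module operations out as echoes makes the resulting fc teardown order easy to see. This is a hypothetical sketch, not the blktests code: the helpers are replaced by one-line stubs and the wwnn/wwpn arguments are placeholder strings:

```shell
#!/bin/bash
# Sketch of the fc teardown order after the patch, with all side effects
# stubbed out as echoes (hypothetical; the real helpers live in
# tests/nvme/rc).
nvme_trtype=fc

_cleanup_fcloop() { echo "cleanup fcloop $*"; }
stop_soft_rdma()  { echo "stop soft rdma"; }
modprobe()        { echo "modprobe $*"; }

_cleanup_nvmet() {
	# fc: tear down fcloop resources and unload nvme-fcloop first ...
	if [[ "${nvme_trtype}" == "rdma" ]]; then
		stop_soft_rdma
	fi
	if [[ "${nvme_trtype}" == "fc" ]]; then
		_cleanup_fcloop local_wwnn local_wwpn remote_wwnn remote_wwpn
		modprobe -rq nvme-fcloop
	fi

	# ... then unload the transport modules, host side before target side.
	modprobe -rq nvme-"${nvme_trtype}"
	if [[ "${nvme_trtype}" != "loop" ]]; then
		modprobe -rq nvmet-"${nvme_trtype}"
	fi
	modprobe -rq nvmet
}

_cleanup_nvmet
```

Running it prints the fcloop cleanup first and the nvmet core module removal last, the reverse of the setup path.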