Message ID | 20230620132703.20648-2-dwagner@suse.de
---|---
State | New

Headers:

    From: Daniel Wagner <dwagner@suse.de>
    To: linux-nvme@lists.infradead.org
    Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
        Chaitanya Kulkarni <kch@nvidia.com>,
        Shin'ichiro Kawasaki <shinichiro@fastmail.com>,
        Sagi Grimberg <sagi@grimberg.me>, Hannes Reinecke <hare@suse.de>,
        James Smart <jsmart2021@gmail.com>,
        Martin Belanger <Martin.Belanger@dell.com>,
        Daniel Wagner <dwagner@suse.de>
    Subject: [PATCH blktests v1 1/3] nvme/048: Check for queue count check directly
    Date: Tue, 20 Jun 2023 15:27:01 +0200
    Message-ID: <20230620132703.20648-2-dwagner@suse.de>
    In-Reply-To: <20230620132703.20648-1-dwagner@suse.de>
    References: <20230620132703.20648-1-dwagner@suse.de>
Series | More fixes for FC enabling
Commit Message
Daniel Wagner
June 20, 2023, 1:27 p.m. UTC
The test monitored the state changes live -> resetting -> connecting ->
live to verify that the queue count change was successful.

The FC transport reconnects very quickly, so the current approach does
not observe these state transitions.

So instead of trying to monitor the state changes, let's just wait for
the live state and the correct queue number.

As the queue count depends on the number of online CPUs, we explicitly
use 1 and 2 for qid_max. This means the queue_count value needs to
reach either 2 or 3 (admin queue included).
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
tests/nvme/048 | 35 ++++++++++++++++++++++++-----------
1 file changed, 24 insertions(+), 11 deletions(-)
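The "admin queue included" arithmetic in the commit message can be sketched as a tiny helper (an illustration only, not part of the patch; the function name is made up):

```shell
#!/bin/bash
# The sysfs queue_count includes the admin queue, so the expected value
# is qid_max + 1: qid_max=1 -> 2, qid_max=2 -> 3.
expected_queue_count() {
	echo $(($1 + 1))
}

expected_queue_count 1   # prints 2
expected_queue_count 2   # prints 3
```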
Comments
> @@ -56,10 +76,7 @@ set_qid_max() {
>  	local qid_max="$3"
>  
>  	set_nvmet_attr_qid_max "${subsys_name}" "${qid_max}"
> -
> -	# Setting qid_max forces a disconnect and the reconntect attempt starts
> -	nvmf_wait_for_state "${subsys_name}" "connecting" || return 1
> -	nvmf_wait_for_state "${subsys_name}" "live" || return 1
> +	nvmf_wait_for_queue_count "${subsys_name}" "${qid_max}" || return 1

Why not simply wait for live? The connecting is obviously racy...
On Tue, Jun 20, 2023 at 04:49:45PM +0300, Sagi Grimberg wrote:
> > -	nvmf_wait_for_state "${subsys_name}" "connecting" || return 1
> > -	nvmf_wait_for_state "${subsys_name}" "live" || return 1
> > +	nvmf_wait_for_queue_count "${subsys_name}" "${qid_max}" || return 1
>
> Why not simply wait for live? The connecting is obviously racy...

That is what the new version is doing. It's waiting for the live state
and then checks the queue count.
>>> -	nvmf_wait_for_state "${subsys_name}" "connecting" || return 1
>>> -	nvmf_wait_for_state "${subsys_name}" "live" || return 1
>>> +	nvmf_wait_for_queue_count "${subsys_name}" "${qid_max}" || return 1
>>
>> Why not simply wait for live? The connecting is obviously racy...
>
> That is what the new version is doing. It's waiting for the live state
> and then checks the queue count.

Maybe don't fold waiting for live into waiting for queue_count.
On Wed, Jun 21, 2023 at 12:50:29PM +0300, Sagi Grimberg wrote:
> > > Why not simply wait for live? The connecting is obviously racy...
> >
> > That is what the new version is doing. It's waiting for the live state
> > and then checks the queue count.
>
> Maybe don't fold waiting for live into waiting for queue_count.

Alright, will do!
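The split Sagi suggests, waiting for live and checking the count as two separate steps, could look roughly like this. This is a sketch only, with stand-in stubs for the real blktests helpers; none of the function names below are from the patch:

```shell
#!/bin/bash
# Stubs simulating the environment (NOT the real blktests helpers):
nvmf_wait_for_state() { return 0; }   # stub: pretend the "live" state was reached
read_queue_count() { echo 3; }        # stub: pretend sysfs reports 3 queues

check_queue_count() {
	local want=$(( $2 + 1 ))      # qid_max plus the admin queue
	[ "$(read_queue_count "$1")" -eq "$want" ]
}

set_qid_max_flow() {
	# Wait for live first, then verify the queue count as a separate step.
	nvmf_wait_for_state "$1" "live" || return 1
	check_queue_count "$1" "$2"
}

set_qid_max_flow "subsys" 2 && echo "OK"   # prints OK with these stubs
```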
On Tue, Jun 27, 2023 at 10:13:48AM +0000, Shinichiro Kawasaki wrote:
> > +	nvmf_wait_for_state "${subsys_name}" "live" || return 1
> > +
> > +	queue_count=$((queue_count + 1))
> > +	if grep -q "${queue_count}" "${queue_count_file}"; then
>
> Does this check work when the number in queue_count_file has more digits
> than queue_count? e.g.) queue_count_file=20, queue_count=2

The idea is that it should be an exact match. Let me figure out if this
does what it is supposed to do.

> > -	hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
> > +	hostid="${def_hostid}"
> > +	hostnqn="${def_hostnqn}"
>
> I guess it's the better to move this hunk to the 3rd patch. Or we can
> mention it in the commit message of this patch.

Good point, I'll move this to the 3rd patch so this change is in one patch.
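The pitfall Shinichiro points out is easy to reproduce: `grep -q` does a substring match, so a desired count of 2 "matches" a stored value of 20. An anchored match avoids the false positive. The temporary file below is just a stand-in for the sysfs queue_count attribute:

```shell
#!/bin/bash
# Demonstrate the substring pitfall and an anchored alternative.
qc_file=$(mktemp)
echo 20 > "$qc_file"

if grep -q "2" "$qc_file"; then
	echo "grep -q: false positive (2 matched inside 20)"
fi

# grep -x requires the pattern to match the whole line:
if grep -qx "2" "$qc_file"; then
	echo "anchored: match"
else
	echo "anchored: no match"
fi

rm -f "$qc_file"
```

Run against a file containing `20`, the first branch fires and the anchored check correctly reports no match.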
diff --git a/tests/nvme/048 b/tests/nvme/048
index 81084f0440c2..3dc5169132de 100755
--- a/tests/nvme/048
+++ b/tests/nvme/048
@@ -42,6 +42,26 @@ nvmf_wait_for_state() {
 	return 0
 }
 
+nvmf_wait_for_queue_count() {
+	local subsys_name="$1"
+	local queue_count="$2"
+	local nvmedev
+
+	nvmedev=$(_find_nvme_dev "${subsys_name}")
+
+	queue_count_file="/sys/class/nvme-fabrics/ctl/${nvmedev}/queue_count"
+
+	nvmf_wait_for_state "${subsys_name}" "live" || return 1
+
+	queue_count=$((queue_count + 1))
+	if grep -q "${queue_count}" "${queue_count_file}"; then
+		return 0
+	fi
+
+	echo "expected queue count ${queue_count} not set"
+	return 1
+}
+
 set_nvmet_attr_qid_max() {
 	local nvmet_subsystem="$1"
 	local qid_max="$2"
@@ -56,10 +76,7 @@ set_qid_max() {
 	local qid_max="$3"
 
 	set_nvmet_attr_qid_max "${subsys_name}" "${qid_max}"
-
-	# Setting qid_max forces a disconnect and the reconntect attempt starts
-	nvmf_wait_for_state "${subsys_name}" "connecting" || return 1
-	nvmf_wait_for_state "${subsys_name}" "live" || return 1
+	nvmf_wait_for_queue_count "${subsys_name}" "${qid_max}" || return 1
 
 	return 0
 }
@@ -77,12 +94,8 @@ test() {
 
 	_setup_nvmet
 
-	hostid="$(uuidgen)"
-	if [ -z "$hostid" ] ; then
-		echo "uuidgen failed"
-		return 1
-	fi
-	hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+	hostid="${def_hostid}"
+	hostnqn="${def_hostnqn}"
 
 	truncate -s "${nvme_img_size}" "${file_path}"
 
@@ -103,7 +116,7 @@ test() {
 		echo FAIL
 	else
 		set_qid_max "${port}" "${subsys_name}" 1 || echo FAIL
-		set_qid_max "${port}" "${subsys_name}" 128 || echo FAIL
+		set_qid_max "${port}" "${subsys_name}" 2 || echo FAIL
 	fi
 
 	_nvme_disconnect_subsys "${subsys_name}"
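The patch checks the queue count once after reaching the live state. If the count could still lag that transition, a bounded poll would tolerate it. This is only a sketch of that idea, not part of the posted patch; the file path and retry budget are arbitrary illustration values:

```shell
#!/bin/bash
# Poll a count file until it holds the expected value or retries run out.
poll_queue_count() {
	local file="$1" want="$2" tries=10
	while (( tries-- > 0 )); do
		[ "$(cat "$file" 2>/dev/null)" = "$want" ] && return 0
		sleep 0.1
	done
	echo "expected queue count $want not set"
	return 1
}

# Usage sketch with a temporary file standing in for the sysfs attribute:
tmp=$(mktemp); echo 3 > "$tmp"
poll_queue_count "$tmp" 3 && echo "reached"   # prints reached
rm -f "$tmp"
```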