Message ID | 20230419085643.25714-1-dwagner@suse.de |
---|---|
Headers |
From: Daniel Wagner <dwagner@suse.de>
To: linux-nvme@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Chaitanya Kulkarni <kch@nvidia.com>, Shin'ichiro Kawasaki <shinichiro@fastmail.com>, Daniel Wagner <dwagner@suse.de>
Subject: [RFC v1 0/1] nvme testsuite runtime optimization
Date: Wed, 19 Apr 2023 10:56:42 +0200
Message-Id: <20230419085643.25714-1-dwagner@suse.de> |
Series |
nvme testsuite runtime optimization
|
Message
Daniel Wagner
April 19, 2023, 8:56 a.m. UTC
While testing the fc transport I got a bit tired of waiting for the I/O jobs to
finish. Thus here are some runtime optimizations.

With a small/slow VM I got the following values:

with 'optimizations'
  loop:
    real    4m43.981s
    user    0m17.754s
    sys     2m6.249s

  rdma:
    real    2m35.160s
    user    0m6.264s
    sys     0m56.230s

  tcp:
    real    2m30.391s
    user    0m5.770s
    sys     0m46.007s

  fc:
    real    2m19.738s
    user    0m6.012s
    sys     0m42.201s

base:
  loop:
    real    7m35.061s
    user    0m23.493s
    sys     2m54.866s

  rdma:
    real    8m29.347s
    user    0m13.078s
    sys     1m53.158s

  tcp:
    real    8m11.357s
    user    0m13.033s
    sys     2m43.156s

  fc:
    real    5m46.615s
    user    0m12.819s
    sys     1m46.338s

Daniel Wagner (1):
  nvme: Limit runtime for verification and limit test image size

 common/xfs     |  3 ++-
 tests/nvme/004 |  2 +-
 tests/nvme/005 |  2 +-
 tests/nvme/006 |  2 +-
 tests/nvme/007 |  2 +-
 tests/nvme/008 |  2 +-
 tests/nvme/009 |  2 +-
 tests/nvme/010 |  5 +++--
 tests/nvme/011 |  5 +++--
 tests/nvme/012 |  4 ++--
 tests/nvme/013 |  4 ++--
 tests/nvme/014 | 10 ++++++++--
 tests/nvme/015 | 10 ++++++++--
 tests/nvme/017 |  2 +-
 tests/nvme/018 |  2 +-
 tests/nvme/019 |  2 +-
 tests/nvme/020 |  2 +-
 tests/nvme/021 |  2 +-
 tests/nvme/022 |  2 +-
 tests/nvme/023 |  2 +-
 tests/nvme/024 |  2 +-
 tests/nvme/025 |  2 +-
 tests/nvme/026 |  2 +-
 tests/nvme/027 |  2 +-
 tests/nvme/028 |  2 +-
 tests/nvme/029 |  2 +-
 tests/nvme/031 |  2 +-
 tests/nvme/032 |  4 ++--
 tests/nvme/034 |  3 ++-
 tests/nvme/035 |  4 ++--
 tests/nvme/040 |  4 ++--
 tests/nvme/041 |  2 +-
 tests/nvme/042 |  2 +-
 tests/nvme/043 |  2 +-
 tests/nvme/044 |  2 +-
 tests/nvme/045 |  2 +-
 tests/nvme/047 |  2 +-
 tests/nvme/048 |  2 +-
 38 files changed, 63 insertions(+), 47 deletions(-)
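The patch itself is not reproduced here; as a rough sketch of the idea, a test could cap the verification work roughly like this (the NVME_IMG_SIZE / NVME_VERIFY_RUNTIME knobs and the helper name are made up for illustration, not taken from the actual patch):

  # Hypothetical sketch: cap both the backing image size and the fio
  # verification runtime instead of always writing and verifying ~1G.
  NVME_IMG_SIZE=${NVME_IMG_SIZE:-64m}             # assumed tunable, not the real blktests knob
  NVME_VERIFY_RUNTIME=${NVME_VERIFY_RUNTIME:-30}  # seconds

  run_verify_job() {
          local dev=$1

          fio --name=verify --filename="$dev" \
              --rw=randwrite --bs=4k --ioengine=libaio --iodepth=16 \
              --size="$NVME_IMG_SIZE" \
              --time_based --runtime="$NVME_VERIFY_RUNTIME" \
              --verify=crc32c --do_verify=1 --direct=1
  }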
Comments
On 4/19/23 01:56, Daniel Wagner wrote:
> While testing the fc transport I got a bit tired of waiting for the I/O jobs to
> finish. Thus here are some runtime optimizations.
>
> With a small/slow VM I got the following values:
>
> with 'optimizations'
> loop:
>   real    4m43.981s
>   user    0m17.754s
>   sys     2m6.249s
>
> rdma:
>   real    2m35.160s
>   user    0m6.264s
>   sys     0m56.230s
>
> tcp:
>   real    2m30.391s
>   user    0m5.770s
>   sys     0m46.007s
>
> fc:
>   real    2m19.738s
>   user    0m6.012s
>   sys     0m42.201s
>
> base:
> loop:
>   real    7m35.061s
>   user    0m23.493s
>   sys     2m54.866s
>
> rdma:
>   real    8m29.347s
>   user    0m13.078s
>   sys     1m53.158s
>
> tcp:
>   real    8m11.357s
>   user    0m13.033s
>   sys     2m43.156s
>
> fc:
>   real    5m46.615s
>   user    0m12.819s
>   sys     1m46.338s

Those jobs are meant to be run with at least 1G to establish confidence in the
data set and the system under test. SSDs are in TBs nowadays and we don't even
get anywhere close to that; with your suggestion we are going even lower ...

We cannot change the dataset size for slow VMs. Instead, add a command line
argument and pass it to the tests, e.g. nvme_verification_size=XXX similar to
nvme_trtype, but don't change the default values which we have been testing
for years now.

Testing is supposed to be time consuming, especially verification jobs.

-ck
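A minimal sketch of what such a knob could look like, assuming a blktests-style environment variable (only the name nvme_verification_size comes from the suggestion above; the helper and the way it is wired up are assumptions):

  # Hypothetical excerpt for common/nvme: honour an nvme_verification_size
  # setting from the environment/config, falling back to the current
  # hard-coded 1G so existing setups keep their behaviour.
  : "${nvme_verification_size:=1g}"

  _nvme_img_size() {
          echo "${nvme_verification_size}"
  }

  # A test would then create its backing file with the tunable size
  # instead of a literal 1G, e.g.:
  #   truncate -s "$(_nvme_img_size)" "$backing_file"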
>> While testing the fc transport I got a bit tired of waiting for the I/O jobs to
>> finish. Thus here are some runtime optimizations.
>>
>> With a small/slow VM I got the following values:
>>
>> with 'optimizations'
>> loop:
>>   real    4m43.981s
>>   user    0m17.754s
>>   sys     2m6.249s

How come loop is doubling the time with this patch?
The ratio is not the same before and after.

>>
>> rdma:
>>   real    2m35.160s
>>   user    0m6.264s
>>   sys     0m56.230s
>>
>> tcp:
>>   real    2m30.391s
>>   user    0m5.770s
>>   sys     0m46.007s
>>
>> fc:
>>   real    2m19.738s
>>   user    0m6.012s
>>   sys     0m42.201s
>>
>> base:
>> loop:
>>   real    7m35.061s
>>   user    0m23.493s
>>   sys     2m54.866s
>>
>> rdma:
>>   real    8m29.347s
>>   user    0m13.078s
>>   sys     1m53.158s
>>
>> tcp:
>>   real    8m11.357s
>>   user    0m13.033s
>>   sys     2m43.156s
>>
>> fc:
>>   real    5m46.615s
>>   user    0m12.819s
>>   sys     1m46.338s
>>
>
> Those jobs are meant to be run with at least 1G to establish
> confidence in the data set and the system under test. SSDs
> are in TBs nowadays and we don't even get anywhere close to that;
> with your suggestion we are going even lower ...

Where does the 1G boundary come from?

> We cannot change the dataset size for slow VMs. Instead, add
> a command line argument and pass it to the tests, e.g.
> nvme_verification_size=XXX similar to nvme_trtype, but don't change
> the default values which we have been testing for years now.
>
> Testing is supposed to be time consuming, especially verification jobs.

I like the idea, but I think it may need to be the other way around:
have the shortest possible runs by default.
On Wed, Apr 19, 2023 at 12:50:10PM +0300, Sagi Grimberg wrote:
> > > > While testing the fc transport I got a bit tired of waiting for the I/O jobs to
> > > > finish. Thus here are some runtime optimizations.
> > > >
> > > > With a small/slow VM I got the following values:
> > > >
> > > > with 'optimizations'
> > > > loop:
> > > >   real    4m43.981s
> > > >   user    0m17.754s
> > > >   sys     2m6.249s
>
> How come loop is doubling the time with this patch?
> The ratio is not the same before and after.

The first run was with loop, the second one with rdma:

nvme/002 (create many subsystems and test discovery)          [not run]
    runtime  82.089s  ...
    nvme_trtype=rdma is not supported in this test
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [not run]
    runtime  39.948s  ...
    nvme_trtype=rdma is not supported in this test
nvme/017 (create/delete many file-ns and test discovery)      [not run]
    runtime  40.237s  ...
nvme/047 (test different queue types for fabric transports)   [passed]
    runtime  ...  13.580s
nvme/048 (Test queue count changes on reconnect)              [passed]
    runtime  ...  6.287s

82 + 40 + 40 - 14 - 6 = 142. So loop runs additional tests. Hmm, though my
optimization didn't work there...

> > Those jobs are meant to be run with at least 1G to establish
> > confidence in the data set and the system under test. SSDs
> > are in TBs nowadays and we don't even get anywhere close to that;
> > with your suggestion we are going even lower ...
>
> Where does the 1G boundary come from?

No idea, it's just the existing hard coded values. I guess it might be from
efa06fcf3c83 ("loop: test partition scanning") which was the first real test
case (according to the logs).

> > We cannot change the dataset size for slow VMs. Instead, add
> > a command line argument and pass it to the tests, e.g.
> > nvme_verification_size=XXX similar to nvme_trtype, but don't change
> > the default values which we have been testing for years now.
> >
> > Testing is supposed to be time consuming, especially verification jobs.
>
> I like the idea, but I think it may need to be the other way around:
> have the shortest possible runs by default.

Good point, I'll make it configurable. What is a good small default then?
There are some test cases in loop which allocate a 1M file. That's probably
too small.
>>>> While testing the fc transport I got a bit tired of waiting for the I/O jobs to
>>>> finish. Thus here are some runtime optimizations.
>>>>
>>>> With a small/slow VM I got the following values:
>>>>
>>>> with 'optimizations'
>>>> loop:
>>>>   real    4m43.981s
>>>>   user    0m17.754s
>>>>   sys     2m6.249s
>>
>> How come loop is doubling the time with this patch?
>> The ratio is not the same before and after.
>
> The first run was with loop, the second one with rdma:
>
> nvme/002 (create many subsystems and test discovery)          [not run]
>     runtime  82.089s  ...
>     nvme_trtype=rdma is not supported in this test
> nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [not run]
>     runtime  39.948s  ...
>     nvme_trtype=rdma is not supported in this test
> nvme/017 (create/delete many file-ns and test discovery)      [not run]
>     runtime  40.237s  ...
> nvme/047 (test different queue types for fabric transports)   [passed]
>     runtime  ...  13.580s
> nvme/048 (Test queue count changes on reconnect)              [passed]
>     runtime  ...  6.287s
>
> 82 + 40 + 40 - 14 - 6 = 142. So loop runs additional tests. Hmm, though my
> optimization didn't work there...

How come loop is 4m+ while the others are 2m+, when before they were all in
more or less the same timeframe?

>
>>> Those jobs are meant to be run with at least 1G to establish
>>> confidence in the data set and the system under test. SSDs
>>> are in TBs nowadays and we don't even get anywhere close to that;
>>> with your suggestion we are going even lower ...
>>
>> Where does the 1G boundary come from?
>
> No idea, it's just the existing hard coded values. I guess it might be from
> efa06fcf3c83 ("loop: test partition scanning") which was the first real test
> case (according to the logs).

I was asking Chaitanya why 1G is considered sufficient vs. other sizes.
Why not 10G? Why not 100M?
On 4/19/23 02:50, Sagi Grimberg wrote:
>
>>> While testing the fc transport I got a bit tired of waiting for the I/O
>>> jobs to finish. Thus here are some runtime optimizations.
>>>
>>> With a small/slow VM I got the following values:
>>>
>>> with 'optimizations'
>>> loop:
>>>   real    4m43.981s
>>>   user    0m17.754s
>>>   sys     2m6.249s
>
> How come loop is doubling the time with this patch?
> The ratio is not the same before and after.
>
>>> rdma:
>>>   real    2m35.160s
>>>   user    0m6.264s
>>>   sys     0m56.230s
>>>
>>> tcp:
>>>   real    2m30.391s
>>>   user    0m5.770s
>>>   sys     0m46.007s
>>>
>>> fc:
>>>   real    2m19.738s
>>>   user    0m6.012s
>>>   sys     0m42.201s
>>>
>>> base:
>>> loop:
>>>   real    7m35.061s
>>>   user    0m23.493s
>>>   sys     2m54.866s
>>>
>>> rdma:
>>>   real    8m29.347s
>>>   user    0m13.078s
>>>   sys     1m53.158s
>>>
>>> tcp:
>>>   real    8m11.357s
>>>   user    0m13.033s
>>>   sys     2m43.156s
>>>
>>> fc:
>>>   real    5m46.615s
>>>   user    0m12.819s
>>>   sys     1m46.338s
>>>
>>
>> Those jobs are meant to be run with at least 1G to establish
>> confidence in the data set and the system under test. SSDs
>> are in TBs nowadays and we don't even get anywhere close to that;
>> with your suggestion we are going even lower ...
>
> Where does the 1G boundary come from?
>

I wrote these testcases three times: initially they were part of the nvme-cli
tests 7-8 years ago, then nvmftests 6-7 years ago, then they moved to
blktests.

Back then some of the testcases would not fail with a small size such as less
than 512MB, especially with verification, but they did produce errors with 1G,
hence I kept it at 1G.

Now I don't remember why I didn't use a size bigger than 1G; I should have
documented that somewhere ...

>> We cannot change the dataset size for slow VMs. Instead, add
>> a command line argument and pass it to the tests, e.g.
>> nvme_verification_size=XXX similar to nvme_trtype, but don't change
>> the default values which we have been testing for years now.
>>
>> Testing is supposed to be time consuming, especially verification jobs.
>
> I like the idea, but I think it may need to be the other way around:
> have the shortest possible runs by default.

See above.

-ck
On 4/19/23 06:15, Sagi Grimberg wrote:
>
>>>>> While testing the fc transport I got a bit tired of waiting for the
>>>>> I/O jobs to finish. Thus here are some runtime optimizations.
>>>>>
>>>>> With a small/slow VM I got the following values:
>>>>>
>>>>> with 'optimizations'
>>>>> loop:
>>>>>   real    4m43.981s
>>>>>   user    0m17.754s
>>>>>   sys     2m6.249s
>>>
>>> How come loop is doubling the time with this patch?
>>> The ratio is not the same before and after.
>>
>> The first run was with loop, the second one with rdma:
>>
>> nvme/002 (create many subsystems and test discovery)          [not run]
>>     runtime  82.089s  ...
>>     nvme_trtype=rdma is not supported in this test
>> nvme/016 (create/delete many NVMeOF block device-backed ns and test
>> discovery)                                                    [not run]
>>     runtime  39.948s  ...
>>     nvme_trtype=rdma is not supported in this test
>> nvme/017 (create/delete many file-ns and test discovery)      [not run]
>>     runtime  40.237s  ...
>> nvme/047 (test different queue types for fabric transports)   [passed]
>>     runtime  ...  13.580s
>> nvme/048 (Test queue count changes on reconnect)              [passed]
>>     runtime  ...  6.287s
>>
>> 82 + 40 + 40 - 14 - 6 = 142. So loop runs additional tests. Hmm, though my
>> optimization didn't work there...
>
> How come loop is 4m+ while the others are 2m+, when before they were all in
> more or less the same timeframe?
>
>>
>>>> Those jobs are meant to be run with at least 1G to establish
>>>> confidence in the data set and the system under test. SSDs
>>>> are in TBs nowadays and we don't even get anywhere close to that;
>>>> with your suggestion we are going even lower ...
>>>
>>> Where does the 1G boundary come from?
>>
>> No idea, it's just the existing hard coded values. I guess it might be from
>> efa06fcf3c83 ("loop: test partition scanning") which was the first real test
>> case (according to the logs).
>
> I was asking Chaitanya why 1G is considered sufficient vs. other sizes.
> Why not 10G? Why not 100M?

See the earlier response ...

-ck
>> We cannot change the dataset size for slow VMs. Instead, add
>> a command line argument and pass it to the tests, e.g.
>> nvme_verification_size=XXX similar to nvme_trtype, but don't change
>> the default values which we have been testing for years now.
>>
>> Testing is supposed to be time consuming, especially verification jobs.
>
> I like the idea, but I think it may need to be the other way around:
> have the shortest possible runs by default.

Not everyone is running blktests on slow VMs, so I think it should be the
other way around: the default integration of these testcases in various
distros uses the 1G size, and it is not a good idea to change that so that
everyone who is not running slow VMs has to update their test scripts ...

-ck
On Wed, Apr 19, 2023 at 09:11:33PM +0000, Chaitanya Kulkarni wrote:
> >> Those jobs are meant to be run with at least 1G to establish
> >> confidence in the data set and the system under test. SSDs
> >> are in TBs nowadays and we don't even get anywhere close to that;
> >> with your suggestion we are going even lower ...
> >
> > Where does the 1G boundary come from?
>
> I wrote these testcases three times: initially they were part of the nvme-cli
> tests 7-8 years ago, then nvmftests 6-7 years ago, then they moved to
> blktests.
>
> Back then some of the testcases would not fail with a small size such as less
> than 512MB, especially with verification, but they did produce errors with 1G,
> hence I kept it at 1G.
>
> Now I don't remember why I didn't use a size bigger than 1G; I should have
> documented that somewhere ...

Can you remember why you chose to set the image size to 1G and the fio I/O
size to 950m in nvme/012 and nvme/013?

I am testing various image sizes and found that small images, e.g. in the
range of [4..64]m, pass fine, but larger ones like [512...]m do not (no space
left). Note I've added a calc function which does image size - 1M to leave
some room.
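A calc helper along these lines might look as follows (purely illustrative; the function name and the fixed 1M margin are assumptions, not the actual patch):

  # Hypothetical helper: derive the fio I/O size from the backing image
  # size, leaving a 1M margin for filesystem metadata such as the xfs log.
  _nvme_calc_io_size() {
          local img_size_mb=$1    # image size in MiB

          echo "$((img_size_mb - 1))m"
  }

  # e.g. a 64M image would give fio --size=63m:
  #   fio ... --size="$(_nvme_calc_io_size 64)"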
On Thu, Apr 20, 2023 at 10:24:15AM +0200, Daniel Wagner wrote:
> On Wed, Apr 19, 2023 at 09:11:33PM +0000, Chaitanya Kulkarni wrote:
> > >> Those jobs are meant to be run with at least 1G to establish
> > >> confidence in the data set and the system under test. SSDs
> > >> are in TBs nowadays and we don't even get anywhere close to that;
> > >> with your suggestion we are going even lower ...
> > >
> > > Where does the 1G boundary come from?
> >
> > I wrote these testcases three times: initially they were part of the nvme-cli
> > tests 7-8 years ago, then nvmftests 6-7 years ago, then they moved to
> > blktests.
> >
> > Back then some of the testcases would not fail with a small size such as less
> > than 512MB, especially with verification, but they did produce errors with 1G,
> > hence I kept it at 1G.
> >
> > Now I don't remember why I didn't use a size bigger than 1G; I should have
> > documented that somewhere ...
>
> Can you remember why you chose to set the image size to 1G and the fio I/O
> size to 950m in nvme/012 and nvme/013?

Forget it, I found a commit message which explains it:

e5bd71872b3b ("nvme/012,013,035: change fio I/O size and move size definition place")

  [...]
  Change fio I/O size of nvme/012,013,035 from 950m to 900m, since a recent
  change increased the xfs log size and it caused fio failure with I/O size
  950m.
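An alternative to hand-tuned margins, not part of this series and only sketched here as an assumption, would be to derive the fio size from the free space the mounted test filesystem actually reports:

  # Hypothetical sketch: size the fio job from the free space the mounted
  # filesystem reports instead of hard-coding 950m/900m against a 1G image.
  _fio_size_from_free_space() {
          local mnt=$1
          local avail_kb

          avail_kb=$(df --output=avail "$mnt" | tail -n 1)
          # keep a few MiB of headroom for metadata/log growth
          echo "$(( avail_kb / 1024 - 8 ))m"
  }

  # usage (illustrative): fio ... --size="$(_fio_size_from_free_space "$mount_point")"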