[blktests,v4,09/11] nvme{045,047}: Calculate IO size for random fio jobs
Commit Message
_nvme_calc_rand_io_size() returns the job size for the _run_fio_rand_io()
function. The size is per job, and _run_fio_rand_io() starts one fio job
per CPU, so we have to divide the total I/O size by the number of CPUs.
_xfs_run_fio_verify_io() is replaced with _run_fio_rand_io() because the
former has a minimum nvme_img_size of 350M. Both tests nvme/{045,047}
just want to issue some I/O to verify that the path is working. Thus
reduce the minimum nvme_img_size requirement and switch to
_run_fio_rand_io().
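For example, on a machine with 8 CPUs a requested total of 4m works out
to (4 * 1024) / 8 = 512k per job, so the 8 fio jobs together still issue
4m of I/O (the 8-CPU count is just an illustration).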
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
tests/nvme/045 | 4 +++-
tests/nvme/047 | 6 ++++--
tests/nvme/rc | 10 ++++++++++
3 files changed, 17 insertions(+), 3 deletions(-)
Comments
On 5/11/23 07:09, Daniel Wagner wrote:
> _nvme_calc_rand_io_size() returns the job size for the _run_fio_rand_io()
> function. The size is per job, and _run_fio_rand_io() starts one fio job
> per CPU, so we have to divide the total I/O size by the number of CPUs.
Sorry, I didn't understand why we have to divide by the number of
CPUs. Won't that change the current job size of the test?
Unless we are increasing it somewhere and I missed it.
-ck
On May 17, 2023 / 04:44, Chaitanya Kulkarni wrote:
> On 5/11/23 07:09, Daniel Wagner wrote:
> > _nvme_calc_rand_io_size() returns the job size for the _run_fio_rand_io()
> > function. The size is per job, and _run_fio_rand_io() starts one fio job
> > per CPU, so we have to divide the total I/O size by the number of CPUs.
>
> Sorry, I didn't understand why we have to divide by the number of
> CPUs. Won't that change the current job size of the test?
>
> Unless we are increasing it somewhere and I missed it.
This change reduces the I/O size per job, but it keeps the total I/O size
constant regardless of the number of CPUs. This keeps the test case runtime
reasonable on systems with hundreds of CPUs.
As for test case nvme/045, it tests re-authentication. I don't think it
requires a total I/O size proportional to the number of CPUs. As for test
case nvme/047, it exercises different queue types (write queues and poll
queues). Does it require a total I/O size proportional to the number of
CPUs? Daniel is the test case author, and I guess he is okay with the change.
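To make the constant-total point concrete, here is a minimal sketch of the
arithmetic the helper performs (the CPU counts are made up for illustration;
only the 4 MiB total comes from the patch):

    # Per-job size for a requested 4 MiB total at various CPU counts.
    for cpus in 1 4 16 128; do
            per_job_kb=$(( (4 * 1024) / cpus ))
            echo "cpus=${cpus} per-job=${per_job_kb}k total=$(( per_job_kb * cpus ))k"
    done

Each job shrinks as the CPU count grows while the aggregate stays at 4096k;
only when the CPU count does not divide 4096 evenly does integer division
round the total slightly down.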
Sorry for the late response, I had to deal with a lot of high-prio stuff...
On Fri, May 19, 2023 at 01:36:17AM +0000, Shinichiro Kawasaki wrote:
> On May 17, 2023 / 04:44, Chaitanya Kulkarni wrote:
> > On 5/11/23 07:09, Daniel Wagner wrote:
> > > _nvme_calc_rand_io_size() returns the job size for the _run_fio_rand_io()
> > > function. The size is per job, and _run_fio_rand_io() starts one fio job
> > > per CPU, so we have to divide the total I/O size by the number of CPUs.
> >
> > Sorry, I didn't understand why we have to divide by the number of
> > CPUs. Won't that change the current job size of the test?
> >
> > Unless we are increasing it somewhere and I missed it.
>
> This change reduces the I/O size per job, but it keeps the total I/O size
> constant regardless of the number of CPUs. This keeps the test case runtime
> reasonable on systems with hundreds of CPUs.
Yes, indeed.
> As for test case nvme/045, it tests re-authentication. I don't think it
> requires a total I/O size proportional to the number of CPUs. As for test
> case nvme/047, it exercises different queue types (write queues and poll
> queues). Does it require a total I/O size proportional to the number of
> CPUs? Daniel is the test case author, and I guess he is okay with the change.
Yes :)
Thanks for applying these patches!
diff --git a/tests/nvme/045 b/tests/nvme/045
@@ -31,6 +31,7 @@ test() {
         local ctrlkey
         local new_ctrlkey
         local ctrldev
+        local rand_io_size
 
         echo "Running ${TEST_NAME}"
 
@@ -120,7 +121,8 @@ test() {
         nvmedev=$(_find_nvme_dev "${subsys_name}")
 
-        _run_fio_rand_io --size=4m --filename="/dev/${nvmedev}n1"
+        rand_io_size="$(_nvme_calc_rand_io_size 4m)"
+        _run_fio_rand_io --size="${rand_io_size}" --filename="/dev/${nvmedev}n1"
 
         _nvme_disconnect_subsys "${subsys_name}"
diff --git a/tests/nvme/047 b/tests/nvme/047
@@ -25,6 +25,7 @@ test() {
         local port
         local nvmedev
         local loop_dev
+        local rand_io_size
         local file_path="$TMPDIR/img"
         local subsys_name="blktests-subsystem-1"
 
@@ -42,7 +43,8 @@ test() {
         nvmedev=$(_find_nvme_dev "${subsys_name}")
 
-        _xfs_run_fio_verify_io /dev/"${nvmedev}n1" "1m" || echo FAIL
+        rand_io_size="$(_nvme_calc_rand_io_size 4M)"
+        _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}"
 
         _nvme_disconnect_subsys "${subsys_name}" >> "$FULL" 2>&1
 
@@ -50,7 +52,7 @@ test() {
                 --nr-write-queues 1 \
                 --nr-poll-queues 1 || echo FAIL
 
-        _xfs_run_fio_verify_io /dev/"${nvmedev}n1" "1m" || echo FAIL
+        _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}"
 
         _nvme_disconnect_subsys "${subsys_name}" >> "$FULL" 2>&1
diff --git a/tests/nvme/rc b/tests/nvme/rc
@@ -150,6 +150,16 @@ _test_dev_nvme_nsid() {
         cat "${TEST_DEV_SYSFS}/nsid"
 }
 
+_nvme_calc_rand_io_size() {
+        local img_size_mb
+        local io_size_kb
+
+        img_size_mb="$(convert_to_mb "$1")"
+        io_size_kb="$(((img_size_mb * 1024) / $(nproc)))"
+
+        echo "${io_size_kb}k"
+}
+
 _nvme_fcloop_add_rport() {
         local local_wwnn="$1"
         local local_wwpn="$2"
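For reference, a commented sketch of the new helper and a typical call site
(convert_to_mb and _run_fio_rand_io are existing blktests helpers; the device
name is just an example):

    # Return a per-job fio --size value such that the per-CPU jobs started
    # by _run_fio_rand_io() together issue roughly the requested total I/O.
    _nvme_calc_rand_io_size() {
            local img_size_mb
            local io_size_kb

            img_size_mb="$(convert_to_mb "$1")"               # accepts "4m", "4M", ...
            io_size_kb="$(((img_size_mb * 1024) / $(nproc)))" # split across CPUs

            echo "${io_size_kb}k"
    }

    # Call site as in nvme/045: 4 MiB total, split across all CPUs.
    rand_io_size="$(_nvme_calc_rand_io_size 4m)"
    _run_fio_rand_io --size="${rand_io_size}" --filename="/dev/${nvmedev}n1"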