[-next,0/9] dm-raid, md/raid: fix v6.7 regressions part2

Message ID 20240301095657.662111-1-yukuai1@huaweicloud.com

Yu Kuai March 1, 2024, 9:56 a.m. UTC
  From: Yu Kuai <yukuai3@huawei.com>

link to part1: https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/

Part 1 contains fixes for deadlocks when stopping sync_thread.

This set contains fixes for:
 - reshape starting unexpectedly and causing data corruption (patches 1, 5, 6);
 - deadlocks when reshape runs concurrently with IO (patch 8);
 - a lockdep warning (patch 9).

I have been running the lvm2 tests for a few rounds now with the following script:

for t in $(ls test/shell); do
        if grep -q raid "test/shell/$t"; then
                make check T="shell/$t"
        fi
done
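As a side note, the selection logic in the loop above can be exercised stand-alone; the snippet below is a minimal sketch against a throwaway directory (the test file names and contents here are made up for the demo):

```shell
#!/bin/sh
# Minimal demo of the same selection logic: only scripts whose
# contents mention "raid" would be handed to "make check".
tmp=$(mktemp -d)
mkdir -p "$tmp/test/shell"
printf 'lvcreate --type raid5 ...\n' > "$tmp/test/shell/raid-test.sh"
printf 'lvcreate --type linear ...\n' > "$tmp/test/shell/linear-test.sh"

selected=""
for t in $(ls "$tmp/test/shell"); do
        if grep -q raid "$tmp/test/shell/$t"; then
                echo "would run: shell/$t"
                selected="$selected $t"
        fi
done
rm -rf "$tmp"
```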

There are no deadlocks and no filesystem corruption now; however, four tests
still fail:

###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
###       failed: [ndev-vanilla] shell/lvextend-raid.sh

And the failure reason is the same for all of them:

## ERROR: The test started dmeventd (147856) unexpectedly

I have no clue yet, and it seems other folks don't hit this issue.

Yu Kuai (9):
  md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
  md: export helpers to stop sync_thread
  md: export helper md_is_rdwr()
  md: add a new helper reshape_interrupted()
  dm-raid: really frozen sync_thread during suspend
  md/dm-raid: don't call md_reap_sync_thread() directly
  dm-raid: add a new helper prepare_suspend() in md_personality
  dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
    concurrent with reshape
  dm-raid: fix lockdep waring in "pers->hot_add_disk"

 drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
 drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
 drivers/md/md.h      | 38 +++++++++++++++++-
 drivers/md/raid5.c   | 32 ++++++++++++++-
 4 files changed, 196 insertions(+), 40 deletions(-)
  

Comments

Song Liu March 1, 2024, 10:36 p.m. UTC | #1
On Fri, Mar 1, 2024 at 2:03 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> [...]

This set looks good to me and passes the tests: the reshape tests from
lvm2, the mdadm tests, and the reboot test that catches an issue in
Xiao's version.

DM folks, please help review and test this set. If it looks good, we
can route it either via the md tree (I am thinking about md-6.8
branch) or the dm tree.

CC Jens,

I understand it is already late in the release cycle for the 6.8 kernel.
Please let us know your thoughts on this set. These patches fix
a crash when running lvm2 tests related to md-raid
reshape.

Thanks,
Song
  
Mike Snitzer March 2, 2024, 3:56 p.m. UTC | #2
On Fri, Mar 01 2024 at  5:36P -0500,
Song Liu <song@kernel.org> wrote:

> On Fri, Mar 1, 2024 at 2:03 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> > [...]
> 
> This set looks good to me and passes the tests: the reshape tests from
> lvm2, the mdadm tests, and the reboot test that catches an issue in
> Xiao's version.
> 
> DM folks, please help review and test this set. If it looks good, we
> can route it either via the md tree (I am thinking about md-6.8
> branch) or the dm tree.

Please send these changes through md-6.8.

There are a few typos in patch subjects and headers but:

Acked-by: Mike Snitzer <snitzer@kernel.org>

> CC Jens,
> 
> I understand it is already late in the release cycle for the 6.8 kernel.
> Please let us know your thoughts on this set. These patches fix
> a crash when running lvm2 tests related to md-raid
> reshape.

It would be good to get these into 6.8, but worst case, if they slip to
the 6.9 merge window, they'll still reach the relevant stable kernels (via
the "Fixes:" tags, though not all commits have them).

Mike
  
Xiao Ni March 3, 2024, 1:16 p.m. UTC | #3
Hi all

There is an error report from the lvm2 regression tests. The failing case is
lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error too when I
tried to fix the dm-raid regressions. In my patch set, after reverting
commit ad39c08186f8a0f221337985036ba86731d6aafe ("md: Don't register
sync_thread for reshape directly"), this problem does not appear.

I have attached the log.

On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> [...]
[ 0:00.223] Library version:   1.02.198-git (2023-11-21)
[ 0:00.223] Driver version:    4.48.0
[ 0:00.223] Kernel is Linux hp-dl380eg8-02.rhts.eng.pek2.redhat.com 6.8.0-rc1-dmraid+ #1 SMP PREEMPT_DYNAMIC Sat Mar  2 21:48:55 EST 2024 x86_64 x86_64 x86_64 GNU/Linux
[ 0:00.410] Selinux mode is Enforcing.
[ 0:00.427]                total        used        free      shared  buff/cache   available
[ 0:00.440] Mem:           15569         760       14935          20         104       14808
[ 0:00.440] Swap:           7975           0        7975
[ 0:00.440] Filesystem                              Size  Used Avail Use% Mounted on
[ 0:00.443] devtmpfs                                4.0M     0  4.0M   0% /dev
[ 0:00.443] tmpfs                                   7.7G     0  7.7G   0% /dev/shm
[ 0:00.443] tmpfs                                   3.1G   18M  3.1G   1% /run
[ 0:00.443] /dev/mapper/rhel_hp--dl380eg8--02-root   70G  4.1G   66G   6% /
[ 0:00.443] /dev/sda1                               960M  313M  648M  33% /boot
[ 0:00.443] /dev/mapper/rhel_hp--dl380eg8--02-home  853G   37G  816G   5% /home
[ 0:00.443] tmpfs                                   1.6G  4.0K  1.6G   1% /run/user/0
[ 0:00.443] @TESTDIR=/tmp/LVMTEST500118.AxR1K9qRUi
[ 0:00.445] @PREFIX=LVMTEST500118
[ 0:00.445] ## LVMCONF: activation {
[ 0:00.501] ## LVMCONF:     checks = 1
[ 0:00.501] ## LVMCONF:     monitoring = 0
[ 0:00.501] ## LVMCONF:     polling_interval = 1
[ 0:00.501] ## LVMCONF:     raid_region_size = 512
[ 0:00.501] ## LVMCONF:     retry_deactivation = 1
[ 0:00.501] ## LVMCONF:     snapshot_autoextend_percent = 50
[ 0:00.501] ## LVMCONF:     snapshot_autoextend_threshold = 50
[ 0:00.501] ## LVMCONF:     udev_rules = 1
[ 0:00.501] ## LVMCONF:     udev_sync = 1
[ 0:00.501] ## LVMCONF:     verify_udev_operations = 1
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: allocation {
[ 0:00.501] ## LVMCONF:     vdo_slab_size_mb = 128
[ 0:00.501] ## LVMCONF:     wipe_signatures_when_zeroing_new_lvs = 0
[ 0:00.501] ## LVMCONF:     zero_metadata = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: backup {
[ 0:00.501] ## LVMCONF:     archive = 0
[ 0:00.501] ## LVMCONF:     backup = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: devices {
[ 0:00.501] ## LVMCONF:     cache_dir = "/tmp/LVMTEST500118.AxR1K9qRUi/etc"
[ 0:00.501] ## LVMCONF:     default_data_alignment = 1
[ 0:00.501] ## LVMCONF:     dir = "/tmp/LVMTEST500118.AxR1K9qRUi/dev"
[ 0:00.501] ## LVMCONF:     filter = "a|.*|"
[ 0:00.501] ## LVMCONF:     global_filter = [ "a|/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118.*pv[0-9_]*$|", "r|.*|" ]
[ 0:00.501] ## LVMCONF:     md_component_detection = 0
[ 0:00.501] ## LVMCONF:     scan = "/tmp/LVMTEST500118.AxR1K9qRUi/dev"
[ 0:00.501] ## LVMCONF:     sysfs_scan = 1
[ 0:00.501] ## LVMCONF:     use_devicesfile = 0
[ 0:00.501] ## LVMCONF:     write_cache_state = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: dmeventd {
[ 0:00.501] ## LVMCONF:     executable = "/home/lvm2/test/lib/dmeventd"
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: global {
[ 0:00.501] ## LVMCONF:     abort_on_internal_errors = 1
[ 0:00.501] ## LVMCONF:     cache_check_executable = "/usr/sbin/cache_check"
[ 0:00.501] ## LVMCONF:     cache_dump_executable = "/usr/sbin/cache_dump"
[ 0:00.501] ## LVMCONF:     cache_repair_executable = "/usr/sbin/cache_repair"
[ 0:00.501] ## LVMCONF:     cache_restore_executable = "/usr/sbin/cache_restore"
[ 0:00.501] ## LVMCONF:     detect_internal_vg_cache_corruption = 1
[ 0:00.501] ## LVMCONF:     etc = "/tmp/LVMTEST500118.AxR1K9qRUi/etc"
[ 0:00.501] ## LVMCONF:     fallback_to_local_locking = 0
[ 0:00.501] ## LVMCONF:     fsadm_executable = "/home/lvm2/test/lib/fsadm"
[ 0:00.501] ## LVMCONF:     library_dir = "/tmp/LVMTEST500118.AxR1K9qRUi/lib"
[ 0:00.501] ## LVMCONF:     locking_dir = "/tmp/LVMTEST500118.AxR1K9qRUi/var/lock/lvm"
[ 0:00.501] ## LVMCONF:     locking_type=1
[ 0:00.501] ## LVMCONF:     notify_dbus = 0
[ 0:00.501] ## LVMCONF:     si_unit_consistency = 1
[ 0:00.501] ## LVMCONF:     thin_check_executable = "/usr/sbin/thin_check"
[ 0:00.501] ## LVMCONF:     thin_dump_executable = "/usr/sbin/thin_dump"
[ 0:00.501] ## LVMCONF:     thin_repair_executable = "/usr/sbin/thin_repair"
[ 0:00.501] ## LVMCONF:     thin_restore_executable = "/usr/sbin/thin_restore"
[ 0:00.501] ## LVMCONF:     use_lvmlockd = 0
[ 0:00.501] ## LVMCONF:     use_lvmpolld = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: log {
[ 0:00.501] ## LVMCONF:     activation = 1
[ 0:00.501] ## LVMCONF:     file = "/tmp/LVMTEST500118.AxR1K9qRUi/debug.log"
[ 0:00.501] ## LVMCONF:     indent = 1
[ 0:00.501] ## LVMCONF:     level = 9
[ 0:00.501] ## LVMCONF:     overwrite = 1
[ 0:00.501] ## LVMCONF:     syslog = 0
[ 0:00.501] ## LVMCONF:     verbose = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] <======== Processing test: "lvconvert-raid-reshape-stripes-load-reload.sh" ========>
[ 0:00.507] 
[ 0:00.507] # Test reshaping under io load
[ 0:00.507] 
[ 0:00.507] which md5sum || skip
[ 0:00.507] #lvconvert-raid-reshape-stripes-load-reload.sh:20+ which md5sum
[ 0:00.507] #environment:0+ alias
[ 0:00.508] #environment:1+ eval declare -f
[ 0:00.508] declare -f
[ 0:00.508] ##environment:1+ declare -f
[ 0:00.508] #environment:1+ /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot md5sum
[ 0:00.508] /usr/bin/md5sum
[ 0:00.511] which mkfs.ext4 || skip
[ 0:00.511] #lvconvert-raid-reshape-stripes-load-reload.sh:21+ which mkfs.ext4
[ 0:00.511] #environment:0+ alias
[ 0:00.512] #environment:1+ eval declare -f
[ 0:00.512] declare -f
[ 0:00.512] ##environment:1+ declare -f
[ 0:00.512] #environment:1+ /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot mkfs.ext4
[ 0:00.512] /usr/sbin/mkfs.ext4
[ 0:00.514] aux have_raid 1 14 || skip
[ 0:00.515] #lvconvert-raid-reshape-stripes-load-reload.sh:22+ aux have_raid 1 14
[ 0:00.515] 
[ 0:00.595] mount_dir="mnt"
[ 0:00.595] #lvconvert-raid-reshape-stripes-load-reload.sh:24+ mount_dir=mnt
[ 0:00.595] 
[ 0:00.595] cleanup_mounted_and_teardown()
[ 0:00.595] {
[ 0:00.595] 	umount "$mount_dir" || true
[ 0:00.595] 	aux teardown
[ 0:00.595] }
[ 0:00.595] 
[ 0:00.595] checksum_()
[ 0:00.595] {
[ 0:00.595] 	md5sum "$1" | cut -f1 -d' '
[ 0:00.595] }
[ 0:00.595] 
[ 0:00.595] aux prepare_pvs 16 32
[ 0:00.595] #lvconvert-raid-reshape-stripes-load-reload.sh:37+ aux prepare_pvs 16 32
[ 0:00.596] ## preparing ramdisk device...ok (/dev/ram0)
[ 0:00.657] 6,17022,29996053013,-;brd: module loaded
[ 0:00.657] ## preparing 16 devices...ok
[ 0:00.725]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv1" successfully created.
[ 0:00.776]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv2" successfully created.
[ 0:00.777]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv3" successfully created.
[ 0:00.778]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv4" successfully created.
[ 0:00.778]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv5" successfully created.
[ 0:00.779]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv6" successfully created.
[ 0:00.779]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv7" successfully created.
[ 0:00.780]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv8" successfully created.
[ 0:00.781]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv9" successfully created.
[ 0:00.781]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv10" successfully created.
[ 0:00.782]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv11" successfully created.
[ 0:00.783]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv12" successfully created.
[ 0:00.783]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv13" successfully created.
[ 0:00.784]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv14" successfully created.
[ 0:00.785]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv15" successfully created.
[ 0:00.785]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv16" successfully created.
[ 0:00.786] 
[ 0:00.815] get_devs
[ 0:00.815] #lvconvert-raid-reshape-stripes-load-reload.sh:39+ get_devs
[ 0:00.815] #utils:270+ local 'IFS=
[ 0:00.815] '
[ 0:00.815] #utils:271+ DEVICES=($(<DEVICES))
[ 0:00.815] #utils:272+ export DEVICES
[ 0:00.817] 
[ 0:00.817] vgcreate $SHARED -s 1M "$vg" "${DEVICES[@]}"
[ 0:00.817] #lvconvert-raid-reshape-stripes-load-reload.sh:41+ vgcreate -s 1M LVMTEST500118vg /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv1 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv2 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv3 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv4 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv5 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv6 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv7 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv8 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv9 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv10 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv11 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv12 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv13 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv14 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv15 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv16
[ 0:00.817]   WARNING: This metadata update is NOT backed up.
[ 0:00.880]   Volume group "LVMTEST500118vg" successfully created
[ 0:00.880] 
[ 0:00.894] trap 'cleanup_mounted_and_teardown' EXIT
[ 0:00.894] #lvconvert-raid-reshape-stripes-load-reload.sh:43+ trap cleanup_mounted_and_teardown EXIT
[ 0:00.894] 
[ 0:00.894] # Create 10-way striped raid5 (11 legs total)
[ 0:00.894] lvcreate --yes --type raid5_ls --stripesize 64K --stripes 10 -L4 -n$lv1 $vg
[ 0:00.894] #lvconvert-raid-reshape-stripes-load-reload.sh:46+ lvcreate --yes --type raid5_ls --stripesize 64K --stripes 10 -L4 -nLV1 LVMTEST500118vg
[ 0:00.894]   Rounding size 4.00 MiB (4 extents) up to stripe boundary size 10.00 MiB (10 extents).
[ 0:00.930]   Logical volume "LV1" created.
[ 0:01.259] 6,17023,29996612978,-;device-mapper: raid: Superblocks created for new raid set
[ 0:01.259] 5,17024,29996631884,-;md/raid:mdX: not clean -- starting background reconstruction
[ 0:01.259] 6,17025,29996632348,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:01.259] 6,17026,29996633474,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:01.259] 6,17027,29996634252,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:01.259] 6,17028,29996634972,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:01.259] 6,17029,29996635696,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:01.259] 6,17030,29996636407,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:01.259] 6,17031,29996637137,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:01.259] 6,17032,29996637846,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:01.259] 6,17033,29996638593,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:01.259] 6,17034,29996639486,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:01.259] 6,17035,29996640384,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:01.259] 6,17036,29996643318,-;md/raid:mdX: raid level 5 active with 11 out of 11 devices, algorithm 2
[ 0:01.259] 4,17037,29996645343,-;mdX: bitmap file is out of date, doing full recovery
[ 0:01.259] 6,17038,29996646487,-;md: resync of RAID array mdX
[ 0:01.259]   WARNING: This metadata update is NOT backed up.
[ 0:01.260] check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
[ 0:01.284] 6,17039,29996664728,-;md: mdX: resync done.
[ 0:01.284] #lvconvert-raid-reshape-stripes-load-reload.sh:47+ check lv_first_seg_field LVMTEST500118vg/LV1 segtype raid5_ls
[ 0:01.284] check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
[ 0:01.360] #lvconvert-raid-reshape-stripes-load-reload.sh:48+ check lv_first_seg_field LVMTEST500118vg/LV1 stripesize 64.00k
[ 0:01.360] check lv_first_seg_field $vg/$lv1 data_stripes 10
[ 0:01.442] #lvconvert-raid-reshape-stripes-load-reload.sh:49+ check lv_first_seg_field LVMTEST500118vg/LV1 data_stripes 10
[ 0:01.442] check lv_first_seg_field $vg/$lv1 stripes 11
[ 0:01.516] #lvconvert-raid-reshape-stripes-load-reload.sh:50+ check lv_first_seg_field LVMTEST500118vg/LV1 stripes 11
[ 0:01.517] wipefs -a "$DM_DEV_DIR/$vg/$lv1"
[ 0:01.600] #lvconvert-raid-reshape-stripes-load-reload.sh:51+ wipefs -a /tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg/LV1
[ 0:01.600] mkfs -t ext4 "$DM_DEV_DIR/$vg/$lv1"
[ 0:01.620] #lvconvert-raid-reshape-stripes-load-reload.sh:52+ mkfs -t ext4 /tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg/LV1
[ 0:01.620] mke2fs 1.46.5 (30-Dec-2021)
[ 0:01.654] Creating filesystem with 10240 1k blocks and 2560 inodes
[ 0:01.655] Filesystem UUID: 84c2201e-4589-48a8-ba44-019d481366f2
[ 0:01.655] Superblock backups stored on blocks: 
[ 0:01.655] 	8193
[ 0:01.655] 
[ 0:01.655] Allocating group tables: 0/2   done                            
[ 0:01.655] Writing inode tables: 0/2   done                            
[ 0:01.656] Creating journal (1024 blocks): done
[ 0:01.660] Writing superblocks and filesystem accounting information: 0/2   done
[ 0:01.662] 
[ 0:01.662] 
[ 0:01.663] mkdir -p "$mount_dir"
[ 0:01.663] #lvconvert-raid-reshape-stripes-load-reload.sh:54+ mkdir -p mnt
[ 0:01.663] mount "$DM_DEV_DIR/$vg/$lv1" "$mount_dir"
[ 0:01.666] #lvconvert-raid-reshape-stripes-load-reload.sh:55+ mount /tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg/LV1 mnt
[ 0:01.666] 
[ 0:01.679] echo 3 >/proc/sys/vm/drop_caches
[ 0:01.679] 6,17040,29997074848,-;EXT4-fs (dm-41): mounted filesystem 84c2201e-4589-48a8-ba44-019d481366f2 r/w with ordered data mode. Quota mode: none.
[ 0:01.679] #lvconvert-raid-reshape-stripes-load-reload.sh:57+ echo 3
[ 0:01.679] # FIXME: This is filling up ram disk. Use sane amount of data please! Rate limit the data written!
[ 0:01.709] dd if=/dev/urandom of="$mount_dir/random" bs=1M count=4 conv=fdatasync
[ 0:01.709] 6,17041,29997106145,-;bash (500118): drop_caches: 3
[ 0:01.709] #lvconvert-raid-reshape-stripes-load-reload.sh:59+ dd if=/dev/urandom of=mnt/random bs=1M count=4 conv=fdatasync
[ 0:01.709] 4+0 records in
[ 0:02.154] 4+0 records out
[ 0:02.154] 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0978546 s, 42.9 MB/s
[ 0:02.154] checksum_ "$mount_dir/random" >MD5
[ 0:02.173] #lvconvert-raid-reshape-stripes-load-reload.sh:60+ checksum_ mnt/random
[ 0:02.173] #lvconvert-raid-reshape-stripes-load-reload.sh:34+ md5sum mnt/random
[ 0:02.186] #lvconvert-raid-reshape-stripes-load-reload.sh:34+ cut -f1 '-d '
[ 0:02.186] 
[ 0:02.296] # FIXME: wait_for_sync - is this really testing anything under load?
[ 0:02.296] aux wait_for_sync $vg $lv1
[ 0:02.296] #lvconvert-raid-reshape-stripes-load-reload.sh:63+ aux wait_for_sync LVMTEST500118vg LV1
[ 0:02.296] LVMTEST500118vg/LV1 (raid5_ls) is in-sync     0 20480 raid raid5_ls 11 AAAAAAAAAAA 2048/2048 idle 0 0 -
[ 0:02.487] aux delay_dev "$dev2" 0 200
[ 0:02.488] #lvconvert-raid-reshape-stripes-load-reload.sh:64+ aux delay_dev /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv2 0 200
[ 0:02.488] 
[ 0:02.803] # Reshape it to 15 data stripes
[ 0:02.803] lvconvert --yes --stripes 15 $vg/$lv1
[ 0:02.803] #lvconvert-raid-reshape-stripes-load-reload.sh:67+ lvconvert --yes --stripes 15 LVMTEST500118vg/LV1
[ 0:02.803]   Using default stripesize 64.00 KiB.
[ 0:03.682]   WARNING: Adding stripes to active and open logical volume LVMTEST500118vg/LV1 will grow it from 10 to 15 extents!
[ 0:03.683]   Run "lvresize -l10 LVMTEST500118vg/LV1" to shrink it or use the additional capacity.
[ 0:03.683]   Logical volume LVMTEST500118vg/LV1 successfully converted.
[ 0:11.171] 6,17042,30000258189,-;device-mapper: raid: Device 11 specified for rebuild; clearing superblock
[ 0:11.171] 6,17043,30000258720,-;device-mapper: raid: Device 12 specified for rebuild; clearing superblock
[ 0:11.171] 6,17044,30000259148,-;device-mapper: raid: Device 13 specified for rebuild; clearing superblock
[ 0:11.171] 6,17045,30000259613,-;device-mapper: raid: Device 14 specified for rebuild; clearing superblock
[ 0:11.171] 6,17046,30000260025,-;device-mapper: raid: Device 15 specified for rebuild; clearing superblock
[ 0:11.171] 6,17047,30000306430,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:11.171] 6,17048,30000307150,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:11.171] 6,17049,30000308006,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:11.171] 6,17050,30000308757,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:11.171] 6,17051,30000309488,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:11.171] 6,17052,30000310220,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:11.171] 6,17053,30000310955,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:11.171] 6,17054,30000311677,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:11.171] 6,17055,30000312395,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:11.171] 6,17056,30000313126,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:11.171] 6,17057,30000313850,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:11.171] 6,17058,30000316143,-;md/raid:mdX: raid level 5 active with 11 out of 11 devices, algorithm 2
[ 0:11.171] 4,17059,30000317235,-;mdX: bitmap file is out of date (20 < 21) -- forcing full recovery
[ 0:11.171] 4,17060,30001740496,-;mdX: bitmap file is out of date, doing full recovery
[ 0:11.171] 6,17061,30001948586,-;dm-41: detected capacity change from 30720 to 20480
[ 0:11.171] 6,17062,30002817703,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:11.171] 6,17063,30002818545,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:11.171] 6,17064,30002819394,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:11.171] 6,17065,30002820460,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:11.171] 6,17066,30002821197,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:11.171] 6,17067,30002821937,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:11.172] 6,17068,30002822700,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:11.172] 6,17069,30002823444,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:11.172] 6,17070,30002824167,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:11.172] 6,17071,30002824913,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:11.172] 6,17072,30002825619,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:11.172] 6,17073,30002827836,-;md/raid:mdX: raid level 5 active with 11 out of 11 devices, algorithm 2
[ 0:11.172] 6,17074,30004124627,-;dm-41: detected capacity change from 30720 to 20480
[ 0:11.172] 6,17075,30005415653,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:11.172] 6,17076,30005416418,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:11.172] 6,17077,30005417338,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:11.172] 6,17078,30005418354,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:11.172] 6,17079,30005419053,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:11.172] 6,17080,30005419743,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:11.172] 6,17081,30005420515,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:11.172] 6,17082,30005421263,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:11.172] 6,17083,30005421981,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:11.172] 6,17084,30005422720,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:11.172] 6,17085,30005423413,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:11.172] 6,17086,30005424138,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:11.172] 6,17087,30005424847,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:11.172] 6,17088,30005425731,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:11.172] 6,17089,30005426522,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:11.172] 6,17090,30005427252,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:11.172] 6,17091,30005430243,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:11.172] 3,17092,30006360584,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:11.172] 3,17093,30006361406,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:11.172] 3,17094,30006362352,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:11.172] 3,17095,30006363459,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:11.172]   WARNING: This metadata update is NOT backed up.
[ 0:11.178] check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
[ 0:11.199] #lvconvert-raid-reshape-stripes-load-reload.sh:68+ check lv_first_seg_field LVMTEST500118vg/LV1 segtype raid5_ls
[ 0:11.199] check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
[ 0:11.304] #lvconvert-raid-reshape-stripes-load-reload.sh:69+ check lv_first_seg_field LVMTEST500118vg/LV1 stripesize 64.00k
[ 0:11.304] check lv_first_seg_field $vg/$lv1 data_stripes 15
[ 0:11.378] 6,17096,30006769084,-;md: reshape of RAID array mdX
[ 0:11.378] #lvconvert-raid-reshape-stripes-load-reload.sh:70+ check lv_first_seg_field LVMTEST500118vg/LV1 data_stripes 15
[ 0:11.378] check lv_first_seg_field $vg/$lv1 stripes 16
[ 0:11.456] #lvconvert-raid-reshape-stripes-load-reload.sh:71+ check lv_first_seg_field LVMTEST500118vg/LV1 stripes 16
[ 0:11.456] 
[ 0:11.543] # Reload table during reshape to test for data corruption
[ 0:11.543] case "$(uname -r)" in
[ 0:11.543]   5.[89]*|5.1[012].*|3.10.0-862*|4.18.0-*.el8*)
[ 0:11.543] 	should not echo "Skipping table reload test on on unfixed kernel!!!" ;;
[ 0:11.543]   *)
[ 0:11.543] for i in {0..5}
[ 0:11.543] do
[ 0:11.543] 	dmsetup table $vg-$lv1|dmsetup load $vg-$lv1
[ 0:11.543] 	dmsetup suspend --noflush $vg-$lv1
[ 0:11.543] 	dmsetup resume $vg-$lv1
[ 0:11.543] 	sleep .5
[ 0:11.543] done
[ 0:11.543] 
[ 0:11.543] esac
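
The quoted loop above is the core of the reproducer: reloading an identical table suspends and resumes the dm-raid target while the md reshape is still in flight. A minimal sketch of that loop as a standalone function follows; the function name, the device-name parameter, and the `DMSETUP` indirection (so a stub can be substituted for dry runs) are illustrative additions, not part of the lvm2 test itself.

```shell
#!/bin/sh
# Sketch of the lvm2 test's table-reload loop (hypothetical helper;
# the DMSETUP override is an assumption for illustration, letting a
# stub stand in for dmsetup on machines without device-mapper).
DMSETUP=${DMSETUP:-dmsetup}

reload_during_reshape() {
    dev=$1
    for i in 0 1 2 3 4 5; do
        # Reload the identical table; this re-creates the raid target
        # while the md reshape is still running.
        $DMSETUP table "$dev" | $DMSETUP load "$dev"
        # --noflush: suspend without waiting for outstanding I/O --
        # this is the window in which v6.7 kernels could restart the
        # reshape unexpectedly and corrupt data.
        $DMSETUP suspend --noflush "$dev"
        $DMSETUP resume "$dev"
        sleep .5
    done
}
```

Each iteration of this loop corresponds to one "reshape interrupted" / "reshape of RAID array mdX" pair in the dmesg output below.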
[ 0:11.543] #lvconvert-raid-reshape-stripes-load-reload.sh:74+ case "$(uname -r)" in
[ 0:11.544] ##lvconvert-raid-reshape-stripes-load-reload.sh:74+ uname -r
[ 0:11.544] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:11.567] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:11.568] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:11.568] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:11.603] 6,17097,30006984799,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:11.603] 6,17098,30006985564,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:11.603] 6,17099,30006986463,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:11.603] 6,17100,30006987501,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:11.603] 6,17101,30006988250,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:11.604] 6,17102,30006988988,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:11.604] 6,17103,30006989747,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:11.604] 6,17104,30006990481,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:11.604] 6,17105,30006991229,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:11.604] 6,17106,30006991977,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:11.604] 6,17107,30006992711,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:11.604] 6,17108,30006993432,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:11.604] 6,17109,30006994192,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:11.604] 6,17110,30006994914,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:11.604] 6,17111,30006995670,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:11.604] 6,17112,30006996371,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:11.604] 6,17113,30006999783,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:11.604] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:13.909] 6,17114,30007428976,-;md: mdX: reshape interrupted.
[ 0:13.909] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:14.336] 3,17115,30009540918,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:14.336] 3,17116,30009541758,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:14.336] 3,17117,30009542650,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:14.336] 3,17118,30009543492,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:14.336] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:14.839] 6,17119,30009932832,-;md: reshape of RAID array mdX
[ 0:14.839] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:14.840] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:14.840] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:14.884] 6,17120,30010265679,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:14.884] 6,17121,30010266406,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:14.884] 6,17122,30010267319,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:14.884] 6,17123,30010268077,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:14.884] 6,17124,30010268805,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:14.884] 6,17125,30010269500,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:14.884] 6,17126,30010270233,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:14.884] 6,17127,30010270947,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:14.884] 6,17128,30010271662,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:14.884] 6,17129,30010272523,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:14.884] 6,17130,30010273444,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:14.884] 6,17131,30010274166,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:14.884] 6,17132,30010274922,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:14.884] 6,17133,30010275714,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:14.884] 6,17134,30010276417,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:14.884] 6,17135,30010277153,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:14.884] 6,17136,30010280355,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:14.884] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:16.637] 6,17137,30010361455,-;md: mdX: reshape interrupted.
[ 0:16.637] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:17.064] 3,17138,30012257019,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:17.064] 3,17139,30012257974,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:17.064] 3,17140,30012258833,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:17.064] 3,17141,30012259989,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:17.064] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:17.567] 6,17142,30012660971,-;md: reshape of RAID array mdX
[ 0:17.567] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:17.568] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:17.568] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:17.596] 6,17143,30012978632,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:17.596] 6,17144,30012979378,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:17.596] 6,17145,30012980100,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:17.596] 6,17146,30012980849,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:17.596] 6,17147,30012981544,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:17.596] 6,17148,30012982294,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:17.596] 6,17149,30012983120,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:17.596] 6,17150,30012983878,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:17.596] 6,17151,30012984576,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:17.596] 6,17152,30012985334,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:17.596] 6,17153,30012986029,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:17.596] 6,17154,30012986797,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:17.596] 6,17155,30012987500,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:17.596] 6,17156,30012988256,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:17.596] 6,17157,30012988987,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:17.596] 6,17158,30012989698,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:17.596] 6,17159,30012992381,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:17.596] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:19.993] 6,17160,30013501310,-;md: mdX: reshape interrupted.
[ 0:19.993] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:20.416] 3,17161,30015613049,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:20.416] 3,17162,30015613905,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:20.416] 3,17163,30015614847,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:20.416] 3,17164,30015615632,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:20.416] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:20.919] 6,17165,30016013020,-;md: reshape of RAID array mdX
[ 0:20.919] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:20.920] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:20.920] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:20.961] 6,17166,30016342829,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:20.961] 6,17167,30016343551,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:20.961] 6,17168,30016344518,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:20.961] 6,17169,30016345498,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:20.961] 6,17170,30016346194,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:20.961] 6,17171,30016346879,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:20.961] 6,17172,30016347589,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:20.961] 6,17173,30016348355,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:20.961] 6,17174,30016349071,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:20.961] 6,17175,30016349861,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:20.961] 6,17176,30016350608,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:20.961] 6,17177,30016351359,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:20.961] 6,17178,30016352070,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:20.961] 6,17179,30016352776,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:20.961] 6,17180,30016353487,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:20.961] 6,17181,30016354225,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:20.961] 6,17182,30016357360,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:20.961] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:23.149] 6,17183,30016665872,-;md: mdX: reshape interrupted.
[ 0:23.149] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:23.576] 3,17184,30018772956,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:23.576] 3,17185,30018773803,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:23.576] 3,17186,30018774727,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:23.576] 3,17187,30018775862,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:23.576] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:24.079] 6,17188,30019173004,-;md: reshape of RAID array mdX
[ 0:24.079] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:24.080] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:24.080] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:24.120] 6,17189,30019501847,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:24.120] 6,17190,30019502690,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:24.120] 6,17191,30019503725,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:24.120] 6,17192,30019504500,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:24.120] 6,17193,30019505259,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:24.120] 6,17194,30019505988,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:24.120] 6,17195,30019506721,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:24.120] 6,17196,30019507436,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:24.120] 6,17197,30019508142,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:24.120] 6,17198,30019508866,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:24.120] 6,17199,30019509566,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:24.120] 6,17200,30019510306,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:24.120] 6,17201,30019511022,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:24.120] 6,17202,30019511751,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:24.120] 6,17203,30019512508,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:24.120] 6,17204,30019513221,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:24.120] 6,17205,30019516335,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:24.120] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:25.881] 6,17206,30019601478,-;md: mdX: reshape interrupted.
[ 0:25.881] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:26.304] 3,17207,30021496938,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:26.304] 3,17208,30021497745,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:26.304] 3,17209,30021498689,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:26.304] 3,17210,30021499814,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:26.304] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:26.806] 6,17211,30021901052,-;md: reshape of RAID array mdX
[ 0:26.806] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:26.807] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:26.808] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:26.840] 6,17212,30022221732,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:26.840] 6,17213,30022222504,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:26.840] 6,17214,30022223360,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:26.840] 6,17215,30022224100,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:26.840] 6,17216,30022224812,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:26.840] 6,17217,30022225555,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:26.840] 6,17218,30022226304,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:26.840] 6,17219,30022227114,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:26.840] 6,17220,30022227891,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:26.840] 6,17221,30022228595,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:26.840] 6,17222,30022229350,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:26.840] 6,17223,30022230121,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:26.840] 6,17224,30022230846,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:26.840] 6,17225,30022231689,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:26.840] 6,17226,30022232713,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:26.840] 6,17227,30022233457,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:26.840] 6,17228,30022236365,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:26.840] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:29.221] 6,17229,30022534164,-;md: mdX: reshape done.
[ 0:29.221] 6,17230,30023572945,-;dm-41: detected capacity change from 30720 to 20480
[ 0:29.221] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:29.648] 3,17231,30024840682,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:29.648] 3,17232,30024841543,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:29.648] 3,17233,30024842483,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:29.648] 3,17234,30024843308,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:29.648] 
[ 0:30.150] aux delay_dev "$dev2" 0
[ 0:30.150] 6,17235,30025245116,-;md: reshape of RAID array mdX
[ 0:30.150] #lvconvert-raid-reshape-stripes-load-reload.sh:88+ aux delay_dev /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv2 0
[ 0:30.151] 
[ 0:30.194] kill -9 %% || true
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:90+ kill -9 %%
[ 0:30.194] /home/lvm2/test/shell/lvconvert-raid-reshape-stripes-load-reload.sh: line 90: kill: %%: no such job
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:90+ true
[ 0:30.194] wait
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:91+ wait
[ 0:30.194] 
[ 0:30.194] checksum_ "$mount_dir/random" >MD5_new
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:93+ checksum_ mnt/random
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:34+ md5sum mnt/random
[ 0:30.195] #lvconvert-raid-reshape-stripes-load-reload.sh:34+ cut -f1 '-d '
[ 0:30.196] 
[ 0:30.220] umount "$mount_dir"
[ 0:30.220] #lvconvert-raid-reshape-stripes-load-reload.sh:95+ umount mnt
[ 0:30.220] 
[ 0:30.270] fsck -fn "$DM_DEV_DIR/$vg/$lv1"
[ 0:30.270] 6,17236,30025620079,-;md: mdX: reshape done.
[ 0:30.270] 6,17237,30025648112,-;dm-41: detected capacity change from 30720 to 20480
[ 0:30.270] 6,17238,30025666627,-;EXT4-fs (dm-41): unmounting filesystem 84c2201e-4589-48a8-ba44-019d481366f2.
[ 0:30.270] #lvconvert-raid-reshape-stripes-load-reload.sh:97+ fsck -fn /tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg/LV1
[ 0:30.271] fsck from util-linux 2.37.4
[ 0:30.285] e2fsck 1.46.5 (30-Dec-2021)
[ 0:30.376] Pass 1: Checking inodes, blocks, and sizes
[ 0:30.378] Pass 2: Checking directory structure
[ 0:30.379] Entry 'random' in / (2) references inode 12 found in group 0's unused inodes area.
[ 0:30.379] Fix? no
[ 0:30.379] 
[ 0:30.379] Entry 'random' in / (2) has deleted/unused inode 12.  Clear? no
[ 0:30.379] 
[ 0:30.379] Pass 3: Checking directory connectivity
[ 0:30.379] Pass 4: Checking reference counts
[ 0:30.379] Pass 5: Checking group summary information
[ 0:30.380] Block bitmap differences:  -(1920--1935) -(2560--5631) -(8289--9280) -(10209--10224)
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Free blocks count wrong for group #0 (6429, counted=3341).
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Free blocks count wrong for group #1 (1966, counted=958).
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Inode bitmap differences:  -12
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Free inodes count wrong for group #0 (1269, counted=1268).
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Inode bitmap differences: Group 0 inode bitmap does not match checksum.
[ 0:30.381] IGNORED.
[ 0:30.381] Block bitmap differences: Group 0 block bitmap does not match checksum.
[ 0:30.381] IGNORED.
[ 0:30.381] Group 1 block bitmap does not match checksum.
[ 0:30.381] IGNORED.
[ 0:30.381] 
[ 0:30.381] /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118vg-LV1: ********** WARNING: Filesystem still has errors **********
[ 0:30.381] 
[ 0:30.381] /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118vg-LV1: 12/2560 files (0.0% non-contiguous), 5941/10240 blocks
[ 0:30.381] set +vx; STACKTRACE; set -vx
[ 0:30.382] ##lvconvert-raid-reshape-stripes-load-reload.sh:97+ set +vx
[ 0:30.383] ## - /home/lvm2/test/shell/lvconvert-raid-reshape-stripes-load-reload.sh:97
[ 0:30.383] ## 1 STACKTRACE() called from /home/lvm2/test/shell/lvconvert-raid-reshape-stripes-load-reload.sh:97
[ 0:30.383] <======== Info ========>
[ 0:30.510] ## DMINFO:   Name                          Maj Min Stat Open Targ Event  UUID                                                                
[ 0:30.689] ## DMINFO:   LVMTEST500118pv1              254   3 L--w    2    1      0 TEST-LVMTEST500118pv1                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv10             254  12 L--w    2    1      0 TEST-LVMTEST500118pv10                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv11             254  13 L--w    2    1      0 TEST-LVMTEST500118pv11                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv12             254  14 L--w    2    1      0 TEST-LVMTEST500118pv12                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv13             254  15 L--w    2    1      0 TEST-LVMTEST500118pv13                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv14             254  16 L--w    2    1      0 TEST-LVMTEST500118pv14                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv15             254  17 L--w    2    1      0 TEST-LVMTEST500118pv15                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv16             254  18 L--w    2    1      0 TEST-LVMTEST500118pv16                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv2              254   4 L--w    2    1      0 TEST-LVMTEST500118pv2                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv3              254   5 L--w    2    1      0 TEST-LVMTEST500118pv3                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv4              254   6 L--w    2    1      0 TEST-LVMTEST500118pv4                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv5              254   7 L--w    2    1      0 TEST-LVMTEST500118pv5                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv6              254   8 L--w    2    1      0 TEST-LVMTEST500118pv6                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv7              254   9 L--w    2    1      0 TEST-LVMTEST500118pv7                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv8              254  10 L--w    2    1      0 TEST-LVMTEST500118pv8                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv9              254  11 L--w    2    1      0 TEST-LVMTEST500118pv9                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1           254  41 L--w    0    1     17 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtAkuGMPeDwwMOqVi8hTIbxgptEzZsufbB
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_0  254  20 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtpD6WQF1CobU3tkwiFxBT1XBcHwerULBu
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_1  254  22 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtqnoSSKda19GKA7WsZG1I9QA04OBf1B0y
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_10 254  40 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtTKg0whlxOIMgMzsuMlkxJ3KSb8XzSEWi
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_11 254  43 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt70tvr9yqHxN2GWU7yxH3YPL3k4xwA63I
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_12 254  45 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt8mxaG5SGE32WHeEosPk8YRzjdhgnimXj
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_13 254  47 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtYFS3S7q3tv79eaD0b9V2dvhXfFH5AzCe
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_14 254  49 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtxnwk2sgs9H0iRYfhExHQHhj8FeUZLHDK
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_15 254  51 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtvULsrV4VKIQovvvPUaqdvK6zXKegCBvX
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_2  254  24 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtKj25JNrKV6SBVwMzsGVHpu3vYfecbUAT
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_3  254  26 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtFC3gDcqmY50hGMoUH5DmBdP1TwoaWdfa
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_4  254  28 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmty3RNUNUzcv7DcTiHTUOZMy3koAVhD0sc
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_5  254  30 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt5LvpjI7Lxa8BYrtk6O3jrFwjfXKjPmuU
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_6  254  32 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt6frue4ylGjtrPvQcVzeqnnJmBVHYiJLH
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_7  254  34 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtchsDOVA6wUZnaa6VF0sVfj3bvta4bRru
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_8  254  36 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtCWR2oKVdJld9Vfbzbyod2jo8EQjVhZpc
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_9  254  38 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt8IuOMVo5Dq0bbWuODFimVzGzlTUAFEvz
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_0   254  19 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt9VLfstRkWdjLYSEeOBm2XUe6tyWxOfOw
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_1   254  21 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1Kuhl8cjecGLPkGpXK3swZjCtmzaoffu
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_10  254  39 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1zmjff3ohmUgjtiD5skuJVn485x6iFw1
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_11  254  42 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtYUT44N6SJiqq8cbNIXG5Kxi1XS1MHy7m
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_12  254  44 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtulWH74owXTdv6w9Lu7vy83W3oYxwff5L
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_13  254  46 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtajpWmgqoHqF78tHKorIemrhNI0NB7sj2
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_14  254  48 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtdPv93N0GAHAOf7VdiCCnSjCOmpUHtMuq
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_15  254  50 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtdCmqb463WIoq8Jf7vmvbwKinVFX0kDSA
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_2   254  23 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1nvTeTVDkLiF009Z2hQzmZsoHQwmyOuU
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_3   254  25 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt9o0uj9s6CL0ZYEhKh2xg4Psukriei6bz
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_4   254  27 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtzzmLoHtGlsUTSMLMeIMofmzeeUmVCE5k
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_5   254  29 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtn0CwCkqlpgGgrYbEHSdKT6WDuoRwVm9y
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_6   254  31 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtjP5gBQ3S1Q3i0wDc6vWibWQpvx1p9tnR
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_7   254  33 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtqUglRG9A9AL8V81RCER6HpY5nWeL89jG
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_8   254  35 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtvHThjBcpzz3SsW1XJcxNj5ITg9qbmsw8
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_9   254  37 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtHLzF2FgdKKO9QnZCW0Tm65J2iFuP0cqi
[ 0:30.690] <======== Active table ========>
[ 0:30.691] ## DMTABLE:  LVMTEST500118pv1: 0 65536 linear 1:0 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv10: 0 65536 linear 1:0 591872
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv11: 0 65536 linear 1:0 657408
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv12: 0 65536 linear 1:0 722944
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv13: 0 65536 linear 1:0 788480
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv14: 0 65536 linear 1:0 854016
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv15: 0 65536 linear 1:0 919552
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv16: 0 65536 linear 1:0 985088
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv2: 0 65536 linear 1:0 67584
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv3: 0 65536 linear 1:0 133120
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv4: 0 65536 linear 1:0 198656
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv5: 0 65536 linear 1:0 264192
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv6: 0 65536 linear 1:0 329728
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv7: 0 65536 linear 1:0 395264
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv8: 0 65536 linear 1:0 460800
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv9: 0 65536 linear 1:0 526336
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1: 0 30720 raid raid5_ls 3 128 region_size 1024 16 254:19 254:20 254:21 254:22 254:23 254:24 254:25 254:26 254:27 254:28 254:29 254:30 254:31 254:32 254:33 254:34 254:35 254:36 254:37 254:38 254:39 254:40 254:42 254:43 254:44 254:45 254:46 254:47 254:48 254:49 254:50 254:51
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_0: 0 2048 linear 254:3 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_0: 2048 2048 linear 254:3 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_1: 0 2048 linear 254:4 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_1: 2048 2048 linear 254:4 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_10: 0 2048 linear 254:13 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_10: 2048 2048 linear 254:13 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_11: 0 2048 linear 254:14 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_11: 2048 2048 linear 254:14 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_12: 0 2048 linear 254:15 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_12: 2048 2048 linear 254:15 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_13: 0 2048 linear 254:16 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_13: 2048 2048 linear 254:16 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_14: 0 2048 linear 254:17 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_14: 2048 2048 linear 254:17 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_15: 0 2048 linear 254:18 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_15: 2048 2048 linear 254:18 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_2: 0 2048 linear 254:5 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_2: 2048 2048 linear 254:5 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_3: 0 2048 linear 254:6 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_3: 2048 2048 linear 254:6 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_4: 0 2048 linear 254:7 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_4: 2048 2048 linear 254:7 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_5: 0 2048 linear 254:8 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_5: 2048 2048 linear 254:8 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_6: 0 2048 linear 254:9 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_6: 2048 2048 linear 254:9 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_7: 0 2048 linear 254:10 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_7: 2048 2048 linear 254:10 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_8: 0 2048 linear 254:11 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_8: 2048 2048 linear 254:11 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_9: 0 2048 linear 254:12 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_9: 2048 2048 linear 254:12 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_0: 0 2048 linear 254:3 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_1: 0 2048 linear 254:4 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_10: 0 2048 linear 254:13 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_11: 0 2048 linear 254:14 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_12: 0 2048 linear 254:15 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_13: 0 2048 linear 254:16 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_14: 0 2048 linear 254:17 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_15: 0 2048 linear 254:18 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_2: 0 2048 linear 254:5 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_3: 0 2048 linear 254:6 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_4: 0 2048 linear 254:7 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_5: 0 2048 linear 254:8 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_6: 0 2048 linear 254:9 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_7: 0 2048 linear 254:10 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_8: 0 2048 linear 254:11 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_9: 0 2048 linear 254:12 2048
[ 0:30.696] <======== Inactive table ========>
[ 0:30.697] ## DMITABLE: LVMTEST500118pv1: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv10: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv11: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv12: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv13: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv14: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv15: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv16: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv2: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv3: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv4: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv5: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv6: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv7: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv8: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv9: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_0: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_1: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_10: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_11: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_12: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_13: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_14: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_15: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_2: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_3: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_4: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_5: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_6: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_7: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_8: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_9: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_0: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_1: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_10: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_11: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_12: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_13: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_14: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_15: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_2: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_3: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_4: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_5: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_6: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_7: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_8: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_9: 
[ 0:30.701] <======== Status ========>
[ 0:30.702] ## DMSTATUS: LVMTEST500118pv1: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv10: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv11: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv12: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv13: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv14: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv15: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv16: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv2: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv3: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv4: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv5: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv6: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv7: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv8: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv9: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1: 0 30720 raid raid5_ls 16 AAAAAAAAAAAAAAAA 2048/2048 idle 0 0 -
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_0: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_0: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_1: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_1: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_10: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_10: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_11: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_11: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_12: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_12: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_13: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_13: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_14: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_14: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_15: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_15: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_2: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_2: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_3: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_3: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_4: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_4: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_5: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_5: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_6: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_6: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_7: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_7: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_8: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_8: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_9: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_9: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_0: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_1: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_10: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_11: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_12: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_13: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_14: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_15: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_2: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_3: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_4: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_5: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_6: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_7: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_8: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_9: 0 2048 linear 
[ 0:30.706] <======== Tree ========>
[ 0:30.707] ## DMTREE:   LVMTEST500118vg-LV1 (254:41)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_15 (254:51)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv16 (254:18)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_15 (254:50)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv16 (254:18)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_14 (254:49)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv15 (254:17)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_14 (254:48)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv15 (254:17)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_13 (254:47)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv14 (254:16)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_13 (254:46)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv14 (254:16)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_12 (254:45)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv13 (254:15)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_12 (254:44)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv13 (254:15)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_11 (254:43)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv12 (254:14)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_11 (254:42)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv12 (254:14)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_10 (254:40)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv11 (254:13)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_10 (254:39)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv11 (254:13)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_9 (254:38)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv10 (254:12)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_9 (254:37)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv10 (254:12)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_8 (254:36)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv9 (254:11)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_8 (254:35)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv9 (254:11)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_7 (254:34)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv8 (254:10)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_7 (254:33)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv8 (254:10)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_6 (254:32)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv7 (254:9)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_6 (254:31)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv7 (254:9)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_5 (254:30)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv6 (254:8)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_5 (254:29)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv6 (254:8)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_4 (254:28)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv5 (254:7)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_4 (254:27)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv5 (254:7)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_3 (254:26)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv4 (254:6)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_3 (254:25)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv4 (254:6)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_2 (254:24)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv3 (254:5)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_2 (254:23)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv3 (254:5)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_1 (254:22)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv2 (254:4)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_1 (254:21)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv2 (254:4)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_0 (254:20)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv1 (254:3)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    `-LVMTEST500118vg-LV1_rmeta_0 (254:19)
[ 0:30.711] ## DMTREE:       `-LVMTEST500118pv1 (254:3)
[ 0:30.713] ## DMTREE:          `- (1:0)
[ 0:30.713] ## DMTREE:   rhel_hp--dl380eg8--02-home (254:2)
[ 0:30.713] ## DMTREE:    `- (8:2)
[ 0:30.713] ## DMTREE:   rhel_hp--dl380eg8--02-root (254:0)
[ 0:30.713] ## DMTREE:    `- (8:2)
[ 0:30.713] ## DMTREE:   rhel_hp--dl380eg8--02-swap (254:1)
[ 0:30.713] ## DMTREE:    `- (8:2)
[ 0:30.713] <======== Recursive list of /tmp/LVMTEST500118.AxR1K9qRUi/dev ========>
[ 0:30.713] ## LS_LR:	/tmp/LVMTEST500118.AxR1K9qRUi/dev:
[ 0:30.771] ## LS_LR:	total 4
[ 0:30.771] ## LS_LR:	drwxr-xr-x. 2 root root   17 Mar  3 06:23 LVMTEST500118vg
[ 0:30.771] ## LS_LR:	drwxr-xr-x. 2 root root 4096 Mar  3 06:23 mapper
[ 0:30.771] ## LS_LR:	crw-r--r--. 1 root root 1, 3 Mar  3 06:23 testnull
[ 0:30.771] ## LS_LR:	
[ 0:30.771] ## LS_LR:	/tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg:
[ 0:30.771] ## LS_LR:	total 0
[ 0:30.771] ## LS_LR:	lrwxrwxrwx. 1 root root 60 Mar  3 06:23 LV1 -> /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118vg-LV1
[ 0:30.771] ## LS_LR:	
[ 0:30.771] ## LS_LR:	/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper:
[ 0:30.771] ## LS_LR:	total 0
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   3 Mar  3 06:23 LVMTEST500118pv1
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  12 Mar  3 06:23 LVMTEST500118pv10
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  13 Mar  3 06:23 LVMTEST500118pv11
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  14 Mar  3 06:23 LVMTEST500118pv12
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  15 Mar  3 06:23 LVMTEST500118pv13
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  16 Mar  3 06:23 LVMTEST500118pv14
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  17 Mar  3 06:23 LVMTEST500118pv15
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  18 Mar  3 06:23 LVMTEST500118pv16
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   4 Mar  3 06:23 LVMTEST500118pv2
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   5 Mar  3 06:23 LVMTEST500118pv3
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   6 Mar  3 06:23 LVMTEST500118pv4
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   7 Mar  3 06:23 LVMTEST500118pv5
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   8 Mar  3 06:23 LVMTEST500118pv6
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   9 Mar  3 06:23 LVMTEST500118pv7
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  10 Mar  3 06:23 LVMTEST500118pv8
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  11 Mar  3 06:23 LVMTEST500118pv9
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  41 Mar  3 06:23 LVMTEST500118vg-LV1
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  20 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_0
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  22 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_1
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  40 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_10
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  43 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_11
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  45 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_12
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  47 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_13
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  49 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_14
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  51 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_15
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  24 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_2
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  26 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_3
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  28 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_4
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  30 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_5
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  32 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_6
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  34 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_7
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  36 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_8
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  38 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_9
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  19 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_0
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  21 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_1
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  39 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_10
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  42 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_11
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  44 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_12
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  46 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_13
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  48 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_14
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  50 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_15
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  23 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_2
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  25 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_3
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  27 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_4
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  29 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_5
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  31 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_6
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  33 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_7
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  35 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_8
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  37 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_9
[ 0:30.771] ## LS_LR:	crw-------. 1 root root  10, 236 Mar  3 06:23 control
[ 0:30.771] <======== Udev DB content ========>
[ 0:30.772] ## UDEV:	P: /devices/virtual/block/dm-0
[ 0:30.943] ## UDEV:	M: dm-0
[ 0:30.943] ## UDEV:	R: 0
[ 0:30.943] ## UDEV:	U: block
[ 0:30.943] ## UDEV:	T: disk
[ 0:30.943] ## UDEV:	D: b 254:0
[ 0:30.943] ## UDEV:	N: dm-0
[ 0:30.943] ## UDEV:	L: 0
[ 0:30.943] ## UDEV:	S: disk/by-uuid/4617d6fe-d894-407c-82dd-d048e4ce4d2e
[ 0:30.943] ## UDEV:	S: rhel_hp-dl380eg8-02/root
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-name-rhel_hp--dl380eg8--02-root
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrc6LcfxqBnX4ICmdeGDSmbNqMr7xrFpXN1
[ 0:30.943] ## UDEV:	S: mapper/rhel_hp--dl380eg8--02-root
[ 0:30.943] ## UDEV:	Q: 2
[ 0:30.943] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-0
[ 0:30.943] ## UDEV:	E: SUBSYSTEM=block
[ 0:30.943] ## UDEV:	E: DEVNAME=/dev/dm-0
[ 0:30.943] ## UDEV:	E: DEVTYPE=disk
[ 0:30.943] ## UDEV:	E: DISKSEQ=2
[ 0:30.943] ## UDEV:	E: MAJOR=254
[ 0:30.943] ## UDEV:	E: MINOR=0
[ 0:30.943] ## UDEV:	E: USEC_INITIALIZED=13273392
[ 0:30.943] ## UDEV:	E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:30.943] ## UDEV:	E: DM_ACTIVATION=1
[ 0:30.943] ## UDEV:	E: DM_NAME=rhel_hp--dl380eg8--02-root
[ 0:30.943] ## UDEV:	E: DM_UUID=LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrc6LcfxqBnX4ICmdeGDSmbNqMr7xrFpXN1
[ 0:30.943] ## UDEV:	E: DM_SUSPENDED=0
[ 0:30.943] ## UDEV:	E: DM_VG_NAME=rhel_hp-dl380eg8-02
[ 0:30.943] ## UDEV:	E: DM_LV_NAME=root
[ 0:30.943] ## UDEV:	E: ID_FS_UUID=4617d6fe-d894-407c-82dd-d048e4ce4d2e
[ 0:30.943] ## UDEV:	E: ID_FS_UUID_ENC=4617d6fe-d894-407c-82dd-d048e4ce4d2e
[ 0:30.943] ## UDEV:	E: ID_FS_SIZE=75094818816
[ 0:30.943] ## UDEV:	E: ID_FS_LASTBLOCK=18350080
[ 0:30.943] ## UDEV:	E: ID_FS_BLOCKSIZE=4096
[ 0:30.943] ## UDEV:	E: ID_FS_TYPE=xfs
[ 0:30.943] ## UDEV:	E: ID_FS_USAGE=filesystem
[ 0:30.943] ## UDEV:	E: SYSTEMD_READY=1
[ 0:30.943] ## UDEV:	E: DEVLINKS=/dev/disk/by-uuid/4617d6fe-d894-407c-82dd-d048e4ce4d2e /dev/rhel_hp-dl380eg8-02/root /dev/disk/by-id/dm-name-rhel_hp--dl380eg8--02-root /dev/disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrc6LcfxqBnX4ICmdeGDSmbNqMr7xrFpXN1 /dev/mapper/rhel_hp--dl380eg8--02-root
[ 0:30.943] ## UDEV:	E: TAGS=:systemd:
[ 0:30.943] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:30.943] ## UDEV:	
[ 0:30.943] ## UDEV:	P: /devices/virtual/block/dm-1
[ 0:30.943] ## UDEV:	M: dm-1
[ 0:30.943] ## UDEV:	R: 1
[ 0:30.943] ## UDEV:	U: block
[ 0:30.943] ## UDEV:	T: disk
[ 0:30.943] ## UDEV:	D: b 254:1
[ 0:30.943] ## UDEV:	N: dm-1
[ 0:30.943] ## UDEV:	L: 0
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrcWcL4fW2oxspbuP1xUy2h8eewTnEu8iDo
[ 0:30.943] ## UDEV:	S: rhel_hp-dl380eg8-02/swap
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-name-rhel_hp--dl380eg8--02-swap
[ 0:30.943] ## UDEV:	S: mapper/rhel_hp--dl380eg8--02-swap
[ 0:30.943] ## UDEV:	S: disk/by-uuid/1e83e70a-06a7-4200-a043-be424fe52840
[ 0:30.943] ## UDEV:	Q: 3
[ 0:30.943] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-1
[ 0:30.943] ## UDEV:	E: SUBSYSTEM=block
[ 0:30.943] ## UDEV:	E: DEVNAME=/dev/dm-1
[ 0:30.943] ## UDEV:	E: DEVTYPE=disk
[ 0:30.943] ## UDEV:	E: DISKSEQ=3
[ 0:30.943] ## UDEV:	E: MAJOR=254
[ 0:30.943] ## UDEV:	E: MINOR=1
[ 0:30.943] ## UDEV:	E: USEC_INITIALIZED=13441256
[ 0:30.943] ## UDEV:	E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:30.943] ## UDEV:	E: DM_ACTIVATION=1
[ 0:30.943] ## UDEV:	E: DM_NAME=rhel_hp--dl380eg8--02-swap
[ 0:30.943] ## UDEV:	E: DM_UUID=LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrcWcL4fW2oxspbuP1xUy2h8eewTnEu8iDo
[ 0:30.943] ## UDEV:	E: DM_SUSPENDED=0
[ 0:30.943] ## UDEV:	E: DM_VG_NAME=rhel_hp-dl380eg8-02
[ 0:30.943] ## UDEV:	E: DM_LV_NAME=swap
[ 0:30.943] ## UDEV:	E: ID_FS_UUID=1e83e70a-06a7-4200-a043-be424fe52840
[ 0:30.943] ## UDEV:	E: ID_FS_UUID_ENC=1e83e70a-06a7-4200-a043-be424fe52840
[ 0:30.943] ## UDEV:	E: ID_FS_VERSION=1
[ 0:30.943] ## UDEV:	E: ID_FS_TYPE=swap
[ 0:30.943] ## UDEV:	E: ID_FS_USAGE=other
[ 0:30.943] ## UDEV:	E: SYSTEMD_READY=1
[ 0:30.943] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrcWcL4fW2oxspbuP1xUy2h8eewTnEu8iDo /dev/rhel_hp-dl380eg8-02/swap /dev/disk/by-id/dm-name-rhel_hp--dl380eg8--02-swap /dev/mapper/rhel_hp--dl380eg8--02-swap /dev/disk/by-uuid/1e83e70a-06a7-4200-a043-be424fe52840
[ 0:30.943] ## UDEV:	E: TAGS=:systemd:
[ 0:30.943] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:30.943] ## UDEV:	
[ 0:30.943] ## UDEV:	P: /devices/virtual/block/dm-10
[ 0:30.943] ## UDEV:	M: dm-10
[ 0:30.943] ## UDEV:	R: 10
[ 0:30.943] ## UDEV:	U: block
[ 0:30.943] ## UDEV:	T: disk
[ 0:30.943] ## UDEV:	D: b 254:10
[ 0:30.943] ## UDEV:	N: dm-10
[ 0:30.943] ## UDEV:	L: 0
[ 0:30.943] ## UDEV:	S: mapper/LVMTEST500118pv8
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv8
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv8
[ 0:30.943] ## UDEV:	Q: 28776
[ 0:30.943] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-10
[ 0:30.943] ## UDEV:	E: SUBSYSTEM=block
[ 0:30.943] ## UDEV:	E: DEVNAME=/dev/dm-10
[ 0:30.943] ## UDEV:	E: DEVTYPE=disk
[ 0:30.943] ## UDEV:	E: DISKSEQ=28776
[ 0:30.943] ## UDEV:	E: MAJOR=254
[ 0:30.943] ## UDEV:	E: MINOR=10
[ 0:30.943] ## UDEV:	E: USEC_INITIALIZED=29990200302
[ 0:30.943] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv8
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv8
[ 0:31.015] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.015] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.015] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.015] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv8 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv8 /dev/disk/by-id/dm-name-LVMTEST500118pv8
[ 0:31.015] ## UDEV:	E: TAGS=:systemd:
[ 0:31.015] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.015] ## UDEV:	
[ 0:31.015] ## UDEV:	P: /devices/virtual/block/dm-11
[ 0:31.015] ## UDEV:	M: dm-11
[ 0:31.015] ## UDEV:	R: 11
[ 0:31.015] ## UDEV:	U: block
[ 0:31.015] ## UDEV:	T: disk
[ 0:31.015] ## UDEV:	D: b 254:11
[ 0:31.015] ## UDEV:	N: dm-11
[ 0:31.015] ## UDEV:	L: 0
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv9
[ 0:31.015] ## UDEV:	S: mapper/LVMTEST500118pv9
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv9
[ 0:31.015] ## UDEV:	Q: 28777
[ 0:31.015] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-11
[ 0:31.015] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.015] ## UDEV:	E: DEVNAME=/dev/dm-11
[ 0:31.015] ## UDEV:	E: DEVTYPE=disk
[ 0:31.015] ## UDEV:	E: DISKSEQ=28777
[ 0:31.015] ## UDEV:	E: MAJOR=254
[ 0:31.015] ## UDEV:	E: MINOR=11
[ 0:31.015] ## UDEV:	E: USEC_INITIALIZED=29990202104
[ 0:31.015] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.015] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv9
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv9
[ 0:31.015] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.015] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.015] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.015] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv9 /dev/mapper/LVMTEST500118pv9 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv9
[ 0:31.015] ## UDEV:	E: TAGS=:systemd:
[ 0:31.015] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.015] ## UDEV:	
[ 0:31.015] ## UDEV:	P: /devices/virtual/block/dm-12
[ 0:31.015] ## UDEV:	M: dm-12
[ 0:31.015] ## UDEV:	R: 12
[ 0:31.015] ## UDEV:	U: block
[ 0:31.015] ## UDEV:	T: disk
[ 0:31.015] ## UDEV:	D: b 254:12
[ 0:31.015] ## UDEV:	N: dm-12
[ 0:31.015] ## UDEV:	L: 0
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv10
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv10
[ 0:31.015] ## UDEV:	S: mapper/LVMTEST500118pv10
[ 0:31.015] ## UDEV:	Q: 28778
[ 0:31.015] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-12
[ 0:31.015] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.015] ## UDEV:	E: DEVNAME=/dev/dm-12
[ 0:31.015] ## UDEV:	E: DEVTYPE=disk
[ 0:31.015] ## UDEV:	E: DISKSEQ=28778
[ 0:31.015] ## UDEV:	E: MAJOR=254
[ 0:31.015] ## UDEV:	E: MINOR=12
[ 0:31.015] ## UDEV:	E: USEC_INITIALIZED=29990203194
[ 0:31.015] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.015] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv10
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv10
[ 0:31.015] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.015] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.015] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.015] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv10 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv10 /dev/mapper/LVMTEST500118pv10
[ 0:31.015] ## UDEV:	E: TAGS=:systemd:
[ 0:31.015] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.015] ## UDEV:	
[ 0:31.015] ## UDEV:	P: /devices/virtual/block/dm-13
[ 0:31.015] ## UDEV:	M: dm-13
[ 0:31.015] ## UDEV:	R: 13
[ 0:31.015] ## UDEV:	U: block
[ 0:31.015] ## UDEV:	T: disk
[ 0:31.015] ## UDEV:	D: b 254:13
[ 0:31.015] ## UDEV:	N: dm-13
[ 0:31.015] ## UDEV:	L: 0
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv11
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv11
[ 0:31.015] ## UDEV:	S: mapper/LVMTEST500118pv11
[ 0:31.015] ## UDEV:	Q: 28779
[ 0:31.015] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-13
[ 0:31.015] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.015] ## UDEV:	E: DEVNAME=/dev/dm-13
[ 0:31.015] ## UDEV:	E: DEVTYPE=disk
[ 0:31.015] ## UDEV:	E: DISKSEQ=28779
[ 0:31.015] ## UDEV:	E: MAJOR=254
[ 0:31.015] ## UDEV:	E: MINOR=13
[ 0:31.015] ## UDEV:	E: USEC_INITIALIZED=29990204380
[ 0:31.015] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.015] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv11
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv11
[ 0:31.015] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.015] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.015] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.015] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv11 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv11 /dev/mapper/LVMTEST500118pv11
[ 0:31.015] ## UDEV:	E: TAGS=:systemd:
[ 0:31.015] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.015] ## UDEV:	
[ 0:31.015] ## UDEV:	P: /devices/virtual/block/dm-14
[ 0:31.015] ## UDEV:	M: dm-14
[ 0:31.015] ## UDEV:	R: 14
[ 0:31.015] ## UDEV:	U: block
[ 0:31.015] ## UDEV:	T: disk
[ 0:31.015] ## UDEV:	D: b 254:14
[ 0:31.015] ## UDEV:	N: dm-14
[ 0:31.015] ## UDEV:	L: 0
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv12
[ 0:31.015] ## UDEV:	S: mapper/LVMTEST500118pv12
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv12
[ 0:31.015] ## UDEV:	Q: 28780
[ 0:31.015] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-14
[ 0:31.015] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.015] ## UDEV:	E: DEVNAME=/dev/dm-14
[ 0:31.015] ## UDEV:	E: DEVTYPE=disk
[ 0:31.015] ## UDEV:	E: DISKSEQ=28780
[ 0:31.015] ## UDEV:	E: MAJOR=254
[ 0:31.015] ## UDEV:	E: MINOR=14
[ 0:31.015] ## UDEV:	E: USEC_INITIALIZED=29990205824
[ 0:31.015] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.015] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv12
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv12
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.087] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.087] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.087] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv12 /dev/mapper/LVMTEST500118pv12 /dev/disk/by-id/dm-name-LVMTEST500118pv12
[ 0:31.087] ## UDEV:	E: TAGS=:systemd:
[ 0:31.087] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.087] ## UDEV:	
[ 0:31.087] ## UDEV:	P: /devices/virtual/block/dm-15
[ 0:31.087] ## UDEV:	M: dm-15
[ 0:31.087] ## UDEV:	R: 15
[ 0:31.087] ## UDEV:	U: block
[ 0:31.087] ## UDEV:	T: disk
[ 0:31.087] ## UDEV:	D: b 254:15
[ 0:31.087] ## UDEV:	N: dm-15
[ 0:31.087] ## UDEV:	L: 0
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv13
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv13
[ 0:31.087] ## UDEV:	S: mapper/LVMTEST500118pv13
[ 0:31.087] ## UDEV:	Q: 28781
[ 0:31.087] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-15
[ 0:31.087] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.087] ## UDEV:	E: DEVNAME=/dev/dm-15
[ 0:31.087] ## UDEV:	E: DEVTYPE=disk
[ 0:31.087] ## UDEV:	E: DISKSEQ=28781
[ 0:31.087] ## UDEV:	E: MAJOR=254
[ 0:31.087] ## UDEV:	E: MINOR=15
[ 0:31.087] ## UDEV:	E: USEC_INITIALIZED=29990206972
[ 0:31.087] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.087] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.087] ## UDEV:	E: DM_NAME=LVMTEST500118pv13
[ 0:31.087] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv13
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.087] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.087] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.087] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv13 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv13 /dev/mapper/LVMTEST500118pv13
[ 0:31.087] ## UDEV:	E: TAGS=:systemd:
[ 0:31.087] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.087] ## UDEV:	
[ 0:31.087] ## UDEV:	P: /devices/virtual/block/dm-16
[ 0:31.087] ## UDEV:	M: dm-16
[ 0:31.087] ## UDEV:	R: 16
[ 0:31.087] ## UDEV:	U: block
[ 0:31.087] ## UDEV:	T: disk
[ 0:31.087] ## UDEV:	D: b 254:16
[ 0:31.087] ## UDEV:	N: dm-16
[ 0:31.087] ## UDEV:	L: 0
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv14
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv14
[ 0:31.087] ## UDEV:	S: mapper/LVMTEST500118pv14
[ 0:31.087] ## UDEV:	Q: 28782
[ 0:31.087] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-16
[ 0:31.087] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.087] ## UDEV:	E: DEVNAME=/dev/dm-16
[ 0:31.087] ## UDEV:	E: DEVTYPE=disk
[ 0:31.087] ## UDEV:	E: DISKSEQ=28782
[ 0:31.087] ## UDEV:	E: MAJOR=254
[ 0:31.087] ## UDEV:	E: MINOR=16
[ 0:31.087] ## UDEV:	E: USEC_INITIALIZED=29990207932
[ 0:31.087] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.087] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.087] ## UDEV:	E: DM_NAME=LVMTEST500118pv14
[ 0:31.087] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv14
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.087] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.087] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.087] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv14 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv14 /dev/mapper/LVMTEST500118pv14
[ 0:31.087] ## UDEV:	E: TAGS=:systemd:
[ 0:31.087] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.087] ## UDEV:	
[ 0:31.087] ## UDEV:	P: /devices/virtual/block/dm-17
[ 0:31.087] ## UDEV:	M: dm-17
[ 0:31.087] ## UDEV:	R: 17
[ 0:31.087] ## UDEV:	U: block
[ 0:31.087] ## UDEV:	T: disk
[ 0:31.087] ## UDEV:	D: b 254:17
[ 0:31.087] ## UDEV:	N: dm-17
[ 0:31.087] ## UDEV:	L: 0
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv15
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv15
[ 0:31.087] ## UDEV:	S: mapper/LVMTEST500118pv15
[ 0:31.087] ## UDEV:	Q: 28783
[ 0:31.087] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-17
[ 0:31.087] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.087] ## UDEV:	E: DEVNAME=/dev/dm-17
[ 0:31.087] ## UDEV:	E: DEVTYPE=disk
[ 0:31.087] ## UDEV:	E: DISKSEQ=28783
[ 0:31.087] ## UDEV:	E: MAJOR=254
[ 0:31.087] ## UDEV:	E: MINOR=17
[ 0:31.087] ## UDEV:	E: USEC_INITIALIZED=29990209086
[ 0:31.087] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.087] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.087] ## UDEV:	E: DM_NAME=LVMTEST500118pv15
[ 0:31.087] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv15
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.087] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.087] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.087] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv15 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv15 /dev/mapper/LVMTEST500118pv15
[ 0:31.087] ## UDEV:	E: TAGS=:systemd:
[ 0:31.087] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.087] ## UDEV:	
[ 0:31.087] ## UDEV:	P: /devices/virtual/block/dm-18
[ 0:31.087] ## UDEV:	M: dm-18
[ 0:31.087] ## UDEV:	R: 18
[ 0:31.087] ## UDEV:	U: block
[ 0:31.087] ## UDEV:	T: disk
[ 0:31.087] ## UDEV:	D: b 254:18
[ 0:31.087] ## UDEV:	N: dm-18
[ 0:31.087] ## UDEV:	L: 0
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv16
[ 0:31.087] ## UDEV:	S: mapper/LVMTEST500118pv16
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv16
[ 0:31.087] ## UDEV:	Q: 28784
[ 0:31.087] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-18
[ 0:31.087] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.087] ## UDEV:	E: DEVNAME=/dev/dm-18
[ 0:31.087] ## UDEV:	E: DEVTYPE=disk
[ 0:31.087] ## UDEV:	E: DISKSEQ=28784
[ 0:31.087] ## UDEV:	E: MAJOR=254
[ 0:31.087] ## UDEV:	E: MINOR=18
[ 0:31.087] ## UDEV:	E: USEC_INITIALIZED=29990210836
[ 0:31.087] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.087] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.087] ## UDEV:	E: DM_NAME=LVMTEST500118pv16
[ 0:31.087] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv16
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.141] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.141] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.141] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv16 /dev/mapper/LVMTEST500118pv16 /dev/disk/by-id/dm-name-LVMTEST500118pv16
[ 0:31.141] ## UDEV:	E: TAGS=:systemd:
[ 0:31.141] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.141] ## UDEV:	
[ 0:31.141] ## UDEV:	P: /devices/virtual/block/dm-19
[ 0:31.141] ## UDEV:	M: dm-19
[ 0:31.141] ## UDEV:	R: 19
[ 0:31.141] ## UDEV:	U: block
[ 0:31.141] ## UDEV:	T: disk
[ 0:31.141] ## UDEV:	D: b 254:19
[ 0:31.141] ## UDEV:	N: dm-19
[ 0:31.141] ## UDEV:	L: 0
[ 0:31.141] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_0
[ 0:31.141] ## UDEV:	Q: 28796
[ 0:31.141] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-19
[ 0:31.141] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.141] ## UDEV:	E: DEVNAME=/dev/dm-19
[ 0:31.141] ## UDEV:	E: DEVTYPE=disk
[ 0:31.141] ## UDEV:	E: DISKSEQ=28796
[ 0:31.141] ## UDEV:	E: MAJOR=254
[ 0:31.141] ## UDEV:	E: MINOR=19
[ 0:31.141] ## UDEV:	E: USEC_INITIALIZED=29990670220
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.141] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_0
[ 0:31.141] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt9VLfstRkWdjLYSEeOBm2XUe6tyWxOfOw
[ 0:31.141] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.141] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.141] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.141] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_0
[ 0:31.141] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.141] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_0
[ 0:31.141] ## UDEV:	E: TAGS=:systemd:
[ 0:31.141] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.141] ## UDEV:	
[ 0:31.141] ## UDEV:	P: /devices/virtual/block/dm-2
[ 0:31.141] ## UDEV:	M: dm-2
[ 0:31.141] ## UDEV:	R: 2
[ 0:31.141] ## UDEV:	U: block
[ 0:31.141] ## UDEV:	T: disk
[ 0:31.141] ## UDEV:	D: b 254:2
[ 0:31.141] ## UDEV:	N: dm-2
[ 0:31.141] ## UDEV:	L: 0
[ 0:31.141] ## UDEV:	S: disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrckxr7HBpmaqAZErFc2RDMo6POwdt2Hebp
[ 0:31.141] ## UDEV:	S: rhel_hp-dl380eg8-02/home
[ 0:31.141] ## UDEV:	S: mapper/rhel_hp--dl380eg8--02-home
[ 0:31.141] ## UDEV:	S: disk/by-uuid/de9cbcb9-5f04-4a30-84dd-d62aaad366a4
[ 0:31.141] ## UDEV:	S: disk/by-id/dm-name-rhel_hp--dl380eg8--02-home
[ 0:31.141] ## UDEV:	Q: 4
[ 0:31.141] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-2
[ 0:31.141] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.141] ## UDEV:	E: DEVNAME=/dev/dm-2
[ 0:31.141] ## UDEV:	E: DEVTYPE=disk
[ 0:31.141] ## UDEV:	E: DISKSEQ=4
[ 0:31.141] ## UDEV:	E: MAJOR=254
[ 0:31.141] ## UDEV:	E: MINOR=2
[ 0:31.141] ## UDEV:	E: USEC_INITIALIZED=23787344
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.141] ## UDEV:	E: DM_NAME=rhel_hp--dl380eg8--02-home
[ 0:31.141] ## UDEV:	E: DM_UUID=LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrckxr7HBpmaqAZErFc2RDMo6POwdt2Hebp
[ 0:31.141] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.141] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.141] ## UDEV:	E: DM_VG_NAME=rhel_hp-dl380eg8-02
[ 0:31.141] ## UDEV:	E: DM_LV_NAME=home
[ 0:31.141] ## UDEV:	E: ID_FS_UUID=de9cbcb9-5f04-4a30-84dd-d62aaad366a4
[ 0:31.141] ## UDEV:	E: ID_FS_UUID_ENC=de9cbcb9-5f04-4a30-84dd-d62aaad366a4
[ 0:31.141] ## UDEV:	E: ID_FS_SIZE=915152715776
[ 0:31.141] ## UDEV:	E: ID_FS_LASTBLOCK=223535104
[ 0:31.141] ## UDEV:	E: ID_FS_BLOCKSIZE=4096
[ 0:31.141] ## UDEV:	E: ID_FS_TYPE=xfs
[ 0:31.141] ## UDEV:	E: ID_FS_USAGE=filesystem
[ 0:31.141] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.141] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrckxr7HBpmaqAZErFc2RDMo6POwdt2Hebp /dev/rhel_hp-dl380eg8-02/home /dev/mapper/rhel_hp--dl380eg8--02-home /dev/disk/by-uuid/de9cbcb9-5f04-4a30-84dd-d62aaad366a4 /dev/disk/by-id/dm-name-rhel_hp--dl380eg8--02-home
[ 0:31.141] ## UDEV:	E: TAGS=:systemd:
[ 0:31.141] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.141] ## UDEV:	
[ 0:31.141] ## UDEV:	P: /devices/virtual/block/dm-20
[ 0:31.141] ## UDEV:	M: dm-20
[ 0:31.141] ## UDEV:	R: 20
[ 0:31.141] ## UDEV:	U: block
[ 0:31.141] ## UDEV:	T: disk
[ 0:31.141] ## UDEV:	D: b 254:20
[ 0:31.141] ## UDEV:	N: dm-20
[ 0:31.141] ## UDEV:	L: 0
[ 0:31.141] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_0
[ 0:31.141] ## UDEV:	Q: 28797
[ 0:31.141] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-20
[ 0:31.141] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.141] ## UDEV:	E: DEVNAME=/dev/dm-20
[ 0:31.141] ## UDEV:	E: DEVTYPE=disk
[ 0:31.141] ## UDEV:	E: DISKSEQ=28797
[ 0:31.141] ## UDEV:	E: MAJOR=254
[ 0:31.141] ## UDEV:	E: MINOR=20
[ 0:31.141] ## UDEV:	E: USEC_INITIALIZED=29990671640
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.141] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_0
[ 0:31.141] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtpD6WQF1CobU3tkwiFxBT1XBcHwerULBu
[ 0:31.141] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.141] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.141] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.141] ## UDEV:	E: DM_LV_NAME=LV1_rimage_0
[ 0:31.213] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.213] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_0
[ 0:31.213] ## UDEV:	E: TAGS=:systemd:
[ 0:31.213] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.213] ## UDEV:	
[ 0:31.213] ## UDEV:	P: /devices/virtual/block/dm-21
[ 0:31.213] ## UDEV:	M: dm-21
[ 0:31.213] ## UDEV:	R: 21
[ 0:31.213] ## UDEV:	U: block
[ 0:31.213] ## UDEV:	T: disk
[ 0:31.213] ## UDEV:	D: b 254:21
[ 0:31.213] ## UDEV:	N: dm-21
[ 0:31.213] ## UDEV:	L: 0
[ 0:31.213] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_1
[ 0:31.213] ## UDEV:	Q: 28798
[ 0:31.213] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-21
[ 0:31.213] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.213] ## UDEV:	E: DEVNAME=/dev/dm-21
[ 0:31.213] ## UDEV:	E: DEVTYPE=disk
[ 0:31.213] ## UDEV:	E: DISKSEQ=28798
[ 0:31.213] ## UDEV:	E: MAJOR=254
[ 0:31.213] ## UDEV:	E: MINOR=21
[ 0:31.213] ## UDEV:	E: USEC_INITIALIZED=29990672680
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.213] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_1
[ 0:31.213] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1Kuhl8cjecGLPkGpXK3swZjCtmzaoffu
[ 0:31.213] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.213] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.213] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.213] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_1
[ 0:31.213] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.213] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_1
[ 0:31.213] ## UDEV:	E: TAGS=:systemd:
[ 0:31.213] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.213] ## UDEV:	
[ 0:31.213] ## UDEV:	P: /devices/virtual/block/dm-22
[ 0:31.213] ## UDEV:	M: dm-22
[ 0:31.213] ## UDEV:	R: 22
[ 0:31.213] ## UDEV:	U: block
[ 0:31.213] ## UDEV:	T: disk
[ 0:31.213] ## UDEV:	D: b 254:22
[ 0:31.213] ## UDEV:	N: dm-22
[ 0:31.213] ## UDEV:	L: 0
[ 0:31.213] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_1
[ 0:31.213] ## UDEV:	Q: 28799
[ 0:31.213] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-22
[ 0:31.213] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.213] ## UDEV:	E: DEVNAME=/dev/dm-22
[ 0:31.213] ## UDEV:	E: DEVTYPE=disk
[ 0:31.213] ## UDEV:	E: DISKSEQ=28799
[ 0:31.213] ## UDEV:	E: MAJOR=254
[ 0:31.213] ## UDEV:	E: MINOR=22
[ 0:31.213] ## UDEV:	E: USEC_INITIALIZED=29990673821
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.213] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_1
[ 0:31.213] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtqnoSSKda19GKA7WsZG1I9QA04OBf1B0y
[ 0:31.213] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.213] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.213] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.213] ## UDEV:	E: DM_LV_NAME=LV1_rimage_1
[ 0:31.213] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.213] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_1
[ 0:31.213] ## UDEV:	E: TAGS=:systemd:
[ 0:31.213] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.213] ## UDEV:	
[ 0:31.213] ## UDEV:	P: /devices/virtual/block/dm-23
[ 0:31.213] ## UDEV:	M: dm-23
[ 0:31.213] ## UDEV:	R: 23
[ 0:31.213] ## UDEV:	U: block
[ 0:31.213] ## UDEV:	T: disk
[ 0:31.213] ## UDEV:	D: b 254:23
[ 0:31.213] ## UDEV:	N: dm-23
[ 0:31.213] ## UDEV:	L: 0
[ 0:31.213] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_2
[ 0:31.213] ## UDEV:	Q: 28800
[ 0:31.213] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-23
[ 0:31.213] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.213] ## UDEV:	E: DEVNAME=/dev/dm-23
[ 0:31.213] ## UDEV:	E: DEVTYPE=disk
[ 0:31.213] ## UDEV:	E: DISKSEQ=28800
[ 0:31.213] ## UDEV:	E: MAJOR=254
[ 0:31.213] ## UDEV:	E: MINOR=23
[ 0:31.213] ## UDEV:	E: USEC_INITIALIZED=29990675299
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.213] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_2
[ 0:31.213] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1nvTeTVDkLiF009Z2hQzmZsoHQwmyOuU
[ 0:31.213] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.213] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.213] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.213] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_2
[ 0:31.213] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.213] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_2
[ 0:31.213] ## UDEV:	E: TAGS=:systemd:
[ 0:31.213] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.213] ## UDEV:	
[ 0:31.213] ## UDEV:	P: /devices/virtual/block/dm-24
[ 0:31.213] ## UDEV:	M: dm-24
[ 0:31.213] ## UDEV:	R: 24
[ 0:31.213] ## UDEV:	U: block
[ 0:31.213] ## UDEV:	T: disk
[ 0:31.213] ## UDEV:	D: b 254:24
[ 0:31.213] ## UDEV:	N: dm-24
[ 0:31.213] ## UDEV:	L: 0
[ 0:31.213] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_2
[ 0:31.213] ## UDEV:	Q: 28801
[ 0:31.213] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-24
[ 0:31.213] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.213] ## UDEV:	E: DEVNAME=/dev/dm-24
[ 0:31.213] ## UDEV:	E: DEVTYPE=disk
[ 0:31.213] ## UDEV:	E: DISKSEQ=28801
[ 0:31.213] ## UDEV:	E: MAJOR=254
[ 0:31.213] ## UDEV:	E: MINOR=24
[ 0:31.213] ## UDEV:	E: USEC_INITIALIZED=29990676126
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.283] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_2
[ 0:31.283] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtKj25JNrKV6SBVwMzsGVHpu3vYfecbUAT
[ 0:31.283] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.283] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.283] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.283] ## UDEV:	E: DM_LV_NAME=LV1_rimage_2
[ 0:31.283] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.283] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_2
[ 0:31.283] ## UDEV:	E: TAGS=:systemd:
[ 0:31.283] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.283] ## UDEV:	
[ 0:31.283] ## UDEV:	P: /devices/virtual/block/dm-25
[ 0:31.283] ## UDEV:	M: dm-25
[ 0:31.283] ## UDEV:	R: 25
[ 0:31.283] ## UDEV:	U: block
[ 0:31.283] ## UDEV:	T: disk
[ 0:31.283] ## UDEV:	D: b 254:25
[ 0:31.283] ## UDEV:	N: dm-25
[ 0:31.283] ## UDEV:	L: 0
[ 0:31.283] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_3
[ 0:31.283] ## UDEV:	Q: 28802
[ 0:31.283] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-25
[ 0:31.283] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.283] ## UDEV:	E: DEVNAME=/dev/dm-25
[ 0:31.283] ## UDEV:	E: DEVTYPE=disk
[ 0:31.283] ## UDEV:	E: DISKSEQ=28802
[ 0:31.283] ## UDEV:	E: MAJOR=254
[ 0:31.283] ## UDEV:	E: MINOR=25
[ 0:31.283] ## UDEV:	E: USEC_INITIALIZED=29990677189
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.283] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_3
[ 0:31.283] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt9o0uj9s6CL0ZYEhKh2xg4Psukriei6bz
[ 0:31.283] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.283] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.283] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.283] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_3
[ 0:31.283] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.283] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_3
[ 0:31.283] ## UDEV:	E: TAGS=:systemd:
[ 0:31.283] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.283] ## UDEV:	
[ 0:31.283] ## UDEV:	P: /devices/virtual/block/dm-26
[ 0:31.283] ## UDEV:	M: dm-26
[ 0:31.283] ## UDEV:	R: 26
[ 0:31.283] ## UDEV:	U: block
[ 0:31.283] ## UDEV:	T: disk
[ 0:31.283] ## UDEV:	D: b 254:26
[ 0:31.283] ## UDEV:	N: dm-26
[ 0:31.283] ## UDEV:	L: 0
[ 0:31.283] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_3
[ 0:31.283] ## UDEV:	Q: 28803
[ 0:31.283] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-26
[ 0:31.283] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.283] ## UDEV:	E: DEVNAME=/dev/dm-26
[ 0:31.283] ## UDEV:	E: DEVTYPE=disk
[ 0:31.283] ## UDEV:	E: DISKSEQ=28803
[ 0:31.283] ## UDEV:	E: MAJOR=254
[ 0:31.283] ## UDEV:	E: MINOR=26
[ 0:31.283] ## UDEV:	E: USEC_INITIALIZED=29990678638
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.283] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_3
[ 0:31.283] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtFC3gDcqmY50hGMoUH5DmBdP1TwoaWdfa
[ 0:31.283] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.283] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.283] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.283] ## UDEV:	E: DM_LV_NAME=LV1_rimage_3
[ 0:31.283] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.283] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_3
[ 0:31.283] ## UDEV:	E: TAGS=:systemd:
[ 0:31.283] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.283] ## UDEV:	
[ 0:31.283] ## UDEV:	P: /devices/virtual/block/dm-27
[ 0:31.283] ## UDEV:	M: dm-27
[ 0:31.283] ## UDEV:	R: 27
[ 0:31.283] ## UDEV:	U: block
[ 0:31.283] ## UDEV:	T: disk
[ 0:31.283] ## UDEV:	D: b 254:27
[ 0:31.283] ## UDEV:	N: dm-27
[ 0:31.283] ## UDEV:	L: 0
[ 0:31.283] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_4
[ 0:31.283] ## UDEV:	Q: 28804
[ 0:31.283] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-27
[ 0:31.283] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.283] ## UDEV:	E: DEVNAME=/dev/dm-27
[ 0:31.283] ## UDEV:	E: DEVTYPE=disk
[ 0:31.283] ## UDEV:	E: DISKSEQ=28804
[ 0:31.283] ## UDEV:	E: MAJOR=254
[ 0:31.283] ## UDEV:	E: MINOR=27
[ 0:31.283] ## UDEV:	E: USEC_INITIALIZED=29990680155
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.283] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_4
[ 0:31.283] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtzzmLoHtGlsUTSMLMeIMofmzeeUmVCE5k
[ 0:31.283] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.283] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.283] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.283] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_4
[ 0:31.283] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.283] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_4
[ 0:31.283] ## UDEV:	E: TAGS=:systemd:
[ 0:31.283] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.283] ## UDEV:	
[ 0:31.283] ## UDEV:	P: /devices/virtual/block/dm-28
[ 0:31.283] ## UDEV:	M: dm-28
[ 0:31.283] ## UDEV:	R: 28
[ 0:31.283] ## UDEV:	U: block
[ 0:31.283] ## UDEV:	T: disk
[ 0:31.283] ## UDEV:	D: b 254:28
[ 0:31.283] ## UDEV:	N: dm-28
[ 0:31.283] ## UDEV:	L: 0
[ 0:31.283] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_4
[ 0:31.336] ## UDEV:	Q: 28805
[ 0:31.336] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-28
[ 0:31.336] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.336] ## UDEV:	E: DEVNAME=/dev/dm-28
[ 0:31.336] ## UDEV:	E: DEVTYPE=disk
[ 0:31.336] ## UDEV:	E: DISKSEQ=28805
[ 0:31.336] ## UDEV:	E: MAJOR=254
[ 0:31.336] ## UDEV:	E: MINOR=28
[ 0:31.336] ## UDEV:	E: USEC_INITIALIZED=29990681734
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.336] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_4
[ 0:31.336] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmty3RNUNUzcv7DcTiHTUOZMy3koAVhD0sc
[ 0:31.336] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.336] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.336] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.336] ## UDEV:	E: DM_LV_NAME=LV1_rimage_4
[ 0:31.336] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.336] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_4
[ 0:31.336] ## UDEV:	E: TAGS=:systemd:
[ 0:31.336] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.336] ## UDEV:	
[ 0:31.336] ## UDEV:	P: /devices/virtual/block/dm-29
[ 0:31.336] ## UDEV:	M: dm-29
[ 0:31.336] ## UDEV:	R: 29
[ 0:31.336] ## UDEV:	U: block
[ 0:31.336] ## UDEV:	T: disk
[ 0:31.336] ## UDEV:	D: b 254:29
[ 0:31.336] ## UDEV:	N: dm-29
[ 0:31.336] ## UDEV:	L: 0
[ 0:31.336] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_5
[ 0:31.336] ## UDEV:	Q: 28806
[ 0:31.336] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-29
[ 0:31.336] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.336] ## UDEV:	E: DEVNAME=/dev/dm-29
[ 0:31.336] ## UDEV:	E: DEVTYPE=disk
[ 0:31.336] ## UDEV:	E: DISKSEQ=28806
[ 0:31.336] ## UDEV:	E: MAJOR=254
[ 0:31.336] ## UDEV:	E: MINOR=29
[ 0:31.336] ## UDEV:	E: USEC_INITIALIZED=29990682964
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.336] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_5
[ 0:31.336] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtn0CwCkqlpgGgrYbEHSdKT6WDuoRwVm9y
[ 0:31.336] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.336] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.336] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.336] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_5
[ 0:31.336] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.336] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_5
[ 0:31.336] ## UDEV:	E: TAGS=:systemd:
[ 0:31.336] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.336] ## UDEV:	
[ 0:31.336] ## UDEV:	P: /devices/virtual/block/dm-3
[ 0:31.336] ## UDEV:	M: dm-3
[ 0:31.336] ## UDEV:	R: 3
[ 0:31.336] ## UDEV:	U: block
[ 0:31.336] ## UDEV:	T: disk
[ 0:31.336] ## UDEV:	D: b 254:3
[ 0:31.336] ## UDEV:	N: dm-3
[ 0:31.336] ## UDEV:	L: 0
[ 0:31.336] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv1
[ 0:31.336] ## UDEV:	S: mapper/LVMTEST500118pv1
[ 0:31.336] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv1
[ 0:31.336] ## UDEV:	Q: 28769
[ 0:31.336] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-3
[ 0:31.336] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.336] ## UDEV:	E: DEVNAME=/dev/dm-3
[ 0:31.336] ## UDEV:	E: DEVTYPE=disk
[ 0:31.336] ## UDEV:	E: DISKSEQ=28769
[ 0:31.336] ## UDEV:	E: MAJOR=254
[ 0:31.336] ## UDEV:	E: MINOR=3
[ 0:31.336] ## UDEV:	E: USEC_INITIALIZED=29990192084
[ 0:31.336] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.336] ## UDEV:	E: DM_NAME=LVMTEST500118pv1
[ 0:31.336] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv1
[ 0:31.336] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.336] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.336] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.336] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv1 /dev/mapper/LVMTEST500118pv1 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv1
[ 0:31.336] ## UDEV:	E: TAGS=:systemd:
[ 0:31.336] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.336] ## UDEV:	
[ 0:31.336] ## UDEV:	P: /devices/virtual/block/dm-30
[ 0:31.336] ## UDEV:	M: dm-30
[ 0:31.336] ## UDEV:	R: 30
[ 0:31.336] ## UDEV:	U: block
[ 0:31.336] ## UDEV:	T: disk
[ 0:31.336] ## UDEV:	D: b 254:30
[ 0:31.336] ## UDEV:	N: dm-30
[ 0:31.336] ## UDEV:	L: 0
[ 0:31.336] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_5
[ 0:31.336] ## UDEV:	Q: 28807
[ 0:31.336] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-30
[ 0:31.336] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.336] ## UDEV:	E: DEVNAME=/dev/dm-30
[ 0:31.336] ## UDEV:	E: DEVTYPE=disk
[ 0:31.336] ## UDEV:	E: DISKSEQ=28807
[ 0:31.336] ## UDEV:	E: MAJOR=254
[ 0:31.336] ## UDEV:	E: MINOR=30
[ 0:31.336] ## UDEV:	E: USEC_INITIALIZED=29990684635
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.336] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_5
[ 0:31.336] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt5LvpjI7Lxa8BYrtk6O3jrFwjfXKjPmuU
[ 0:31.336] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.336] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.336] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.336] ## UDEV:	E: DM_LV_NAME=LV1_rimage_5
[ 0:31.336] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.336] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_5
[ 0:31.407] ## UDEV:	E: TAGS=:systemd:
[ 0:31.407] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.407] ## UDEV:	
[ 0:31.407] ## UDEV:	P: /devices/virtual/block/dm-31
[ 0:31.407] ## UDEV:	M: dm-31
[ 0:31.407] ## UDEV:	R: 31
[ 0:31.407] ## UDEV:	U: block
[ 0:31.407] ## UDEV:	T: disk
[ 0:31.407] ## UDEV:	D: b 254:31
[ 0:31.407] ## UDEV:	N: dm-31
[ 0:31.407] ## UDEV:	L: 0
[ 0:31.407] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_6
[ 0:31.407] ## UDEV:	Q: 28808
[ 0:31.407] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-31
[ 0:31.407] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.407] ## UDEV:	E: DEVNAME=/dev/dm-31
[ 0:31.407] ## UDEV:	E: DEVTYPE=disk
[ 0:31.407] ## UDEV:	E: DISKSEQ=28808
[ 0:31.407] ## UDEV:	E: MAJOR=254
[ 0:31.407] ## UDEV:	E: MINOR=31
[ 0:31.407] ## UDEV:	E: USEC_INITIALIZED=29990685668
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.407] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_6
[ 0:31.407] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtjP5gBQ3S1Q3i0wDc6vWibWQpvx1p9tnR
[ 0:31.407] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.407] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.407] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.407] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_6
[ 0:31.407] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.407] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_6
[ 0:31.407] ## UDEV:	E: TAGS=:systemd:
[ 0:31.407] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.407] ## UDEV:	
[ 0:31.407] ## UDEV:	P: /devices/virtual/block/dm-32
[ 0:31.407] ## UDEV:	M: dm-32
[ 0:31.407] ## UDEV:	R: 32
[ 0:31.407] ## UDEV:	U: block
[ 0:31.407] ## UDEV:	T: disk
[ 0:31.407] ## UDEV:	D: b 254:32
[ 0:31.407] ## UDEV:	N: dm-32
[ 0:31.407] ## UDEV:	L: 0
[ 0:31.407] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_6
[ 0:31.407] ## UDEV:	Q: 28809
[ 0:31.407] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-32
[ 0:31.407] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.407] ## UDEV:	E: DEVNAME=/dev/dm-32
[ 0:31.407] ## UDEV:	E: DEVTYPE=disk
[ 0:31.407] ## UDEV:	E: DISKSEQ=28809
[ 0:31.407] ## UDEV:	E: MAJOR=254
[ 0:31.407] ## UDEV:	E: MINOR=32
[ 0:31.407] ## UDEV:	E: USEC_INITIALIZED=29990687628
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.407] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_6
[ 0:31.407] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt6frue4ylGjtrPvQcVzeqnnJmBVHYiJLH
[ 0:31.407] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.407] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.407] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.407] ## UDEV:	E: DM_LV_NAME=LV1_rimage_6
[ 0:31.407] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.407] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_6
[ 0:31.407] ## UDEV:	E: TAGS=:systemd:
[ 0:31.407] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.407] ## UDEV:	
[ 0:31.407] ## UDEV:	P: /devices/virtual/block/dm-33
[ 0:31.407] ## UDEV:	M: dm-33
[ 0:31.407] ## UDEV:	R: 33
[ 0:31.407] ## UDEV:	U: block
[ 0:31.407] ## UDEV:	T: disk
[ 0:31.407] ## UDEV:	D: b 254:33
[ 0:31.407] ## UDEV:	N: dm-33
[ 0:31.407] ## UDEV:	L: 0
[ 0:31.407] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_7
[ 0:31.407] ## UDEV:	Q: 28810
[ 0:31.407] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-33
[ 0:31.407] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.407] ## UDEV:	E: DEVNAME=/dev/dm-33
[ 0:31.407] ## UDEV:	E: DEVTYPE=disk
[ 0:31.407] ## UDEV:	E: DISKSEQ=28810
[ 0:31.407] ## UDEV:	E: MAJOR=254
[ 0:31.407] ## UDEV:	E: MINOR=33
[ 0:31.407] ## UDEV:	E: USEC_INITIALIZED=29990688628
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.407] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_7
[ 0:31.407] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtqUglRG9A9AL8V81RCER6HpY5nWeL89jG
[ 0:31.407] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.407] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.407] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.407] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_7
[ 0:31.407] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.407] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_7
[ 0:31.407] ## UDEV:	E: TAGS=:systemd:
[ 0:31.407] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.407] ## UDEV:	
[ 0:31.407] ## UDEV:	P: /devices/virtual/block/dm-34
[ 0:31.407] ## UDEV:	M: dm-34
[ 0:31.407] ## UDEV:	R: 34
[ 0:31.407] ## UDEV:	U: block
[ 0:31.407] ## UDEV:	T: disk
[ 0:31.407] ## UDEV:	D: b 254:34
[ 0:31.407] ## UDEV:	N: dm-34
[ 0:31.407] ## UDEV:	L: 0
[ 0:31.407] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_7
[ 0:31.407] ## UDEV:	Q: 28811
[ 0:31.407] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-34
[ 0:31.407] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.407] ## UDEV:	E: DEVNAME=/dev/dm-34
[ 0:31.407] ## UDEV:	E: DEVTYPE=disk
[ 0:31.407] ## UDEV:	E: DISKSEQ=28811
[ 0:31.407] ## UDEV:	E: MAJOR=254
[ 0:31.407] ## UDEV:	E: MINOR=34
[ 0:31.407] ## UDEV:	E: USEC_INITIALIZED=29990690344
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.478] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_7
[ 0:31.478] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtchsDOVA6wUZnaa6VF0sVfj3bvta4bRru
[ 0:31.478] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.478] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.478] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.478] ## UDEV:	E: DM_LV_NAME=LV1_rimage_7
[ 0:31.478] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.478] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_7
[ 0:31.478] ## UDEV:	E: TAGS=:systemd:
[ 0:31.478] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.478] ## UDEV:	
[ 0:31.478] ## UDEV:	P: /devices/virtual/block/dm-35
[ 0:31.478] ## UDEV:	M: dm-35
[ 0:31.478] ## UDEV:	R: 35
[ 0:31.478] ## UDEV:	U: block
[ 0:31.478] ## UDEV:	T: disk
[ 0:31.478] ## UDEV:	D: b 254:35
[ 0:31.478] ## UDEV:	N: dm-35
[ 0:31.478] ## UDEV:	L: 0
[ 0:31.478] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_8
[ 0:31.478] ## UDEV:	Q: 28812
[ 0:31.478] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-35
[ 0:31.478] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.478] ## UDEV:	E: DEVNAME=/dev/dm-35
[ 0:31.478] ## UDEV:	E: DEVTYPE=disk
[ 0:31.478] ## UDEV:	E: DISKSEQ=28812
[ 0:31.478] ## UDEV:	E: MAJOR=254
[ 0:31.478] ## UDEV:	E: MINOR=35
[ 0:31.478] ## UDEV:	E: USEC_INITIALIZED=29990691862
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.478] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_8
[ 0:31.478] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtvHThjBcpzz3SsW1XJcxNj5ITg9qbmsw8
[ 0:31.478] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.478] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.478] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.478] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_8
[ 0:31.478] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.478] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_8
[ 0:31.478] ## UDEV:	E: TAGS=:systemd:
[ 0:31.478] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.478] ## UDEV:	
[ 0:31.478] ## UDEV:	P: /devices/virtual/block/dm-36
[ 0:31.478] ## UDEV:	M: dm-36
[ 0:31.478] ## UDEV:	R: 36
[ 0:31.478] ## UDEV:	U: block
[ 0:31.478] ## UDEV:	T: disk
[ 0:31.478] ## UDEV:	D: b 254:36
[ 0:31.478] ## UDEV:	N: dm-36
[ 0:31.478] ## UDEV:	L: 0
[ 0:31.478] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_8
[ 0:31.478] ## UDEV:	Q: 28813
[ 0:31.478] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-36
[ 0:31.478] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.478] ## UDEV:	E: DEVNAME=/dev/dm-36
[ 0:31.478] ## UDEV:	E: DEVTYPE=disk
[ 0:31.478] ## UDEV:	E: DISKSEQ=28813
[ 0:31.478] ## UDEV:	E: MAJOR=254
[ 0:31.478] ## UDEV:	E: MINOR=36
[ 0:31.478] ## UDEV:	E: USEC_INITIALIZED=29990693748
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.478] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_8
[ 0:31.478] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtCWR2oKVdJld9Vfbzbyod2jo8EQjVhZpc
[ 0:31.478] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.478] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.478] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.478] ## UDEV:	E: DM_LV_NAME=LV1_rimage_8
[ 0:31.478] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.478] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_8
[ 0:31.478] ## UDEV:	E: TAGS=:systemd:
[ 0:31.478] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.478] ## UDEV:	
[ 0:31.478] ## UDEV:	P: /devices/virtual/block/dm-37
[ 0:31.478] ## UDEV:	M: dm-37
[ 0:31.478] ## UDEV:	R: 37
[ 0:31.478] ## UDEV:	U: block
[ 0:31.478] ## UDEV:	T: disk
[ 0:31.478] ## UDEV:	D: b 254:37
[ 0:31.478] ## UDEV:	N: dm-37
[ 0:31.478] ## UDEV:	L: 0
[ 0:31.478] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_9
[ 0:31.478] ## UDEV:	Q: 28814
[ 0:31.478] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-37
[ 0:31.478] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.478] ## UDEV:	E: DEVNAME=/dev/dm-37
[ 0:31.478] ## UDEV:	E: DEVTYPE=disk
[ 0:31.478] ## UDEV:	E: DISKSEQ=28814
[ 0:31.478] ## UDEV:	E: MAJOR=254
[ 0:31.478] ## UDEV:	E: MINOR=37
[ 0:31.478] ## UDEV:	E: USEC_INITIALIZED=29990695609
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.478] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_9
[ 0:31.478] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtHLzF2FgdKKO9QnZCW0Tm65J2iFuP0cqi
[ 0:31.478] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.478] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.478] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.478] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_9
[ 0:31.478] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.478] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_9
[ 0:31.478] ## UDEV:	E: TAGS=:systemd:
[ 0:31.478] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.478] ## UDEV:	
[ 0:31.478] ## UDEV:	P: /devices/virtual/block/dm-38
[ 0:31.478] ## UDEV:	M: dm-38
[ 0:31.478] ## UDEV:	R: 38
[ 0:31.478] ## UDEV:	U: block
[ 0:31.478] ## UDEV:	T: disk
[ 0:31.478] ## UDEV:	D: b 254:38
[ 0:31.478] ## UDEV:	N: dm-38
[ 0:31.478] ## UDEV:	L: 0
[ 0:31.478] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_9
[ 0:31.478] ## UDEV:	Q: 28815
[ 0:31.478] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-38
[ 0:31.478] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.531] ## UDEV:	E: DEVNAME=/dev/dm-38
[ 0:31.531] ## UDEV:	E: DEVTYPE=disk
[ 0:31.531] ## UDEV:	E: DISKSEQ=28815
[ 0:31.531] ## UDEV:	E: MAJOR=254
[ 0:31.531] ## UDEV:	E: MINOR=38
[ 0:31.531] ## UDEV:	E: USEC_INITIALIZED=29990695991
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.531] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_9
[ 0:31.531] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt8IuOMVo5Dq0bbWuODFimVzGzlTUAFEvz
[ 0:31.531] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.531] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.531] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.531] ## UDEV:	E: DM_LV_NAME=LV1_rimage_9
[ 0:31.531] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.531] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_9
[ 0:31.531] ## UDEV:	E: TAGS=:systemd:
[ 0:31.531] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.531] ## UDEV:	
[ 0:31.531] ## UDEV:	P: /devices/virtual/block/dm-39
[ 0:31.531] ## UDEV:	M: dm-39
[ 0:31.531] ## UDEV:	R: 39
[ 0:31.531] ## UDEV:	U: block
[ 0:31.531] ## UDEV:	T: disk
[ 0:31.531] ## UDEV:	D: b 254:39
[ 0:31.531] ## UDEV:	N: dm-39
[ 0:31.531] ## UDEV:	L: 0
[ 0:31.531] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_10
[ 0:31.531] ## UDEV:	Q: 28816
[ 0:31.531] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-39
[ 0:31.531] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.531] ## UDEV:	E: DEVNAME=/dev/dm-39
[ 0:31.531] ## UDEV:	E: DEVTYPE=disk
[ 0:31.531] ## UDEV:	E: DISKSEQ=28816
[ 0:31.531] ## UDEV:	E: MAJOR=254
[ 0:31.531] ## UDEV:	E: MINOR=39
[ 0:31.531] ## UDEV:	E: USEC_INITIALIZED=29990699428
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.531] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_10
[ 0:31.531] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1zmjff3ohmUgjtiD5skuJVn485x6iFw1
[ 0:31.531] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.531] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.531] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.531] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_10
[ 0:31.531] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.531] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_10
[ 0:31.531] ## UDEV:	E: TAGS=:systemd:
[ 0:31.531] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.531] ## UDEV:	
[ 0:31.531] ## UDEV:	P: /devices/virtual/block/dm-4
[ 0:31.531] ## UDEV:	M: dm-4
[ 0:31.531] ## UDEV:	R: 4
[ 0:31.531] ## UDEV:	U: block
[ 0:31.531] ## UDEV:	T: disk
[ 0:31.531] ## UDEV:	D: b 254:4
[ 0:31.531] ## UDEV:	N: dm-4
[ 0:31.531] ## UDEV:	L: 0
[ 0:31.531] ## UDEV:	S: disk/by-id/lvm-pv-uuid-0Ajium-L7oM-3oed-uRM5-aebJ-F4MM-05lQR8
[ 0:31.531] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv2
[ 0:31.531] ## UDEV:	S: mapper/LVMTEST500118pv2
[ 0:31.531] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv2
[ 0:31.531] ## UDEV:	Q: 28770
[ 0:31.531] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-4
[ 0:31.531] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.531] ## UDEV:	E: DEVNAME=/dev/dm-4
[ 0:31.531] ## UDEV:	E: DEVTYPE=disk
[ 0:31.531] ## UDEV:	E: DISKSEQ=28770
[ 0:31.531] ## UDEV:	E: MAJOR=254
[ 0:31.531] ## UDEV:	E: MINOR=4
[ 0:31.531] ## UDEV:	E: USEC_INITIALIZED=29990193736
[ 0:31.531] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.531] ## UDEV:	E: DM_NAME=LVMTEST500118pv2
[ 0:31.531] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv2
[ 0:31.531] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.531] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.531] ## UDEV:	E: ID_FS_UUID=0Ajium-L7oM-3oed-uRM5-aebJ-F4MM-05lQR8
[ 0:31.531] ## UDEV:	E: ID_FS_UUID_ENC=0Ajium-L7oM-3oed-uRM5-aebJ-F4MM-05lQR8
[ 0:31.531] ## UDEV:	E: ID_FS_VERSION=LVM2 001
[ 0:31.531] ## UDEV:	E: ID_FS_TYPE=LVM2_member
[ 0:31.531] ## UDEV:	E: ID_FS_USAGE=raid
[ 0:31.531] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.531] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-0Ajium-L7oM-3oed-uRM5-aebJ-F4MM-05lQR8 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv2 /dev/mapper/LVMTEST500118pv2 /dev/disk/by-id/dm-name-LVMTEST500118pv2
[ 0:31.531] ## UDEV:	E: TAGS=:systemd:
[ 0:31.531] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.531] ## UDEV:	
[ 0:31.531] ## UDEV:	P: /devices/virtual/block/dm-40
[ 0:31.531] ## UDEV:	M: dm-40
[ 0:31.531] ## UDEV:	R: 40
[ 0:31.531] ## UDEV:	U: block
[ 0:31.531] ## UDEV:	T: disk
[ 0:31.531] ## UDEV:	D: b 254:40
[ 0:31.531] ## UDEV:	N: dm-40
[ 0:31.531] ## UDEV:	L: 0
[ 0:31.531] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_10
[ 0:31.531] ## UDEV:	Q: 28817
[ 0:31.531] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-40
[ 0:31.531] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.531] ## UDEV:	E: DEVNAME=/dev/dm-40
[ 0:31.531] ## UDEV:	E: DEVTYPE=disk
[ 0:31.531] ## UDEV:	E: DISKSEQ=28817
[ 0:31.531] ## UDEV:	E: MAJOR=254
[ 0:31.531] ## UDEV:	E: MINOR=40
[ 0:31.531] ## UDEV:	E: USEC_INITIALIZED=29990699857
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.531] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_10
[ 0:31.531] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtTKg0whlxOIMgMzsuMlkxJ3KSb8XzSEWi
[ 0:31.585] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.585] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.585] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.585] ## UDEV:	E: DM_LV_NAME=LV1_rimage_10
[ 0:31.585] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.585] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_10
[ 0:31.585] ## UDEV:	E: TAGS=:systemd:
[ 0:31.585] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.585] ## UDEV:	
[ 0:31.585] ## UDEV:	P: /devices/virtual/block/dm-41
[ 0:31.585] ## UDEV:	M: dm-41
[ 0:31.585] ## UDEV:	R: 41
[ 0:31.585] ## UDEV:	U: block
[ 0:31.585] ## UDEV:	T: disk
[ 0:31.585] ## UDEV:	D: b 254:41
[ 0:31.585] ## UDEV:	N: dm-41
[ 0:31.585] ## UDEV:	L: 0
[ 0:31.585] ## UDEV:	S: disk/by-uuid/84c2201e-4589-48a8-ba44-019d481366f2
[ 0:31.585] ## UDEV:	S: disk/by-id/dm-uuid-LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtAkuGMPeDwwMOqVi8hTIbxgptEzZsufbB
[ 0:31.585] ## UDEV:	S: LVMTEST500118vg/LV1
[ 0:31.585] ## UDEV:	S: mapper/LVMTEST500118vg-LV1
[ 0:31.585] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118vg-LV1
[ 0:31.585] ## UDEV:	Q: 28818
[ 0:31.585] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-41
[ 0:31.585] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.585] ## UDEV:	E: DEVNAME=/dev/dm-41
[ 0:31.585] ## UDEV:	E: DEVTYPE=disk
[ 0:31.585] ## UDEV:	E: DISKSEQ=28818
[ 0:31.585] ## UDEV:	E: MAJOR=254
[ 0:31.585] ## UDEV:	E: MINOR=41
[ 0:31.585] ## UDEV:	E: USEC_INITIALIZED=29990733425
[ 0:31.585] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.585] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1
[ 0:31.585] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtAkuGMPeDwwMOqVi8hTIbxgptEzZsufbB
[ 0:31.585] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.585] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.585] ## UDEV:	E: DM_LV_NAME=LV1
[ 0:31.585] ## UDEV:	E: ID_FS_UUID=84c2201e-4589-48a8-ba44-019d481366f2
[ 0:31.585] ## UDEV:	E: ID_FS_UUID_ENC=84c2201e-4589-48a8-ba44-019d481366f2
[ 0:31.585] ## UDEV:	E: ID_FS_VERSION=1.0
[ 0:31.585] ## UDEV:	E: ID_FS_BLOCKSIZE=1024
[ 0:31.585] ## UDEV:	E: ID_FS_LASTBLOCK=10240
[ 0:31.585] ## UDEV:	E: ID_FS_TYPE=ext4
[ 0:31.585] ## UDEV:	E: ID_FS_USAGE=filesystem
[ 0:31.585] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.585] ## UDEV:	E: DEVLINKS=/dev/disk/by-uuid/84c2201e-4589-48a8-ba44-019d481366f2 /dev/disk/by-id/dm-uuid-LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtAkuGMPeDwwMOqVi8hTIbxgptEzZsufbB /dev/LVMTEST500118vg/LV1 /dev/mapper/LVMTEST500118vg-LV1 /dev/disk/by-id/dm-name-LVMTEST500118vg-LV1
[ 0:31.585] ## UDEV:	E: TAGS=:systemd:
[ 0:31.585] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.585] ## UDEV:	
[ 0:31.585] ## UDEV:	P: /devices/virtual/block/dm-42
[ 0:31.585] ## UDEV:	M: dm-42
[ 0:31.585] ## UDEV:	R: 42
[ 0:31.585] ## UDEV:	U: block
[ 0:31.585] ## UDEV:	T: disk
[ 0:31.585] ## UDEV:	D: b 254:42
[ 0:31.585] ## UDEV:	N: dm-42
[ 0:31.585] ## UDEV:	L: 0
[ 0:31.585] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_11
[ 0:31.585] ## UDEV:	Q: 28824
[ 0:31.585] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-42
[ 0:31.585] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.585] ## UDEV:	E: DEVNAME=/dev/dm-42
[ 0:31.585] ## UDEV:	E: DEVTYPE=disk
[ 0:31.585] ## UDEV:	E: DISKSEQ=28824
[ 0:31.585] ## UDEV:	E: MAJOR=254
[ 0:31.585] ## UDEV:	E: MINOR=42
[ 0:31.585] ## UDEV:	E: USEC_INITIALIZED=29994334470
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.585] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_11
[ 0:31.585] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtYUT44N6SJiqq8cbNIXG5Kxi1XS1MHy7m
[ 0:31.585] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.585] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.585] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.585] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_11
[ 0:31.585] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.585] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_11
[ 0:31.585] ## UDEV:	E: TAGS=:systemd:
[ 0:31.585] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.585] ## UDEV:	
[ 0:31.585] ## UDEV:	P: /devices/virtual/block/dm-43
[ 0:31.585] ## UDEV:	M: dm-43
[ 0:31.585] ## UDEV:	R: 43
[ 0:31.585] ## UDEV:	U: block
[ 0:31.585] ## UDEV:	T: disk
[ 0:31.585] ## UDEV:	D: b 254:43
[ 0:31.585] ## UDEV:	N: dm-43
[ 0:31.585] ## UDEV:	L: 0
[ 0:31.585] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_11
[ 0:31.585] ## UDEV:	Q: 28825
[ 0:31.585] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-43
[ 0:31.585] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.585] ## UDEV:	E: DEVNAME=/dev/dm-43
[ 0:31.585] ## UDEV:	E: DEVTYPE=disk
[ 0:31.585] ## UDEV:	E: DISKSEQ=28825
[ 0:31.585] ## UDEV:	E: MAJOR=254
[ 0:31.585] ## UDEV:	E: MINOR=43
[ 0:31.585] ## UDEV:	E: USEC_INITIALIZED=29994335859
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.585] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_11
[ 0:31.585] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt70tvr9yqHxN2GWU7yxH3YPL3k4xwA63I
[ 0:31.585] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.585] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.585] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.585] ## UDEV:	E: DM_LV_NAME=LV1_rimage_11
[ 0:31.585] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.585] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_11
[ 0:31.658] ## UDEV:	E: TAGS=:systemd:
[ 0:31.658] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.658] ## UDEV:	
[ 0:31.658] ## UDEV:	P: /devices/virtual/block/dm-44
[ 0:31.658] ## UDEV:	M: dm-44
[ 0:31.658] ## UDEV:	R: 44
[ 0:31.658] ## UDEV:	U: block
[ 0:31.658] ## UDEV:	T: disk
[ 0:31.658] ## UDEV:	D: b 254:44
[ 0:31.658] ## UDEV:	N: dm-44
[ 0:31.658] ## UDEV:	L: 0
[ 0:31.658] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_12
[ 0:31.658] ## UDEV:	Q: 28826
[ 0:31.658] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-44
[ 0:31.658] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.658] ## UDEV:	E: DEVNAME=/dev/dm-44
[ 0:31.658] ## UDEV:	E: DEVTYPE=disk
[ 0:31.658] ## UDEV:	E: DISKSEQ=28826
[ 0:31.658] ## UDEV:	E: MAJOR=254
[ 0:31.658] ## UDEV:	E: MINOR=44
[ 0:31.658] ## UDEV:	E: USEC_INITIALIZED=29994336795
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.658] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_12
[ 0:31.658] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtulWH74owXTdv6w9Lu7vy83W3oYxwff5L
[ 0:31.658] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.658] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.658] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.658] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_12
[ 0:31.658] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.658] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_12
[ 0:31.658] ## UDEV:	E: TAGS=:systemd:
[ 0:31.658] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.658] ## UDEV:	
[ 0:31.658] ## UDEV:	P: /devices/virtual/block/dm-45
[ 0:31.658] ## UDEV:	M: dm-45
[ 0:31.658] ## UDEV:	R: 45
[ 0:31.658] ## UDEV:	U: block
[ 0:31.658] ## UDEV:	T: disk
[ 0:31.658] ## UDEV:	D: b 254:45
[ 0:31.658] ## UDEV:	N: dm-45
[ 0:31.658] ## UDEV:	L: 0
[ 0:31.658] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_12
[ 0:31.658] ## UDEV:	Q: 28827
[ 0:31.658] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-45
[ 0:31.658] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.658] ## UDEV:	E: DEVNAME=/dev/dm-45
[ 0:31.658] ## UDEV:	E: DEVTYPE=disk
[ 0:31.658] ## UDEV:	E: DISKSEQ=28827
[ 0:31.658] ## UDEV:	E: MAJOR=254
[ 0:31.658] ## UDEV:	E: MINOR=45
[ 0:31.658] ## UDEV:	E: USEC_INITIALIZED=29994338024
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.658] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_12
[ 0:31.658] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt8mxaG5SGE32WHeEosPk8YRzjdhgnimXj
[ 0:31.658] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.658] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.658] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.658] ## UDEV:	E: DM_LV_NAME=LV1_rimage_12
[ 0:31.658] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.658] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_12
[ 0:31.658] ## UDEV:	E: TAGS=:systemd:
[ 0:31.658] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.658] ## UDEV:	
[ 0:31.658] ## UDEV:	P: /devices/virtual/block/dm-46
[ 0:31.658] ## UDEV:	M: dm-46
[ 0:31.658] ## UDEV:	R: 46
[ 0:31.658] ## UDEV:	U: block
[ 0:31.658] ## UDEV:	T: disk
[ 0:31.658] ## UDEV:	D: b 254:46
[ 0:31.658] ## UDEV:	N: dm-46
[ 0:31.658] ## UDEV:	L: 0
[ 0:31.658] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_13
[ 0:31.658] ## UDEV:	Q: 28828
[ 0:31.658] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-46
[ 0:31.658] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.658] ## UDEV:	E: DEVNAME=/dev/dm-46
[ 0:31.658] ## UDEV:	E: DEVTYPE=disk
[ 0:31.658] ## UDEV:	E: DISKSEQ=28828
[ 0:31.658] ## UDEV:	E: MAJOR=254
[ 0:31.658] ## UDEV:	E: MINOR=46
[ 0:31.658] ## UDEV:	E: USEC_INITIALIZED=29994339353
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.658] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_13
[ 0:31.658] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtajpWmgqoHqF78tHKorIemrhNI0NB7sj2
[ 0:31.658] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.658] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.658] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.658] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_13
[ 0:31.658] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.658] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_13
[ 0:31.658] ## UDEV:	E: TAGS=:systemd:
[ 0:31.658] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.658] ## UDEV:	
[ 0:31.658] ## UDEV:	P: /devices/virtual/block/dm-47
[ 0:31.658] ## UDEV:	M: dm-47
[ 0:31.658] ## UDEV:	R: 47
[ 0:31.658] ## UDEV:	U: block
[ 0:31.658] ## UDEV:	T: disk
[ 0:31.658] ## UDEV:	D: b 254:47
[ 0:31.658] ## UDEV:	N: dm-47
[ 0:31.658] ## UDEV:	L: 0
[ 0:31.658] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_13
[ 0:31.658] ## UDEV:	Q: 28829
[ 0:31.658] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-47
[ 0:31.658] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.658] ## UDEV:	E: DEVNAME=/dev/dm-47
[ 0:31.658] ## UDEV:	E: DEVTYPE=disk
[ 0:31.658] ## UDEV:	E: DISKSEQ=28829
[ 0:31.658] ## UDEV:	E: MAJOR=254
[ 0:31.658] ## UDEV:	E: MINOR=47
[ 0:31.658] ## UDEV:	E: USEC_INITIALIZED=29994340712
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.728] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_13
[ 0:31.728] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtYFS3S7q3tv79eaD0b9V2dvhXfFH5AzCe
[ 0:31.728] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.728] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.728] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.728] ## UDEV:	E: DM_LV_NAME=LV1_rimage_13
[ 0:31.728] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.728] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_13
[ 0:31.728] ## UDEV:	E: TAGS=:systemd:
[ 0:31.728] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.728] ## UDEV:	
[ 0:31.728] ## UDEV:	P: /devices/virtual/block/dm-48
[ 0:31.728] ## UDEV:	M: dm-48
[ 0:31.728] ## UDEV:	R: 48
[ 0:31.728] ## UDEV:	U: block
[ 0:31.728] ## UDEV:	T: disk
[ 0:31.728] ## UDEV:	D: b 254:48
[ 0:31.728] ## UDEV:	N: dm-48
[ 0:31.728] ## UDEV:	L: 0
[ 0:31.728] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_14
[ 0:31.728] ## UDEV:	Q: 28830
[ 0:31.728] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-48
[ 0:31.728] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.728] ## UDEV:	E: DEVNAME=/dev/dm-48
[ 0:31.728] ## UDEV:	E: DEVTYPE=disk
[ 0:31.728] ## UDEV:	E: DISKSEQ=28830
[ 0:31.728] ## UDEV:	E: MAJOR=254
[ 0:31.728] ## UDEV:	E: MINOR=48
[ 0:31.728] ## UDEV:	E: USEC_INITIALIZED=29994341327
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.728] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_14
[ 0:31.728] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtdPv93N0GAHAOf7VdiCCnSjCOmpUHtMuq
[ 0:31.728] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.728] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.728] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.728] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_14
[ 0:31.728] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.728] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_14
[ 0:31.728] ## UDEV:	E: TAGS=:systemd:
[ 0:31.728] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.728] ## UDEV:	
[ 0:31.728] ## UDEV:	P: /devices/virtual/block/dm-49
[ 0:31.728] ## UDEV:	M: dm-49
[ 0:31.728] ## UDEV:	R: 49
[ 0:31.728] ## UDEV:	U: block
[ 0:31.728] ## UDEV:	T: disk
[ 0:31.728] ## UDEV:	D: b 254:49
[ 0:31.728] ## UDEV:	N: dm-49
[ 0:31.728] ## UDEV:	L: 0
[ 0:31.728] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_14
[ 0:31.728] ## UDEV:	Q: 28831
[ 0:31.728] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-49
[ 0:31.728] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.728] ## UDEV:	E: DEVNAME=/dev/dm-49
[ 0:31.728] ## UDEV:	E: DEVTYPE=disk
[ 0:31.728] ## UDEV:	E: DISKSEQ=28831
[ 0:31.728] ## UDEV:	E: MAJOR=254
[ 0:31.728] ## UDEV:	E: MINOR=49
[ 0:31.728] ## UDEV:	E: USEC_INITIALIZED=29994342720
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.728] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_14
[ 0:31.728] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtxnwk2sgs9H0iRYfhExHQHhj8FeUZLHDK
[ 0:31.728] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.728] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.728] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.728] ## UDEV:	E: DM_LV_NAME=LV1_rimage_14
[ 0:31.728] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.728] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_14
[ 0:31.728] ## UDEV:	E: TAGS=:systemd:
[ 0:31.728] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.728] ## UDEV:	
[ 0:31.728] ## UDEV:	P: /devices/virtual/block/dm-5
[ 0:31.728] ## UDEV:	M: dm-5
[ 0:31.728] ## UDEV:	R: 5
[ 0:31.728] ## UDEV:	U: block
[ 0:31.728] ## UDEV:	T: disk
[ 0:31.728] ## UDEV:	D: b 254:5
[ 0:31.728] ## UDEV:	N: dm-5
[ 0:31.728] ## UDEV:	L: 0
[ 0:31.728] ## UDEV:	S: mapper/LVMTEST500118pv3
[ 0:31.728] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv3
[ 0:31.728] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv3
[ 0:31.728] ## UDEV:	Q: 28771
[ 0:31.728] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-5
[ 0:31.728] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.728] ## UDEV:	E: DEVNAME=/dev/dm-5
[ 0:31.728] ## UDEV:	E: DEVTYPE=disk
[ 0:31.728] ## UDEV:	E: DISKSEQ=28771
[ 0:31.728] ## UDEV:	E: MAJOR=254
[ 0:31.728] ## UDEV:	E: MINOR=5
[ 0:31.728] ## UDEV:	E: USEC_INITIALIZED=29990194284
[ 0:31.728] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.728] ## UDEV:	E: DM_NAME=LVMTEST500118pv3
[ 0:31.728] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv3
[ 0:31.728] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.728] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.728] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.728] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv3 /dev/disk/by-id/dm-name-LVMTEST500118pv3 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv3
[ 0:31.728] ## UDEV:	E: TAGS=:systemd:
[ 0:31.728] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.728] ## UDEV:	
[ 0:31.728] ## UDEV:	P: /devices/virtual/block/dm-50
[ 0:31.728] ## UDEV:	M: dm-50
[ 0:31.728] ## UDEV:	R: 50
[ 0:31.728] ## UDEV:	U: block
[ 0:31.728] ## UDEV:	T: disk
[ 0:31.728] ## UDEV:	D: b 254:50
[ 0:31.728] ## UDEV:	N: dm-50
[ 0:31.728] ## UDEV:	L: 0
[ 0:31.728] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_15
[ 0:31.728] ## UDEV:	Q: 28832
[ 0:31.728] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-50
[ 0:31.728] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.728] ## UDEV:	E: DEVNAME=/dev/dm-50
[ 0:31.800] ## UDEV:	E: DEVTYPE=disk
[ 0:31.800] ## UDEV:	E: DISKSEQ=28832
[ 0:31.800] ## UDEV:	E: MAJOR=254
[ 0:31.800] ## UDEV:	E: MINOR=50
[ 0:31.800] ## UDEV:	E: USEC_INITIALIZED=29994344531
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.800] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_15
[ 0:31.800] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtdCmqb463WIoq8Jf7vmvbwKinVFX0kDSA
[ 0:31.800] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.800] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.800] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.800] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_15
[ 0:31.800] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.800] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_15
[ 0:31.800] ## UDEV:	E: TAGS=:systemd:
[ 0:31.800] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.800] ## UDEV:	
[ 0:31.800] ## UDEV:	P: /devices/virtual/block/dm-51
[ 0:31.800] ## UDEV:	M: dm-51
[ 0:31.800] ## UDEV:	R: 51
[ 0:31.800] ## UDEV:	U: block
[ 0:31.800] ## UDEV:	T: disk
[ 0:31.800] ## UDEV:	D: b 254:51
[ 0:31.800] ## UDEV:	N: dm-51
[ 0:31.800] ## UDEV:	L: 0
[ 0:31.800] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_15
[ 0:31.800] ## UDEV:	Q: 28833
[ 0:31.800] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-51
[ 0:31.800] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.800] ## UDEV:	E: DEVNAME=/dev/dm-51
[ 0:31.800] ## UDEV:	E: DEVTYPE=disk
[ 0:31.800] ## UDEV:	E: DISKSEQ=28833
[ 0:31.800] ## UDEV:	E: MAJOR=254
[ 0:31.800] ## UDEV:	E: MINOR=51
[ 0:31.800] ## UDEV:	E: USEC_INITIALIZED=29994346372
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.800] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_15
[ 0:31.800] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtvULsrV4VKIQovvvPUaqdvK6zXKegCBvX
[ 0:31.800] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.800] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.800] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.800] ## UDEV:	E: DM_LV_NAME=LV1_rimage_15
[ 0:31.800] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.800] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_15
[ 0:31.800] ## UDEV:	E: TAGS=:systemd:
[ 0:31.800] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.800] ## UDEV:	
[ 0:31.800] ## UDEV:	P: /devices/virtual/block/dm-6
[ 0:31.800] ## UDEV:	M: dm-6
[ 0:31.800] ## UDEV:	R: 6
[ 0:31.800] ## UDEV:	U: block
[ 0:31.800] ## UDEV:	T: disk
[ 0:31.800] ## UDEV:	D: b 254:6
[ 0:31.800] ## UDEV:	N: dm-6
[ 0:31.800] ## UDEV:	L: 0
[ 0:31.800] ## UDEV:	S: mapper/LVMTEST500118pv4
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv4
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv4
[ 0:31.800] ## UDEV:	Q: 28772
[ 0:31.800] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-6
[ 0:31.800] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.800] ## UDEV:	E: DEVNAME=/dev/dm-6
[ 0:31.800] ## UDEV:	E: DEVTYPE=disk
[ 0:31.800] ## UDEV:	E: DISKSEQ=28772
[ 0:31.800] ## UDEV:	E: MAJOR=254
[ 0:31.800] ## UDEV:	E: MINOR=6
[ 0:31.800] ## UDEV:	E: USEC_INITIALIZED=29990195543
[ 0:31.800] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.800] ## UDEV:	E: DM_NAME=LVMTEST500118pv4
[ 0:31.800] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv4
[ 0:31.800] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.800] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.800] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.800] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv4 /dev/disk/by-id/dm-name-LVMTEST500118pv4 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv4
[ 0:31.800] ## UDEV:	E: TAGS=:systemd:
[ 0:31.800] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.800] ## UDEV:	
[ 0:31.800] ## UDEV:	P: /devices/virtual/block/dm-7
[ 0:31.800] ## UDEV:	M: dm-7
[ 0:31.800] ## UDEV:	R: 7
[ 0:31.800] ## UDEV:	U: block
[ 0:31.800] ## UDEV:	T: disk
[ 0:31.800] ## UDEV:	D: b 254:7
[ 0:31.800] ## UDEV:	N: dm-7
[ 0:31.800] ## UDEV:	L: 0
[ 0:31.800] ## UDEV:	S: mapper/LVMTEST500118pv5
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv5
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv5
[ 0:31.800] ## UDEV:	Q: 28773
[ 0:31.800] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-7
[ 0:31.800] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.800] ## UDEV:	E: DEVNAME=/dev/dm-7
[ 0:31.800] ## UDEV:	E: DEVTYPE=disk
[ 0:31.800] ## UDEV:	E: DISKSEQ=28773
[ 0:31.800] ## UDEV:	E: MAJOR=254
[ 0:31.800] ## UDEV:	E: MINOR=7
[ 0:31.800] ## UDEV:	E: USEC_INITIALIZED=29990196844
[ 0:31.800] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.800] ## UDEV:	E: DM_NAME=LVMTEST500118pv5
[ 0:31.800] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv5
[ 0:31.800] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.800] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.800] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.800] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv5 /dev/disk/by-id/dm-name-LVMTEST500118pv5 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv5
[ 0:31.800] ## UDEV:	E: TAGS=:systemd:
[ 0:31.800] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.800] ## UDEV:	
[ 0:31.800] ## UDEV:	P: /devices/virtual/block/dm-8
[ 0:31.800] ## UDEV:	M: dm-8
[ 0:31.800] ## UDEV:	R: 8
[ 0:31.800] ## UDEV:	U: block
[ 0:31.800] ## UDEV:	T: disk
[ 0:31.800] ## UDEV:	D: b 254:8
[ 0:31.800] ## UDEV:	N: dm-8
[ 0:31.800] ## UDEV:	L: 0
[ 0:31.800] ## UDEV:	S: mapper/LVMTEST500118pv6
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv6
[ 0:31.887] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv6
[ 0:31.887] ## UDEV:	Q: 28774
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-8
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/dm-8
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=28774
[ 0:31.887] ## UDEV:	E: MAJOR=254
[ 0:31.887] ## UDEV:	E: MINOR=8
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=29990199222
[ 0:31.887] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.887] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.887] ## UDEV:	E: DM_NAME=LVMTEST500118pv6
[ 0:31.887] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv6
[ 0:31.887] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.887] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv6 /dev/disk/by-id/dm-name-LVMTEST500118pv6 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv6
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/dm-9
[ 0:31.887] ## UDEV:	M: dm-9
[ 0:31.887] ## UDEV:	R: 9
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 254:9
[ 0:31.887] ## UDEV:	N: dm-9
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv7
[ 0:31.887] ## UDEV:	S: mapper/LVMTEST500118pv7
[ 0:31.887] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv7
[ 0:31.887] ## UDEV:	Q: 28775
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-9
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/dm-9
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=28775
[ 0:31.887] ## UDEV:	E: MAJOR=254
[ 0:31.887] ## UDEV:	E: MINOR=9
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=29990198957
[ 0:31.887] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.887] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.887] ## UDEV:	E: DM_NAME=LVMTEST500118pv7
[ 0:31.887] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv7
[ 0:31.887] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.887] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv7 /dev/mapper/LVMTEST500118pv7 /dev/disk/by-id/dm-name-LVMTEST500118pv7
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/loop0
[ 0:31.887] ## UDEV:	M: loop0
[ 0:31.887] ## UDEV:	R: 0
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 7:0
[ 0:31.887] ## UDEV:	N: loop0
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-diskseq/25865
[ 0:31.887] ## UDEV:	Q: 25880
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/loop0
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/loop0
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=25880
[ 0:31.887] ## UDEV:	E: MAJOR=7
[ 0:31.887] ## UDEV:	E: MINOR=0
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=441626813
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=0
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-diskseq/25865
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/loop1
[ 0:31.887] ## UDEV:	M: loop1
[ 0:31.887] ## UDEV:	R: 1
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 7:1
[ 0:31.887] ## UDEV:	N: loop1
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-diskseq/21380
[ 0:31.887] ## UDEV:	Q: 21671
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/loop1
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/loop1
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=21671
[ 0:31.887] ## UDEV:	E: MAJOR=7
[ 0:31.887] ## UDEV:	E: MINOR=1
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=457457416
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=0
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-diskseq/21380
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/loop2
[ 0:31.887] ## UDEV:	M: loop2
[ 0:31.887] ## UDEV:	R: 2
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 7:2
[ 0:31.887] ## UDEV:	N: loop2
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-diskseq/21381
[ 0:31.887] ## UDEV:	Q: 21669
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/loop2
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/loop2
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=21669
[ 0:31.887] ## UDEV:	E: MAJOR=7
[ 0:31.887] ## UDEV:	E: MINOR=2
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=776183912
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=0
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-diskseq/21381
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/loop3
[ 0:31.887] ## UDEV:	M: loop3
[ 0:31.887] ## UDEV:	R: 3
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 7:3
[ 0:31.887] ## UDEV:	N: loop3
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-diskseq/21382
[ 0:31.887] ## UDEV:	Q: 21670
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/loop3
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/loop3
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=21670
[ 0:31.887] ## UDEV:	E: MAJOR=7
[ 0:31.887] ## UDEV:	E: MINOR=3
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=776186533
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=0
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-diskseq/21382
[ 0:31.889] ## UDEV:	E: TAGS=:systemd:
[ 0:31.889] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.889] ## UDEV:	
[ 0:31.889] <======== Free space ========>
[ 0:31.889] ## DF_H:	Filesystem                              Size  Used Avail Use% Mounted on
[ 0:31.909] ## DF_H:	devtmpfs                                4.0M     0  4.0M   0% /dev
[ 0:31.909] ## DF_H:	tmpfs                                   7.7G     0  7.7G   0% /dev/shm
[ 0:31.909] ## DF_H:	tmpfs                                   3.1G   18M  3.1G   1% /run
[ 0:31.909] ## DF_H:	/dev/mapper/rhel_hp--dl380eg8--02-root   70G  4.1G   66G   6% /
[ 0:31.909] ## DF_H:	/dev/sda1                               960M  313M  648M  33% /boot
[ 0:31.909] ## DF_H:	/dev/mapper/rhel_hp--dl380eg8--02-home  853G   37G  816G   5% /home
[ 0:31.909] ## DF_H:	tmpfs                                   1.6G  4.0K  1.6G   1% /run/user/0
[ 0:31.909] <======== Script file "lvconvert-raid-reshape-stripes-load-reload.sh" ========>
[ 0:31.911] ## Line: 1 	 #!/usr/bin/env bash
[ 0:31.917] ## Line: 2 	 
[ 0:31.917] ## Line: 3 	 # Copyright (C) 2017 Red Hat, Inc. All rights reserved.
[ 0:31.917] ## Line: 4 	 #
[ 0:31.917] ## Line: 5 	 # This copyrighted material is made available to anyone wishing to use,
[ 0:31.917] ## Line: 6 	 # modify, copy, or redistribute it subject to the terms and conditions
[ 0:31.917] ## Line: 7 	 # of the GNU General Public License v.2.
[ 0:31.917] ## Line: 8 	 #
[ 0:31.917] ## Line: 9 	 # You should have received a copy of the GNU General Public License
[ 0:31.917] ## Line: 10 	 # along with this program; if not, write to the Free Software Foundation,
[ 0:31.917] ## Line: 11 	 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA2110-1301 USA
[ 0:31.917] ## Line: 12 	 
[ 0:31.917] ## Line: 13 	 
[ 0:31.917] ## Line: 14 	 SKIP_WITH_LVMPOLLD=1
[ 0:31.917] ## Line: 15 	 
[ 0:31.917] ## Line: 16 	 . lib/inittest
[ 0:31.917] ## Line: 17 	 
[ 0:31.917] ## Line: 18 	 # Test reshaping under io load
[ 0:31.917] ## Line: 19 	 
[ 0:31.917] ## Line: 20 	 which md5sum || skip
[ 0:31.917] ## Line: 21 	 which mkfs.ext4 || skip
[ 0:31.917] ## Line: 22 	 aux have_raid 1 14 || skip
[ 0:31.917] ## Line: 23 	 
[ 0:31.917] ## Line: 24 	 mount_dir="mnt"
[ 0:31.917] ## Line: 25 	 
[ 0:31.917] ## Line: 26 	 cleanup_mounted_and_teardown()
[ 0:31.917] ## Line: 27 	 {
[ 0:31.917] ## Line: 28 	 	umount "$mount_dir" || true
[ 0:31.917] ## Line: 29 	 	aux teardown
[ 0:31.917] ## Line: 30 	 }
[ 0:31.917] ## Line: 31 	 
[ 0:31.917] ## Line: 32 	 checksum_()
[ 0:31.917] ## Line: 33 	 {
[ 0:31.917] ## Line: 34 	 	md5sum "$1" | cut -f1 -d' '
[ 0:31.917] ## Line: 35 	 }
[ 0:31.917] ## Line: 36 	 
[ 0:31.917] ## Line: 37 	 aux prepare_pvs 16 32
[ 0:31.917] ## Line: 38 	 
[ 0:31.917] ## Line: 39 	 get_devs
[ 0:31.917] ## Line: 40 	 
[ 0:31.917] ## Line: 41 	 vgcreate $SHARED -s 1M "$vg" "${DEVICES[@]}"
[ 0:31.917] ## Line: 42 	 
[ 0:31.917] ## Line: 43 	 trap 'cleanup_mounted_and_teardown' EXIT
[ 0:31.917] ## Line: 44 	 
[ 0:31.917] ## Line: 45 	 # Create 10-way striped raid5 (11 legs total)
[ 0:31.917] ## Line: 46 	 lvcreate --yes --type raid5_ls --stripesize 64K --stripes 10 -L4 -n$lv1 $vg
[ 0:31.917] ## Line: 47 	 check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
[ 0:31.917] ## Line: 48 	 check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
[ 0:31.917] ## Line: 49 	 check lv_first_seg_field $vg/$lv1 data_stripes 10
[ 0:31.917] ## Line: 50 	 check lv_first_seg_field $vg/$lv1 stripes 11
[ 0:31.917] ## Line: 51 	 wipefs -a "$DM_DEV_DIR/$vg/$lv1"
[ 0:31.917] ## Line: 52 	 mkfs -t ext4 "$DM_DEV_DIR/$vg/$lv1"
[ 0:31.917] ## Line: 53 	 
[ 0:31.917] ## Line: 54 	 mkdir -p "$mount_dir"
[ 0:31.917] ## Line: 55 	 mount "$DM_DEV_DIR/$vg/$lv1" "$mount_dir"
[ 0:31.917] ## Line: 56 	 
[ 0:31.917] ## Line: 57 	 echo 3 >/proc/sys/vm/drop_caches
[ 0:31.917] ## Line: 58 	 # FIXME: This is filling up ram disk. Use sane amount of data please! Rate limit the data written!
[ 0:31.917] ## Line: 59 	 dd if=/dev/urandom of="$mount_dir/random" bs=1M count=4 conv=fdatasync
[ 0:31.917] ## Line: 60 	 checksum_ "$mount_dir/random" >MD5
[ 0:31.917] ## Line: 61 	 
[ 0:31.917] ## Line: 62 	 # FIXME: wait_for_sync - is this really testing anything under load?
[ 0:31.917] ## Line: 63 	 aux wait_for_sync $vg $lv1
[ 0:31.917] ## Line: 64 	 aux delay_dev "$dev2" 0 200
[ 0:31.917] ## Line: 65 	 
[ 0:31.917] ## Line: 66 	 # Reshape it to 15 data stripes
[ 0:31.917] ## Line: 67 	 lvconvert --yes --stripes 15 $vg/$lv1
[ 0:31.917] ## Line: 68 	 check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
[ 0:31.917] ## Line: 69 	 check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
[ 0:31.917] ## Line: 70 	 check lv_first_seg_field $vg/$lv1 data_stripes 15
[ 0:31.917] ## Line: 71 	 check lv_first_seg_field $vg/$lv1 stripes 16
[ 0:31.917] ## Line: 72 	 
[ 0:31.917] ## Line: 73 	 # Reload table during reshape to test for data corruption
[ 0:31.917] ## Line: 74 	 case "$(uname -r)" in
[ 0:31.917] ## Line: 75 	   5.[89]*|5.1[012].*|3.10.0-862*|4.18.0-*.el8*)
[ 0:31.917] ## Line: 76 	 	should not echo "Skipping table reload test on on unfixed kernel!!!" ;;
[ 0:31.917] ## Line: 77 	   *)
[ 0:31.917] ## Line: 78 	 for i in {0..5}
[ 0:31.917] ## Line: 79 	 do
[ 0:31.917] ## Line: 80 	 	dmsetup table $vg-$lv1|dmsetup load $vg-$lv1
[ 0:31.917] ## Line: 81 	 	dmsetup suspend --noflush $vg-$lv1
[ 0:31.917] ## Line: 82 	 	dmsetup resume $vg-$lv1
[ 0:31.917] ## Line: 83 	 	sleep .5
[ 0:31.917] ## Line: 84 	 done
[ 0:31.917] ## Line: 85 	 
[ 0:31.917] ## Line: 86 	 esac
[ 0:31.917] ## Line: 87 	 
[ 0:31.917] ## Line: 88 	 aux delay_dev "$dev2" 0
[ 0:31.917] ## Line: 89 	 
[ 0:31.917] ## Line: 90 	 kill -9 %% || true
[ 0:31.917] ## Line: 91 	 wait
[ 0:31.917] ## Line: 92 	 
[ 0:31.917] ## Line: 93 	 checksum_ "$mount_dir/random" >MD5_new
[ 0:31.917] ## Line: 94 	 
[ 0:31.917] ## Line: 95 	 umount "$mount_dir"
[ 0:31.917] ## Line: 96 	 
[ 0:31.917] ## Line: 97 	 fsck -fn "$DM_DEV_DIR/$vg/$lv1"
[ 0:31.917] ## Line: 98 	 
[ 0:31.917] ## Line: 99 	 # Compare checksum is matching
[ 0:31.917] ## Line: 100 	 cat MD5 MD5_new
[ 0:31.917] ## Line: 101 	 diff MD5 MD5_new
[ 0:31.917] ## Line: 102 	 
[ 0:31.917] ## Line: 103 	 vgremove -ff $vg
[ 0:31.917] cleanup_mounted_and_teardown
[ 0:31.918] #lvconvert-raid-reshape-stripes-load-reload.sh:1+ cleanup_mounted_and_teardown
[ 0:31.918] #lvconvert-raid-reshape-stripes-load-reload.sh:28+ umount mnt
[ 0:31.918] umount: mnt: not mounted.
[ 0:31.922] #lvconvert-raid-reshape-stripes-load-reload.sh:28+ true
[ 0:31.922] #lvconvert-raid-reshape-stripes-load-reload.sh:29+ aux teardown
[ 0:31.922] ## teardown.......## removing stray mapped devices with names beginning with LVMTEST500118: 
[ 0:32.094] .6,17239,30029053038,-;brd: module unloaded
[ 0:33.728] .ok
  
Yu Kuai March 4, 2024, 1:07 a.m. UTC | #4
Hi,

On 2024/03/03 21:16, Xiao Ni wrote:
> Hi all
> 
> There is a error report from lvm regression tests. The case is
> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
> tried to fix dmraid regression problems too. In my patch set,  after
> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
> sync_thread for reshape directly), this problem doesn't appear.

How often do you see this test fail? I've been running the tests for
over two days now, for 30+ rounds, and this test has never failed in my VM.

Thanks,
Kuai

> 
> I put the log in the attachment.
> 
> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> link to part1: https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
>>
>> part1 contains fixes for deadlocks for stopping sync_thread
>>
>> This set contains fixes:
>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
>>   - deadlocks that reshape concurrent with IO, patch 8;
>>   - a lockdep warning, patch 9;
>>
>> I'm runing lvm2 tests with following scripts with a few rounds now,
>>
>> for t in `ls test/shell`; do
>>          if cat test/shell/$t | grep raid &> /dev/null; then
>>                  make check T=shell/$t
>>          fi
>> done
>>
>> There are no deadlock and no fs corrupt now, however, there are still four
>> failed tests:
>>
>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
>>
>> And failed reasons are the same:
>>
>> ## ERROR: The test started dmeventd (147856) unexpectedly
>>
>> I have no clue yet, and it seems other folks doesn't have this issue.
>>
>> Yu Kuai (9):
>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
>>    md: export helpers to stop sync_thread
>>    md: export helper md_is_rdwr()
>>    md: add a new helper reshape_interrupted()
>>    dm-raid: really frozen sync_thread during suspend
>>    md/dm-raid: don't call md_reap_sync_thread() directly
>>    dm-raid: add a new helper prepare_suspend() in md_personality
>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
>>      concurrent with reshape
>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
>>
>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
>>   drivers/md/md.h      | 38 +++++++++++++++++-
>>   drivers/md/raid5.c   | 32 ++++++++++++++-
>>   4 files changed, 196 insertions(+), 40 deletions(-)
>>
>> --
>> 2.39.2
>>
  
Yu Kuai March 4, 2024, 1:23 a.m. UTC | #5
Hi,

On 2024/03/04 9:07, Yu Kuai wrote:
> Hi,
> 
> 在 2024/03/03 21:16, Xiao Ni 写道:
>> Hi all
>>
>> There is a error report from lvm regression tests. The case is
>> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
>> tried to fix dmraid regression problems too. In my patch set,  after
>> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
>> sync_thread for reshape directly), this problem doesn't appear.
> 
> How often did you see this tes failed? I'm running the tests for over
> two days now, for 30+ rounds, and this test never fail in my VM.

Taking a quick look, there is still a path in raid10 where
MD_RECOVERY_FROZEN can be cleared, so in theory this problem can still
be triggered. Can you test the following patch on top of this set?
I'll keep running the test myself.

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a5f8419e2df1..7ca29469123a 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4575,7 +4575,8 @@ static int raid10_start_reshape(struct mddev *mddev)
         return 0;

  abort:
-       mddev->recovery = 0;
+       if (mddev->gendisk)
+               mddev->recovery = 0;
         spin_lock_irq(&conf->device_lock);
         conf->geo = conf->prev;
         mddev->raid_disks = conf->geo.raid_disks;
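While testing, one rough way to check whether the array really stays frozen is to poll sync_action from sysfs. This is only a sketch and the path is an assumption: md arrays expose it under /sys/block/<dev>/md, while a dm-raid target has no gendisk, so no such node exists for the dm side.

```shell
# Rough helper (assumed sysfs path): poll md's sync_action a few times.
# For an md array this should keep printing "frozen" while the
# sync_thread is frozen; dm-raid has no gendisk, so no such node.
watch_sync_action() {
	dev=$1
	n=${2:-5}
	i=0
	while [ "$i" -lt "$n" ]; do
		cat "/sys/block/$dev/md/sync_action" 2>/dev/null || echo "no md sysfs dir"
		i=$((i + 1))
		sleep 1
	done
}

# example: watch_sync_action md0
```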

Thanks,
Kuai
> 
> Thanks,
> Kuai
> 
>>
>> I put the log in the attachment.
>>
>> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>
>>> From: Yu Kuai <yukuai3@huawei.com>
>>>
>>> link to part1: 
>>> https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/ 
>>>
>>>
>>> part1 contains fixes for deadlocks for stopping sync_thread
>>>
>>> This set contains fixes:
>>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
>>>   - deadlocks that reshape concurrent with IO, patch 8;
>>>   - a lockdep warning, patch 9;
>>>
>>> I'm runing lvm2 tests with following scripts with a few rounds now,
>>>
>>> for t in `ls test/shell`; do
>>>          if cat test/shell/$t | grep raid &> /dev/null; then
>>>                  make check T=shell/$t
>>>          fi
>>> done
>>>
>>> There are no deadlock and no fs corrupt now, however, there are still 
>>> four
>>> failed tests:
>>>
>>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
>>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
>>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
>>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
>>>
>>> And failed reasons are the same:
>>>
>>> ## ERROR: The test started dmeventd (147856) unexpectedly
>>>
>>> I have no clue yet, and it seems other folks doesn't have this issue.
>>>
>>> Yu Kuai (9):
>>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
>>>    md: export helpers to stop sync_thread
>>>    md: export helper md_is_rdwr()
>>>    md: add a new helper reshape_interrupted()
>>>    dm-raid: really frozen sync_thread during suspend
>>>    md/dm-raid: don't call md_reap_sync_thread() directly
>>>    dm-raid: add a new helper prepare_suspend() in md_personality
>>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
>>>      concurrent with reshape
>>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
>>>
>>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
>>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
>>>   drivers/md/md.h      | 38 +++++++++++++++++-
>>>   drivers/md/raid5.c   | 32 ++++++++++++++-
>>>   4 files changed, 196 insertions(+), 40 deletions(-)
>>>
>>> -- 
>>> 2.39.2
>>>
> 
> 
> .
>
  
Xiao Ni March 4, 2024, 1:25 a.m. UTC | #6
On Mon, Mar 4, 2024 at 9:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> 在 2024/03/04 9:07, Yu Kuai 写道:
> > Hi,
> >
> > 在 2024/03/03 21:16, Xiao Ni 写道:
> >> Hi all
> >>
> >> There is a error report from lvm regression tests. The case is
> >> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
> >> tried to fix dmraid regression problems too. In my patch set,  after
> >> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
> >> sync_thread for reshape directly), this problem doesn't appear.
> >

Hi Kuai
> > How often did you see this tes failed? I'm running the tests for over
> > two days now, for 30+ rounds, and this test never fail in my VM.

I ran it 5 times just now and it failed twice.

>
> Take a quick look, there is still a path from raid10 that
> MD_RECOVERY_FROZEN can be cleared, and in theroy this problem can be
> triggered. Can you test the following patch on the top of this set?
> I'll keep running the test myself.

Sure, I'll give the result later.

Regards
Xiao
>
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index a5f8419e2df1..7ca29469123a 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -4575,7 +4575,8 @@ static int raid10_start_reshape(struct mddev *mddev)
>          return 0;
>
>   abort:
> -       mddev->recovery = 0;
> +       if (mddev->gendisk)
> +               mddev->recovery = 0;
>          spin_lock_irq(&conf->device_lock);
>          conf->geo = conf->prev;
>          mddev->raid_disks = conf->geo.raid_disks;
>
> Thanks,
> Kuai
> >
> > Thanks,
> > Kuai
> >
> >>
> >> I put the log in the attachment.
> >>
> >> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >>>
> >>> From: Yu Kuai <yukuai3@huawei.com>
> >>>
> >>> link to part1:
> >>> https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
> >>>
> >>>
> >>> part1 contains fixes for deadlocks for stopping sync_thread
> >>>
> >>> This set contains fixes:
> >>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
> >>>   - deadlocks that reshape concurrent with IO, patch 8;
> >>>   - a lockdep warning, patch 9;
> >>>
> >>> I'm runing lvm2 tests with following scripts with a few rounds now,
> >>>
> >>> for t in `ls test/shell`; do
> >>>          if cat test/shell/$t | grep raid &> /dev/null; then
> >>>                  make check T=shell/$t
> >>>          fi
> >>> done
> >>>
> >>> There are no deadlock and no fs corrupt now, however, there are still
> >>> four
> >>> failed tests:
> >>>
> >>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
> >>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> >>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> >>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
> >>>
> >>> And failed reasons are the same:
> >>>
> >>> ## ERROR: The test started dmeventd (147856) unexpectedly
> >>>
> >>> I have no clue yet, and it seems other folks doesn't have this issue.
> >>>
> >>> Yu Kuai (9):
> >>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
> >>>    md: export helpers to stop sync_thread
> >>>    md: export helper md_is_rdwr()
> >>>    md: add a new helper reshape_interrupted()
> >>>    dm-raid: really frozen sync_thread during suspend
> >>>    md/dm-raid: don't call md_reap_sync_thread() directly
> >>>    dm-raid: add a new helper prepare_suspend() in md_personality
> >>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
> >>>      concurrent with reshape
> >>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
> >>>
> >>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
> >>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
> >>>   drivers/md/md.h      | 38 +++++++++++++++++-
> >>>   drivers/md/raid5.c   | 32 ++++++++++++++-
> >>>   4 files changed, 196 insertions(+), 40 deletions(-)
> >>>
> >>> --
> >>> 2.39.2
> >>>
> >
> >
> > .
> >
>
  
Xiao Ni March 4, 2024, 8:27 a.m. UTC | #7
On Mon, Mar 4, 2024 at 9:25 AM Xiao Ni <xni@redhat.com> wrote:
>
> On Mon, Mar 4, 2024 at 9:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >
> > Hi,
> >
> > 在 2024/03/04 9:07, Yu Kuai 写道:
> > > Hi,
> > >
> > > 在 2024/03/03 21:16, Xiao Ni 写道:
> > >> Hi all
> > >>
> > >> There is a error report from lvm regression tests. The case is
> > >> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
> > >> tried to fix dmraid regression problems too. In my patch set,  after
> > >> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
> > >> sync_thread for reshape directly), this problem doesn't appear.
> > >
>
> Hi Kuai
> > > How often did you see this tes failed? I'm running the tests for over
> > > two days now, for 30+ rounds, and this test never fail in my VM.
>
> I ran 5 times and it failed 2 times just now.
>
> >
> > Take a quick look, there is still a path from raid10 that
> > MD_RECOVERY_FROZEN can be cleared, and in theroy this problem can be
> > triggered. Can you test the following patch on the top of this set?
> > I'll keep running the test myself.
>
> Sure, I'll give the result later.

Hi all

This failure isn't stable to reproduce. With the raid10 patch applied,
it failed once in 28 runs. Without the raid10 patch, it failed once in
30 runs, though it did fail frequently this morning.
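To make failure rates easier to compare across kernels, a small counting wrapper along these lines can drive the test; the helper itself is generic, and the real invocation inside an lvm2 tree (shown in the comment) is an assumption about the harness, matching the `make check T=...` usage quoted earlier in this thread.

```shell
# Generic wrapper: run a command n times and report how often it failed.
# Inside an lvm2 source tree the command would be, e.g.:
#   run_and_count 30 make check T=shell/lvconvert-raid-reshape-stripes-load-reload.sh
run_and_count() {
	n=$1
	shift
	fail=0
	i=0
	while [ "$i" -lt "$n" ]; do
		"$@" >/dev/null 2>&1 || fail=$((fail + 1))
		i=$((i + 1))
	done
	echo "failed $fail/$n runs"
}
```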

Regards
Xiao
>
> Regards
> Xiao
> >
> > diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> > index a5f8419e2df1..7ca29469123a 100644
> > --- a/drivers/md/raid10.c
> > +++ b/drivers/md/raid10.c
> > @@ -4575,7 +4575,8 @@ static int raid10_start_reshape(struct mddev *mddev)
> >          return 0;
> >
> >   abort:
> > -       mddev->recovery = 0;
> > +       if (mddev->gendisk)
> > +               mddev->recovery = 0;
> >          spin_lock_irq(&conf->device_lock);
> >          conf->geo = conf->prev;
> >          mddev->raid_disks = conf->geo.raid_disks;
> >
> > Thanks,
> > Kuai
> > >
> > > Thanks,
> > > Kuai
> > >
> > >>
> > >> I put the log in the attachment.
> > >>
> > >> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> > >>>
> > >>> From: Yu Kuai <yukuai3@huawei.com>
> > >>>
> > >>> link to part1:
> > >>> https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
> > >>>
> > >>>
> > >>> part1 contains fixes for deadlocks for stopping sync_thread
> > >>>
> > >>> This set contains fixes:
> > >>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
> > >>>   - deadlocks that reshape concurrent with IO, patch 8;
> > >>>   - a lockdep warning, patch 9;
> > >>>
> > >>> I'm runing lvm2 tests with following scripts with a few rounds now,
> > >>>
> > >>> for t in `ls test/shell`; do
> > >>>          if cat test/shell/$t | grep raid &> /dev/null; then
> > >>>                  make check T=shell/$t
> > >>>          fi
> > >>> done
> > >>>
> > >>> There are no deadlock and no fs corrupt now, however, there are still
> > >>> four
> > >>> failed tests:
> > >>>
> > >>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
> > >>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> > >>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> > >>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
> > >>>
> > >>> And failed reasons are the same:
> > >>>
> > >>> ## ERROR: The test started dmeventd (147856) unexpectedly
> > >>>
> > >>> I have no clue yet, and it seems other folks doesn't have this issue.
> > >>>
> > >>> Yu Kuai (9):
> > >>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
> > >>>    md: export helpers to stop sync_thread
> > >>>    md: export helper md_is_rdwr()
> > >>>    md: add a new helper reshape_interrupted()
> > >>>    dm-raid: really frozen sync_thread during suspend
> > >>>    md/dm-raid: don't call md_reap_sync_thread() directly
> > >>>    dm-raid: add a new helper prepare_suspend() in md_personality
> > >>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
> > >>>      concurrent with reshape
> > >>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
> > >>>
> > >>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
> > >>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
> > >>>   drivers/md/md.h      | 38 +++++++++++++++++-
> > >>>   drivers/md/raid5.c   | 32 ++++++++++++++-
> > >>>   4 files changed, 196 insertions(+), 40 deletions(-)
> > >>>
> > >>> --
> > >>> 2.39.2
> > >>>
> > >
> > >
> > > .
> > >
> >
  
Xiao Ni March 4, 2024, 11:06 a.m. UTC | #8
On Mon, Mar 4, 2024 at 4:27 PM Xiao Ni <xni@redhat.com> wrote:
>
> On Mon, Mar 4, 2024 at 9:25 AM Xiao Ni <xni@redhat.com> wrote:
> >
> > On Mon, Mar 4, 2024 at 9:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> > >
> > > Hi,
> > >
> > > 在 2024/03/04 9:07, Yu Kuai 写道:
> > > > Hi,
> > > >
> > > > 在 2024/03/03 21:16, Xiao Ni 写道:
> > > >> Hi all
> > > >>
> > > >> There is a error report from lvm regression tests. The case is
> > > >> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
> > > >> tried to fix dmraid regression problems too. In my patch set,  after
> > > >> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
> > > >> sync_thread for reshape directly), this problem doesn't appear.
> > > >
> >
> > Hi Kuai
> > > > How often did you see this tes failed? I'm running the tests for over
> > > > two days now, for 30+ rounds, and this test never fail in my VM.
> >
> > I ran 5 times and it failed 2 times just now.
> >
> > >
> > > Take a quick look, there is still a path from raid10 that
> > > MD_RECOVERY_FROZEN can be cleared, and in theroy this problem can be
> > > triggered. Can you test the following patch on the top of this set?
> > > I'll keep running the test myself.
> >
> > Sure, I'll give the result later.
>
> Hi all
>
> It's not stable to reproduce this. After applying this raid10 patch it
> failed once 28 times. Without the raid10 patch, it failed once 30
> times, but it failed frequently this morning.

Hi all

After 152 runs on kernel 6.6, the problem appears there as well. So
this brings us back to the 6.6 state; this patch set just makes the
problem appear more quickly.

Best Regards
Xiao


>
> Regards
> Xiao
> >
> > Regards
> > Xiao
> > >
> > > diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> > > index a5f8419e2df1..7ca29469123a 100644
> > > --- a/drivers/md/raid10.c
> > > +++ b/drivers/md/raid10.c
> > > @@ -4575,7 +4575,8 @@ static int raid10_start_reshape(struct mddev *mddev)
> > >          return 0;
> > >
> > >   abort:
> > > -       mddev->recovery = 0;
> > > +       if (mddev->gendisk)
> > > +               mddev->recovery = 0;
> > >          spin_lock_irq(&conf->device_lock);
> > >          conf->geo = conf->prev;
> > >          mddev->raid_disks = conf->geo.raid_disks;
> > >
> > > Thanks,
> > > Kuai
> > > >
> > > > Thanks,
> > > > Kuai
> > > >
> > > >>
> > > >> I put the log in the attachment.
> > > >>
> > > >> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> > > >>>
> > > >>> From: Yu Kuai <yukuai3@huawei.com>
> > > >>>
> > > >>> link to part1:
> > > >>> https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
> > > >>>
> > > >>>
> > > >>> part1 contains fixes for deadlocks for stopping sync_thread
> > > >>>
> > > >>> This set contains fixes:
> > > >>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
> > > >>>   - deadlocks that reshape concurrent with IO, patch 8;
> > > >>>   - a lockdep warning, patch 9;
> > > >>>
> > > >>> I'm runing lvm2 tests with following scripts with a few rounds now,
> > > >>>
> > > >>> for t in `ls test/shell`; do
> > > >>>          if cat test/shell/$t | grep raid &> /dev/null; then
> > > >>>                  make check T=shell/$t
> > > >>>          fi
> > > >>> done
> > > >>>
> > > >>> There are no deadlock and no fs corrupt now, however, there are still
> > > >>> four
> > > >>> failed tests:
> > > >>>
> > > >>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
> > > >>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> > > >>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> > > >>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
> > > >>>
> > > >>> And failed reasons are the same:
> > > >>>
> > > >>> ## ERROR: The test started dmeventd (147856) unexpectedly
> > > >>>
> > > >>> I have no clue yet, and it seems other folks doesn't have this issue.
> > > >>>
> > > >>> Yu Kuai (9):
> > > >>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
> > > >>>    md: export helpers to stop sync_thread
> > > >>>    md: export helper md_is_rdwr()
> > > >>>    md: add a new helper reshape_interrupted()
> > > >>>    dm-raid: really frozen sync_thread during suspend
> > > >>>    md/dm-raid: don't call md_reap_sync_thread() directly
> > > >>>    dm-raid: add a new helper prepare_suspend() in md_personality
> > > >>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
> > > >>>      concurrent with reshape
> > > >>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
> > > >>>
> > > >>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
> > > >>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
> > > >>>   drivers/md/md.h      | 38 +++++++++++++++++-
> > > >>>   drivers/md/raid5.c   | 32 ++++++++++++++-
> > > >>>   4 files changed, 196 insertions(+), 40 deletions(-)
> > > >>>
> > > >>> --
> > > >>> 2.39.2
> > > >>>
> > > >
> > > >
> > > > .
> > > >
> > >
  
Yu Kuai March 4, 2024, 11:52 a.m. UTC | #9
Hi,

On 2024/03/04 19:06, Xiao Ni wrote:
> On Mon, Mar 4, 2024 at 4:27 PM Xiao Ni <xni@redhat.com> wrote:
>>
>> On Mon, Mar 4, 2024 at 9:25 AM Xiao Ni <xni@redhat.com> wrote:
>>>
>>> On Mon, Mar 4, 2024 at 9:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 2024/03/04 9:07, Yu Kuai wrote:
>>>>> Hi,
>>>>>
>>>>> On 2024/03/03 21:16, Xiao Ni wrote:
>>>>>> Hi all
>>>>>>
>>>>>> There is an error report from the lvm regression tests. The case is
>>>>>> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
>>>>>> tried to fix the dmraid regression problems too. In my patch set, after
>>>>>> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
>>>>>> sync_thread for reshape directly), this problem doesn't appear.
>>>>>
>>>
>>> Hi Kuai
>>>>> How often did you see this test fail? I've been running the tests for
>>>>> over two days now, for 30+ rounds, and this test has never failed in my VM.
>>>
>>> I ran it 5 times just now and it failed twice.
>>>
>>>>
>>>> Taking a quick look, there is still a path in raid10 where
>>>> MD_RECOVERY_FROZEN can be cleared, so in theory this problem can be
>>>> triggered. Can you test the following patch on top of this set?
>>>> I'll keep running the test myself.
>>>
>>> Sure, I'll give the result later.
>>
>> Hi all
>>
>> It's not stable to reproduce. With this raid10 patch applied, it
>> failed once in 28 runs. Without the raid10 patch, it failed once in 30
>> runs, but it failed frequently this morning.
> 
> Hi all
> 
> After running the test 152 times with kernel 6.6, the problem appears
> there too, so we are back to the v6.6 state. This patch set just makes
> the problem appear more quickly.

I verified in my VM that, after 100+ test runs, this problem can be
triggered with both v6.6 and v6.8-rc5 + this set.

I think we can merge this patch set and figure out later why the test
can fail.

Thanks,
Kuai
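
A loop along these lines is one way to hammer a flaky test and count failures over many rounds. CMD and ROUNDS are placeholders; for the lvm2 suite, CMD would be something like `make check T=shell/lvconvert-raid-reshape-stripes-load-reload.sh` run from the lvm2 tree.

```shell
#!/bin/sh
# Sketch of a flaky-test loop; CMD and ROUNDS are placeholders,
# defaulting to a no-op command and 5 rounds.
CMD="${CMD:-true}"
ROUNDS="${ROUNDS:-5}"
fail=0
i=1
while [ "$i" -le "$ROUNDS" ]; do
	# Count a failure whenever the command exits non-zero.
	$CMD >/dev/null 2>&1 || fail=$((fail + 1))
	i=$((i + 1))
done
echo "failed $fail/$ROUNDS runs"
```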


> 
> Best Regards
> Xiao
> 