selftests/zram: add non-zero data

Message ID 20230810010313.493384-2-iwienand@redhat.com
State New
Series selftests/zram: add non-zero data

Commit Message

Ian Wienand Aug. 10, 2023, 1:03 a.m. UTC
While diagnosing some issues with the partner test in LTP, I noticed
that because this test fills the device with zeros, it doesn't really
exercise the compressed allocator path it is designed to cover.  This
is a "lite" version of the LTP patch: it simply perturbs the zero
writes so that not every page hits same-page detection, and adds a
sync so that we read the final stats from a more quiescent system.
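
The block-selection logic can be sketched standalone (illustrative
only: a 4 KiB page size is hard-coded here, whereas the real script
queries `getconf PAGE_SIZE`):

```shell
# Sketch of the per-1K-block input selection: the first 1 KiB of each
# page comes from /dev/urandom, the remaining blocks from /dev/zero,
# so no page is filled entirely with the same value.
page_size=4096
for b in 0 1 2 3 4 5 6 7; do
	input_file='/dev/zero'
	if [ $(( (b * 1024) % page_size )) -eq 0 ]; then
		input_file='/dev/urandom'
	fi
	echo "block $b <- $input_file"
done
```

With a 4 KiB page, blocks 0 and 4 select /dev/urandom and the other
six select /dev/zero, i.e. one random KiB per page.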

While we're here, this rewords and expands some of the mm_stat field
docs to be a bit more explicit about what's going on.
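
As a concrete example of the space-efficiency ratio described in the
updated docs (the numbers below are made up for illustration, not
measured output):

```shell
# Hypothetical mm_stat figures, illustrating the documented ratio
# compr_data_size / mem_used_total (shell arithmetic is integer-only,
# so express it as a percentage).
compr_data_size=1048576   # bytes of compressed data (example)
mem_used_total=1310720    # bytes held by the allocator (example)
ratio=$((100 * compr_data_size / mem_used_total))
echo "allocator space efficiency: ${ratio}%"
```

A value under 100% means allocator fragmentation and metadata are
costing more than the compression is saving.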

Link: https://lore.kernel.org/ltp/20230808035641.364676-2-iwienand@redhat.com/T/#u
Signed-off-by: Ian Wienand <iwienand@redhat.com>
---
 Documentation/admin-guide/blockdev/zram.rst | 22 ++++++++++++++-------
 tools/testing/selftests/zram/zram01.sh      | 18 +++++++++++++++--
 2 files changed, 31 insertions(+), 9 deletions(-)
  

Patch

diff --git a/Documentation/admin-guide/blockdev/zram.rst b/Documentation/admin-guide/blockdev/zram.rst
index e4551579cb12..a39a01870f40 100644
--- a/Documentation/admin-guide/blockdev/zram.rst
+++ b/Documentation/admin-guide/blockdev/zram.rst
@@ -253,17 +253,25 @@  line of text and contains the following stats separated by whitespace:
  orig_data_size   uncompressed size of data stored in this disk.
                   Unit: bytes
  compr_data_size  compressed size of data stored in this disk
- mem_used_total   the amount of memory allocated for this disk. This
-                  includes allocator fragmentation and metadata overhead,
-                  allocated for this disk. So, allocator space efficiency
-                  can be calculated using compr_data_size and this statistic.
-                  Unit: bytes
+ mem_used_total   the amount of memory currently used by the compressed
+                  memory allocator to hold compressed data. This
+                  includes allocator fragmentation and metadata
+                  overhead.  The device's space efficiency can be
+                  calculated as the ratio compr_data_size /
+                  mem_used_total.  Note this value may be zero, in
+                  particular if all pages are filled with identical
+                  data (see same_pages).
  mem_limit        the maximum amount of memory ZRAM can use to store
                   the compressed data
  mem_used_max     the maximum amount of memory zram has consumed to
                   store the data
- same_pages       the number of same element filled pages written to this disk.
-                  No memory is allocated for such pages.
+ same_pages       pages identified as being filled exclusively with an
+                  identical unsigned-long value are recorded
+                  specially by zram and thus are not stored via the
+                  compression allocator.  This avoids fragmentation
+                  and metadata overheads for common cases such as
+                  zeroed or poisoned data.  same_pages is the current
+                  number of pages kept in this de-duplicated form.
  pages_compacted  the number of pages freed during compaction
  huge_pages	  the number of incompressible pages
  huge_pages_since the number of incompressible pages since zram set up
diff --git a/tools/testing/selftests/zram/zram01.sh b/tools/testing/selftests/zram/zram01.sh
index 8f4affe34f3e..122625d744c2 100755
--- a/tools/testing/selftests/zram/zram01.sh
+++ b/tools/testing/selftests/zram/zram01.sh
@@ -33,16 +33,30 @@  zram_algs="lzo"
 
 zram_fill_fs()
 {
+	local page_size=$(getconf PAGE_SIZE)
 	for i in $(seq $dev_start $dev_end); do
 		echo "fill zram$i..."
 		local b=0
 		while [ true ]; do
-			dd conv=notrunc if=/dev/zero of=zram${i}/file \
+			# If we fill with all zeros, every page hits
+			# the same-page detection and never makes it
+			# to compressed backing.  Filling the first 1K
+			# of the page with (likely poorly compressible)
+			# random data ensures we hit the compression
+			# paths, but the highly compressible rest of
+			# the page also ensures we get a sufficiently
+			# high ratio to assert on below.
+			local input_file='/dev/zero'
+			if [ $(( (b * 1024) % page_size )) -eq 0 ]; then
+				input_file='/dev/urandom'
+			fi
+			dd conv=notrunc if=${input_file} of=zram${i}/file \
 				oflag=append count=1 bs=1024 status=none \
 				> /dev/null 2>&1 || break
 			b=$(($b + 1))
 		done
-		echo "zram$i can be filled with '$b' KB"
+		echo "zram$i was filled with '$b' KB"
+		sync
 
 		local mem_used_total=`awk '{print $3}' "/sys/block/zram$i/mm_stat"`
 		local v=$((100 * 1024 * $b / $mem_used_total))