[lustre-discuss] Constant small writes on ZFS backend even while idle
Alex Vodeyko
alex.vodeyko at gmail.com
Mon Oct 6 23:45:42 PDT 2025
Hi,
I'm in the process of testing lustre-2.15.7_next on rocky-9.6, kernel
5.14.0-570.17.1.el9_6.x86_64, zfs-2.3.4.
84-disk shelf, multipath.
2x OSTs per OSS
Each OST is on a zpool with a 4x(8+2) raidz2 = 40 HDDs configuration (btw -
I also tested dRAID, with the same problem).
atime=off (also tested with relatime=on)
recordsize=1M, compression=off
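(For completeness, these settings can be confirmed with something like the
following - ost00/ost00 here just stands in for the actual OST dataset name:
# zpool get ashift ost00
# zfs get atime,relatime,recordsize,compression ost00/ost00 )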
During benchmarks I've found that even on a completely idle system,
zpool iostat shows 40-160 4K (ashift=12) writes (1-4 per HDD) every
second:
# zpool iostat 1
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
..
ost00        482G   145T      0    158      0   634K
ost01        401G   145T      0     40      0   161K
----------  -----  -----  -----  -----  -----  -----
ost00        482G   145T      0     40      0   161K
ost01        401G   145T      0    157      0   629K
----------  -----  -----  -----  -----  -----  -----
ost00        482G   145T      0     40      0   161K
ost01        401G   145T      0     40      0   161K
----------  -----  -----  -----  -----  -----  -----
ost00        482G   145T      0     38      0   153K
ost01        401G   145T      0     39      0   157K
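To narrow down where these writes land, something like the following could
be used - a request-size histogram, a per-vdev view, and the txg history
(pool name ost00 as above):
# zpool iostat -r ost00 1
# zpool iostat -v ost00 1
# cat /proc/spl/kstat/zfs/ost00/txgs | tail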
Could you please advise whether there is something I can turn off
(probably in Lustre, because local ZFS does not show this behaviour) to
avoid these writes? They hurt read performance and cause very high CPU
load average and iowait numbers, especially during multiple concurrent
reads from a single OST.
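One possibly related knob is the ZFS txg commit interval (zfs_txg_timeout,
default 5 s). I am not sure it is the right one, but raising it temporarily
should at least show whether the idle writes track txg commits:
# cat /sys/module/zfs/parameters/zfs_txg_timeout
# echo 30 > /sys/module/zfs/parameters/zfs_txg_timeout    (temporary, test only)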
Many thanks,
Alex