[lustre-discuss] Constant small writes on ZFS backend even while idle
Andreas Dilger
adilger at ddn.com
Tue Oct 7 10:03:42 PDT 2025
Could it be Multi-Mount Protection (MMP)?
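If it is MMP (the ZFS "multihost" heartbeat), the numbers should roughly line up: with multihost=on the pool writes one small uberblock heartbeat per leaf vdev about every zfs_multihost_interval milliseconds, so a 40-disk pool with the default 1000 ms interval gives on the order of 40 x 4K writes per second even when idle, which is close to what you report. A quick way to check (only a sketch; the pool names are taken from your iostat output, everything else assumes stock ZFS defaults):

  # zpool get multihost ost00 ost01
  # cat /sys/module/zfs/parameters/zfs_multihost_interval
  # zpool iostat -r ost00 1

The first shows whether the heartbeat is active ("on"), the second the heartbeat period in milliseconds (default 1000), and the third a per-vdev request-size histogram in which MMP shows up as a steady trickle of 4K writes even with the filesystem idle.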
Cheers, Andreas
> On Oct 7, 2025, at 08:52, Alex Vodeyko via lustre-discuss <lustre-discuss at lists.lustre.org> wrote:
>
> Hi,
>
> I'm in the process of testing lustre-2.15.7_next on rocky-9.6, kernel
> 5.14.0-570.17.1.el9_6.x86_64, zfs-2.3.4.
> 84-disk shelf, multipath.
> 2x OSTs per OSS.
> Each OST is a zpool with a 4x(8+2) raidz2 = 40 HDD layout (btw, I
> also tested draid - same problem).
> atime=off (also tested with relatime=on)
> recordsize=1M, compression=off
>
> During benchmarking I found that even on a completely idle system,
> "zpool iostat" shows 40-160 4K writes per second per pool (ashift=12,
> i.e. 1-4 writes per HDD every second).
> # zpool iostat 1
>               capacity     operations     bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> ..
> ost00        482G   145T      0    158      0   634K
> ost01        401G   145T      0     40      0   161K
> ----------  -----  -----  -----  -----  -----  -----
> ost00        482G   145T      0     40      0   161K
> ost01        401G   145T      0    157      0   629K
> ----------  -----  -----  -----  -----  -----  -----
> ost00        482G   145T      0     40      0   161K
> ost01        401G   145T      0     40      0   161K
> ----------  -----  -----  -----  -----  -----  -----
> ost00        482G   145T      0     38      0   153K
> ost01        401G   145T      0     39      0   157K
>
> Could you please advise whether there is something I can turn off
> (probably on the Lustre side, because a plain local ZFS pool does not
> show this behaviour) to avoid these writes? They hurt read performance
> and cause very high CPU load average and iowait, especially during
> multiple concurrent reads from a single OST.
>
> Many thanks,
> Alex
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
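On the "turn something off" question: the multihost heartbeat only exists to keep a pool from being imported on two servers at once, so on OSTs that are part of a failover pair I would leave it alone. If these pools have no failover partner, the writes can be stopped at runtime (again only a sketch, assuming the pool names from above):

  # zpool set multihost=off ost00
  # zpool set multihost=off ost01

If you want to keep the protection but reduce the background write rate, the interval can be raised instead, at the cost of a longer activity check when a pool is imported after a crash:

  # echo 5000 > /sys/module/zfs/parameters/zfs_multihost_interval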