[lustre-discuss] varying sequential read performance.

John Bauer bauerj at iodoctors.com
Mon Apr 2 17:06:23 PDT 2018

I am running dd 10 times consecutively to read a 64GB file 
(stripeCount=4, stripeSize=4M) on a dedicated Lustre client 
(version 2.10.3) that has 64GB of memory.

for pass in 1 2 3 4 5 6 7 8 9 10
do
    dd of=/dev/null if=${file} count=128000 bs=512K
done
Instrumentation of the I/O from dd reveals varying performance.  In the 
plot below, the bottom frame has wall time on the X axis and the file 
position of the dd reads on the Y axis, with a dot plotted at the wall 
time and starting file position of every read.  The slopes of the lines 
indicate the data transfer rate, which varies from 475MB/s to 1.5GB/s.  
The last two passes show sharp breaks in the performance, one with 
increasing performance and one with decreasing performance.
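For reference, the rates quoted above are just the slopes of those dots: 
bytes advanced per second of wall time.  A minimal sketch of that 
calculation (the sample data below is made up for illustration, not taken 
from the actual runs):

```python
# Estimate the transfer rate (bytes/sec) from (wall_time, file_offset)
# samples, as in the bottom frame of the plot: the slope of file offset
# versus wall time is the data rate.
def transfer_rate(samples):
    """samples: list of (seconds, byte_offset) tuples, in time order."""
    (t0, off0) = samples[0]
    (t1, off1) = samples[-1]
    return (off1 - off0) / (t1 - t0)

# Hypothetical samples: 512 KiB reads arriving every 0.5 ms (~1 GB/s).
reads = [(i * 0.0005, i * 512 * 1024) for i in range(10)]
print(f"{transfer_rate(reads) / 1e9:.2f} GB/s")
```

A break in the plotted line, as in the last two passes, simply means the 
slope (and therefore the rate computed this way) changes partway through 
the file.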

The top frame indicates the amount of memory used by each of the file's 
4 OSCs over the course of the 10 dd runs.  Nothing terribly odd here, 
except that one of the OSCs eventually has its entire stripe (16GB) 
cached and then never gives any of it up.
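For anyone wanting to watch the same per-OSC cache numbers without 
plotting, they can be pulled from lctl.  A small sketch that parses 
`lctl get_param osc.*.osc_cached_mb`-style output; the parameter name may 
differ by Lustre version, and the sample text below is invented for 
illustration:

```python
import re

# Invented example of `lctl get_param osc.*.osc_cached_mb` output;
# real device names and values will differ on an actual client.
SAMPLE = """\
osc.fs-OST0001-osc-ffff88.osc_cached_mb=
used_mb: 16384
osc.fs-OST0002-osc-ffff88.osc_cached_mb=
used_mb: 4096
"""

def cached_mb(text):
    """Return {OST name: cached MB} parsed from the parameter dump."""
    result, current = {}, None
    for line in text.splitlines():
        m = re.match(r"osc\.(\S+?)-osc-\S+\.osc_cached_mb=", line)
        if m:
            current = m.group(1)
        elif current and line.strip().startswith("used_mb:"):
            result[current] = int(line.split(":")[1])
            current = None
    return result

print(cached_mb(SAMPLE))
```

Sampling this in a loop alongside the dd passes gives the same picture as 
the top frame of the plot.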

I should mention that the file system has 320 OSTs.  I found LU-6370, 
which eventually gets into LRU management issues on systems with high 
numbers of OSTs leading to reduced RPC sizes.

Any explanations for the varying performance?

I/O Doctors, LLC
bauerj at iodoctors.com

[Attachment: johbmffmkkegkbkh.png (image/png, 34134 bytes):
<http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/attachments/20180402/a90e35c5/attachment-0001.png>]
