[Lustre-discuss] Lustre v1.8.0.1 slower than expected large-file, sequential-buffered-file-read speed

Rick Rothstein rickrsr at gmail.com
Tue Aug 4 07:30:12 PDT 2009


Hi -

I'm new to Lustre (v1.8.0.1), and I've verified that I can get about
1000 MB/s aggregate throughput for large-file sequential reads using
direct I/O (limited only by the speed of my 10-gigabit NIC with a TCP
offload engine).

My simple I/O test has the client on a separate machine from the OSTs,
with 16 background "dd"s reading 16 separate files that reside on 16
separate disks (OSTs); e.g., running on the client machine:
dd if=/mnt/lustre/testfile01 of=/dev/null bs=2097152 count=500 iflag=direct &
...
dd if=/mnt/lustre/testfile16 of=/dev/null bs=2097152 count=500 iflag=direct &
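
For anyone who wants to reproduce this, the full 16-way run can be
scripted as below (a minimal sketch; it assumes the test files are
named testfile01 through testfile16 and each is large enough for 500
reads of 2 MiB):

# Launch 16 direct-I/O readers in parallel and time the aggregate run.
time (
    for i in $(seq -w 1 16); do
        # seq -w zero-pads the index to match testfile01..testfile16
        dd if=/mnt/lustre/testfile$i of=/dev/null bs=2097152 count=500 iflag=direct &
    done
    wait    # block until all 16 readers have finished
)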

As I said, the direct-I/O "dd" tests above achieve about 1000 MB/s
aggregate throughput, but when I run the same tests with normal
buffered I/O (i.e., "dd" without "iflag=direct"), the runs only reach
about 550 MB/s aggregate throughput.
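
E.g., the buffered variant of the same command:

dd if=/mnt/lustre/testfile01 of=/dev/null bs=2097152 count=500 &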

I suspect that this slowdown may have something to do with client-side
caching, but the buffered reads have not sped up, even after I've tried
such adjustments as the following (representative commands are sketched
after this list):
lowering the value of max_cached_mb;
turning off server-side caching via read_cache_enable;
dropping the Linux page cache via /proc/sys/vm/drop_caches;
turning debugging off via /proc/sys/lnet/debug.
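
For reference, the adjustments looked roughly like this (a sketch, not
an exact transcript; the lctl parameter paths can vary between setups,
and the 256 MB value is only illustrative):

# on the client: shrink the Lustre client-side read cache (value in MB)
lctl set_param llite.*.max_cached_mb=256
# on each OSS: disable the server-side read cache
lctl set_param obdfilter.*.read_cache_enable=0
# on the client: drop the Linux page cache between runs
echo 3 > /proc/sys/vm/drop_caches
# on the client: turn off Lustre/LNET debug logging
echo 0 > /proc/sys/lnet/debug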

I have also tried the suggestions discussed in the July 22
lustre-discuss thread "Lustre client memory usage very high", but they
did not change my slower-than-expected results.

I'm now going to spend some time reading the detailed Lustre tuning
documentation and running the Lustre test programs, but I'd also
appreciate any advice from experienced Lustre users on how to speed up
these large-file, sequential, buffered-I/O reads.

Thanks for any help.

Rick Rothstein