[Lustre-discuss] Lustre read performance decay when OSSes are assigned in two different subnet

Peter Grandi pg_lus at lus.for.sabi.co.UK
Thu Mar 15 16:14:05 PDT 2012

[ ... ]

> 0) Use 3 subnets to assign the 3 nodes.
> 1) Run "netperf" in the two OSS separately, run "netserver" in
>    "client";this step could simulate the networking scenario:
>    "client" reads data from two OSS, but here is no disk i/o or
>    other r/w;

It might be useful for you to measure the OSS-to-OSS transfer
rate too, and the transfer rate in the client->OSS direction,
but since this is purely a networking issue it is a bit
offtopic here.

> 2) two OSS netperf's results are about 200 M/s, totally are
>    400M/s. so low - -!

I usually prefer 'nuttcp' for this; it is far more
convenient to run.
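For reference, a typical nuttcp run looks like this (the host
name is a placeholder; '-T10' limits the transfer to 10 seconds
and '-i1' prints per-second interim reports):

```shell
# On the receiving node ("client" in your test), start the server side:
nuttcp -S

# On each OSS, transmit to it; run this simultaneously on both
# OSSes to reproduce the two-senders-one-receiver case:
nuttcp -T10 -i1 client-host
```

Running the client side from one OSS at a time, and then from
both at once, gives the same comparison as steps 2) and 3) with
much less setup than netperf.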

> 3) run only netperf at one OSS, the test result is 950M/s..
>    this res is ok.

That is unusually good. 

> 4) All the upper steps prove that, the networking is the
>    bottleneck of the read performance.

> When 2 NODEs send TCP stream at the same time, and only 1 NODE
> recv TCP stream. The total throughput is half of normal value.

Well, if you introduce routing, and your router is not
fully non-blocking at 10Gb/s, or has insufficient or excessive
buffering, or subtly changes latency patterns, that's pretty
unexceptional. A lot of studying 'wireshark' traces will show
which particular limitation applies. For example, enabling TSO
can give very bad results with some NICs.
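As a concrete example of the TSO point, 'ethtool' can inspect
and toggle segmentation offload on an interface (the interface
name is a placeholder; the second command needs root):

```shell
# Show the current offload settings, including TCP segmentation offload:
ethtool -k eth0 | grep -i segmentation

# Temporarily disable TSO to see whether throughput with two
# concurrent senders improves:
ethtool -K eth0 tso off
```

If the aggregate rate recovers with TSO off, that points at the
NIC/driver interaction rather than the router itself.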

But when setting up Lustre usually one tries to engineer the
simplest/best networking case, not a more complicated one.
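The common simple case is a single subnet with one TCP LNET
network declared on every node, e.g. in
'/etc/modprobe.d/lustre.conf' (the interface name is a
placeholder); this is a sketch of the flat setup, not a fix for
the routed one:

```shell
# All servers and clients on one subnet, one LNET network:
options lnet networks="tcp0(eth0)"
```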

> so oddball..  What induced that? Thanks a lot

Q: "Doctor, if I stab my hand with a fork it really hurts".
A: "Don't do it".


More information about the lustre-discuss mailing list