[Lustre-discuss] Line rate performance for clients

Isaac Huang isaac_huang at xyratex.com
Tue Aug 2 17:21:07 PDT 2011

On Mon, Aug 01, 2011 at 02:52:07PM +0200, Peter Kjellström wrote:
> > > On 2011-07-29, at 11:33 AM, Brock Palen wrote:
> > ......
> > Does that make sense?  Is it even right for me to expect that I could
> > combine the performance together and expect full speed in and full speed
> > out if I can consistently get them independent of each other?

I believe yes. I remember that we once ran a test on 1GigE where one
client read from and another wrote to the same server, and we observed
about 223MB/s aggregate read/write throughput.
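For reference, that figure is close to the theoretical full-duplex payload
rate of 1GigE. A quick back-of-the-envelope check (assuming a 1500-byte MTU
and TCP timestamps, so 1448 payload bytes per 1538 bytes on the wire):

```shell
# Raw 1GigE rate per direction, in bytes/s:
raw=$((1000000000 / 8))              # 125000000 bytes/s

# Assumed per-packet overhead: 1448 TCP payload bytes out of 1538
# wire bytes (Ethernet header/FCS/preamble/IFG + IP + TCP + timestamps).
payload=$((raw * 1448 / 1538))

echo "one direction: $((payload / 1000000)) MB/s"        # -> 117 MB/s
echo "full duplex:   $((2 * payload / 1000000)) MB/s"    # -> 235 MB/s
```

So 223MB/s aggregate means the link was essentially saturated in both
directions at once.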

> Can your setup do wirespeed full duplex in the simplest case (never mind with 
> lustre)? I'd try iperf or something similar before investing too much time 
> looking for "lost" performance in higher layers.

Agreed. And if the iperf results look good, I'd suggest moving on to
LNet selftest, which will tell you whether the Lustre networking stack
is capable of saturating the link in both directions.
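For the iperf step, a simultaneous bidirectional test looks something like
this (classic iperf v2; the hostnames are taken from the transcript below
and are just placeholders for your own nodes):

```shell
# On the server node:
iperf -s

# On the client node: run a 30-second full-duplex test.
# -d starts a simultaneous reverse stream, so both directions
# are loaded at once; -t sets the duration in seconds.
iperf -c sata14 -t 30 -d
```

On a healthy 1GigE link both directions should each report close to wire
speed; with recent iperf3 the equivalent option is --bidir instead of -d.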

Here's a script we once used, with outputs:

[root at sata16 ~]# export LST_SESSION=$$
[root at sata16 ~]# lst new_session --timeout 100 read/write
SESSION: read/write TIMEOUT: 100 FORCE: No
[root at sata16 ~]# lst add_group servers sata14 at tcp
sata14 at tcp are added to session
[root at sata16 ~]# lst add_group readers sata16 at tcp
sata16 at tcp are added to session
[root at sata16 ~]# lst add_group writers sata16 at tcp
sata16 at tcp are added to session
[root at sata16 ~]# lst add_batch bulk_rw
[root at sata16 ~]# lst add_test --batch bulk_rw --concurrency 8 --from readers --to servers brw read size=1M
Test was added successfully
[root at sata16 ~]# lst add_test --batch bulk_rw --concurrency 8 --from writers --to servers brw write size=1M
Test was added successfully
[root at sata16 ~]# lst run bulk_rw
bulk_rw is running now
[root at sata16 ~]# lst stat servers
[LNet Rates of servers]
[R] Avg: 335      RPC/s Min: 335      RPC/s Max: 335      RPC/s
[W] Avg: 446      RPC/s Min: 446      RPC/s Max: 446      RPC/s
[LNet Bandwidth of servers]
[R] Avg: 111.83   MB/s  Min: 111.83   MB/s  Max: 111.83   MB/s
[W] Avg: 111.23   MB/s  Min: 111.23   MB/s  Max: 111.23   MB/s

The script can easily be adapted to run on your system. Please load
the lnet_selftest kernel module on all test nodes before running it;
Lustre itself does not need to be running.
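One detail the transcript doesn't show: the batch keeps running until you
stop it, so tear the session down when you're done collecting stats:

```shell
# Stop the running batch, then end the selftest session
# (run on the node that created the session, with LST_SESSION still set).
lst stop bulk_rw
lst end_session
```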

- Isaac
