[Lustre-discuss] LNET over multiple NICs

Sébastien Buisson sebastien.buisson at bull.net
Tue Dec 18 04:48:54 PST 2012

Hi Alexander,

If I understand correctly, you have two subnets: 192.168.110.x is for 
tcp0, and 192.168.111.x is for tcp1. So you have to configure two 
LNET networks on the clients as well as on the servers. Could you 
please give the LNET configuration you have set up on your OSSes and 
clients?
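A minimal sketch of what that configuration could look like, assuming 
the NIC on 192.168.110.x is eth0 and the one on 192.168.111.x is eth1 
(the interface names are assumptions, substitute your own):

```shell
# /etc/modprobe.d/lustre.conf -- identical on clients and OSSes.
# Binds LNET network tcp0 to the NIC on 192.168.110.x and tcp1 to
# the NIC on 192.168.111.x (eth0/eth1 are assumed interface names).
options lnet networks="tcp0(eth0),tcp1(eth1)"
```

Note that LNET has to be reloaded (unmount, lustre_rmmod, remount) 
before the new networks take effect.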

What you have to understand is that multi-rail LNET provides static 
load balancing over several LNET networks. Because LNET is not able to 
choose between several available routes, you have to restrict each 
target to a specific network. This is the purpose of the '--network' 
option of mkfs.lustre.

In your case, you would format half of the OSTs on each OSS with 
'--network=tcp0', and the other half with '--network=tcp1'. This would 
make the clients alternately use tcp0 or tcp1, depending on the 
targets they communicate with. So if you read or write on all the OSTs 
at the same time, you would aggregate the bandwidth of your two 10GbE 
links, on both the clients and the servers.
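For example, on an OSS with four OSTs the formatting could look like 
the sketch below. The device names, OST indices and the MGS NID 
192.168.110.1@tcp0 are placeholders, not taken from your setup:

```shell
# First half of the OSTs restricted to tcp0...
mkfs.lustre --ost --fsname=lustre --index=0 --network=tcp0 \
            --mgsnode=192.168.110.1@tcp0 /dev/sdb
mkfs.lustre --ost --fsname=lustre --index=1 --network=tcp0 \
            --mgsnode=192.168.110.1@tcp0 /dev/sdc
# ...and the second half restricted to tcp1.
mkfs.lustre --ost --fsname=lustre --index=2 --network=tcp1 \
            --mgsnode=192.168.110.1@tcp0 /dev/sdd
mkfs.lustre --ost --fsname=lustre --index=3 --network=tcp1 \
            --mgsnode=192.168.110.1@tcp0 /dev/sde
```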

And finally, we really do not care about LNET traffic to the MGS: it 
is not what determines the networks that will be used for 
communication between the clients and the target servers.


Le 18/12/2012 11:36, Alexander Oltu a écrit :
> Hi all!
> I have the following setup:
> * 4 x OSS servers with 2 x 10GbE
> * 4 x clients with 2 x 10GbE (these are kind of NFS servers to
>    redistribute filesystem to all other clients)
> I would like to use multi-rail LNET over Ethernet instead of bonding
> for performance reasons (if there are any). I am using 192.168.110.x for
> one adapter and 192.168.111.x for another one.
> Mounting filesystem on the clients as:
> mount.lustre <mgsnode>@tcp0,<mgsnode>@tcp1:/lustre /mnt/lustre
> But all LNET traffic from any client goes to @tcp0 only, regardless
> of the current load on the NID. lctl ping works fine for both NIDs.
> I know that I can mount 2 clients on tcp0 and the 2 others on tcp1,
> but I would like to use both interfaces on each client for
> performance reasons.
> I've been looking into the Lustre manual and found multi-rail for
> InfiniBand only.
> I wonder what is recommended now for LNET over Ethernet with multiple
> adapters, where fault-tolerance is less important than performance?
> Thank you,
> Alex.
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
