[Lustre-discuss] LNET over multiple NICs

Alexander Oltu Alexander.Oltu@uni.no
Tue Dec 18 10:27:27 PST 2012

Hi Sébastien,

Please see my answers inline.

On Tue, 18 Dec 2012 13:48:54 +0100
Sébastien Buisson wrote:

> If I understand correctly, you have two subnets: one for tcp0 and
> one for tcp1. So you have to configure two LNET networks on the
> clients as well as on the servers. Could you please give the LNET
> configuration you have set up on your OSSes and clients?

This is one of the OSSes:
[root@oss1 ~]# lctl list_nids
...@tcp
...@tcp1

This is one of the clients:
nid00030:~ # lctl list_nids
...@tcp
...@tcp1
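
For context, two tcp LNET networks like the ones listed above are
typically declared through the lnet module options on each node. A
minimal sketch, assuming hypothetical interface names eth2 and eth3
(the actual NICs in this thread are not named):

```shell
# /etc/modprobe.d/lustre.conf -- sketch only; interface names are assumed.
# Map the first 10GbE NIC to tcp0 and the second to tcp1.
options lnet networks="tcp0(eth2),tcp1(eth3)"
```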

> What you have to understand is that multirail LNET provides static
> load balancing over several LNET networks. Because LNET is not able
> to choose between several available routes, you have to restrict each
> target to a specific network. This is the purpose of the '--network'
> mkfs.lustre option.
> In your case, you would format half of the OSTs of each OSS with 
> '--network=tcp0', and the other half with '--network=tcp1'. This
> would make clients use tcp0 or tcp1 alternately, depending on the
> targets they communicate with. So if you write or read on all the
> OSTs at the same time, you would aggregate performance of your two
> 10GbE links, on the clients and the servers.
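
The split Sébastien describes could be sketched as follows; the
device paths, OST indices, and the MGS NID are placeholders, not
values from this thread:

```shell
# Sketch: format half of the OSTs restricted to tcp0, the other
# half restricted to tcp1, so client traffic is statically balanced
# across both LNET networks.
mkfs.lustre --ost --fsname=lustre --index=0 --network=tcp0 \
            --mgsnode=<mgs-nid>@tcp0 /dev/sdb
mkfs.lustre --ost --fsname=lustre --index=1 --network=tcp1 \
            --mgsnode=<mgs-nid>@tcp0 /dev/sdc
```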

Thank you for the clear explanation. I will try the --network
option distributed over the OSTs and will get back to you. BTW, do
you know if this setup performs better than simply bonding the NICs?

> And finally, we really do not care about LNET traffic with the MGS.
> This is not what defines the networks that will be used for the
> communication between the clients and the target servers.

Does this mean that it is enough to mount like this?
mount.lustre ...@tcp0:/lustre /mnt/lustre
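
Either way, reachability on both networks can be checked from the
client before mounting; a sketch, where the NIDs are placeholders
rather than values from this thread:

```shell
# Sketch: confirm the client reaches a server on each LNET network.
lctl ping <server-ip-on-tcp0>@tcp0
lctl ping <server-ip-on-tcp1>@tcp1
```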

Thank you,
