[lustre-discuss] Lustre 2.10.0 multi rail configuration

Riccardo Veraldi Riccardo.Veraldi at cnaf.infn.it
Mon Aug 28 15:49:04 PDT 2017


Hello,
I am trying to deploy a multi-rail configuration on Lustre 2.10.0 on RHEL 7.3.
My goal is to use both IB interfaces on the OSSes and on the client.
I have one client, two OSSes, and one MDS.
My LNet networks are labelled o2ib5 and tcp5 just for my own convenience.
What I did was modify lustre.conf as follows:

options lnet networks=o2ib5(ib0,ib1),tcp5(enp1s0f0)
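For context, my understanding is that in Lustre 2.10 the multi-rail feature is driven through lnetctl (Dynamic LNet Configuration) rather than only through the module options above. A rough sketch of what I believe the equivalent lnetctl setup looks like (interface and network names are the ones from my setup; exact syntax may differ on other versions):

```shell
# Sketch, assuming Lustre 2.10 lnetctl is available.
# Load LNet and initialize it without module-option networks:
modprobe lnet
lnetctl lnet configure

# Put both IB interfaces on the same LNet network (multi-rail):
lnetctl net add --net o2ib5 --if ib0,ib1
lnetctl net add --net tcp5 --if enp1s0f0

# Verify that both o2ib5 NIDs show up:
lnetctl net show

# Persist the configuration so it is reloaded at boot:
lnetctl export > /etc/lnet.conf
```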

lctl list_nids on both the OSSes and the client shows both local IB
interfaces:

172.21.52.86@o2ib5
172.21.52.118@o2ib5
172.21.42.211@tcp5

However, I can't run an LNet selftest using the new NIDs; it fails, and
the interfaces seem to be unused.
Any hints on the multi-rail configuration needed?
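For reference, this is roughly the selftest I am attempting, using the lnet_selftest (lst) tool; the NIDs below are from my own nodes, and the group members should of course be the client's NID on one side and an OSS's NID on the other:

```shell
# Sketch of an LNet selftest session between two nodes (adjust NIDs).
modprobe lnet_selftest
export LST_SESSION=$$
lst new_session rw_test
lst add_group clients 172.21.52.86@o2ib5     # replace with the client NID
lst add_group servers 172.21.52.118@o2ib5    # replace with an OSS NID
lst add_batch bulk_rw
lst add_test --batch bulk_rw --from clients --to servers brw write size=1M
lst run bulk_rw
lst stat clients servers
lst end_session
```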
What I'd like to do is use both InfiniBand cards (ib0, ib1) on my two
OSSes and on my client to get more aggregate bandwidth,
since with a single InfiniBand link I cannot saturate the disk performance.
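One thing I suspect may be missing: as far as I can tell, in 2.10 multi-rail peers are not discovered automatically, so each node has to be told explicitly that its peer owns multiple NIDs. A sketch of what I think is required (NIDs are examples from my setup):

```shell
# Sketch, assuming Lustre 2.10 manual peer configuration.
# On the client, declare that the OSS is one peer with two NIDs:
lnetctl peer add --prim_nid 172.21.52.86@o2ib5 --nid 172.21.52.118@o2ib5

# Inspect the peer and its NIDs to confirm both rails are known:
lnetctl peer show -v
```

The same would presumably have to be done in the other direction, on each OSS, for the client's NIDs.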
thank you


