[lustre-discuss] Mounting Lustre over IB-to-Ethernet gateway

Philippe Weill Philippe.Weill at latmos.ipsl.fr
Thu Aug 4 02:10:14 PDT 2016



On 02/08/2016 at 11:35, Martin Hecht wrote:
> Hi Kevin,
>
> I think your proposed lnet config line is correct; it would add tcp0. If you add a new lnet on the servers you have to reload the
> lnet module, which means restarting Lustre on them: unmount all targets, run lustre_rmmod, and then mount the targets again. You
> don't have to reboot if unloading the modules goes smoothly, and the IB clients don't have to be restarted.
>
> If you have clients with an interface on both networks (these could act as lnet routers), you can avoid restarting the servers.
> In that case you don't add the new lnet on the servers; you just add routes to it on all servers, which works in production with
> 'lctl --net tcp0 add_route client-ip@o2ib0'. On the routers you need forwarding="enabled" and both lnets, each assigned to the
> appropriate interface (to configure this you have to reload the lnet module on the clients that will act as routers). On the tcp
> clients you need the route across the routers in the opposite direction. However, in that scenario you wouldn't use the
> IB-to-Ethernet gateway.

Thank you for the information.
This is what I needed to start some tests (with Lustre in production):
'lctl --net tcp0 add_route client-ip@o2ib0'
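
(For reference, a rough sketch of the routed setup Martin describes; the interface names ib0/eth0 and the addresses 192.168.1.1 and 10.0.0.1 are placeholders, not our real configuration:

    # On the LNet router (a client with both interfaces), /etc/modprobe.d/lustre.conf:
    options lnet networks="o2ib0(ib0),tcp0(eth0)" forwarding="enabled"

    # On each server, add a route to the new tcp0 network via the router's o2ib NID
    # (no lnet module reload needed on the servers):
    lctl --net tcp0 add_route 192.168.1.1@o2ib0

    # On the Ethernet-only clients, route back to o2ib0 via the router's tcp NID:
    options lnet networks="tcp0(eth0)" routes="o2ib0 10.0.0.1@tcp0"

The Ethernet clients then mount using the servers' o2ib0 NIDs as usual, reachable through the router.)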

For Kevin: we have o2ib and tcp on the same IB interface with no problem.
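
(Concretely, that is the kind of line Kevin proposed below; a sketch only, with ib0 as the interface name and assuming the targets are listed in fstab:

    # /etc/modprobe.d/lustre.conf on the servers: both LNets on the same IB port
    options lnet networks="o2ib0(ib0),tcp0(ib0)"

    # Picking up the new lnet means reloading the module, per Martin's note:
    umount -a -t lustre   # unmount all Lustre targets
    lustre_rmmod          # unload the Lustre/LNet modules
    mount -a -t lustre    # mount the targets again
)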

In our environment (servers 2.5.3, clients 2.7.0) we use some virtual machines as Lustre clients;
we have InfiniBand and 10G on our ESXi hosts.

Using the Mellanox driver in ESXi and IP over IB for Lustre on a VM on top of a vmxnet3 card,
performance was not so good:
VM client, Lustre 2.7, SL6, on IPoIB, VMware 5.5
write: 104856690688 bytes (105 GB) copied, 438.965 s, 239 MB/s
write: 104857600000 bytes (105 GB) copied, 383.271 s, 274 MB/s (no checksum)

Using an LNet router for Lustre on a VM on top of a vmxnet3 card and the 10G interface on ESXi:
VM client, Lustre 2.7, SL6, with a real LNet router, VMware 5.5
write: 104857600000 bytes (105 GB) copied, 200.769 s, 522 MB/s
write: 104857600000 bytes (105 GB) copied, 183.818 s, 570 MB/s (no checksum)

Real o2ib client:
write: 104857600000 bytes (105 GB) copied, 166.029 s, 632 MB/s
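
(These figures are dd output; the runs were presumably along the lines of the following, where the block size, count and target path are assumptions, though bs=1M count=100000 does match the 104857600000-byte totals:

    dd if=/dev/zero of=/mnt/lustre/ddtest bs=1M count=100000
)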

The quality of the Mellanox driver on ESXi, perhaps?


Very interesting.


>
> Greetings,
> Martin
>
> On 08/01/2016 01:05 PM, Kevin M. Hildebrand wrote:
>> Our Lustre filesystem is currently set up to use the o2ib interface only;
>> all of the servers have
>> options lnet networks=o2ib0(ib0)
>>
>> We've just added a Mellanox IB-to-Ethernet gateway and would like to be
>> able to have clients on the Ethernet side also mount Lustre.  The gateway
>> extends the same layer-2 IP range that's being used for IPoIB out to the
>> Ethernet clients.
>>
>> How should I go about doing this?  Since the clients don't have IB, it
>> doesn't appear that I can use o2ib0 to mount.  Do I need to add another
>> lnet network on the servers?  Something like
>> options lnet networks=o2ib0(ib0),tcp0(ib0)?  Can I have both protocols on
>> the same interface?
>> And if I do have to add another lnet network, is there any way to do so
>> without restarting the servers?
>>
>> Thanks,
>> Kevin
>>
>> --
>> Kevin Hildebrand
>> University of Maryland, College Park
>> Division of IT
>>
>>
>>

-- 
Weill Philippe - Systems and Network Administrator
CNRS/UPMC/IPSL   LATMOS (UMR 8190)
Tour 45/46 3e Etage B302|4 Place Jussieu|75252 Paris Cedex 05 -  FRANCE
Email:philippe.weill at latmos.ipsl.fr | tel:+33 0144274759

