[Lustre-discuss] LNET over 2 x 10 GbE switches
Alexander.Oltu at uni.no
Mon Jul 8 06:42:10 PDT 2013
We are expanding our Lustre over 10GbE TCP setup. We are going to add
a few more OSSes and another 10GbE switch because we need more ports.
All OSSes and the MDS have 2 x 10GbE interfaces in bonding-alb (same
for the clients).
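For context, a balance-alb bond over two 10GbE NICs might be set up roughly like this with iproute2 (interface names and the address are placeholders, not our actual configuration):

```shell
# Sketch: create a balance-alb bond from two 10GbE NICs with MII link
# monitoring every 100 ms (eth0/eth1 and the IP are illustrative).
ip link add bond0 type bond mode balance-alb miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip addr add 192.168.1.10/24 dev bond0
ip link set bond0 up
```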
For the new setup we have a few options for connecting the switches:
1. Just add a new switch, move the 2nd interfaces from all servers to
the new switch, and reconfigure all clients and servers to use ARP
monitoring (arp ping). (Maybe we will need to switch the bonding mode
to balance-rr? We will test it.) The scary part is that the network
noise will increase as the number of clients and servers grows, so I
would prefer to keep MII monitoring.
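If we did go the ARP-monitoring route, the change on an existing bond would look something like the following via sysfs (the target IP is a placeholder; MII and ARP monitoring are mutually exclusive, so miimon must be disabled first):

```shell
# Sketch: switch bond0 from MII to ARP link monitoring
# (192.168.1.1 is an illustrative probe target, not a real address).
echo 0 > /sys/class/net/bond0/bonding/miimon              # disable MII monitoring
echo 1000 > /sys/class/net/bond0/bonding/arp_interval     # ARP probe every 1000 ms
echo +192.168.1.1 > /sys/class/net/bond0/bonding/arp_ip_target
```

This is exactly the source of the "network noise" concern: every bonded host sends periodic ARP probes, and that traffic scales with the number of clients and servers.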
2. Set up trunking on the switches, connect them with 4 x 10GbE links,
and move the 2nd server interfaces to the new switch. In this case we
can keep bonding-alb and MII monitoring.
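Either way, after moving interfaces the bond state can be verified on each host; a quick check might look like this (bond0/eth0 are placeholder names):

```shell
# Show the bond's mode, MII status, and per-slave link state.
cat /proc/net/bonding/bond0
# Per-NIC link state can also be checked directly:
ethtool eth0 | grep 'Link detected'
```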
To me it looks like the 2nd option is the better way to go. I expect
that with alb mode, Lustre clients and servers will be able to go over
the other switch if the connection to the local switch is too busy,
and we should be able to keep the filesystem online if one switch goes
down.
Is there any recommended way to connect switches for LNET? Maybe
someone already has experience with this?
The basic requirements are maximum throughput and continued access to
the filesystem if one switch goes down.
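For completeness, on the LNET side the bonded interface is simply named in the networks module parameter, so the bonding layer stays transparent to Lustre; a minimal sketch (bond0 is a placeholder):

```shell
# Hypothetical /etc/modprobe.d/lustre.conf: run LNET's tcp network
# over the bonded interface instead of a raw NIC.
options lnet networks="tcp0(bond0)"
```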