[lustre-discuss] Multiple IB interfaces
rajgautam at gmail.com
Thu Mar 11 04:20:27 PST 2021
A few scenarios you may consider:
1) Define two LNets, one per IB interface (say o2ib1 and o2ib2), and share out
one OST through o2ib1 and the other through o2ib2. You can map HBA and disk
locality so that they are attached to the same CPU.
2) Same as above, but share the OST(s) through both LNets, and configure odd
clients (clients with odd IPs) to use o2ib1 and even clients to use o2ib2.
This may not be exactly what you are looking for, but it can efficiently
utilize both interfaces.
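As a minimal sketch of scenario 1, assuming the two HCA ports show up as ib0
and ib1 (the interface names and net numbers here are illustrative, not from
your setup):

```shell
# On the OSS: declare one LNet per IB interface, e.g. in
# /etc/modprobe.d/lustre.conf
options lnet networks="o2ib1(ib0),o2ib2(ib1)"

# Or configure the same nets dynamically with lnetctl:
lnetctl net add --net o2ib1 --if ib0
lnetctl net add --net o2ib2 --if ib1
```

For scenario 2, each client would carry only one of the two nets, e.g.
networks="o2ib1(ib0)" on odd-IP clients and networks="o2ib2(ib0)" on even-IP
clients, so client traffic is pinned to one server interface.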
On Tue, Mar 9, 2021 at 9:18 AM Alastair Basden via lustre-discuss <
lustre-discuss at lists.lustre.org> wrote:
> We are installing some new Lustre servers with 2 InfiniBand cards, 1
> attached to each CPU socket. Storage is NVMe, again, some drives attached
> to each socket.
> We want to ensure that data to/from each drive uses the appropriate IB
> card, and doesn't need to travel through the inter-CPU link. Data being
> written is fairly easy, I think; we just set that OST to the appropriate IP
> address. However, data being read may well go out the other NIC, if I
> understand correctly.
> What setup do we need for this?
> I think probably not bonding, as that will presumably not tie
> NIC interfaces to CPUs. But I also see a note in the Lustre manual:
> """If the server has multiple interfaces on the same subnet, the Linux
> kernel will send all traffic using the first configured interface. This is
> a limitation of Linux, not Lustre. In this case, network interface bonding
> should be used. For more information about network interface bonding, see
> Chapter 7, Setting Up Network Interface Bonding."""
> (plus, no idea if bonding is supported on InfiniBand).