[lustre-discuss] two lustre fs on same lnet was: Re: lustre clients cannot access different OSS groups on TCP and infiniband at same time
Alexander I Kulyavtsev
aik at fnal.gov
Wed Aug 3 13:52:59 PDT 2016
Hi Andreas,
> the network names need to be unique if the same clients are connecting to both filesystems.
What are the complications of having two Lustre filesystems on the same LNet, on the same IB fabric? Does it have a performance impact (broadcasts, credits, buffers)?
We have two (three) Lustre filesystems facing clusters on the same LNet. I'm wondering whether I need to change that - we have a service window right now.
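For reference, this is roughly how I have been looking at the credits/buffers side of it; a rough sketch, assuming a recent enough lnetctl (the interface name and values below are just examples):

  # show configured nets with their tunables (peer_credits, credits, buffers)
  lnetctl net show -v

  # aggregate LNet counters (messages, drops, routed traffic)
  lnetctl stats show

  # the LND-level credits are module parameters set at load time, e.g.:
  #   options lnet networks="o2ib0(ib0)"
  #   options ko2iblnd peer_credits=16 credits=256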
Initially I set up separate LNets for each Lustre filesystem, but as we were doing an ethernet "bridge" lnet1(ib)-rtr-eth-rtr-lnet2(ib) to a remote cluster between IB networks, the routing got kind of complicated. As a practical matter, we were able to move 1 PB of data between the two Lustre filesystems, plus I/O from/to the compute cluster, in this configuration.
We have one lnet per IB fabric:
             router(eth) -- tcp11...tcp14 -- (eth)routers - o2ib2 -- cluster2
             |
             +------ lustre1 -----+
             |                    |
cluster0 -{o2ib0} -- lustre2 --{o2ib1} - cluster1
             |                    |
             +------ lustre3 -----+
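To make that concrete, the routing side of the picture is just the usual static LNet routes; a sketch with made-up interface names and gateway addresses, showing only one of the tcp11...tcp14 nets:

  # on an LNet router sitting on both an IB fabric and the eth bridge:
  options lnet networks="o2ib0(ib0),tcp11(eth2)" forwarding="enabled"

  # on a node that lives only on o2ib0, pointing at that router to reach
  # the remote o2ib2 fabric (10.10.0.1 is a placeholder IPoIB address of
  # a local router; "2" is the hop count through the eth hop):
  options lnet networks="o2ib0(ib0)" routes="o2ib2 2 10.10.0.1@o2ib0"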
Right now we are merging clusters 1 and 2 and retiring lustre1.
It could be a good time to reconsider and split the LNets, e.g. o2ib0 -> (o2ib20, o2ib30) and o2ib1 -> (o2ib21, o2ib31).
What would be a reason for such an LNet split?
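If we do go that way, my understanding is that the LNet side of the split is just declaring more than one network on the same IB port; a sketch only, the interface name and numbering below are examples:

  # modprobe-style, e.g. in /etc/modprobe.d/lustre.conf:
  options lnet networks="o2ib20(ib0),o2ib30(ib0)"

  # or the lnetctl equivalent:
  lnetctl net add --net o2ib20 --if ib0
  lnetctl net add --net o2ib30 --if ib0

Either way, I assume the servers would then need a writeconf so the new NIDs land in the config logs.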
Alex.
On Jul 13, 2016, at 8:15 PM, Dilger, Andreas <andreas.dilger at intel.com> wrote:
It sounds like you have two different filesystems, each using the same LNet networks "tcp0" and "o2ib0". While "tcp" is a shorthand for network "tcp0", the network names need to be unique if the same clients are connecting to both filesystems. One of the filesystems will need to regenerate the configuration to use "tcp1" and "o2ib1" (or whatever) to allow the clients to distinguish between the different networks.
Cheers, Andreas
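For the archives: I assume the configuration regeneration mentioned above is the usual writeconf procedure; a sketch only, with placeholder NIDs, device paths and fsname:

  # with clients and all targets of that filesystem unmounted, rewrite
  # the config logs on every target (device path is a placeholder):
  tunefs.lustre --writeconf /dev/sdX

  # if the MGS NID itself moves to the new net, the mgsnode reference
  # has to be updated as well (placeholder NID):
  tunefs.lustre --writeconf --erase-params --mgsnode=10.0.0.1@tcp1 /dev/sdX

  # remount MGS/MDT first, then OSTs; clients then mount via the new net:
  mount -t lustre 10.0.0.1@tcp1:/fsname /mnt/fsname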