[Lustre-discuss] bonding and multiple OST per OSS

Klaus Steden klaus.steden at thomson.net
Tue Oct 23 14:31:40 PDT 2007


The manual says that Lustre assigns an operational thread to each logical
interface, so if you're using bonding, you'll get a single transaction
thread for your bond, and not one per physical interface. There are
presumably both advantages and disadvantages to this approach, but I haven't
yet had a chance to test it in the field.
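
If you do go with the Linux bonding driver underneath bond0, the usual setup
(on a RHEL-style box, at least -- adjust the mode, interface names, and
addresses to suit your network) is a bonding alias in modprobe.conf plus
ifcfg files for the bond and its slaves, something like:

-- cut --
# /etc/modprobe.conf -- load the bonding driver for bond0
alias bond0 bonding
options bond0 mode=balance-rr miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.10
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (and likewise for each slave)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
-- cut --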

As far as configuration goes, you need to specify which interfaces to use in
your modprobe.conf, with a line something like this:

-- cut --
options lnet networks=tcp0(bond0),tcp1(eth3)
-- cut --
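
Once lnet is loaded with those options, you can sanity-check which NIDs the
node ended up with (the addresses below are just made-up examples):

-- cut --
modprobe lnet
lctl network up
lctl list_nids
# e.g.
# 10.0.0.10@tcp
# 10.0.1.10@tcp1
-- cut --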

And then in your mount statements, specify which lnet interface you wish to
use to communicate, i.e.

-- cut --
mount -t lustre mds0@tcp0:/lustre /mnt/lustre
mount -t lustre mds1@tcp1:/lustre /mnt/lustre
-- cut --
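
Before mounting, it's also worth checking that the client can actually reach
the server over the network you expect; lctl ping works for that (substitute
your server's real NIDs for the placeholder addresses below):

-- cut --
# from a client, before mounting:
lctl ping 10.0.0.1@tcp0
lctl ping 10.0.1.1@tcp1
-- cut --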

hth,
Klaus

On 10/23/07 12:16 PM, "Brock Palen" <brockp at umich.edu> did etch on stone
tablets:

> Hello,
> In reading the manual about bonding (we will need to bond our Gige)
> I get the impression that we should let lustre take care of it and
> not use the linux bonding drivers.  Am I correct in this assumption?
> If so I would have 2 interfaces each with their own IP,  how does the
> client know to use both interfaces?  Is this just advertised by the
> MGS to the client when mounted?  Or does a client choose a random
> interface and does all IO to that OSS though that interface for that
> transaction?
> 
> Also, our OSS will have 2 separate devices, which I would like to be
> separate stripe targets for Lustre (i.e. if using 1 OSS, lfs
> getstripe would show a large file striped over 2 OSTs).  Is this
> fine?  Or should we software RAID them together and make one large
> OST on top?
> 
> Thank you
> 
> Brock Palen
> Center for Advanced Computing
> brockp at umich.edu
> (734)936-1985
> 
> 
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss



