[Lustre-discuss] multihomed OST's configuration

Klaus Steden klaus.steden at thomson.net
Thu Jul 10 12:12:41 PDT 2008


Hi Mario,

Lustre will, if not instructed otherwise, bind to all available NICs on the
system. I've used Lustre extensively with LACP aggregate groups, and it
performs quite well.

Configuring multiple NICs from the same host into the same VLAN is something
of a nonsensical configuration unless you're running some kind of bizarre
failover scenario; and if they're all going to the same switch, even that
doesn't buy you anything, since the switch itself remains a single point of
failure. This kind of configuration would also make ordinary TCP/IP routing
somewhat funky.

Use NIC bonding, and configure your switch as appropriate to do likewise.
Cisco, Foundry, Extreme, Juniper, Alcatel, Netgear and a number of others
all support LACP in their L3 edge switches, and it's a standard feature of
any core switch.

Once you've set up the switch and the OS, instruct Lustre to use the bond by
putting "options lnet networks=tcp(bond0)" in your /etc/modprobe.conf and it
will take care of the rest.
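For completeness, here is a minimal sketch of what the OS side of that bonded setup might look like on a RHEL5-era system like the one Mario describes. The bond mode, IP address, and file paths are typical assumptions for illustration, not details taken from this thread; adjust them for your own network.

```shell
# /etc/modprobe.conf -- load the bonding driver in LACP (802.3ad) mode.
# mode=802.3ad requires matching LACP configuration on the switch;
# miimon=100 is a common link-monitoring interval (in ms).
alias bond0 bonding
options bonding mode=802.3ad miimon=100

# Tell LNET to run its TCP network over the bond (the line quoted above).
options lnet networks=tcp(bond0)

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond gets the single IP.
# DEVICE=bond0
# IPADDR=192.168.1.10      # example address, not from this thread
# NETMASK=255.255.255.0
# ONBOOT=yes
# BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- repeat for eth1..eth3,
# changing only DEVICE. The slaves carry no IP of their own.
# DEVICE=eth0
# MASTER=bond0
# SLAVE=yes
# ONBOOT=yes
# BOOTPROTO=none
```

With this in place the server presents a single IP (and a single Lustre NID) over all four links, which answers Mario's "single IP for the server" question below.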

cheers,
Klaus

On 7/9/08 5:07 AM, "mdavid" <david at lip.pt>did etch on stone tablets:

> hi Brian
> I was "misled" by what it says in the ops manual, chapter 12.1
> 
> Lustre can use multiple NICs without bonding. There is a difference in
> performance when Lustre uses multiple NICs versus when it uses bonding
> NICs.
> 
> though here it says "multiple NICS" not multihomed configurations.
> 
> Anyway, I still don't know how to configure "multiple NICs", both from
> the point of view of the OS and of Lustre.
> Note that all the ethXX interfaces are in the same LAN, and connected to
> the same card in the switch.
> if on the Lustre OST's I put
> options lnet networks=tcp(eth0,eth1,eth2,eth3)
> 
> how should each ethX be configured?
> In principle I would have a single IP for the server.
> 
> cheers
> 
> Mario David
> 
> On Jul 8, 1:25 pm, "Brian J. Murrell" <Brian.Murr... at Sun.COM> wrote:
>> On Mon, 2008-07-07 at 03:13 -0700, mdavid wrote:
>>> hi list
>>> I am a new to lustre (1 week old) and this list.
>>> I have some Dell PE1950 servers with MD1000 enclosures (Scientific
>>> Linux 5 == RHEL5 x86_64) and Lustre 1.6.5, with Lustre-patched
>>> kernels on them
>> 
>>> on a first try (indeed it was the second), I managed to get Lustre
>>> up and running OK; now
>> 
>>> each Dell server has 4 x 1 Gb interfaces, and I want to take
>>> advantage of them all:
>>> either I try bonding them, or go for multihomed (which is my first
>>> try)
>> 
>> If what you want is to get the bandwidth of all 4 interfaces to the
>> Lustre servers then you really do want bonding.
>> 
>> Can you explain why you think you want multihoming vs. bonding?  Maybe
>> I'm misunderstanding your goal.
>> 
>> b.
>> 
>> 
>> _______________________________________________
>> Lustre-discuss mailing list
>> Lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/mailman/listinfo/lustre-discuss



