[Lustre-discuss] Advice on system configuration

Balagopal Pillai pillai at mathstat.dal.ca
Tue Feb 12 06:28:24 PST 2008


Hi,


       You could use LACP if your switch supports it (not just the static
trunking found on lower-end switches), or adaptive load balancing, which
needs no switch support at all. I moved from LACP to adaptive load
balancing because I wanted to increase the aggregate throughput and didn't
want to depend on bug-free switch firmware support for 802.3ad.
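
       In case it is useful, here is roughly what adaptive load balancing
(bonding mode balance-alb) looks like on a RHEL-style box. The interface
names, addresses and file locations below are placeholders; adjust them to
whatever your distribution uses:

           # /etc/modprobe.conf (or a file under /etc/modprobe.d/)
           alias bond0 bonding
           options bonding mode=balance-alb miimon=100

           # /etc/sysconfig/network-scripts/ifcfg-bond0
           DEVICE=bond0
           IPADDR=192.168.1.10
           NETMASK=255.255.255.0
           ONBOOT=yes
           BOOTPROTO=none

           # /etc/sysconfig/network-scripts/ifcfg-eth0 (same idea for eth1)
           DEVICE=eth0
           MASTER=bond0
           SLAVE=yes
           ONBOOT=yes
           BOOTPROTO=none

For 802.3ad you would use mode=802.3ad instead, but then the switch ports
have to be configured for LACP as well.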

       I have a similar setup on one of our Lustre installations, with two
storage servers. I assume you are planning to put the MDS and MGS, along
with a few OSTs, on one of the two servers and make the other a pure OSS;
that is exactly what I did. If I had to do it all over again, I would go
with a dedicated MDS + MGS server and dedicated OSS servers.
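
       For reference, the commands to lay out that kind of setup look
roughly like this; the fsname, device names and MGS node name are
placeholders, and the exact options depend on your Lustre version:

           # server1: combined MGS + MDT, plus an OST on a spare disk
           mkfs.lustre --fsname=testfs --mgs --mdt /dev/sda
           mkfs.lustre --fsname=testfs --ost --mgsnode=server1@tcp0 /dev/sdb

           # server2: pure OSS, one or more OSTs
           mkfs.lustre --fsname=testfs --ost --mgsnode=server1@tcp0 /dev/sdc

           # mounting the formatted devices starts the services
           mount -t lustre /dev/sda /mnt/mdt
           mount -t lustre /dev/sdb /mnt/ost0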

       Also, I got only 4 GB of RAM in both storage servers and had to
upgrade both to 8 GB. A Lustre installation can use quite a bit of RAM
depending on the usage (in my case, overnight rsyncs of an almost full
volume to another big Lustre volume were the tipping point). You should
also watch for bugs that give you heartburn on production servers, like
this one: https://bugzilla.lustre.org/show_bug.cgi?id=13438. Our Lustre
servers were crashing randomly, almost once a day, for two months, and the
patch finally made them run stable again.

       Another thing to consider with bonding is the number of interfaces
you put in the bond: the more interfaces, the more interrupts they
generate. Jumbo frames should reduce that, I assume. The NIC ring
parameters also need to be bumped up so that the interfaces don't drop
frames; a rough example is below.
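
       Something along these lines (eth0 is a placeholder, and the maximum
ring sizes depend on the NIC and driver):

           # check the current and maximum rx/tx ring sizes, then raise them
           ethtool -g eth0
           ethtool -G eth0 rx 4096 tx 4096

           # jumbo frames - the switch ports and every host must agree on the MTU
           ifconfig eth0 mtu 9000

To make these persistent, put MTU=9000 in the ifcfg files and run the
ethtool commands from an init or ifup script.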

                These are just my experiences with this kind of setup;
they may or may not apply to yours. Hope it helps!

Regards
Balagopal

Iain Grant wrote:
>
> We have an in-house 27-node cluster with no shared storage capability.
>
>  
>
> I have just ordered 2 storage nodes with extra drives in them and was 
> hoping I could use them with Lustre to provide a good shared storage 
> setup.
>
>  
>
> How would you suggest configuring this setup?
>
>  
>
> Each node comes with dual network adapters, so I was thinking about 
> bonding them together to provide more bandwidth (our switch should 
> allow us to do this).
>
>  
>
> Thanks
>
>  
>
> Iain
>
> Iain Grant
> Linux Administrator
> Scottish Crop Research Institute
> Dundee DD2 5DA
> Tel: +44 (0)1382 562731
> mailto:Iain.Grant at scri.ac.uk
>
> www.scri.ac.uk
>


