[Lustre-discuss] Switch for lustre clients

Minh Hien minhhien261 at yahoo.com
Tue Aug 26 22:59:12 PDT 2008


Dear Mike,
Thanks for your insights.
MH

--- On Wed, 8/27/08, Mike Berg <mike.berg at sun.com> wrote:

> From: Mike Berg <mike.berg at sun.com>
> Subject: Re: [Lustre-discuss] Switch for lustre clients
> To: minhhien261 at yahoo.com
> Cc: lustre-discuss at lists.lustre.org
> Date: Wednesday, August 27, 2008, 11:55 AM
> Hi,
> 
> There are important considerations in how you put your IB fabric
> together; otherwise you can end up with a congested IB fabric, which
> can lead to cluster-wide stability and performance problems.
> 
> If you need or want to maximize the bandwidth and latency
> characteristics of InfiniBand, then you need to consider a full CLOS
> fabric. In short, a full CLOS topology, also called a "fat tree", is a
> fully connected, non-blocking topology that provides the same latency
> and bandwidth between any endpoints of the fabric. If you Google for
> "CLOS fabric" you will find many references.
> 
> So, a 288-port switch is typically 24 switch blades. Each blade is
> really a 24-port switch, with 12 client ports and 12 ports plugging
> into a 2-stage internal IB fabric, so a 288-port switch is a 3-stage
> switch. The same fabric can be created using individual stand-alone
> 24-port switches, with the same cross-sectional bandwidth and latency
> characteristics as the 288-port switch. The problem with building a
> 288-port fabric this way is that the two stages that would normally
> be part of the internal workings of a 288-port switch are now
> external, with many connectors and cables. This adds complexity both
> for the physical implementation and for troubleshooting bad cables.
> In this case, from a cost perspective you are likely better off using
> the ISR 9288 switch. However, a Voltaire representative should be
> able to go over the details and trade-offs with you.
> 
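As a rough back-of-the-envelope check of the sizing above, here is a small
Python sketch. The 24-port radix, the non-blocking 12-down/12-up split per
leaf, and the folded two-tier (leaf/spine) model are assumptions made only
for illustration, and the function name is hypothetical:

    import math

    def fat_tree_size(client_ports, radix=24, down_per_leaf=None):
        """Rough sizing of a folded CLOS ("fat tree") built from fixed-radix switches.

        Models a two-tier leaf/spine fabric (three stages when unfolded).
        down_per_leaf defaults to radix // 2, i.e. a non-blocking split.
        """
        if down_per_leaf is None:
            down_per_leaf = radix // 2            # 12 client ports on a 24-port leaf
        up_per_leaf = radix - down_per_leaf       # 12 uplinks per leaf
        leaves = math.ceil(client_ports / down_per_leaf)
        uplink_cables = leaves * up_per_leaf      # leaf-to-spine cables
        spines = math.ceil(uplink_cables / radix)
        return leaves, spines, uplink_cables

    leaves, spines, cables = fat_tree_size(288)
    print(leaves, spines, leaves + spines, cables)
    # -> 24 leaves, 12 spines, 36 switches, 288 leaf-to-spine cables

In a chassis switch those roughly 288 leaf-to-spine links sit on the
backplane; built from discrete 24-port switches they all become external
cables and connectors, which is exactly the complexity described above.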
> You can also create a CLOS fabric that has a blocking factor, which,
> when done correctly, can reduce the number of switches required and
> still provide great bandwidth and consistent latency. This is also
> something your Voltaire representative should be able to go over with
> you in detail.
> 
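For a feel of what a blocking factor buys, a small illustrative calculation
follows. The 16-down/8-up leaf split (a 2:1 blocking factor) and the
roughly 230-client count are assumptions taken only for this example, and
the sizing ignores how uplinks are striped across spines:

    import math

    radix = 24
    down, up = 16, 8                   # assumed leaf split: 2:1 oversubscription
    clients = 230                      # approximate client count from the question

    blocking = down / up                         # 2.0, i.e. "2:1"
    leaves = math.ceil(clients / down)           # 15 leaf switches
    spines = math.ceil(leaves * up / radix)      # 5 spine switches
    print(blocking, leaves, spines, leaves + spines)
    # -> 2.0, 15 leaves, 5 spines, 20 switches

Under the same assumptions, a fully non-blocking layout for those clients
works out to roughly 20 leaves and 10 spines (about 30 switches), so a 2:1
blocking factor cuts the switch count by about a third at the cost of
oversubscribed leaf uplinks.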
> For the Sun Data Center Switch 3x24, think of it as it is described:
> three 24-port switches, but with the advantage of fewer cable
> connections compared to traditional 24-port switches. That reduction
> in the number of cables can ease building the 288-port fabric I
> describe above out of conventional 24-port switches. When building
> your fabric, care needs to be taken with how connections are made so
> that you maintain the blocking factor you choose. Be sure to work out
> the details with your Sun representative; they should be able to
> formulate a fabric layout that meets your performance requirements.
> 
> Regards,
> Mike
> 
> On Aug 26, 2008, at 8:46 PM, Minh Hien wrote:
> 
> > Dear all,
> > I'll have around 230 Lustre clients for 30 TB over InfiniBand. I
> > wonder what kind of switch should be used to maintain performance
> > and reliability at that scale of clients.
> >
> > Currently, I have 4 choices. The first 3 choices use a 288-port
> > switch: the Voltaire Grid Director ISR 9288 (10 Gb/s InfiniBand),
> > the Grid Director ISR 2012 (20 Gb/s InfiniBand), or the Sun Data
> > Center Switch 3x24.
> >
> > The last choice is to use smaller switches and connect them up,
> > such as the 96-port Voltaire Grid Director ISR 9096. It seems a
> > cheaper solution; however, I suspect it could raise performance
> > issues.
> >
> > With your experience, could you give me some guidelines on how to
> > design the Lustre client fabric in this case? Thank you very much.
> >
> > _______________________________________________
> > Lustre-discuss mailing list
> > Lustre-discuss at lists.lustre.org
> > http://lists.lustre.org/mailman/listinfo/lustre-discuss