[Lustre-discuss] Disk array setup opinions

Indivar Nair indivar.nair at techterra.in
Mon Mar 11 09:37:42 PDT 2013


Yes, that's good stuff. Anyone planning to implement Lustre for the first
time should definitely read it.
It helps you visualize your storage requirements very nicely.

And yes, the examples are similar.
The configuration options are quite common when using Dell storage.

It is also covered in the 'Lustre 2.0 Operations Manual'
(821-2076-10.pdf),
10.1.1 Selecting Storage for the MDS or OSTs, Page 175.
*(It may have moved to a different page in newer versions of the manual.)*

I had implemented a similar configuration (with fewer disks) for one of my
clients.
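
For anyone working through the same numbers, here is a rough sketch of the
alignment arithmetic from the quoted thread below (plain Python; the 4KB
ldiskfs block size and the idea of carrying the resulting stride /
stripe-width values into the OST format options are my assumptions - see the
Operations Manual for the exact mkfs options):

    # Alignment arithmetic for a 10-disk RAID6 OST (8 data + 2 parity),
    # matched to Lustre's 1MB I/O size. The disk counts are the example
    # numbers from the thread below; adjust them to your own layout.
    LUSTRE_IO_SIZE = 1024 * 1024              # 1MB Lustre I/O size
    FS_BLOCK_SIZE = 4096                      # assumed 4KB ldiskfs block size

    disks_per_array = 10
    parity_disks = 2                          # RAID6
    data_disks = disks_per_array - parity_disks   # 8 data disks

    segment_size = LUSTRE_IO_SIZE // data_disks   # 131072 bytes = 128KB
    stride = segment_size // FS_BLOCK_SIZE        # 32 blocks per disk chunk
    stripe_width = stride * data_disks            # 256 blocks per full stripe

    print("RAID segment / chunk size : %d KB" % (segment_size // 1024))
    print("stride                    : %d blocks" % stride)
    print("stripe width              : %d blocks" % stripe_width)

It simply confirms the 128KB segment size discussed below and gives the
equivalent values in filesystem blocks for the OST format options.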

Regards,


Indivar Nair



On Mon, Mar 11, 2013 at 9:20 PM, Christopher J. Walker <
C.J.Walker at qmul.ac.uk> wrote:

> On 11/03/13 15:30, Indivar Nair wrote:
> > The best configuration to go for depends on your file sizes,
> > file count and read / write patterns.
> > But broadly speaking, you are right, Jerome.
> >
> > *In general -*
> > -----------------
> >
> > The best RAID configuration is one that aligns with the
> > 1MB I/O size of Lustre.
> >
> > Say you have 1 x MD3200 and 4 x MD1200 expansion arrays. That would give
> > you 60 disks.
> > So the best option here would be to create 6 RAID6 arrays of 10 disks each.
> > In this case, you would end up with 6 RAID6 arrays = 6 LUNs = 6 OSTs.
> >
> > In each 10-disk RAID6 array, you get 8 data disks and 2 parity disks.
> > Now if you divide the 1MB I/O size by 8 data disks, you get *128KB* -
> > the segment / chunk size you should go for.
> > This config aligns the I/O reads / writes, giving you the best
> > performance possible with this disk set.
> >
> > Divide the 6 OSTs among the 2 OSS nodes, and configure the nodes to act
> > as failover partners for each other.
> > Just ensure that each node has enough RAM to support all 6 OSTs, in case
> > the other node fails.
> >
>
>
>
> http://content.dell.com/uk/en/enterprise/d/hpcc/cambridge-hpc-solution-centre
>
> Has links to a couple of white papers with Dell MD3200s and MD1200s in a
> failover configuration. They use 2*(MD3200 + 4*MD1200 + server) to give a
> failover solution - which looks exactly like the setup you describe.
>
> Chris
>
>
> > Hope this helps.
> >
> > Regards,
> >
> >
> > Indivar Nair
> >
> >
> > On Mon, Mar 11, 2013 at 6:31 PM, Jerome, Ron <Ron.Jerome at ssc-spc.gc.ca
> > <mailto:Ron.Jerome at ssc-spc.gc.ca>> wrote:
> >
> >     I am currently having a debate about the best way to carve up Dell
> >     MD3200s to be used as OSTs in a Lustre file system, and I invite
> >     this community to weigh in...
> >
> >     I am of the opinion that it should be set up as multiple RAID groups,
> >     each having a single LUN, with each RAID group representing an OST,
> >     while my colleague feels that it should be set up as a single RAID
> >     group across the whole array with multiple LUNs, with each LUN
> >     representing an OST.
> >
> >     Does anyone in this group have an opinion (one way or another)?
> >
> >     Regards,
> >
> >     Ron Jerome