[Lustre-discuss] Solid State MDT

Paul Cote paul.cote at sicortex.com
Mon Apr 13 06:31:34 PDT 2009


Hello

 > We compared the metadata rates of that with the ones we get from our
 > MDT on a DDN 9500 with write-back cache on

This is interesting since the same topic was raised here ... I'm 
curious, though, about the details of your MDT on the DDN array. Did 
you dedicate one full tier, which would, unfortunately, waste a lot of 
capacity? Or allocate a relatively small capacity (100GB?) across many 
tiers, which would, in turn, compete for I/O with the OSTs? Any insight 
would be appreciated ... I'm looking for best practices for configuring 
the MDT on the DDN; a rough sketch of what I have in mind is below.
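
For reference, here is roughly what I'm considering. The device path, 
filesystem name, and LUN size are placeholders, and the exact options 
depend on your Lustre version:

    # carve a small (~100GB) LUN out of the DDN for metadata only,
    # then format and mount it as the MDT (co-located MGS shown here)
    mkfs.lustre --fsname=scratch --mgs --mdt /dev/mapper/ddn_mdt_lun
    mount -t lustre /dev/mapper/ddn_mdt_lun /mnt/mdt

And for comparing metadata rates before and after, I'd probably run 
something like mdtest (the counts and path are just examples):

    # create/stat/unlink 1000 files per task, 3 iterations
    mdtest -n 1000 -i 3 -d /mnt/scratch/mdtest-dir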

thanks,
/pgc

Sarp Oral (oad) wrote:
> We have played around with that idea in one way or another a few 
> times in the past. It didn’t seem to be cost-effective.
>
> We tried a RamSan device (a 300, if I am not mistaken) as an MDT 
> almost two years ago. We compared the metadata rates of that with the 
> ones we get from our MDT on a DDN 9500 with write-back cache on. The 
> DDN (simply a big cache with a RAID 5 magnetic disk set behind it) 
> turned out to be a more cost-effective solution for our installation 
> and use cases.
>
>
> We haven’t evaluated any SSDs as an MDT since then, as far as I can 
> remember.
>
>
> Sarp
>
>
> On 4/9/09 6:07 PM, "Jordan Mendler" <jmendler at ucla.edu> wrote:
>
>     Has anyone done any testing of modern SSD drives as an MDT for
>     Lustre 1.6? Searching through the archives, it seems that most of
>     the posts related to SSDs are either incomplete or slightly dated.
>
>     Does anyone have any input as to how they would compare to 15k RPM
>     drives, and at what deployment size the metadata performance gain
>     would become noticeable? We are currently using Lustre as a small
>     scratch space, and we initially deployed our MDT as a 4x7200 RPM
>     SATA RAID10 internal to the MDS. Metadata slowdowns have become
>     apparent during heavy use and/or small-file operations, so we are
>     currently deliberating which upgrade path to take.
>
>     As of now, our deployment is pretty small:
>     - 4 OSSs, each with a 4x1TB RAID10 OST on disks internal to the
>       OSS. We will increase the number of these as the system grows.
>     - ~50 clients that read/write large files striped across all
>       OSSs. This will grow 2-4x in the next several months.
>     - We are currently on GigE, but will be switching to DDR-4x IB
>       very soon.
>
>     Thanks,
>     Jordan
>