[Lustre-discuss] Help needed for an upcoming HPC project

Mertol Ozyoney Mertol.Ozyoney at Sun.COM
Sun Oct 21 11:39:49 PDT 2007


Thanks for the quick answer

Yes, I know that ZFS will be the preferred OSS file system sooner or later,
but this may take some time (although Sun has very active development going
on around ZFS). ZFS is an excellent volume manager and has a lot of
advantages over hardware RAID controllers at the moment. ZFS can easily be
configured to tolerate enclosure and hard disk failures (see the sketch
below). However, ZFS only supports active/passive failover and needs Sun
Cluster, which is rather expensive at the moment.
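
For example, mirroring across enclosures protects against the loss of a
whole shelf. A minimal sketch only; the pool name and device names are
hypothetical, assuming two enclosures seen as controllers c0 and c1:

    # Each mirror pairs a disk from enclosure c0 with one from c1, so
    # the pool survives any single disk or either whole enclosure:
    zpool create tank \
        mirror c0t0d0 c1t0d0 \
        mirror c0t1d0 c1t1d0 \
        mirror c0t2d0 c1t2d0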

I agree with you that SAS-connected JBOD enclosures are the future, and Sun
has a few on the roadmap. However, the X4500 is the only hardware we have in
the portfolio today other than the conventional, expensive arrays. (The
X4500 performs better in HPC environments than high-end disk systems do.)

We will soon have an extensive JBOD family tailored to run ZFS, but I don't
know if I can align them with this project.

Thanks for sharing your thoughts

Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at Sun.COM



-----Original Message-----
From: lustre-discuss-bounces at clusterfs.com
[mailto:lustre-discuss-bounces at clusterfs.com] On Behalf Of Jim Garlick
Sent: Sunday, October 21, 2007 9:20 PM
To: Mertol Ozyoney
Cc: lustre-discuss at clusterfs.com
Subject: Re: [Lustre-discuss] Help needed for an upcoming HPC project

If I were purchasing 800TB of storage for Lustre soon, I would want
to take into consideration the fact that ZFS is the planned back end
file system for Lustre in the next year or so.

From what I understand, the best configuration for ZFS is lots of
independent disks with ZFS doing RAID in software.  The thumper provides
this but with no ability to fail the disks over to another node.
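
On a thumper that would mean striping a number of small raidz groups
across its 48 drives, something like the sketch below (device names are
hypothetical, and the pattern would be extended to cover all the disks):

    # Two 8-disk raidz2 groups; each group tolerates two disk failures.
    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0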

We have used DDN S2A fibre-channel attached storage in our large Lustre
file systems to date.  These are expensive, I guess because of the 
hardware RAID implementation, which is not needed with ZFS.  However, 
we don't have to worry about JBOD enclosure management, detecting failed 
disks, etc. because this is all handled by the S2A firmware.
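
With ZFS on a JBOD, that work falls to the administrator. The manual
workflow is roughly the following (pool name and device are hypothetical):

    # Show only pools that are degraded or faulted:
    zpool status -x
    # After physically swapping the bad drive in the same slot,
    # start a resilver onto the replacement:
    zpool replace tank c0t3d0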

So it seems like a configuration based on dual-hosted SAS JBODs may be
the way of the future, but I wouldn't do 800 TB of that without a good
plan for enclosure and drive hardware management!

Maybe Sun can help us out here with a supported product since they have
taken over CFS and are also in the storage/ZFS business.  (It must run Linux
though :-)

Jim

On Sun, 21 Oct 2007, Mertol Ozyoney wrote:

> Hi all;
>
>
>
> One of our HPC customers will be adding 800 TB of storage to their HPC
> environment.
>
>
>
> I am in need of some expert advice regarding following questions.
>
>
>
> -         I know that some installations have used Sun Thumper X4500s;
> I'd like to learn the pros and cons of using Sun X4500s.
>
> -         If I use the X4500, how can I provide redundancy? What happens
> if a node fails, and how can I restore it?
>
> -         What are the supported backup applications (Veritas, Legato,
> etc.)? Can we use incremental backups on Lustre?
>
>
>
> regards
>
>
>
>
>
> Mertol Ozyoney
> Storage Practice - Sales Manager
>
> Sun Microsystems, TR
> Istanbul TR
> Phone +902123352200
> Mobile +905339310752
> Fax +902123352222
> Email mertol.ozyoney at Sun.COM
>

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss at clusterfs.com
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss



