[Lustre-discuss] Fwd: Lustre Thumper Fault Tolerance

Mertol Ozyoney Mertol.Ozyoney at Sun.COM
Thu Mar 6 04:40:29 PST 2008


Hi,

You can't do this right now. Network striping will be introduced later.

If you really think you need this kind of redundancy, I recommend you
wait for the upcoming JBODs.

Normally, Lustre can fail out nodes when required, and in HPC
applications speed might be more important than reliability.
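
For example, a classic Lustre failover pair needs storage that both OSS
nodes can reach, which a standalone Thumper does not have. Here is a
rough sketch of the usual Lustre 1.6-style setup; the node names, NIDs,
and device paths are placeholders:

  # Format the OST on shared storage, naming the partner OSS as the
  # failover node
  [oss1]# mkfs.lustre --ost --fsname=testfs --mgsnode=mgs1@tcp0 \
          --failnode=oss2@tcp0 /dev/shared/ost0
  [oss1]# mount -t lustre /dev/shared/ost0 /mnt/ost0

  # If oss1 dies, oss2 mounts the same shared device and serves the OST
  [oss2]# mount -t lustre /dev/shared/ost0 /mnt/ost0

  # Clients list both NIDs of an MGS failover pair at mount time
  [client]# mount -t lustre mgs1@tcp0:mgs2@tcp0:/testfs /mnt/testfs

This is why the JBODs matter: they give two servers a path to the same
disks, which the internal Thumper drives cannot.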

Regards
Mertol

Sent from a mobile device

Mertol Ozyoney

On 06.Mar.2008, at 12:57, Brennan <James.E.Brennan at Sun.COM> wrote:

> Forwarding.
>
> Begin forwarded message:
>
>> From: Brennan <James.E.Brennan at Sun.COM>
>> Date: March 6, 2008 2:36:44 AM PST
>> To: lustre-solutions at sun.com, hpc-aces at sun.com,
>> hpc-storage at sun.com, lustre-discuss at sun.com
>> Subject: Lustre Thumper Fault Tolerance
>>
>> IHAC (I have a customer) that wants about 150 TB usable of
>> Thumpers+Lustre, specifically to feed a compute cluster, and will use
>> SAMFS to go to an SL3000. They want the Lustre filesystem to be at
>> least single-fault tolerant against a complete Thumper failure. They
>> are willing to double the number of Thumpers to achieve this. What
>> are the best practices for this configuration?
>>
>> Jim Brennan
>> Digital Media Systems
>> Sun Systems Group
>> Universal City, CA
>> (310)901-8677
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss