[Lustre-discuss] Future of LusterFS?

Mag Gam magawake at gmail.com
Mon Apr 26 04:29:39 PDT 2010


Speaking of the future: is there any more news about SNS? I think
that's the only thing Lustre is missing to make it "production" ready
and not just for research labs.



On Fri, Apr 23, 2010 at 12:07 PM, Stuart Midgley <sdm900 at gmail.com> wrote:
> Yes, we suffer hardware failures.  All the time.  That is sort of the point of Lustre and a clustered file system :)
>
> We have had double-disk failures with RAID5 (recovered everything except ~1MB of data), server failures, MDS failures, etc.  We successfully recovered from them all.  Sure, it can be a little stressful... but it all works.
>
> If server hardware fails, our file system basically hangs until we fix it.  Our most common failure is obviously disks... and they are all covered by RAID.  Since we have mostly direct-attached disk, there are a few minutes of downtime on a server while we replace the disk... everything continues as normal when the server comes back.
>
> --
> Dr Stuart Midgley
> sdm900 at gmail.com
>
>
>
> On 23/04/2010, at 18:41 , Janne Aho wrote:
>
>> On 23/04/10 11:42, Stu Midgley wrote:
>>
>>>> Would lustre have issues if using cheap off-the-shelf components, or
>>>> would people here think you need to have high-end machines with
>>>> built-in redundancy for everything?
>>>
>>> We run lustre on cheap off-the-shelf gear.  We have 4 generations of
>>> cheapish gear in a single 300TB lustre config (40 OSSes).
>>>
>>> It has been running very very well for about 3.5 years now.
>>
>> This sounds promising.
>>
>> Have you had any hardware failures?
>> If yes, how well has the cluster coped with the loss of the machine(s)?
>>
>>
>> Any advice you can share from your initial setup of lustre?
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
