[Lustre-discuss] OST redundancy between nodes?

Gary Gogick gary at workhabit.com
Fri Jun 19 11:19:29 PDT 2009


Okay - that's what I feared; glad to have it confirmed.

Thanks Kevin, appreciate the quick response. :)

-- 
--------------------------------------------------------------------------------------------------
Gary Gogick
senior systems administrator  |  workhabit,inc.


On Fri, Jun 19, 2009 at 2:15 PM, Kevin Van Maren <Kevin.Vanmaren at sun.com> wrote:

> Gary Gogick wrote:
>
>> Heya all,
>>
>> I'm investigating potential solutions for a storage deployment.  Lustre
>> piqued my interest due to ease of scalability and awesome aggregate
>> throughput potential.
>> Wondering if there's any provision in Lustre for handling catastrophic
>> loss of a node containing an OST; e.g., replication/mirroring of OSTs to
>> other nodes?
>>
>> I'm gathering from the 1.8.0 documentation that there's no protection of
>> this sort for data other than underlying RAID configs on any individual
>> node, at least not without attempting to do some interesting stuff with
>> DRBD.  Just started looking at Lustre over the past day though, so I'd
>> totally appreciate an authoritative answer in case I'm misinterpreting the
>> documentation. :)
>>
>
> Correct.
>
> Lustre failover can be used to survive the catastrophic failure of a _node_,
> but not of the _storage_.  If your configuration makes each LUN available to
> two nodes, Lustre can be configured to keep operating across the failure of
> a server (a minimal configuration sketch follows below the quoted message).
>
> If your LUN fails catastrophically, all the data on that LUN is gone.  It
> is possible to bring Lustre up without it, but none of the files on that OST
> would be available.  If you are concerned about this case, then backups are
> your friend.
>
> While DRBD could be used to make a LUN "available" to two nodes, it will
> have a significant impact on performance, and (AFAIK) does not do
> synchronous replication, so an fsck would be required prior to mounting the
> OST on the second node, and there would be some data loss (a DRBD resource
> sketch also follows below).
>
> Kevin
>
>
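
For the two-node failover case Kevin describes, a minimal sketch of the
shared-storage setup might look like the following.  The hostnames, NIDs, and
device paths (oss1, oss2, mgs1@tcp0, /dev/sdb) are illustrative assumptions,
not taken from the thread:

    # On oss1: format the OST on the shared LUN, naming oss2 as the failover node
    mkfs.lustre --fsname=testfs --ost --mgsnode=mgs1@tcp0 --failnode=oss2@tcp0 /dev/sdb

    # Normal operation: the OST is mounted on the primary server only
    [oss1]# mount -t lustre /dev/sdb /mnt/ost0

    # If oss1 fails, the surviving node mounts the same shared LUN and
    # clients reconnect to oss2 to resume I/O
    [oss2]# mount -t lustre /dev/sdb /mnt/ost0

Note that this only covers the loss of a server: both nodes still depend on the
single copy of the data on the shared LUN, which is Kevin's point above.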
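
For the DRBD approach Kevin mentions, a minimal drbd.conf resource sketch is
below, again with hypothetical hostnames and addresses.  Protocol C is DRBD's
synchronous replication mode; the performance cost Kevin describes still
applies, since every write must be acknowledged by the peer node before it
completes:

    resource ost0 {
      protocol C;              # synchronous: a write completes only once both nodes have it
      device    /dev/drbd0;    # the replicated device presented to Lustre
      disk      /dev/sdb;      # local backing disk on each node
      meta-disk internal;
      on oss1 {
        address 192.168.1.10:7789;
      }
      on oss2 {
        address 192.168.1.11:7789;
      }
    }

With this layout the OST would be formatted on /dev/drbd0 rather than on the
raw disk, and only the node currently holding the DRBD Primary role may mount it.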