[lustre-discuss] How to solve issue when OSS is turned off?

Patrick Farrell paf at cray.com
Sun Nov 11 06:39:59 PST 2018


Default Lustre striping is just straight RAID0, so the data on (say) OST0 is not anywhere else.  You can still access data and files on other OSTs, and you can create files that live on other OSTs, so I don’t think the MDS is useless.  But this is the reason for failover - to ensure you can still access your data despite this sort of issue.
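For example, a minimal sketch using the standard lfs commands (the mount point, file names, and OST index here are made up):

    # show which OSTs hold a file's stripes
    lfs getstripe /mnt/lustre/somefile

    # create a file striped across two OSTs, starting at OST index 3
    lfs setstripe -c 2 -i 3 /mnt/lustre/newfile

If every stripe of a file lands on OSTs behind a live OSS, that file stays readable even while another OSS is down.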

The FLR feature in Lustre 2.11 does allow mirroring of files, but requires manual resyncing of mirrors, so it’s powerful but limited.
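As a rough sketch (the file name is hypothetical), mirroring with FLR looks like this:

    # create a new file whose data is kept in two mirrors
    lfs mirror create -N2 /mnt/lustre/important

    # after an outage or a write, stale mirrors must be brought back
    # in sync by hand; Lustre does not resync them automatically
    lfs mirror resync /mnt/lustre/important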

To be honest, if you’re in a situation where you have highly unreliable hardware/power, I would say there are other file systems (such as Ceph) that will serve you better.  Lustre has significant resiliency capabilities, but it is designed first for performance and does require failover (and the extra setup and cabling that failover requires).  Systems like Ceph are designed specifically with reliability as the first priority, using things like erasure coding to provide data availability through disk target failure.  (They can’t match Lustre on scalability and high-end performance.)
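For completeness, the client side of failover is just a matter of listing the NIDs of both members of a server pair at mount time (the host names and filesystem name here are placeholders, and the servers themselves must have been formatted with --servicenode or --failnode):

    # client mount falls back to mgs2 if mgs1 is unreachable
    mount -t lustre mgs1@tcp:mgs2@tcp:/lustrefs /mnt/lustre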

________________________________
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of shirshak bajgain <shirshak55 at gmail.com>
Sent: Sunday, November 11, 2018 7:49:19 AM
To: lustre-discuss at lists.lustre.org
Subject: [lustre-discuss] How to solve issue when OSS is turned off?

We frequently have power cuts and we are in the testing phase. Suppose one OSS is powered off. Does that mean we cannot mount anything on the Lustre client? And can Lustre not keep working on the other powered-on OSSes?

Like

OSS1 -> OST0 OST1 OST2
OSS2 -> OST3 OST4 OST5
OSS3 -> OST6 OST7 OST8

Is it due to striping, i.e. a file is striped into parts and stored on multiple OSTs? So if one OSS fails (without a failover OSS, etc.), does that mean the MDT/MGS is useless?

Thanks.

