[Lustre-discuss] Regarding redundancy
Brian J. Murrell
Brian.Murrell at Sun.COM
Mon Apr 6 12:55:10 PDT 2009
On Mon, 2009-04-06 at 15:27 -0400, Christopher Deneen wrote:
> A quick note on my setup: 40 OSSes, each with an md0 RAID 1 as the OST
> (2x500GB; AMD X2 5400, 8GB DDR2), and a single MDT/MGS/MDS (dual
> quad-core Xeon, 32GB DDR2, 4TB RAID 6). I have everything mounted and
> working properly and am still going through the basic performance
> tests, but I am confused about the safety of the cluster. I'm not
> concerned about the OSTs, because they are RAID 1 which I can recover
> quickly and monitor, but my question is: what if an OSS goes down?
> Will that cause corruption of the data?
Unless you also lose clients, no. In the event of an OSS going down,
the client will not have gotten the reply back from the OST saying that
its data was actually written to disk. Until the client gets such a
reply, it holds on to that data so that if an OSS does crash, it can
"replay" that transaction. Thus, all data is either physically on disk
or in client memory, ready to be replayed to disk.
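When the OSS comes back up, clients reconnect and replay those
transactions during a recovery window. As a rough sketch of how to
watch that on the OSS (the OST name lustre-OST0000 is an assumption;
list your actual devices with "lctl dl" and substitute):

  # Show recovery state: connected clients, replayed
  # transactions, and whether recovery has completed
  cat /proc/fs/lustre/obdfilter/lustre-OST0000/recovery_status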
The one exception to this is if you have some cache between the OST and
the disk that the OSS doesn't know about. The OSS might believe data is
on disk when it has only reached the disk's volatile cache. Should that
disk lose power, the data is lost and corruption is possible. This is
why we typically recommend disabling write caching on disk arrays
unless they can survive a power event and recover, so that the disk is
fully coherent with what the host thinks should be there.
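For bare SATA disks under md RAID 1 like yours, a minimal sketch of
disabling the on-disk write cache (the device names are assumptions;
substitute the actual members of your md0 array):

  # Turn off the volatile write cache on each RAID 1 member
  hdparm -W0 /dev/sda
  hdparm -W0 /dev/sdb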
> I would also like to know if you can dynamically add new OSS/OSTs to
> the cluster, or do you have to unmount the clients and then remount
> after doing so.
No, you don't need to remount the clients. You just add new OSTs as you
need them.
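As a minimal sketch of bringing a new OST online (the fsname, MGS NID,
and device are assumptions; adjust them to your setup):

  # On the new OSS: format the OST against the existing MGS
  mkfs.lustre --ost --fsname=lustre --mgsnode=10.0.0.1@tcp0 /dev/md0
  # Mount it; it registers with the MGS and clients start
  # striping to it without any client remount
  mount -t lustre /dev/md0 /mnt/ost

From a client, "lfs df" should show the new OST shortly afterwards.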
b.