[Lustre-discuss] MD1000 woes and OSS migration suggestions
nick at creativemotiondesign.com
Tue Dec 29 18:33:11 PST 2009
We've been using an MD1000 as our storage array for close to a year
now, hooked up to a single OSS (LVM+ldiskfs). I recently ordered two
more servers: one to be connected to the MD1000 to help distribute the
load, the other to act as a Lustre client (web node).
The hosting company informs me that the MD1000 was never set up to
operate in split mode (which I asked for in the beginning), so
only one server can be connected to it.
I'm now faced with a tough call: we can't bring the filesystem down
for any extended period (a few minutes is OK, though zero downtime
would be perfect!), and I'm not sure how to proceed in a way that
would cause the least amount of headache.
The only thing I can think of is to set up a second MD1000 (configured
for split mode), connect it to OSS2 (the new server, which is not yet
in use), add it to the Lustre filesystem, and then somehow migrate the
data from OSS1 (old MD1000) to OSS2 (new MD1000). Then bring OSS1
offline, connect it to the second partition of the new MD1000, and
bring that end online once more.
I've never done anything like this and am not entirely sure whether
this is the best method. Any suggestions, alternatives, docs, or
things to look out for would be greatly appreciated.
Director of Technology
Creative Motion Design