[Lustre-discuss] Clustered MDS & OSS Servers
Jagga Soorma
jagga13 at gmail.com
Tue Jan 19 09:18:43 PST 2010
How would the OSSes and clients communicate with the MDS server in a
failover situation?
This is how I am doing things:
mds01: mkfs.lustre --fsname=fsname --mdt --mgs /dev/vgname/lvname
oss01: mkfs.lustre --ost --fsname=fsname --failnode=oss02@o2ib3 --mgsnode=mds01@o2ib3 /dev/mapper/mpath0
oss02: mkfs.lustre --ost --fsname=fsname --failnode=oss01@o2ib3 --mgsnode=mds01@o2ib3 /dev/mapper/mpath0
client01: mount -t lustre mds01-ib@o2ib3:/fsname /mnt
Now, if mds01 fails over to mds02, how would the client communicate with the
new MDS server if the IP changes?
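If I understand correctly, the client does not need a virtual IP at all: it
can be given both MGS NIDs at mount time, separated by a colon, and will try
the second NID if the first is unreachable. A sketch of what I mean, assuming
mds02-ib@o2ib3 is the standby MDS's NID:

client01: mount -t lustre mds01-ib@o2ib3:mds02-ib@o2ib3:/fsname /mnt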
What would the mkfs.lustre commands look like for an HA setup for the MDS and OSS servers?
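Here is my guess, assuming mds02@o2ib3 as the NID of the standby MDS; please
correct me if I have this wrong:

mds01: mkfs.lustre --fsname=fsname --mdt --mgs --failnode=mds02@o2ib3 /dev/vgname/lvname
oss01: mkfs.lustre --ost --fsname=fsname --failnode=oss02@o2ib3 --mgsnode=mds01@o2ib3 --mgsnode=mds02@o2ib3 /dev/mapper/mpath0
oss02: mkfs.lustre --ost --fsname=fsname --failnode=oss01@o2ib3 --mgsnode=mds01@o2ib3 --mgsnode=mds02@o2ib3 /dev/mapper/mpath0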
Also, is there a downside to using a virtual IP for the MDSes?
Thanks in advance for your assistance.
-J
On Tue, Jan 19, 2010 at 2:43 AM, Andreas Dilger <adilger at sun.com> wrote:
> On 2010-01-19, at 13:01, Jagga Soorma wrote:
>
>> I am working on clustering our MDS & OSS servers and wanted to make sure I
>> understand this correctly. Can you please let me know if this sounds right:
>>
>> a) Planning on having a floating virtual IP set up on the active MDS server
>> (ib1:1). This is what the OSSes will use when doing their mkfs. In an
>> outage this virtual IP address will migrate to the standby node.
>>
>
> This is not how Lustre failover works. You need to assign a separate IP
> address for each MDS server. Lustre handles multiple MDS failover nodes
> itself.
>
>
>> b) On the OSSes there is no need for a virtual IP that would need to fail
>> over in an outage. I would simply have heartbeat mount the filesystems on
>> the other OSS node.
>>
>
> Cheers, Andreas
> --
> Andreas Dilger
> Sr. Staff Engineer, Lustre Group
> Sun Microsystems of Canada, Inc.
>
>