[Lustre-discuss] Enabling mds failover after filesystem creation
jeff.johnson at aeoncomputing.com
Tue Jun 14 12:18:25 PDT 2011
Apologies, I should have been more descriptive.
I am running a dedicated MGS node and MGT device. The MDT is a
standalone RAID-10 shared via SAS between two nodes, one being the
current MDS and the second being the planned secondary MDS. Heartbeat
and STONITH (with IPMI control) are configured but not yet started
between the two nodes.
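To make the intended change concrete, here is the sequence I have in
mind. The NIDs are from my setup, the device path is a placeholder, and
the writeconf step is my best reading of the 1.8 manual rather than
something I have confirmed:

```shell
# On the primary MDS, with the MDT unmounted, record the secondary
# MDS NID as a failover peer. Assumption: this is the same
# failover.node parameter that the OST examples in the manual use.
umount /mnt/mdt
tunefs.lustre --param="failover.node=10.0.1.3@o2ib" /dev/<mdt device>

# The 1.8 manual suggests an existing filesystem may also need a
# writeconf to regenerate the configuration logs (unconfirmed):
# tunefs.lustre --writeconf /dev/<mdt device>

# Since the MGS is a dedicated node here, the client mount string
# should not need to change: clients keep mounting via the MGS NID
# and learn the MDS failover peer from the MGS configuration.
```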
On 6/14/11 12:12 PM, Cliff White wrote:
> It depends - are you using a combined MGS/MDS?
> If so, you will have to update the mgsnid on all servers to reflect
> the failover node, and change the client mount string to show the
> failover node. Otherwise, it's the same procedure as with an OST.
> On Tue, Jun 14, 2011 at 12:06 PM, Jeff Johnson
> <jeff.johnson at aeoncomputing.com
> <mailto:jeff.johnson at aeoncomputing.com>> wrote:
> I am attempting to add mds failover operation to an existing v1.8.4
> filesystem. I have heartbeat/stonith configured on the mds nodes. What
> is unclear is what to change in the Lustre parameters. I have read
> the 1.8.x and 2.0 manuals, and neither is clear on exactly how to
> enable MDS failover on an existing filesystem.
> Do I simply run the following on the primary mds node and specify the
> NID of the secondary mds node?
> tunefs.lustre --param="failover.node=10.0.1.3@o2ib" /dev/<mdt device>
> where: 10.0.1.2=primary mds, 10.0.1.3=secondary mds
> All of the examples for enabling failover via tunefs.lustre are
> for OSTs
> and I want to be sure that there isn't a different procedure for
> the MDS
> since it can only be active/passive.
> Jeff Johnson
> Aeon Computing
> www.aeoncomputing.com <http://www.aeoncomputing.com>
> 4905 Morena Boulevard, Suite 1313 - San Diego, CA 92117
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> <mailto:Lustre-discuss at lists.lustre.org>
> Support Guy
> WhamCloud, Inc.
> www.whamcloud.com <http://www.whamcloud.com>