[Lustre-discuss] [EXTERNAL] Lustre on ZFS MDS/MDT failover

Mervini, Joseph A jamervi at sandia.gov
Mon Dec 1 10:43:19 PST 2014


Oh - BTW, you will need to do the same thing with your OSTs to set both mgsnodes. 
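
For example, assuming the MGS NIDs live in the lustre:mgsnode dataset property and use the same colon-separated NID format as the failover.node property below (the NIDs and pool/dataset names here are hypothetical):

zfs set lustre:mgsnode=10.0.0.1@tcp:10.0.0.2@tcp ostpool/ost0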

Also, you can use zfs get all <zpool name>/<zpool volume> to see the same info as you would with tunefs.lustre.
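
For example, filtering for the Lustre properties (hypothetical pool/dataset name, illustrative values):

zfs get all metapool/mdt0 | grep lustre:
metapool/mdt0  lustre:svname         lustre-MDT0000             local
metapool/mdt0  lustre:mgsnode        10.0.0.1@tcp:10.0.0.2@tcp  local
metapool/mdt0  lustre:failover.node  10.0.0.4@tcp               local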


====

Joe Mervini
Sandia National Laboratories
High Performance Computing
505.844.6770
jamervi at sandia.gov



On Dec 1, 2014, at 11:38 AM, Joe Mervini <jamervi at sandia.gov> wrote:

> I just ran into this same issue last week. There is a JIRA ticket on it at Intel, but in a nutshell, mkfs.lustre on ZFS will only record the last mgsnode you specify on the command line. To add the additional failover node you can use the zfs command to update the configuration:
> 
> zfs set lustre:failover.node=<mgsnode1>@<network>:<mgsnode2>@<network> <zpool name>/<zpool volume>
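> 
> For example, with hypothetical NIDs and a hypothetical pool/dataset name:
> 
> zfs set lustre:failover.node=10.0.0.1@tcp:10.0.0.2@tcp metapool/mdt0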
> 
> Hope this helps.
> ====
> 
> Joe Mervini
> Sandia National Laboratories
> High Performance Computing
> 505.844.6770
> jamervi at sandia.gov
> 
> 
> 
> On Dec 1, 2014, at 10:41 AM, Ron Croonenberg <ronc at lanl.gov> wrote:
> 
>> Hello,
>> 
>> We're running/building Lustre on ZFS, and I noticed that creating the MDT with mkfs.lustre on a zpool with two --mgsnode parameters, one for the MGS and one for the MGS failover, causes a problem: the resulting MDT cannot be mounted (example command below). (I think it tries to connect to the failover node instead of the actual MGS.)
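>> 
>> Something like the following (hypothetical names and NIDs):
>> 
>> mkfs.lustre --mdt --backfstype=zfs --fsname=lustre --index=0 \
>>     --mgsnode=10.0.0.1@tcp --mgsnode=10.0.0.2@tcp metapool/mdt0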
>> 
>> With ldiskfs the same thing just works, and I can mount the MDT.
>> 
>> For MDT/MDS failover, is it enough to just specify the --failnode parameter, or does the --mgsnode parameter need to be specified too?
>> 
>> thanks,
>> 
>> Ron
> 



