[Lustre-discuss] [EXTERNAL] Lustre on ZFS MDS/MDT failover

Ron Croonenberg ronc at lanl.gov
Tue Dec 2 09:29:36 PST 2014


ah...  cool....

yes that would definitely help.

thanks!!




On 12/01/2014 11:38 AM, Mervini, Joseph A wrote:
> I just ran into this same issue last week. There is a JIRA ticket on it at Intel but in a nutshell mkfs.lustre on zfs will only record the last mgsnode you specify in your command. To add an additional fail node you can use the zfs command to update the configuration:
>
> zfs set lustre:failover.node=<mgsnode1>@<network>:<mgsnode2>@<network> <zpool name>/<zpool volume>
>
> Hope this helps.
> ====
>
> Joe Mervini
> Sandia National Laboratories
> High Performance Computing
> 505.844.6770
> jamervi at sandia.gov
>
>
>
> On Dec 1, 2014, at 10:41 AM, Ron Croonenberg <ronc at lanl.gov> wrote:
>
>> Hello,
>>
>> We're running/building Lustre on ZFS, and I noticed that creating the MDT with mkfs.lustre on a zpool using two --mgsnid parameters (one for the MGS and one for the MGS failover) causes a problem: the MDT cannot be mounted afterwards. (I think it tries to connect to the failover node instead of the actual MGS.)
>>
>> In ldiskfs it just works and I can mount the MDT.
>>
>> For MDT/MDS failover, is it enough to specify just the --failnode parameter, or does the --mgsnid parameter need to be specified too?
>>
>> thanks,
>>
>> Ron
>> _______________________________________________
>> Lustre-discuss mailing list
>> Lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
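As a concrete illustration of the workaround Joe describes, the commands below sketch how the ZFS user property might be set and checked on an existing MDT dataset. The pool/dataset name (mdtpool/mdt0) and the NIDs (10.0.0.1@tcp, 10.0.0.2@tcp) are hypothetical placeholders, not values from this thread; substitute your own.

```shell
# Hypothetical sketch of the workaround: record both failover NIDs
# on the ZFS dataset backing the MDT. A colon separates the two NIDs.
# Pool/dataset name and NIDs below are placeholders.
zfs set lustre:failover.node=10.0.0.1@tcp:10.0.0.2@tcp mdtpool/mdt0

# Confirm the property was recorded on the dataset:
zfs get lustre:failover.node mdtpool/mdt0
```

Because the value is stored as an ordinary ZFS user property, it can be updated in place with another `zfs set` without reformatting the target.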


