[Lustre-discuss] More failover issues
Wojciech Turek
wjt27 at cam.ac.uk
Mon Nov 12 14:23:34 PST 2007
Yes, but in the example given in section 2.2.2.1 the two mgsnodes are
specified for --ost, while you are specifying them for --mdt; maybe that
is the problem? Do you have a combined MGS and MDT? Do you have one file
system or more?
Wojciech
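As an aside (not from the thread): a quick way to sanity-check how many MGS NIDs actually landed on disk is to count the mgsnode= entries in the "Parameters:" line that mkfs.lustre prints. This is a minimal sketch using the output quoted below in this thread as sample input; reading the line back with `tunefs.lustre --dryrun <device>` is an assumption about what is available on your servers.

```shell
# Sample "Parameters:" line taken from the mkfs.lustre output quoted in
# this thread; in practice you could read it back from the target with
# `tunefs.lustre --dryrun <device>` (read-only), if that tool is installed.
params='Parameters: mgsnode=192.168.1.253@o2ib failover.node=192.168.1.252@o2ib mdt.group_upcall=/usr/sbin/l_getgroups'

# Count how many mgsnode= entries were actually recorded on the target.
# If both --mgsnode options had been kept, this would be 2.
count=$(printf '%s\n' "$params" | grep -o 'mgsnode=' | wc -l)
echo "$count"   # prints 1: only one MGS NID was recorded
```

A count of 1 on a target formatted with two --mgsnode options would confirm the symptom Robert describes below.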
On 12 Nov 2007, at 21:56, Robert LeBlanc wrote:
> Yes, only one MGS per site, but you should be able to specify
> multiple MGS nodes. We have done it before with 1.6.0. See
> http://manual.lustre.org/manual/LustreManual16_HTML/DynamicHTML-05-1.html,
> section 2.2.2.1.
>
> Robert
>
>
> On 11/12/07 2:48 PM, "Wojciech Turek" <wjt27 at cam.ac.uk> wrote:
>
>> Hi,
>>
>> I think this is because there can be only one MGS per Lustre
>> installation (that is what the manual says).
>>
>> Wojciech Turek
>> On 12 Nov 2007, at 21:18, Robert LeBlanc wrote:
>>
>>>
>>>
>>> This is what I'm getting:
>>>
>>> head2-2:~# mkfs.lustre --mkfsoptions="-O dir_index" --reformat --mdt \
>>>   --fsname=home --mgsnode=192.168.1.252@o2ib --mgsnode=192.168.1.253@o2ib \
>>>   --failnode=192.168.1.252@o2ib /dev/mapper/ldiskd-part1
>>>
>>> Permanent disk data:
>>> Target: home-MDTffff
>>> Index: unassigned
>>> Lustre FS: home
>>> Mount type: ldiskfs
>>> Flags: 0x71
>>> (MDT needs_index first_time update )
>>> Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
>>> Parameters: mgsnode=192.168.1.253@o2ib
>>> failover.node=192.168.1.252@o2ib mdt.group_upcall=/usr/sbin/l_getgroups
>>>
>>> device size = 972MB
>>> formatting backing filesystem ldiskfs on /dev/mapper/ldiskd-part1
>>> target name home-MDTffff
>>> 4k blocks 0
>>> options -O dir_index -i 4096 -I 512 -q -F
>>> mkfs_cmd = mkfs.ext2 -j -b 4096 -L home-MDTffff -O dir_index
>>>   -i 4096 -I 512 -q -F /dev/mapper/ldiskd-part1
>>> Writing CONFIGS/mountdata
>>>
>>>
>>> For some reason, only the last --mgsnode option is being kept.
>>>
>>> Robert
>>>
>>>
>>> -----Original Message-----
>>> From: Nathan Rutman [mailto:Nathan.Rutman at Sun.COM]
>>> Sent: Mon 11/12/2007 1:51 PM
>>> To: Robert LeBlanc
>>> Cc: lustre
>>> Subject: Re: [Lustre-discuss] More failover issues
>>>
>>> Robert LeBlanc wrote:
>>> > In 1.6.0, when creating an MDT, you could specify multiple
>>> > --mgsnode options and it would fail over between them. 1.6.3 only
>>> > seems to take the last one, and
>>> > --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib doesn't seem to
>>> > fail over to the other node. Any ideas how to get around this?
>>> >
>>> Multiple --mgsnode parameters should work:
>>> mkfs.lustre --mkfsoptions="-O dir_index" --reformat --mdt \
>>>   --mgsnode=192.168.1.253@o2ib --mgsnode=1@elan \
>>>   --device-size=10000 /tmp/foo
>>>
>>> Permanent disk data:
>>> Target: lustre-MDTffff
>>> Index: unassigned
>>> Lustre FS: lustre
>>> Mount type: ldiskfs
>>> Flags: 0x71
>>> (MDT needs_index first_time update )
>>> Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
>>> Parameters: mgsnode=192.168.1.253@o2ib mgsnode=1@elan
>>>
>>> > Robert
>>> >
>>> > Robert LeBlanc
>>> > College of Life Sciences Computer Support
>>> > Brigham Young University
>>> > leblanc at byu.edu
>>> > (801)422-1882
>>> >
>>> >
>>> > _______________________________________________
>>> > Lustre-discuss mailing list
>>> > Lustre-discuss at clusterfs.com
>>> > https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
>>> >
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Lustre-discuss mailing list
>>> Lustre-discuss at clusterfs.com
>>> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
>>>
>>>
>>>
>>> Mr Wojciech Turek
>>> Assistant System Manager
>>> University of Cambridge
>>> High Performance Computing service
>>> email: wjt27 at cam.ac.uk
>>> tel. +441223763517
>>>
>>>
>
>
> Robert LeBlanc
> College of Life Sciences Computer Support
> Brigham Young University
> leblanc at byu.edu
> (801)422-1882
>
Mr Wojciech Turek
Assistant System Manager
University of Cambridge
High Performance Computing service
email: wjt27 at cam.ac.uk
tel. +441223763517