[Lustre-discuss] More failover issues

Robert LeBlanc robert at leblancnet.us
Mon Nov 12 14:11:35 PST 2007


Moreover, tunefs returns:

head2-2:~# tunefs.lustre --mgsnode=192.168.1.253@o2ib --mgsnode=192.168.1.252@o2ib --writeconf /dev/mapper/ldiskd-part1
checking for existing Lustre data: found CONFIGS/mountdata
Reading CONFIGS/mountdata

   Read previous values:
Target:     home-MDT0000
Index:      0
Lustre FS:  home
Mount type: ldiskfs
Flags:      0x101
              (MDT writeconf )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:  failover.node=192.168.1.252@o2ib mdt.group_upcall=/usr/sbin/l_getgroups mgsnode=192.168.1.253@o2ib


   Permanent disk data:
Target:     home-MDT0000
Index:      0
Lustre FS:  home
Mount type: ldiskfs
Flags:      0x101
              (MDT writeconf )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:  failover.node=192.168.1.252@o2ib mdt.group_upcall=/usr/sbin/l_getgroups   mgsnode=192.168.1.252@o2ib

Writing CONFIGS/mountdata


Notice that there are two spaces between the mdt.group_upcall and mgsnode parameters in the permanent disk data. If I specify only one mgsnode, there is just one space. I suspect there is something buggy in the parameter parser.
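As a toy illustration of the failure mode I'm guessing at (this is not the actual tunefs.lustre code, just a hypothetical sketch using the parameter string from the output above):

```shell
# Hypothetical sketch of the suspected bug, NOT Lustre source code.
# The stored parameter string, as shown under "Read previous values":
params="failover.node=192.168.1.252@o2ib mdt.group_upcall=/usr/sbin/l_getgroups mgsnode=192.168.1.253@o2ib"
new="mgsnode=192.168.1.252@o2ib"
# A naive substitution deletes the old mgsnode token but leaves the
# space that preceded it in place...
stripped=$(echo "$params" | sed 's/mgsnode=[^ ]*//')
# ...so appending the new token yields the double space seen above.
echo "$stripped $new"
```

If the parameter-update code does anything like this, dropping one mgsnode and appending another would leave exactly the extra whitespace shown.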

Robert


-----Original Message-----
From: lustre-discuss-bounces at clusterfs.com on behalf of Robert LeBlanc
Sent: Mon 11/12/2007 2:18 PM
To: Nathan Rutman
Cc: lustre
Subject: Re: [Lustre-discuss] More failover issues
 
This is what I'm getting:

head2-2:~# mkfs.lustre --mkfsoptions="-O dir_index" --reformat --mdt --fsname=home --mgsnode=192.168.1.252@o2ib --mgsnode=192.168.1.253@o2ib --failnode=192.168.1.252@o2ib /dev/mapper/ldiskd-part1

   Permanent disk data:
Target:     home-MDTffff
Index:      unassigned
Lustre FS:  home
Mount type: ldiskfs
Flags:      0x71
              (MDT needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:  mgsnode=192.168.1.253@o2ib failover.node=192.168.1.252@o2ib mdt.group_upcall=/usr/sbin/l_getgroups

device size = 972MB
formatting backing filesystem ldiskfs on /dev/mapper/ldiskd-part1
        target name  home-MDTffff
        4k blocks     0
        options       -O dir_index -i 4096 -I 512 -q -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L home-MDTffff -O dir_index -i 4096 -I 512 -q -F /dev/mapper/ldiskd-part1
Writing CONFIGS/mountdata


For some reason, only the last --mgsnode option is being kept.
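In case it helps anyone reproduce this, the stored parameters can be re-read afterwards without touching the disk (a sketch; --dryrun only reports what is already recorded):

```shell
# Re-read what mkfs.lustre actually recorded in CONFIGS/mountdata,
# without modifying anything on disk (--dryrun is aliased as --print).
tunefs.lustre --dryrun /dev/mapper/ldiskd-part1
```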

Robert


-----Original Message-----
From: Nathan Rutman [mailto:Nathan.Rutman at Sun.COM]
Sent: Mon 11/12/2007 1:51 PM
To: Robert LeBlanc
Cc: lustre
Subject: Re: [Lustre-discuss] More failover issues

Robert LeBlanc wrote:
> In 1.6.0, when creating a MDT, you could specify multiple --mgsnode options
> and it would failover between them. 1.6.3 only seems to take the last one
> and --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib doesn't seem to failover
> to the other node. Any ideas how to get around this?
>  
Multiple --mgsnode parameters should work:
mkfs.lustre --mkfsoptions="-O dir_index" --reformat --mdt
--mgsnode=192.168.1.253@o2ib --mgsnode=1@elan --device-size=10000 /tmp/foo

   Permanent disk data:
Target:     lustre-MDTffff
Index:      unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x71
              (MDT needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters: mgsnode=192.168.1.253@o2ib mgsnode=1@elan
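When both mgsnode entries do stick, clients list the same failover pair as a colon-separated NID list at mount time; a sketch using the fsname "home" and NIDs from your setup:

```shell
# Client-side counterpart: failover MGS NIDs are colon-separated in
# the mount device string (fsname "home" assumed from your setup).
mount -t lustre 192.168.1.253@o2ib:192.168.1.252@o2ib:/home /mnt/home
```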

> Robert
> 
> Robert LeBlanc
> College of Life Sciences Computer Support
> Brigham Young University
> leblanc at byu.edu
> (801)422-1882
>
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
>  





