[Lustre-discuss] Need help

Cliff White cliffw at whamcloud.com
Fri Jul 1 10:30:13 PDT 2011


Did you also install the correct e2fsprogs?
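
If you are not sure which e2fsprogs ended up on the servers after the
update, something along these lines should show it (exact package names
vary between builds, so treat this as a rough sketch):

rpm -qa | egrep -i 'e2fsprogs|lustre' | sort   # installed package versions
e2fsck -V                                      # should report the ldiskfs-patched e2fsprogs

The ldiskfs-based servers need the Lustre-patched e2fsprogs rather than
the stock RHEL one.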
cliffw


On Fri, Jul 1, 2011 at 5:45 PM, Mervini, Joseph A <jamervi at sandia.gov> wrote:

> Hi,
>
> I just upgraded our servers from RHEL 5.4 to RHEL 5.5 and went from Lustre
> 1.8.3 to 1.8.5.
>
> Now when I try to mount the OSTs I'm getting:
>
> [root at aoss1 ~]# mount -t lustre /dev/disk/by-label/scratch2-OST0001
> /mnt/lustre/local/scratch2-OST0001
> mount.lustre: mount /dev/disk/by-label/scratch2-OST0001 at
> /mnt/lustre/local/scratch2-OST0001 failed: No such file or directory
> Is the MGS specification correct?
> Is the filesystem name correct?
> If upgrading, is the copied client log valid? (see upgrade docs)
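>
> The first checks I know to try for an error like this are roughly the
> following (the MGS NID is a placeholder, as elsewhere in this message):
>
> [root at aoss1 ~]# modprobe -v lustre            # do the new modules load?
> [root at aoss1 ~]# lctl list_nids                # are the expected tcp1/o2ib1 NIDs up?
> [root at aoss1 ~]# lctl ping <mds-server1>@tcp1  # is the MGS reachable?
> [root at aoss1 ~]# dmesg | tail -n 30            # mount.lustre usually logs the underlying error here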
>
> tunefs.lustre looks okay on both the MDT (which is mounted) and the OSTs:
>
> [root at amds1 ~]# tunefs.lustre /dev/disk/by-label/scratch2-MDT0000
> checking for existing Lustre data: found CONFIGS/mountdata
> Reading CONFIGS/mountdata
>
>   Read previous values:
> Target:     scratch2-MDT0000
> Index:      0
> Lustre FS:  scratch2
> Mount type: ldiskfs
> Flags:      0x5
>              (MDT MGS )
> Persistent mount opts:
> errors=panic,iopen_nopriv,user_xattr,maxdirsize=20000000
> Parameters: lov.stripecount=4 failover.node=<failnode>@tcp1
> failover.node=<failnode>@o2ib1 mdt.group_upcall=/usr/sbin/l_getgroups
>
>
>   Permanent disk data:
> Target:     scratch2-MDT0000
> Index:      0
> Lustre FS:  scratch2
> Mount type: ldiskfs
> Flags:      0x5
>              (MDT MGS )
> Persistent mount opts:
> errors=panic,iopen_nopriv,user_xattr,maxdirsize=20000000
> Parameters: lov.stripecount=4 failover.node=<failnode>@tcp1
> failover.node=<failnode>@o2ib1 mdt.group_upcall=/usr/sbin/l_getgroups
>
> exiting before disk write.
>
>
> [root at aoss1 ~]# tunefs.lustre /dev/disk/by-label/scratch2-OST0001
> checking for existing Lustre data: found CONFIGS/mountdata
> Reading CONFIGS/mountdata
>
>   Read previous values:
> Target:     scratch2-OST0001
> Index:      1
> Lustre FS:  scratch2
> Mount type: ldiskfs
> Flags:      0x2
>              (OST )
> Persistent mount opts: errors=panic,extents,mballoc
> Parameters: mgsnode=<mds-server1>@tcp1 mgsnode=<mds-server1>@o2ib1
> mgsnode=<mds-server2>@tcp1 mgsnode=<mds-server2>@o2ib1
> failover.node=<failnode>@tcp1 failover.node=<failnode>@o2ib1
>
>
>   Permanent disk data:
> Target:     scratch2-OST0001
> Index:      1
> Lustre FS:  scratch2
> Mount type: ldiskfs
> Flags:      0x2
>              (OST )
> Persistent mount opts: errors=panic,extents,mballoc
> Parameters: mgsnode=<mds-server1>@tcp1 mgsnode=<mds-server1>@o2ib1
> mgsnode=<mds-server2>@tcp1 mgsnode=<mds-server2>@o2ib1
> failover.node=<failnode>@tcp1 failover.node=<failnode>@o2ib1
>
> exiting before disk write.
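>
> If the problem turns out to be a stale configuration log (as the mount
> error hints), I assume the fix is a writeconf pass along these lines,
> per the upgrade docs; I have not tried it yet, so please correct me:
>
> [root at amds1 ~]# tunefs.lustre --writeconf /dev/disk/by-label/scratch2-MDT0000
> [root at aoss1 ~]# tunefs.lustre --writeconf /dev/disk/by-label/scratch2-OST0001   # and each other OST
>
> (with the filesystem stopped everywhere first, then remounting the
> MDT/MGS before the OSTs)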
>
>
> I am really stuck and could really use some help.
>
> Thanks.
>
> ==
>
> Joe Mervini
> Sandia National Laboratories
> Dept 09326
> PO Box 5800 MS-0823
> Albuquerque NM 87185-0823
>
>
>



-- 
cliffw
Support Guy
WhamCloud, Inc.
www.whamcloud.com