[Lustre-discuss] New 1.8 install - can't mount MDT

Bill Wichser bill at Princeton.EDU
Fri Jun 19 12:06:31 PDT 2009


New MDS.  Kernel 2.6.18-92.1.17.el5_lustre.1.8.0smp on a base RH5.3 
system.  Nehalem CPU.

I'm stuck.  Did I miss some step here?  The filesystem is about 300G.

Thanks,
Bill

=======================================================

I make the filesystem:

[root@mds ~]# mkfs.lustre --fsname=lustre --mgs --mdt /dev/sdb1

    Permanent disk data:
Target:     lustre-MDTffff
Index:      unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x75
               (MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters: mdt.group_upcall=/usr/sbin/l_getgroups

checking for existing Lustre data: not found
device size = 285561MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sdb1
         target name  lustre-MDTffff
         4k blocks     0
         options        -J size=400 -i 4096 -I 512 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustre-MDTffff  -J size=400 -i 4096 -I 512 -q -O dir_index,uninit_groups -F /dev/sdb1
Writing CONFIGS/mountdata
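(Editor's note: before mounting, the configuration that mkfs.lustre wrote can be sanity-checked with `tunefs.lustre --dryrun`, which prints the target's stored parameters without modifying the device. This is a read-only diagnostic sketch, run on the MDS as root; it assumes the same device path as above.)

```shell
# Print the permanent disk data (target name, flags, mount options,
# parameters) that mkfs.lustre recorded, without changing anything.
tunefs.lustre --dryrun /dev/sdb1
```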

===================================================
Then try to mount it:

[root@mds ~]# mount -t lustre /dev/sdb1 /MDT
mount.lustre: mount /dev/sdb1 at /MDT failed: Operation not supported

====================================================

The errors in /var/log/messages are:
Jun 19 14:01:25 mds kernel: kjournald starting.  Commit interval 5 seconds
Jun 19 14:01:25 mds kernel: LDISKFS FS on sdb1, internal journal
Jun 19 14:01:25 mds kernel: LDISKFS-fs: mounted filesystem with ordered data mode.
Jun 19 14:01:25 mds kernel: kjournald starting.  Commit interval 5 seconds
Jun 19 14:01:25 mds kernel: LDISKFS FS on sdb1, internal journal
Jun 19 14:01:25 mds kernel: LDISKFS-fs: mounted filesystem with ordered data mode.
Jun 19 14:01:25 mds kernel: Lustre: MGS MGS started
Jun 19 14:01:25 mds kernel: Lustre: MGC172.23.10.4@tcp: Reactivating import
Jun 19 14:01:25 mds kernel: Lustre: Setting parameter lustre-MDT0000.mdt.group_upcall in log lustre-MDT0000
Jun 19 14:01:25 mds kernel: Lustre: Enabling user_xattr
Jun 19 14:01:25 mds kernel: Lustre: lustre-MDT0000: new disk, initializing
Jun 19 14:01:25 mds kernel: Lustre: MDT lustre-MDT0000 now serving lustre-MDT0000_UUID (lustre-MDT0000/dc93cffc-68e5-8351-46eb-52215bd7a771) with recovery enabled
Jun 19 14:01:25 mds kernel: Lustre: 3177:0:(lproc_mds.c:271:lprocfs_wr_group_upcall()) lustre-MDT0000: group upcall set to /usr/sbin/l_getgroups
Jun 19 14:01:25 mds kernel: Lustre: lustre-MDT0000.mdt: set parameter group_upcall=/usr/sbin/l_getgroups
Jun 19 14:01:25 mds kernel: Lustre: Server lustre-MDT0000 on device /dev/sdb1 has started
Jun 19 14:01:25 mds kernel: SELinux: (dev lustre, type lustre) has no xattr support
Jun 19 14:01:25 mds kernel: Lustre: Failing over lustre-MDT0000
Jun 19 14:01:25 mds kernel: Lustre: Skipped 1 previous similar message
Jun 19 14:01:25 mds kernel: Lustre: *** setting obd lustre-MDT0000 device 'sdb1' read-only ***
Jun 19 14:01:25 mds kernel: Turning device sdb (0x800011) read-only
Jun 19 14:01:25 mds kernel: Lustre: Failing over lustre-mdtlov
Jun 19 14:01:25 mds kernel: Lustre: lustre-MDT0000: shutting down for failover; client state will be preserved.
Jun 19 14:01:25 mds kernel: Lustre: MDT lustre-MDT0000 has stopped.
Jun 19 14:01:25 mds kernel: LustreError: 3072:0:(ldlm_request.c:1043:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway
Jun 19 14:01:25 mds kernel: LustreError: 3072:0:(ldlm_request.c:1632:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108
Jun 19 14:01:25 mds kernel: Lustre: MGS has stopped.
Jun 19 14:01:25 mds kernel: Removing read-only on unknown block (0x800011)
Jun 19 14:01:25 mds kernel: Lustre: server umount lustre-MDT0000 complete

[root@mds ~]# lctl network up
LNET configured
[root@mds ~]# lctl list_nids
172.23.10.4@tcp
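(Editor's note: the `SELinux: (dev lustre, type lustre) has no xattr support` line in the log above is the most likely culprit. Lustre 1.8 servers do not support running with SELinux enabled, and an enforcing policy can make the ldiskfs-backed mount fail with "Operation not supported". The following is a hedged sketch of checking and disabling SELinux on a stock RHEL 5 box; the file path and commands are the standard Red Hat ones, not something taken from this thread.)

```shell
# Show the current SELinux mode (Enforcing / Permissive / Disabled)
getenforce

# Switch to permissive for the running system...
setenforce 0

# ...and disable it persistently so it stays off after a reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Then retry the mount
mount -t lustre /dev/sdb1 /MDT
```

A full `SELINUX=disabled` (rather than permissive) plus a reboot is the usual recommendation for Lustre server nodes of this vintage.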
