[Lustre-discuss] help needed.
Aaron Knister
aaron at iges.org
Sun Dec 23 07:32:51 PST 2007
On the OSS, can you ping the MDS/MGS using this command:
lctl ping 132.66.176.211 at tcp0
If it doesn't ping, list the NIDs on each node by running
lctl list_nids
and tell me what comes back.
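If `lctl list_nids` on the OSS doesn't show a tcp0 NID matching the MGS address, LNET probably isn't configured on the right interface. A minimal sketch of the module option (assuming eth0 is the interface carrying the 132.66.176.x network; adjust to your hardware):

```
# /etc/modprobe.conf (or /etc/modprobe.d/) on each node,
# assuming eth0 carries the 132.66.176.x network:
options lnet networks=tcp0(eth0)
```

After changing this you'd need to unload and reload the Lustre/LNET modules (or reboot) for the option to take effect.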
-Aaron
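Two other things worth ruling out when mount.lustre reports "Is the MGS running?" (a sketch only; exact commands may differ on your distribution):

```
# On the MDS/MGS node: confirm the combined MDT/MGS target is mounted.
mount -t lustre

# On the OSS: Lustre's tcp LND listens on port 988 by default, so check
# that no firewall between the nodes is dropping that port.
iptables -L -n | grep 988
```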
On Dec 23, 2007, at 9:22 AM, Avi Gershon wrote:
> Hi, I could use some help.
> I installed Lustre on 3 computers.
> MDT/MGS:
>
> ************************************************************************
> [root at x-math20 ~]# mkfs.lustre --reformat --fsname spfs --mdt --mgs /dev/hdb
>
> Permanent disk data:
> Target: spfs-MDTffff
> Index: unassigned
> Lustre FS: spfs
> Mount type: ldiskfs
> Flags: 0x75
> (MDT MGS needs_index first_time update )
> Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
> Parameters:
>
> device size = 19092MB
> formatting backing filesystem ldiskfs on /dev/hdb
> target name spfs-MDTffff
> 4k blocks 0
> options -J size=400 -i 4096 -I 512 -q -O dir_index -F
> mkfs_cmd = mkfs.ext2 -j -b 4096 -L spfs-MDTffff -J size=400 -i 4096
> -I 512 -q -O dir_index -F /dev/hdb
> Writing CONFIGS/mountdata
> [root at x-math20 ~]# df
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/hda1 19228276 4855244 13396284 27% /
> none 127432 0 127432 0% /dev/shm
> /dev/hdb 17105436 455152 15672728 3% /mnt/test/mdt
> [root at x-math20 ~]# cat /proc/fs/lustre/devices
> 0 UP mgs MGS MGS 5
> 1 UP mgc MGC132.66.176.211 at tcp 5f5ba729-6412-3843-2229-1310a0b48f71 5
> 2 UP mdt MDS MDS_uuid 3
> 3 UP lov spfs-mdtlov spfs-mdtlov_UUID 4
> 4 UP mds spfs-MDT0000 spfs-MDT0000_UUID 3
> [root at x-math20 ~]#
> ************************************************* end mdt *************************************************
> So you can see that the MGS is up,
> and on the OSTs I get an error!! Please help...
>
> ost:
> **********************************************************************
> [root at x-mathr11 ~]# mkfs.lustre --reformat --fsname spfs --ost --mgsnode=132.66.176.211 at tcp0 /dev/hdb1
>
> Permanent disk data:
> Target: spfs-OSTffff
> Index: unassigned
> Lustre FS: spfs
> Mount type: ldiskfs
> Flags: 0x72
> (OST needs_index first_time update )
> Persistent mount opts: errors=remount-ro,extents,mballoc
> Parameters: mgsnode=132.66.176.211 at tcp
>
> device size = 19594MB
> formatting backing filesystem ldiskfs on /dev/hdb1
> target name spfs-OSTffff
> 4k blocks 0
> options -J size=400 -i 16384 -I 256 -q -O dir_index -F
> mkfs_cmd = mkfs.ext2 -j -b 4096 -L spfs-OSTffff -J size=400 -i
> 16384 -I 256 -q -O dir_index -F /dev/hdb1
> Writing CONFIGS/mountdata
> [root at x-mathr11 ~]# /CONFIGS/mountdata
> -bash: /CONFIGS/mountdata: No such file or directory
> [root at x-mathr11 ~]# mount -t lustre /dev/hdb1 /mnt/test/ost1
> mount.lustre: mount /dev/hdb1 at /mnt/test/ost1 failed: Input/output
> error
> Is the MGS running?
> *********************************************** end ost ***********************************************
>
> Can anyone point out the problem?
> Thanks, Avi.
>
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
Aaron Knister
Associate Systems Administrator/Web Designer
Center for Research on Environment and Water
(301) 595-7001
aaron at iges.org