[Lustre-discuss] system disk with external journals for OSTs formatted

Alexander Bugl alexander.bugl@zmaw.de
Tue Oct 26 12:42:11 PDT 2010


Hi,

we had an accident with a Sun Fire X4540 "Thor" system with 48 HDDs:

The first two disks, sda and sdb, contain several partitions: one for the / file
system, one for swap (not used), and five small partitions used as external
journals for the OSTs, which reside on the other 46 HDDs.

[root@soss10 ~]# fdisk -l /dev/sda /dev/sdb

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        6527    52428096   fd  Linux raid autodetect
/dev/sda2            6528       10704    33551752+  fd  Linux raid autodetect
/dev/sda3           10705      121601   890780152+   5  Extended
/dev/sda5           10705       10953     2000061   fd  Linux raid autodetect
/dev/sda6           10954       11202     2000061   fd  Linux raid autodetect
/dev/sda7           11203       11451     2000061   fd  Linux raid autodetect
/dev/sda8           11452       11700     2000061   fd  Linux raid autodetect
/dev/sda9           11701       11949     2000061   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1        6527    52428096   fd  Linux raid autodetect
/dev/sdb2            6528       10704    33551752+  fd  Linux raid autodetect
/dev/sdb3           10705      121601   890780152+   5  Extended
/dev/sdb5           10705       10953     2000061   fd  Linux raid autodetect
/dev/sdb6           10954       11202     2000061   fd  Linux raid autodetect
/dev/sdb7           11203       11451     2000061   fd  Linux raid autodetect
/dev/sdb8           11452       11700     2000061   fd  Linux raid autodetect
/dev/sdb9           11701       11949     2000061   fd  Linux raid autodetect

The md devices are:
md14 : active raid6 sdw[0] sdav[9] sdan[8] sdaf[7] sdx[6] sdp[5] sdh[4] sdau[3] sdam[2] sdae[1]
      7814099968 blocks level 6, 64k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
      
md13 : active raid6 sdak[0] sdo[9] sdg[8] sdat[7] sdal[6] sdad[5] sdv[4] sdn[3] sdf[2] sdas[1]
      7814099968 blocks level 6, 64k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
      
md12 : active raid6 sdd[0] sdac[9] sdu[8] sdm[7] sde[6] sdar[5] sdaj[4] sdab[3] sdt[2] sdl[1]
      7814099968 blocks level 6, 64k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
      
md11 : active raid6 sdah[0] sdaq[7] sdai[6] sdaa[5] sds[4] sdk[3] sdc[2] sdap[1]
      5860574976 blocks level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
      
md10 : active raid6 sdi[0] sdz[7] sdao[6] sdag[5] sdy[4] sdr[3] sdq[2] sdj[1]
      5860574976 blocks level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
      
md1 : active raid1 sdb2[1] sda2[0]
      33551680 blocks [2/2] [UU]
      
md20 : active raid1 sdb5[1] sda5[0]
      1999936 blocks [2/2] [UU]
      
md21 : active raid1 sdb6[1] sda6[0]
      1999936 blocks [2/2] [UU]
      
md22 : active raid1 sdb7[1] sda7[0]
      1999936 blocks [2/2] [UU]
      
md23 : active raid1 sdb8[1] sda8[0]
      1999936 blocks [2/2] [UU]
      
md24 : active raid1 sdb9[1] sda9[0]
      1999936 blocks [2/2] [UU]
      
md0 : active raid1 sdb1[1] sda1[0]
      52428032 blocks [2/2] [UU]

The original OSTs had been created using a command like:
mkfs.lustre --ost --fsname=${FSNAME} --mgsnode=${MGSNODE}@o2ib \
    --reformat --mkfsoptions="-m 0 -J device=/dev/md20" \
    --param ost.quota_type=ug /dev/md10 &
(and likewise for the pairs md21/md11, md22/md12, md23/md13, and md24/md14)
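The five invocations can be sketched as a loop over the journal/OST pairs. This is a reconstruction, not the script actually used: squall is taken from the OST label squall-OST0019 that e2fsck prints further down, and the MGS node name is a placeholder.

```shell
# Reconstruction of the five mkfs.lustre calls; echo only, nothing is formatted.
FSNAME=squall        # from the label squall-OST0019 seen in the e2fsck output
MGSNODE=mgsnode      # placeholder -- the real MGS hostname is not shown here
for i in 0 1 2 3 4; do
    echo mkfs.lustre --ost --fsname=${FSNAME} --mgsnode=${MGSNODE}@o2ib \
        --reformat --mkfsoptions="-m 0 -J device=/dev/md2$i" \
        --param ost.quota_type=ug /dev/md1$i
done
```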

By accident we started a fresh installation, which could not be aborted in
time -- the partition tables on sda and sdb were erased.
The other 46 disks should not have been touched, though.

We then ran a reinstallation that formatted only the first two partitions and
recreated the original partition layout on sda and sdb; all of the md devices
resynced without problems.

When we now try to mount any of the 5 OSTs, we get the following error:

[root@soss10 ~]# mount /dev/md14
mount.lustre: mount /dev/md14 at /lustre/ost4 failed: Invalid argument
This may have multiple causes.
Are the mount options correct?
Check the syslog for more info.

syslog says:
Oct 26 21:34:55 soss10 kernel: LDISKFS-fs error (device md14): ldiskfs_check_descriptors: Block bitmap for group 1920 not in group (block 268482810)!
Oct 26 21:34:55 soss10 kernel: LDISKFS-fs: group descriptors corrupted!
Oct 26 21:34:55 soss10 kernel: LustreError: 10719:0:(obd_mount.c:1292:server_kernel_mount()) premount /dev/md14:0x0 ldiskfs failed: -22, ldiskfs2 failed: -19.  Is the ldiskfs module available?
Oct 26 21:34:56 soss10 kernel: LustreError: 10719:0:(obd_mount.c:1618:server_fill_super()) Unable to mount device /dev/md14: -22
Oct 26 21:34:56 soss10 kernel: LustreError: 10719:0:(obd_mount.c:2050:lustre_fill_super()) Unable to mount  (-22)

Trying to mount the partition directly as ldiskfs does not work either:
[root@soss10 ~]# mount -t ldiskfs /dev/md14 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/md14,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
syslog only says:
Oct 26 21:35:54 soss10 kernel: LDISKFS-fs error (device md14): ldiskfs_check_descriptors: Block bitmap for group 1920 not in group (block 268482810)!
Oct 26 21:35:54 soss10 kernel: LDISKFS-fs: group descriptors corrupted!
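For what it's worth, the superblock and group-descriptor state the kernel is complaining about can also be inspected from user space with dumpe2fs (on the real system this would be run against /dev/md14; the sketch below uses a scratch ext2 image instead):

```shell
# Illustrative only: a scratch ext2 image stands in for /dev/md14.
truncate -s 8M scratch.img
mke2fs -q -F -b 4096 scratch.img

# -h prints the superblock summary only: block counts, feature flags,
# and (for a file system with an external journal) the journal UUID.
dumpe2fs -h scratch.img
```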

Trying to run e2fsck -n yields:
[root@soss10 ~]# e2fsck -n /dev/md10
e2fsck 1.41.10.sun2 (24-Feb-2010)
e2fsck: Group descriptors look bad... trying backup blocks...
Error writing block 1 (Attempt to write block from filesystem resulted in short write).  Ignore error? no
Error writing block 2 (Attempt to write block from filesystem resulted in short write).  Ignore error? no
Error writing block 3 (Attempt to write block from filesystem resulted in short write).  Ignore error? no
Error writing block 4 (Attempt to write block from filesystem resulted in short write).  Ignore error? no
... [continues up to block 344]
One or more block group descriptor checksums are invalid.  Fix? no
Group descriptor 0 checksum is invalid.  IGNORED.
Group descriptor 1 checksum is invalid.  IGNORED.
Group descriptor 2 checksum is invalid.  IGNORED.
Group descriptor 3 checksum is invalid.  IGNORED.
... [continues up to Group descriptor 44712]
squall-OST0019 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes

(the rest of e2fsck is still running ...)
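A side note on the output above: the "Error writing block" messages under -n are expected, since -n opens the device read-only and the descriptors recovered from the backup cannot be written back. e2fsck can also be pointed at a backup superblock explicitly for a read-only check (on the real system something like e2fsck -fn -b 32768 -B 4096 /dev/md10; the sketch below uses a scratch image just large enough to contain a backup superblock):

```shell
# Illustrative only: a scratch image stands in for /dev/md10.
truncate -s 256M scratch2.img
mke2fs -q -F -b 4096 scratch2.img

# With 4096-byte blocks the first backup superblock sits at block 32768;
# -n keeps the check read-only, -f forces it even though the fs is clean.
e2fsck -fn -b 32768 -B 4096 scratch2.img
```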

Question: What could be the problem? I thought that no data on the OSTs, and
nothing inside the journal partitions, should have been overwritten. Is there
any chance to repair this without data loss?

Thank you in advance for any suggestions about how to continue ...
With regards, Alex

-- 
Alexander Bugl,  Central IT Services, ZMAW
Max  Planck  Institute   for   Meteorology
Bundesstrasse 53, D-20146 Hamburg, Germany
tel +49-40-41173-351, fax -298, room PE048


