[Lustre-discuss] two problems

Stefano Elmopi stefano.elmopi at sociale.it
Wed May 26 08:43:25 PDT 2010



Hi,

I am running Lustre 1.8.3.
My filesystem is composed of one combined MGS/MDS server and two OSSs.
As a test, I removed one OST and replaced it with another OST,
and the situation is now this:

cat /proc/fs/lustre/lov/lustre01-mdtlov/target_obd
0: lustre01-OST0000_UUID ACTIVE
2: lustre01-OST0002_UUID ACTIVE
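For completeness, the old OST was taken out roughly like this on the MGS
(reconstructed from my notes, so the exact command may differ):

lctl conf_param lustre01-OST0001.osc.active=0    # permanently deactivate OST index 1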

- first problem
lustre01-OST0001_UUID is the OST that was removed. It held files,
which of course are no longer accessible:

ls -lrt
total 12475312
?--------- ? ?    ?             ?            ? zero.dat
?--------- ? ?    ?             ?            ? ubuntu-9.10-dvd-i386.iso
?--------- ? ?    ?             ?            ? XXXXXXXXX_CentOS-5.4-x86_64-bin-DVD.iso
?--------- ? ?    ?             ?            ? Windows_XP-Capodarco.iso
?--------- ? ?    ?             ?            ? UBUNTU_CentOS-5.4-x86_64-bin-DVD.iso
?--------- ? ?    ?             ?            ? KK_CentOS-5.4-x86_64-bin-DVD.iso
?--------- ? ?    ?             ?            ? FFFFF_CentOS-5.4-x86_64-bin-DVD.iso
?--------- ? ?    ?             ?            ? CentOS-5.3-i386-bin-DVD.iso
?--------- ? ?    ?             ?            ? BBBBB_CentOS-5.4-x86_64-bin-DVD.iso
?--------- ? ?    ?             ?            ? BAK_CentOS-5.4-x86_64-bin-DVD.iso
?--------- ? ?    ?             ?            ? 2.iso
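
If it is useful: lfs getstripe still reads the striping from the MDS, and
as far as I can tell it points at the missing index; unlink should remove
an entry even where rm trips over the failed stat. For example, on one of
the files above:

lfs getstripe zero.dat    # stripe should show obdidx 1, the removed OST
unlink zero.dat           # drop the dangling entry without stat'ing it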


To delete them, I followed these steps.

On the MGS/MDS server:

e2fsck -n -v --mdsdb /root/mds_home_db /dev/mpath/mpath2

Then I copied the file mds_home_db to OSS_1 and, on OSS_1, ran the
following command:

e2fsck -n -v --mdsdb /root/mds_home_db --ostdb /root/home_ost00db /dev/mpath/mpath1

and did the same thing on OSS_2:

e2fsck -n -v --mdsdb /root/mds_home_db --ostdb /root/home_ost01db /dev/mpath/mpath2

Then I copied the files mds_home_db, home_ost00db, and home_ost01db to the
Lustre client, mounted the Lustre filesystem, and ran the command:

lfsck -c -v --mdsdb /root/mds_home_db --ostdb /root/home_ost00db /root/home_ost02db /LUSTRE
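
In case I am misusing the options, my reading of the relevant lfsck flags
from the 1.8 manual is:

lfsck -n ...    # read-only check, repair nothing
lfsck -l ...    # link orphaned objects into lost+found
lfsck -c ...    # create missing OST objects as zero-length objects
lfsck -d ...    # delete orphaned objects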

The command hangs, however:

[...]
[0] zero-length orphan objid 1182
[0] zero-length orphan objid 1214
[0] zero-length orphan objid 1246
[0] zero-length orphan objid 1183
[0] zero-length orphan objid 1215
[0] zero-length orphan objid 1247
lfsck: ost_idx 0: pass3 OK (218 files total)
MDS: max_id 161 OST: max_id 65
lfsck: ost_idx 1: pass1: check for duplicate objects
lfsck: ost_idx 1: pass1 OK (11 files total)
lfsck: ost_idx 1: pass2: check for missing inode objects


and the MGS/MDS server goes into a kernel panic.
The Lustre client log says:
May 26 17:39:35 mdt02prdpom kernel: LustreError: 7105:0:(lov_ea.c:248:lsm_unpackmd_v1()) OST index 1 missing
May 26 17:39:35 mdt02prdpom kernel: LustreError: 7105:0:(lov_ea.c:248:lsm_unpackmd_v1()) Skipped 21 previous similar messages
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b20003, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x2
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b20005, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x3
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b20006, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x4
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b20008, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x5
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b2000a, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x6
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b2000c, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x7
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b2000e, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x8
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b20014, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x23
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b20015, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x42
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b20017, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x62
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:64:lov_dump_lmm_common()) objid 0x1b20018, magic 0x0bd10bd0, pattern 0x1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:67:lov_dump_lmm_common()) stripe_size 1048576, stripe_count 1
May 26 17:39:35 mdt02prdpom kernel: Lustre: 7105:0:(lov_pack.c:84:lov_dump_lmm_objects()) stripe 0 idx 1 subobj 0x0/0x82
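
From these errors I gather that the MDS inodes still carry LOV stripes
pointing at index 1. If it helps to narrow things down, the affected files
should be listable by the old OST UUID (I have not tried this yet):

lfs find --obd lustre01-OST0001_UUID /LUSTRE    # files striped on the removed OST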


- second problem
while testing quotas, when I run the command:

lfs quotacheck -ug /LUSTRE/
quotacheck failed: Input/output error


and the log says:

kernel: LustreError: 7103:0:(quota_check.c:251:lov_quota_check()) lov idx 1 inactive
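
My guess is that quotacheck walks every LOV target and fails on the
now-inactive index 1. If I read the 1.8 manual correctly, the only way to
remove the stale index for good is a writeconf; is this the right
procedure? A sketch of what I would run, not yet tried (device paths are
mine):

# unmount all clients and all targets first, then, MDT first:
tunefs.lustre --writeconf /dev/mpath/mpath2    # MDT on the MGS/MDS server
tunefs.lustre --writeconf /dev/mpath/mpath1    # repeat on each remaining OST
# then remount in order: MGS/MDT, OSTs, clients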



Thanks !!




Ing. Stefano Elmopi
Gruppo Darco - Resp. ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma

cell. 3466147165
tel.  0657060500
email: stefano.elmopi at sociale.it

"Ai sensi e per effetti della legge sulla tutela  della  riservatezza  
personale
(D.lgs n. 196/2003),  questa @mail e' destinata  unicamente alle  
persone sopra
indicate e le informazioni in essa contenute sono da considerarsi  
strettamente
riservate. E' proibito leggere, copiare, usare o diffondere il  
contenuto della
presente @mail  senza  autorizzazione. Se avete ricevuto  questo  
messaggio per
errore, siete pregati di rispedire la stessa al mittente. Grazie"
