[Lustre-discuss] big problem: read-only fs
Papp Tamás
tompos at martos.bme.hu
Thu Sep 18 10:39:16 PDT 2008
Papp Tamas wrote:
> Andreas Dilger wrote:
>
>> On Sep 16, 2008 18:47 +0200, Papp Tamás wrote:
>>
>>
>>> I run fsck on the node, and remounted it. Fsck found a lot of errors.
>>>
>>>
>>> After this I see this on the logs again:
>>> Sep 16 18:30:08 node1 kernel: LDISKFS-fs error (device sdb1):
>>> ldiskfs_ext_find_extent: bad header in inode #58262056: invalid magic -
>>> magic 0, entries 0, max 0(0), depth 0(0)
>>> Sep 16 18:30:08 node1 kernel: Remounting filesystem read-only
>>>
>>> But why is -30 still here? I hoped it would disappear after fsck, but I
>>> see it again. What could cause this problem? How can I solve it?
>>>
>>>
>> -30 = -EROFS, caused by the extent header error. This was fixed in
>> very recent Lustre e2fsprogs, do you have the latest released version?
>>
>>
>>
>
> Well, the recent e2fsprogs from Sun did not help.
>
> So I tried to move the files away from the node, but it's not so simple;
> I have some questions.
>
> 1.
> $ lfs df|grep OST0002
> cubefs-OST0002_UUID 1845110624 1512955404 332155220 81% /W[OST:2]
> $ lctl dl|grep OST0002
> 4 UP osc cubefs-OST0002-osc-ffff81002b2b5000
> 345f312a-51e9-b9de-b462-35a56ae76341 5
>
> Which one should I use?
>
> Anyway:
>
> $ lfs find --obd cubefs-OST0002-osc-ffff81002b2b5000 -r .
> error: setup_obd_uuids: unknown obduuid: cubefs-OST0002-osc-ffff81002b2b5000
> ./1 2
> ./1 23
> ./1 234
> ./1 2345
>
> $ lfs find --obd cubefs-OST0002_UUID -r .
> error: setup_obd_uuids: unknown obduuid: cubefs-OST0002_UUID
> ./1 2
> ./1 23
> ./1 234
> ./1 2345
>
> But:
>
> $ lfs getstripe .
> OBDS:
> . has no stripe info
> ./1 2
> obdidx objid objid group
> 3 455101 0x6f1bd 0
>
> ./1 23
> obdidx objid objid group
> 3 455125 0x6f1d5 0
>
> ./1 234
> obdidx objid objid group
> 4 448480 0x6d7e0 0
>
> ./1 2345
> obdidx objid objid group
> 2 455201 0x6f221 0
>
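For reference, the `lfs getstripe -r` output above can be filtered to list just the files that have an object on a given OST index. A minimal sketch, assuming the `obdidx objid objid group` table format shown above (the heredoc simply replays this example's output; in practice you would pipe the real `lfs getstripe -r .` output into the awk script instead):

```shell
# Replay the sample getstripe output from this message (assumption: real
# output follows the same layout: a "./path" line, then per-object rows).
cat > /tmp/getstripe.out <<'EOF'
./1 2
obdidx objid objid group
3 455101 0x6f1bd 0

./1 23
obdidx objid objid group
3 455125 0x6f1d5 0

./1 234
obdidx objid objid group
4 448480 0x6d7e0 0

./1 2345
obdidx objid objid group
2 455201 0x6f221 0
EOF

# Print every file that has an object on OST index 2.
awk -v ost=2 '
  /^\.\// { file = $0 }                        # remember current file path
  $1 ~ /^[0-9]+$/ && $1 == ost { print file }  # object row on wanted OST
' /tmp/getstripe.out
```

With `ost=2` this prints `./1 2345`, the one file in the listing whose object lives on OST0002.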
>
> 2.
> # mount|grep lustre
> /dev/sdb1 on /mnt/cubefs/ost-1 type lustre (rw)
>
> # grep lustre /proc/mounts
> /dev/sdb /mnt/cubefs/ost-1 lustre ro 0 0
>
> Why don't I see sdb1 in /proc?
> Also why do I see ro in /proc?
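On the `ro` part: `/proc/mounts` reflects the kernel's live view of the mount, while `mount` (util-linux of that era) prints what was recorded in `/etc/mtab` at mount time. So after the LDISKFS error triggered "Remounting filesystem read-only", the kernel reports `ro` even though `/etc/mtab` still says `rw`. A small sketch of checking the live flag, using the sample line from this post (field 4 of a `/proc/mounts` line holds the option string):

```shell
# Sample /proc/mounts line from the post; in practice use:
#   grep ' /mnt/cubefs/ost-1 ' /proc/mounts
line='/dev/sdb /mnt/cubefs/ost-1 lustre ro 0 0'

# Field 4 is the comma-separated option list; test whether it contains "ro".
opts=$(echo "$line" | awk '{print $4}')
case ",$opts," in
  *,ro,*) echo "kernel sees read-only" ;;
  *)      echo "kernel sees read-write" ;;
esac
```

If this prints "kernel sees read-only" while `mount` still shows `rw`, the discrepancy is just stale `/etc/mtab` information, not two different mounts.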
>
> 3.
> samba:~$ cat /proc/fs/lustre/lov/cubefs-clilov-ffff8100330c4800/target_obd
> samba:~$
>
> I have another cluster, and on that one it shows the right values (on the
> same machine at the same time, too).
>
>
Is there no answer to any of these issues?
Did I do something wrong?
tamas