[lustre-discuss] missing option mgsnode

Paul Edmon pedmon@cfa.harvard.edu
Wed Jul 20 11:41:05 PDT 2022


[root@holylfs02oss06 ~]# mount -t ldiskfs /dev/mapper/mpathd /mnt/holylfs2-OST001f
mount: wrong fs type, bad option, bad superblock on /dev/mapper/mpathd,
        missing codepage or helper program, or other error

        In some cases useful info is found in syslog - try
        dmesg | tail or so.
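For anyone following along: the actual ldiskfs/ext4 complaint usually lands
in the kernel log at the moment the mount fails. A minimal read-only check,
using the same device path as above (dumpe2fs -h just prints the primary
superblock header and writes nothing):

# look for LDISKFS-fs / EXT4-fs error lines from the failed mount
dmesg | tail -n 20

# read-only dump of the primary superblock header; fails loudly if it's gone
dumpe2fs -h /dev/mapper/mpathd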

The e2fsck output did not look good either:

[root@holylfs02oss06 ~]# less OST001f.out
ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap
e2fsck: Group descriptors look bad... trying backup blocks...
MMP interval is 10 seconds and total wait time is 42 seconds. Please wait...
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
Clear journal? no

Block bitmap for group 8128 is not in group.  (block 3518518062363072290)
Relocate? no

Inode bitmap for group 8128 is not in group.  (block 12235298632209565410)
Relocate? no

Inode table for group 8128 is not in group.  (block 17751685088477790304)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? no

Block bitmap for group 8129 is not in group.  (block 2193744380193356980)
Relocate? no

Inode bitmap for group 8129 is not in group.  (block 4102707059848926418)
Relocate? no

It continues at length like that.
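The next step here, I think, is a read-only pass against a backup superblock
before letting e2fsck relocate anything. A sketch, with the caveat that 4096
and 32768 below are just the usual defaults for a 4 KiB-block filesystem,
not values read off this OST (dumpe2fs reports the real locations):

# list where the backup superblocks actually live
dumpe2fs /dev/mapper/mpathd | grep -i superblock

# forced check, read-only (-n answers "no" to every prompt), using the
# first backup superblock (-b) and its block size (-B) instead of the
# damaged primary
e2fsck -fn -B 4096 -b 32768 /dev/mapper/mpathd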

-Paul Edmon-

On 7/20/2022 2:31 PM, Colin Faber wrote:
> Can you mount the target directly with -t ldiskfs ?
>
> Also what does e2fsck report?
>
> On Wed, Jul 20, 2022, 11:48 AM Paul Edmon via lustre-discuss 
> <lustre-discuss@lists.lustre.org> wrote:
>
>     We have a filesystem running Lustre 2.10.4 in HA mode using IML.
>     One of our OSTs had some disk failures, and after reconstruction of
>     the RAID set it won't remount; it gives:
>
>     [root@holylfs02oss06 ~]# mount -t lustre /dev/mapper/mpathd /mnt/holylfs2-OST001f
>     Failed to initialize ZFS library: 256
>     mount.lustre: missing option mgsnode=<nid>
>
>     The weird thing is that we didn't build this with ZFS; the devices
>     are all ldiskfs. We suspect some of the data on the disk is
>     corrupt, but we were wondering if anyone had seen this error before
>     and whether there was a solution.
>
>     -Paul Edmon-
>
>     _______________________________________________
>     lustre-discuss mailing list
>     lustre-discuss@lists.lustre.org
>     http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
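A note on the mgsnode error quoted above: as I understand it, mount.lustre
reads the target name and parameters (including mgsnode) from the on-disk
mountdata, so if that region is corrupt it can neither find the MGS NID nor
identify the backend, which would also explain the stray ZFS library message
on an ldiskfs target. A read-only way to see what the target still knows
about itself, assuming the standard Lustre userspace tools are installed:

# prints the stored target name, index, flags, and parameters; writes nothing
tunefs.lustre --dryrun /dev/mapper/mpathd

If the parameters really are gone but e2fsck eventually comes back clean,
tunefs.lustre --mgsnode=<nid> (with the real MGS NID in place of the <nid>
placeholder) can restore the setting, but that is a write and should wait
until the filesystem checks out.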