[lustre-discuss] Cannot mount lustre filesystem anymore

Stefano Turolla turolla at genzentrum.lmu.de
Tue Apr 25 03:25:30 PDT 2017


Dear all,

I am a newbie in Lustre. I set up a simple configuration that mounts a
filesystem from a Dell PowerVault MD3800i (iSCSI with multipath enabled).
It was working properly, but after the last reboot I can no longer mount
the Lustre filesystem.
I am running Lustre on Scientific Linux 7.3 (kernel 3.10.0). The MDT/MDS
is on the same server as the OST.
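
Since the problem appeared right after a reboot, it seems worth ruling out
the iSCSI/multipath layer first; the map name "seqdata" below is the one
from my setup:

[root@newmaster lustre]# multipath -ll seqdata
[root@newmaster lustre]# ls -l /dev/mapper/seqdata

(The LDISKFS messages in the log further down show dm-0 being mounted, so
the block device itself does appear to be reachable.)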

Here is the relevant part of /etc/fstab:

# Lustre MDT / MDS (manages filenames, directories, etc. and block devices)
/dev/sda1                /mnt/lustre-mdt-mds  lustre  noauto,_netdev  0 0
/dev/mapper/seqdata      /mnt/lustre-ost      lustre  noauto,_netdev  0 0

# Lustre client
master-mds@tcp:/seqdata  /seq_data            lustre  noauto,_netdev  0 0
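
To double-check the on-disk target configuration (filesystem name, MGS NID,
flags) without modifying anything, tunefs.lustre can be run in dry-run mode;
for my devices that would be:

[root@newmaster lustre]# tunefs.lustre --dryrun /dev/sda1
[root@newmaster lustre]# tunefs.lustre --dryrun /dev/mapper/seqdata

Both targets should report fsname "seqdata", and the OST should list the
MGS NID 10.163.85.99@tcp.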


I can mount the /mnt/lustre-mdt-mds filesystem but not the OST, and
consequently no client can mount the filesystem either.


Here are the devices:

[root@newmaster lustre]# cat /proc/fs/lustre/devices
  0 UP osd-ldiskfs seqdata-MDT0000-osd seqdata-MDT0000-osd_UUID 9
  1 UP mgs MGS MGS 5
  2 UP mgc MGC10.163.85.99@tcp 69e92317-78f6-eef7-1764-57da5aadafe2 5
  3 UP mds MDS MDS_uuid 3
  4 UP lod seqdata-MDT0000-mdtlov seqdata-MDT0000-mdtlov_UUID 4
  5 UP mdt seqdata-MDT0000 seqdata-MDT0000_UUID 5
  6 UP mdd seqdata-MDD0000 seqdata-MDD0000_UUID 4
  7 UP qmt seqdata-QMT0000 seqdata-QMT0000_UUID 4
  8 UP osp seqdata-OST0000-osc-MDT0000 seqdata-MDT0000-mdtlov_UUID 5
  9 UP lwp seqdata-MDT0000-lwp-MDT0000 seqdata-MDT0000-lwp-MDT0000_UUID 5
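
For completeness, the same device listing is also available through lctl:

[root@newmaster lustre]# lctl dl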



Here are the errors when I try to mount the OST:

[root@newmaster lustre]# mount /mnt/lustre-ost

Apr 25 12:13:41 newmaster kernel: LDISKFS-fs (dm-0): file extents enabled, maximum tree depth=5
Apr 25 12:13:42 newmaster kernel: LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: ,errors=remount-ro,no_mbcache,nodelalloc
Apr 25 12:13:42 newmaster kernel: LustreError: 11242:0:(llog_osd.c:246:llog_osd_read_header()) seqdata-OST0000-osd: error reading [0xa:0x14:0x0] log header size 8192: rc = -14
Apr 25 12:13:42 newmaster kernel: LustreError: 11242:0:(llog_osd.c:246:llog_osd_read_header()) Skipped 1 previous similar message
Apr 25 12:13:42 newmaster kernel: LustreError: 11242:0:(mgc_request.c:1832:mgc_llog_local_copy()) MGC10.163.85.99@tcp: failed to copy remote log seqdata-client: rc = -14
Apr 25 12:13:42 newmaster kernel: LustreError: 13a-8: Failed to get MGS log seqdata-client and no local copy.
Apr 25 12:13:42 newmaster kernel: LustreError: 15c-8: MGC10.163.85.99@tcp: The configuration from log 'seqdata-client' failed (-2). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
Apr 25 12:13:42 newmaster kernel: LustreError: 11242:0:(obd_mount_server.c:1369:server_start_targets()) seqdata-OST0000: failed to start LWP: -2
Apr 25 12:13:42 newmaster kernel: LustreError: 11242:0:(obd_mount_server.c:1844:server_fill_super()) Unable to start targets: -2
Apr 25 12:13:42 newmaster kernel: Lustre: Failing over seqdata-OST0000
Apr 25 12:13:42 newmaster kernel: Lustre: server umount seqdata-OST0000 complete
Apr 25 12:13:42 newmaster kernel: LustreError: 11242:0:(obd_mount.c:1449:lustre_fill_super()) Unable to mount (-2)

mount.lustre: mount /dev/mapper/seqdata at /mnt/lustre-ost failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
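
If I read the errors right, rc = -14 (EFAULT) from llog_osd_read_header()
means the OST's local copy of the configuration log is unreadable, and the
subsequent rc = -2 (ENOENT) means no usable "seqdata-client" log could be
fetched from the MGS either. From the manual it looks like configuration
logs can be regenerated with a writeconf; would something like the following
be the right approach (unmount every target first, then remount the combined
MGS/MDT before the OST)?

[root@newmaster lustre]# umount /mnt/lustre-mdt-mds
[root@newmaster lustre]# tunefs.lustre --writeconf /dev/sda1
[root@newmaster lustre]# tunefs.lustre --writeconf /dev/mapper/seqdata
[root@newmaster lustre]# mount /mnt/lustre-mdt-mds    # MGS/MDT first
[root@newmaster lustre]# mount /mnt/lustre-ost        # then the OST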



Here is the LNet configuration (currently only the server's interface is
defined):

[root@newmaster lustre]# cat /etc/modprobe.d/lustre.conf
options lnet networks=tcp0(eth2)
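
Since the MGC errors reference 10.163.85.99@tcp, a basic LNet sanity check
(using the NID from the log above) would be:

[root@newmaster lustre]# lctl list_nids
[root@newmaster lustre]# lctl ping 10.163.85.99@tcp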


I searched the mailing list archives for an explanation of what happened,
but could not find one. Could you please help me debug the problem?

Thanks a lot in advance,
Stefano Turolla
