[lustre-discuss] request for help - ZFS based Lustre, MDT disk not mounting
Hebenstreit, Michael
michael.hebenstreit at intel.com
Tue Jun 19 07:16:25 PDT 2018
Yes, both mgs and mdt modules are present
Module                  Size  Used by
osp                   341968  27
mdd                   408582  17
lod                   508629  17
mdt                   818340  18
lfsck                 735096  20 lod,mdd,mdt
mgs                   351348  1
mgc                    94061  2 mgs
osd_zfs               363587  31
lquota                363475  77 mdt,osd_zfs
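For anyone hitting the same symptom, a quick sanity check of the module list against what an MGS/MDS needs can be scripted. This is only a sketch: the required-module names and the lsmod snapshot are taken from the listing above; on a live server you would pipe `lsmod` directly instead of the here-doc.

```shell
# Verify the Lustre server stack is fully loaded. The snapshot below is the
# lsmod output from the message above; on a real MDS replace the here-doc
# with `lsmod` itself.
required="mgs mgc mdt mdd lod osp lfsck osd_zfs lquota"
loaded=$(awk 'NR > 1 { print $1 }' <<'EOF'
Module Size Used by
osp 341968 27
mdd 408582 17
lod 508629 17
mdt 818340 18
lfsck 735096 20 lod,mdd,mdt
mgs 351348 1
mgc 94061 2 mgs
osd_zfs 363587 31
lquota 363475 77 mdt,osd_zfs
EOF
)
missing=0
for m in $required; do
    printf '%s\n' "$loaded" | grep -qx "$m" || { echo "MISSING: $m"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required modules loaded"
```

In this case every module is present, so the hang is not a missing-module problem.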
From: Feng Zhang [mailto:prod.feng at gmail.com]
Sent: Tuesday, June 19, 2018 8:13 AM
To: Hebenstreit, Michael <michael.hebenstreit at intel.com>
Cc: lustre-discuss <lustre-discuss at lists.lustre.org>
Subject: Re: [lustre-discuss] request for help - ZFS based Lustre, MDT disk not mounting
Is lustre module loaded already in kernel and started?
On Tue, Jun 19, 2018 at 10:07 AM, Hebenstreit, Michael <michael.hebenstreit at intel.com> wrote:
The mount command hangs, there are no error messages in the kernel log, and I have already tried rebooting (twice) - any ideas?
Thanks
Michael
root 4137 0.0 0.0 123520 1048 pts/0 S+ 07:56 0:00 mount -t lustre mgsmdt/mdt /lfs/lfs11/mdt
root 4138 0.0 0.0 81728 3272 pts/0 S+ 07:56 0:00 /sbin/mount.lustre mgsmdt/mdt /lfs/lfs11/mdt -o rw
[root at elfs11m1 ~]# dmesg | grep -i MDT
[ 107.238438] LustreError: 137-5: lfs11-MDT0000_UUID: not available for connect from 36.101.16.38 at tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 107.656990] Lustre: lfs11-MDT0000: Not available for connect from 36.101.16.39 at tcp (not set up)
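When mount.lustre hangs with no kernel-log errors, the blocked process's state and kernel stack usually show which step it is stuck in. A runnable sketch follows; it uses the current shell's PID (`$$`) only as a stand-in so the snippet executes as-is, whereas on the MDS you would use the real mount.lustre PID (4138 in the ps output above).

```shell
# Field 3 of /proc/<pid>/stat is the process state; "D" means
# uninterruptible sleep, i.e. blocked inside the kernel.
pid=$$   # stand-in PID; on the MDS use the hung mount.lustre PID (4138)
state=$(awk '{ print $3 }' "/proc/$pid/stat")
echo "pid $pid state: $state"
# On the real server (as root):
#   cat /proc/4138/stack            # kernel stack of the hung mount
#   echo w > /proc/sysrq-trigger    # dump all blocked tasks to dmesg
```

A mount stuck in D state with a stack ending in an MGS/llog wait would point at config-log processing rather than ZFS itself.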
ZFS does not report any issues, mgs (on the same zpool) mounted without issues
[root at elfs11m1 ~]# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
mgsmdt       149G  18.9T  40.8K  /mgsmdt
mgsmdt/mdt   147G  18.9T   147G  /mgsmdt/mdt
mgsmdt/mgs  4.48M  18.9T  4.48M  /mgsmdt/mgs
[root at elfs11m1 ~]# zpool status
  pool: mgsmdt
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 2h14m with 0 errors on Thu May 31 13:36:58 2018
config:

        NAME        STATE     READ WRITE CKSUM
        mgsmdt      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdk     ONLINE       0     0     0
            sdl     ONLINE       0     0     0
            sdm     ONLINE       0     0     0
            sdn     ONLINE       0     0     0
            sdo     ONLINE       0     0     0
            sdp     ONLINE       0     0     0
            sdq     ONLINE       0     0     0
            sdr     ONLINE       0     0     0
            sds     ONLINE       0     0     0
            sdt     ONLINE       0     0     0
            sdu     ONLINE       0     0     0
            sdv     ONLINE       0     0     0
            sdw     ONLINE       0     0     0
            sdx     ONLINE       0     0     0
            sdy     ONLINE       0     0     0

errors: No known data errors
------------------------------------------------------------------------
Michael Hebenstreit Senior Cluster Architect
Intel Corporation, MS: RR1-105/H14 Core and Visual Compute Group (DCE)
4100 Sara Road Tel.: +1 505-794-3144
Rio Rancho, NM 87124
UNITED STATES E-mail: michael.hebenstreit at intel.com
_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
--
Best,
Feng