[lustre-discuss] new install client locks up on ls /lustre
Sid Young
sid.young@gmail.com
Wed Jul 8 16:36:05 PDT 2020
Hi all,
I'm newish to Lustre, and I've just created a Lustre 2.12.5 cluster using
the RPMs from Whamcloud for CentOS 7.8, with 1 MDT/MGS and 1 OSS with 3
OSTs (20GB each). Everything is formatted as ldiskfs, and it's running on a
VMware platform as a test bed, using LNet over TCP.
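For reference, I formatted and mounted things roughly like this (device
paths and mount points below are illustrative, from memory, not exact):

# On the MDS/MGS node (combined MGS and MDT on one device):
mkfs.lustre --fsname=lustre --mgs --mdt --index=0 /dev/sdb
mount -t lustre /dev/sdb /mnt/mdt

# On the OSS node, repeated for each of the three 20GB OST devices:
mkfs.lustre --fsname=lustre --ost --index=0 --mgsnode=10.140.95.118@tcp /dev/sdc
mount -t lustre /dev/sdc /mnt/ost0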
The MDT mounts OK and the OSTs mount, and on my client I can mount the
/lustre mount point (58GB) and ping everything via LNet. However, as soon
as I try an ls -l /lustre, or any other I/O, the client locks up solid
until I reboot it.
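For completeness, the client-side mount and LNet checks I did look like
this (the MGS NID matches the MGC line in the device list further down;
the OSS NID is a placeholder):

# On the client:
mount -t lustre 10.140.95.118@tcp:/lustre /lustre
lctl ping 10.140.95.118@tcp      # MDS/MGS replies
lctl ping <oss-nid>@tcp          # OSS replies too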
I've tried to work out how to run basic diagnostics, to no avail, so I'm
stumped as to why I don't see a directory listing for what should be an
empty ~60GB filesystem.
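The sort of basic checks I've been attempting look like this (a sketch;
some of these hang the same way ls does):

# From the client:
lfs df -h /lustre                 # hangs just like ls -l
lctl get_param osc.*.state        # per-OST import/connection state
lctl dk /tmp/lustre-debug.log     # dump the kernel debug buffer after a hang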
On the MDS I ran this:
[root@lustre-mds tests]# lctl dl
0 UP osd-ldiskfs lustre-MDT0000-osd lustre-MDT0000-osd_UUID 10
1 UP mgs MGS MGS 8
2 UP mgc MGC10.140.95.118@tcp acdb253b-b7a8-a949-0bf2-eaa17dc8dca4 4
3 UP mds MDS MDS_uuid 2
4 UP lod lustre-MDT0000-mdtlov lustre-MDT0000-mdtlov_UUID 3
5 UP mdt lustre-MDT0000 lustre-MDT0000_UUID 12
6 UP mdd lustre-MDD0000 lustre-MDD0000_UUID 3
7 UP qmt lustre-QMT0000 lustre-QMT0000_UUID 3
8 UP lwp lustre-MDT0000-lwp-MDT0000 lustre-MDT0000-lwp-MDT0000_UUID 4
9 UP osp lustre-OST0000-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
10 UP osp lustre-OST0001-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
11 UP osp lustre-OST0002-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
[root@lustre-mds tests]#
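If I'm reading that right, the MDT sees all three OSTs as UP osp devices.
On the client, the equivalent checks would be something like this (a
sketch of what I'd expect, not a capture):

[root@lustre-client ~]# lctl dl                             # should show mgc, lov, lmv, mdc and one osc per OST, all UP
[root@lustre-client ~]# lctl get_param osc.*.ost_server_uuid   # each should report FULL when connected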
So it looks like everything is running. Even dmesg on the client
reports:
[ 7.998649] Lustre: Lustre: Build Version: 2.12.5
[ 8.016113] LNet: Added LNI 10.140.95.65@tcp [8/256/0/180]
[ 8.016214] LNet: Accept secure, port 988
[ 10.992285] Lustre: Mounted lustre-client
Any pointers on where to look? /var/log/messages shows no errors.
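One thing I plan to try next, assuming the console still responds after
the hang, is dumping the blocked-task stacks to see where ls is stuck:

echo w > /proc/sysrq-trigger      # write blocked-task stack traces to the kernel log
dmesg | tail -n 100               # look for ls / ptlrpc threads stuck in D state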
Sid Young