[lustre-discuss] new install client locks up on ls /lustre

Chad DeWitt ccdewitt at uncc.edu
Wed Jul 8 16:54:19 PDT 2020


Hi Sid,

Hope you're doing well.

This link may help:

http://wiki.lustre.org/Mounting_a_Lustre_File_System_on_Client_Nodes


Just for general troubleshooting, you may want to ensure that both the
firewall and SELinux are disabled on all of your Lustre virtual machines.
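
On CentOS 7, the quick checks look roughly like this (assuming the stock
firewalld and SELinux setup; adjust if you have site-specific policies):

systemctl status firewalld     # is the firewall running?
systemctl stop firewalld       # stop it for this boot
systemctl disable firewalld    # keep it off across reboots
getenforce                     # current SELinux mode
setenforce 0                   # permissive until the next reboot
# For a permanent change, set SELINUX=disabled in /etc/selinux/config and reboot.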

Cheers,
Chad

------------------------------------------------------------

Chad DeWitt, CISSP

UNC Charlotte | OneIT – University Research Computing

ccdewitt at uncc.edu | www.uncc.edu

------------------------------------------------------------




On Wed, Jul 8, 2020 at 7:36 PM Sid Young <sid.young at gmail.com> wrote:

> Hi all,
>
> I'm new-ish to Lustre and I've just created a Lustre 2.12.5 cluster using
> the RPMs from Whamcloud for CentOS 7.8, with 1 MDT/MGS and 1 OSS with 3
> OSTs (20GB each).
> Everything is formatted as ldiskfs and it's running on a VMware platform
> as a test bed, using TCP.
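>
> For reference, the format commands were roughly along these lines (the
> device names below are just the VMware disks in my test bed):
>
> # on the MDS (combined MGS/MDT):
> mkfs.lustre --fsname=lustre --mgs --mdt --index=0 /dev/sdb
> # on the OSS, repeated with --index=1 and --index=2 for the other OSTs:
> mkfs.lustre --fsname=lustre --ost --index=0 --mgsnode=10.140.95.118@tcp /dev/sdc
>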
> The MDT mounts OK, the OSTs mount, and on my client I can mount the
> /lustre mount point (58GB). I can ping everything via LNet, however,
> as soon as I try to do an ls -l /lustre or any other kind of I/O, the client
> locks up solid until I reboot it.
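>
> The client-side commands are roughly as follows (same MGS NID and fsname
> as in the lctl dl output below):
>
> lctl ping 10.140.95.118@tcp                        # LNet ping to the MGS/MDS - works
> mount -t lustre 10.140.95.118@tcp:/lustre /lustre  # the mount - works
> ls -l /lustre                                      # this is where the client hangs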
>
> I've tried to work out how to run basic diagnostics, to no avail, so I am
> stumped as to why I don't see a directory listing for what should be an
> empty 60GB disk.
>
> On the MDS I ran this:
> [root@lustre-mds tests]# lctl dl
>   0 UP osd-ldiskfs lustre-MDT0000-osd lustre-MDT0000-osd_UUID 10
>   1 UP mgs MGS MGS 8
>   2 UP mgc MGC10.140.95.118@tcp acdb253b-b7a8-a949-0bf2-eaa17dc8dca4 4
>   3 UP mds MDS MDS_uuid 2
>   4 UP lod lustre-MDT0000-mdtlov lustre-MDT0000-mdtlov_UUID 3
>   5 UP mdt lustre-MDT0000 lustre-MDT0000_UUID 12
>   6 UP mdd lustre-MDD0000 lustre-MDD0000_UUID 3
>   7 UP qmt lustre-QMT0000 lustre-QMT0000_UUID 3
>   8 UP lwp lustre-MDT0000-lwp-MDT0000 lustre-MDT0000-lwp-MDT0000_UUID 4
>   9 UP osp lustre-OST0000-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
>  10 UP osp lustre-OST0001-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
>  11 UP osp lustre-OST0002-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
> [root@lustre-mds tests]#
>
> So it looks like everything is running; even dmesg on the client
> reports:
>
> [    7.998649] Lustre: Lustre: Build Version: 2.12.5
> [    8.016113] LNet: Added LNI 10.140.95.65@tcp [8/256/0/180]
> [    8.016214] LNet: Accept secure, port 988
> [   10.992285] Lustre: Mounted lustre-client
>
>
> Any pointers on where to look? /var/log/messages shows no errors.
>
>
> Sid Young
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>