[lustre-discuss] Inaccessible directory

Vicker, Darby (JSC-EG311) darby.vicker-1 at nasa.gov
Thu Sep 1 13:49:58 PDT 2016


Thanks.  This is happening on all the clients, so it's not a DLM lock problem.

We'll try the fsck soon.  If that comes back clean, is this likely hardware corruption?

Could this have anything to do with our corruption?

Aug 29 12:13:34 hpfs2-eg3-mds0 kernel: LustreError: 0-0: hpfs2eg3-MDT0000: trigger OI scrub by RPC for [0x2000079cd:0x3988:0x0], rc = 0 [1]
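
For reference, the OI scrub status on the MDS can be checked with something like this (assuming the osd-ldiskfs proc interface in our Lustre version; the target name is taken from the error above):

    lctl get_param -n osd-ldiskfs.hpfs2eg3-MDT0000.oi_scrub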

We make a file-level backup of the MDT on a daily basis, using the procedure in the Lustre manual, and keep a history of those backups.  We've never had a problem, so we've never had to restore anything from them.  Is the device-level backup (dd) necessary, or is file-level sufficient?
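
For reference, the file-level procedure we follow is roughly the sketch below, per the manual (run on the MDS with the MDT mounted as ldiskfs; the device and backup paths here are illustrative):

    mount -t ldiskfs /dev/mdt_device /mnt/mdt
    cd /mnt/mdt
    getfattr -R -d -m '.*' -e hex -P . > /backup/ea-$(date +%F).bak   # save extended attributes
    tar czf /backup/mdt-$(date +%F).tgz --sparse .                    # archive the MDT contents
    cd / && umount /mnt/mdt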


From: "Dilger, Andreas" <andreas.dilger at intel.com>
Date: Thursday, September 1, 2016 at 11:48 AM
To: Darby Vicker <darby.vicker-1 at nasa.gov>
Cc: "lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
Subject: Re: [lustre-discuss] Inaccessible directory

The first thing to try is to cancel the DLM locks on this client, to see if the problem is just a stale lock cached there:

    lctl set_param ldlm.namespaces.*.lru_size=clear

That likely won't help if this same problem is being hit on all of your clients.

Next to try is running e2fsck on the MDT. It is recommended to use the most recent e2fsprogs-1.42.12-wc5, since this fixes a number of bugs in older versions.
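
For example, with the MDT unmounted (the device name here is illustrative):

    e2fsck -fn /dev/mdt_device    # read-only pass: reports problems without changing anything
    e2fsck -fp /dev/mdt_device    # fix pass, once the read-only output has been reviewed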

It is also strongly recommended to make a "dd" backup of the whole MDT before the e2fsck run, and to do this on a regular basis.  This is convenient in case of MDT corruption (now or in the future), since you can then restore only the MDT instead of having to restore 400TB of data as well.  I was just updating the Lustre User Manual about this: http://review.whamcloud.com/21726 .
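
Something like the following, with the MDT unmounted or otherwise quiesced (device and output paths are illustrative):

    dd if=/dev/mdt_device of=/backup/mdt-$(date +%F).img bs=4M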

Running "e2fsck -fn" first can also give you an idea of what kind of corruption is present before Making the fix.

Cheers, Andreas

On Aug 31, 2016, at 10:54, Vicker, Darby (JSC-EG311) <darby.vicker-1 at nasa.gov> wrote:
Hello,

We’ve run into a problem where an entire directory on our lustre file system has become inaccessible.

# mount | grep lustre2
192.52.98.142@tcp:/hpfs2eg3 on /lustre2 type lustre (rw,flock)
# ls -l /lustre2/mirrors/cpas
ls: cannot access /lustre2/mirrors/cpas: Stale file handle
# ls -l /lustre2/mirrors/
ls: cannot access /lustre2/mirrors/cpas: Stale file handle
4 drwxr-s--- 5 root     g27122      4096 Dec 23  2014 cev-repo/
? d????????? ? ?        ?              ?            ? cpas/
4 drwxrwxr-x 5 root     eg3         4096 Aug 21  2014 sls/



Fortunately, we have a backup of this directory from about a week ago.  However, I would like to figure out how this happened to prevent any further damage.  I'm not sure if we're dealing with corruption in the LFS, damage to the underlying RAID, or something else, and I'd appreciate some help figuring this out.  Here's some info on our Lustre servers:

CentOS 6.4 2.6.32-358.23.2.el6_lustre.x86_64
Lustre 2.4.3 (I know - we need to upgrade...)
Hardware RAID 10 MDT (Adaptec 6805Q, SSDs)
19 OSSs with hardware RAID 6 OSTs (Adaptec 6445)
One 27TB ldiskfs OST per OSS
Dual homed via Ethernet and IB

Most Ethernet clients (~50 total) are CentOS 7 using lustre-client-2.8.0-3.10.0_327.13.1.el7.x86_64.x86_64.  Our compute nodes (~400 total) connect over IB and are still CentOS 6 using lustre-client-2.7.0-2.6.32_358.14.1.el6.x86_64.x86_64.


The Lustre server hardware is about 4 years old now.  All RAID arrays are reported as healthy.  We searched JIRA and the mailing lists and couldn't find anything related.  This sounded close at first:

https://jira.hpdd.intel.com/browse/LU-3550

But, as shown above, the issue shows up on a native Lustre client, not an NFS export.  We are exporting this LFS via SMB, but I don't think that's related.

I think the next step is to run an e2fsck, but we haven't done that yet and would appreciate advice on stepping through this.

Thanks,
Darby




