[lustre-discuss] Lustre Orphaned Chunks

DeWitt, Chad ccdewitt at uncc.edu
Tue Oct 18 04:18:08 PDT 2016


Hi All,

I wanted to follow up and explain how I solved the issue in case anyone
else encounters this situation.  All environments are different, so YMMV.

(Please note that, per my initial email, the problematic OST was marked as
active before performing these steps.)

First, I marked the problematic OST as degraded (on the OSS):
# lctl set_param obdfilter.<OST>.degraded=1

Second, I kicked off an lfsck of the Lustre filesystem (on the MDS):
# lctl lfsck_start --orphan --device <MDS>
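An aside not in the original post: before clearing the degraded flag, you can
watch the lfsck progress on the MDS. The parameter path below is from the
Lustre operations manual; <MDS> is a placeholder for the actual MDT device
name (for example, lustre-MDT0000).

```shell
# Query the layout lfsck state on the MDS (parameter path per the Lustre
# manual; <MDS> is a placeholder for the MDT device name).
# The output includes a "status:" field that reads "completed" once the
# orphan scan has finished.
lctl get_param -n mdd.<MDS>.lfsck_layout
```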

Once the lfsck had deleted the orphaned chunks, I marked the OST as normal
(on the OSS):
# lctl set_param obdfilter.<OST>.degraded=0

In /var/log/messages on the OSS, I could see the orphans were deleted:
kernel: Lustre: <OST>: deleting orphan objects from 0x0:162957594 to
0x0:162957873
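For completeness, the three steps above can be sketched as one small script.
The OST and MDS device names below are hypothetical examples, not from the
original post, and the script defaults to echoing the lctl commands rather
than running them, since steps 1 and 3 belong on the OSS and step 2 on the
MDS.

```shell
#!/bin/sh
# Sketch of the recovery steps described above. Device names are
# placeholder examples; set RUN="" to actually execute the commands
# (on the correct node for each step).
OST="lustre-OST0000"   # hypothetical OST device name
MDS="lustre-MDT0000"   # hypothetical MDT device name
RUN="echo"             # dry-run by default: print commands, don't run them

# 1. On the OSS: flag the OST as degraded so new allocations avoid it.
$RUN lctl set_param "obdfilter.$OST.degraded=1"

# 2. On the MDS: start an lfsck pass that removes orphaned OST objects.
$RUN lctl lfsck_start --orphan --device "$MDS"

# 3. On the OSS, once the lfsck has finished: clear the degraded flag.
$RUN lctl set_param "obdfilter.$OST.degraded=0"
```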

I would like to thank Shawn [Hall] and Bob [Ball] for their responses,
which led me in the right direction.

Thank you,
Chad

------------------------------------------------------------

Chad DeWitt, CISSP | HPC Storage Administrator

UNC Charlotte | ITS – University Research Computing

------------------------------------------------------------

On Mon, Oct 17, 2016 at 2:32 PM, DeWitt, Chad <ccdewitt at uncc.edu> wrote:

> Hi All.
>
> I am still learning Lustre and I have run into an issue.  I have referred
> to both the Lustre admin manual and Google, but I've had no luck in finding
> the answer.  We are using Lustre 2.8.0.
>
> We had an OST fill due to a single large file.  I took the OST offline via
> the lctl deactivate command to prevent new files from being created on the
> OST.  While the OST was deactivated, the user deleted the file.  Now it
> appears the metadata is gone from the MDS (which makes sense), but the data
> chunks on the OST remain even after I reactivated the OST.
>
> I believe that running lfsck would resolve this issue, but I am not sure
> whether I should run it on the MDS or the OST.  If this is the fix, what
> options would I need to use?
>
> Thank you in advance,
> Chad
>
>
> ------------------------------------------------------------
>
> Chad DeWitt, CISSP | HPC Storage Administrator
>
> UNC Charlotte | ITS – University Research Computing
>
> ------------------------------------------------------------
>

