[lustre-discuss] Migrating files doesn't free space on the OST

Jason Williams jasonw at jhu.edu
Thu Jan 17 12:18:49 PST 2019


Chad hit the nail on the head.  I thought about the fact that it was still deactivated yesterday but was afraid to reactivate it until I verified the space was free.


FWIW, the wiki page about handling full OSTs does not mention that the space will not be freed until you reactivate the OST.  It actually implies the opposite.


http://wiki.lustre.org/Handling_Full_OSTs




--
Jason Williams
Assistant Director
Systems and Data Center Operations.
Maryland Advanced Research Computing Center (MARCC)
Johns Hopkins University
jasonw at jhu.edu



________________________________
From: Chad DeWitt <ccdewitt at uncc.edu>
Sent: Thursday, January 17, 2019 3:07 PM
To: Jason Williams
Cc: Alexander I Kulyavtsev; lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Migrating files doesn't free space on the OST

Hi Jason,

I do not know if this will help you or not, but I had a situation in 2.8.0 where an OST filled up and I marked it as disabled on the MDS:

lctl dl | grep osc
# find the device_id of the osc for the full OST in the output, then deactivate it:
lctl --device <device_id> deactivate

IIRC, this allowed the data to be read, but deletes were not processed.  When I re-activated the OST, the deletes were processed and space started clearing.  I think you stated you had the OST deactivated.  If you still do, try to reactivate it.

lctl --device <device_id> activate
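
If you want to double-check the state change afterwards, re-running the device list should show the osc for that OST back in the UP state (it shows IN while deactivated); the OST name here is just an example:

lctl dl | grep OST0005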

Once you reactivate the OST, the deletes will start processing within 10-30 seconds.  Just use lfs df -h to watch.
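
A quick way to watch the space come back (the mount point is just a placeholder; use wherever your clients mount the filesystem):

watch -n 10 lfs df -h /mnt/lustre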

-cd


------------------------------------------------------------

Chad DeWitt, CISSP

UNC Charlotte | ITS – University Research Computing

9201 University City Blvd. | Charlotte, NC 28223

ccdewitt at uncc.edu | www.uncc.edu

------------------------------------------------------------


If you are not the intended recipient of this transmission or a person responsible for delivering it to the intended recipient, any disclosure, copying, distribution, or other use of any of the information in this transmission is strictly prohibited. If you have received this transmission in error, please notify me immediately by reply email or by telephone at 704-687-7802. Thank you.


On Thu, Jan 17, 2019 at 2:38 PM Jason Williams <jasonw at jhu.edu> wrote:

Hello Alexander,


Thank you for your reply.

- We are not using ZFS; it's an ldiskfs backing store, so there are no snapshots.

- I have re-run lfs getstripe to make sure the files are indeed moving.

- I just looked for lfsck, but I don't seem to have it.  We are running 2.10.4, so I don't know which version it appeared in.

- I will try to have a look at jobstats (rough commands sketched below) and see what I can find, but I made sure the files I moved were not in use when I moved them.
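
For reference, the jobstats commands I plan to try look roughly like this (a sketch from the manual; the filesystem name "testfs" is just a placeholder):

# on the MGS: enable jobstats cluster-wide, tagging I/O with process name and UID
lctl conf_param testfs.sys.jobid_var=procname_uid

# on each OSS: clear the counters, wait a bit, then read the per-job I/O stats
lctl set_param obdfilter.*.job_stats=clear
lctl get_param obdfilter.*.job_stats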



--
Jason Williams
Assistant Director
Systems and Data Center Operations.
Maryland Advanced Research Computing Center (MARCC)
Johns Hopkins University
jasonw at jhu.edu



________________________________
From: Alexander I Kulyavtsev <aik at fnal.gov>
Sent: Thursday, January 17, 2019 12:56 PM
To: Jason Williams; lustre-discuss at lists.lustre.org
Subject: Re: Migrating files doesn't free space on the OST


- You can re-run the command to find files residing on the OST to see whether those files are new or old (example commands for several of these points are sketched after this list).

- ZFS may have snapshots if you ever took any; they take space.

- Removing data or snapshots releases the blocks with some lag (tens of minutes), but I guess that has completed by now.

- There can be orphan objects on the OST if you had crashes. On older Lustre versions, if the OST was emptied out, you can mount the underlying fs as ext4 or zfs, set the mount read-only, and browse the OST objects to see whether any orphan objects are left. On newer Lustre releases you can probably run LFSCK (the Lustre file system checker).

- To find which hosts/jobs are currently writing to Lustre, you can enable Lustre jobstats, clear the counters, and parse the stats files in /proc. There was an xltop tool on GitHub for older Lustre versions that did not implement jobstats, but it has not been updated for a while.

- Depending on your Lustre version, the implementation of lfs migrate is different. The older version copied the file under another name to a different OST, renamed the files, and removed the old file. If the migration is done on a file held open for write by an application, the space will not be released until the file is closed (and the data in the new file is wrong). The recent implementation of migrate swaps the file objects with the file layout lock taken. I cannot tell whether it is safe for files under active write.

- Not releasing space can be a bug - did you check Jira on Whamcloud? What version of Lustre do you have? Is it ldiskfs or zfs based? If zfs, which zfs version?
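
For the points above, the kind of commands I have in mind look roughly like this (sketches only; the mount point, OST index, and target names are just examples and will differ on your system):

# list files that have objects on a given OST (here OST index 12)
lfs find /mnt/lustre --ost 12 -type f

# on a zfs-backed OST, list snapshots and the space they hold
zfs list -t snapshot -o name,used

# on newer releases, start a layout LFSCK from the MDS to look for orphan OST objects
lctl lfsck_start -M testfs-MDT0000 -t layout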


Alex.


________________________________
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Jason Williams <jasonw at jhu.edu>
Sent: Wednesday, January 16, 2019 10:25 AM
To: lustre-discuss at lists.lustre.org
Subject: [lustre-discuss] Migrating files doesn't free space on the OST


I am trying to use lfs migrate to move files I know are not in use off of the full OST.  I have verified up and down that the files I am moving are on that OST, and that after the migrate, lfs getstripe indeed shows they are no longer on that OST, since it's disabled on the MDS.
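
To be concrete, what I am doing per file looks roughly like this (the path and stripe count are just examples):

# restripe the file; with the OST deactivated on the MDS, new objects cannot be allocated there
lfs migrate -c 1 /mnt/lustre/project/bigfile.dat

# verify the new layout no longer references the full OST
lfs getstripe /mnt/lustre/project/bigfile.dat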


The problem is, the used space on the OST is not going down.


I think one of at least two things is going on:

- The OST is just not freeing the space for some reason or another (I don't know why).

- Or someone is writing to existing files just as fast as I am clearing the data (possible, but kind of hard to track down).


Is there possibly something else I am missing? Also, does anyone know a good way to see whether some client is writing to that OST, and to determine who it is, if that is the more likely explanation?
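
For example, would the per-client export stats on the OSS show it? Something like the following, though I am not sure it is the right approach (the OST name is just a placeholder):

lctl get_param obdfilter.testfs-OST000c.exports.*.stats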



--
Jason Williams
Assistant Director
Systems and Data Center Operations.
Maryland Advanced Research Computing Center (MARCC)
Johns Hopkins University
jasonw at jhu.edu


_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

