[lustre-discuss] [EXTERNAL] Re: Use of lazystatfs

Mike Mosley Mike.Mosley at charlotte.edu
Thu Jul 6 04:29:05 PDT 2023


Andreas,

Thank you for the information.  We appreciate it.

Mike



On Wed, Jul 5, 2023 at 8:46 PM Andreas Dilger <adilger at whamcloud.com> wrote:

>
> On Jul 5, 2023, at 07:14, Mike Mosley via lustre-discuss <
> lustre-discuss at lists.lustre.org> wrote:
>
> Hello everyone,
>
> We have drained some of our OSS/OSTs and plan to deactivate them soon.
> The process ahead leads us to a couple of questions that we hope somebody
> can advise us on.
>
> Scenario
> We have fully drained the target OSTs, using 'lfs find' to identify all
> files located on the targets and then feeding the list to 'lfs migrate'.
> A final scan shows there are no files left on the targets.
>
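> As a rough sketch, each per-OST pass looked something like the following
> (options simplified; lfs_migrate is the wrapper script shipped with
> Lustre, and the OST UUID is just one example target):
>
>     # list regular files with objects on the target OST, then restripe
>     # each one onto the remaining OSTs
>     lfs find /dfs/hydra --ost hydra-OST0010_UUID -type f | lfs_migrate -y
>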
> Questions
> 1) Running 'lfs df -h' still shows some space being used even though we
> have drained all of the data.  Is that normal?  i.e.
>
> UUID                       bytes        Used   Available Use% Mounted on
> hydra-OST0010_UUID         84.7T      583.8M       80.5T   1% /dfs/hydra[OST:16]
> hydra-OST0011_UUID         84.7T      581.4M       80.5T   1% /dfs/hydra[OST:17]
> hydra-OST0012_UUID         84.7T      581.7M       80.5T   1% /dfs/hydra[OST:18]
> hydra-OST0013_UUID         84.7T      582.4M       80.5T   1% /dfs/hydra[OST:19]
> hydra-OST0014_UUID         84.7T      584.1M       80.5T   1% /dfs/hydra[OST:20]
> hydra-OST0015_UUID         84.7T      583.4M       80.5T   1% /dfs/hydra[OST:21]
> hydra-OST0016_UUID         84.7T      583.6M       80.5T   1% /dfs/hydra[OST:22]
> hydra-OST0017_UUID         84.7T      581.8M       80.5T   1% /dfs/hydra[OST:23]
> hydra-OST0018_UUID         84.7T      582.6M       80.5T   1% /dfs/hydra[OST:24]
> hydra-OST0019_UUID         84.7T      582.7M       80.5T   1% /dfs/hydra[OST:25]
> hydra-OST001a_UUID         84.7T      580.0M       80.5T   1% /dfs/hydra[OST:26]
> hydra-OST001b_UUID         84.7T      580.4M       80.5T   1% /dfs/hydra[OST:27]
> hydra-OST001c_UUID         84.7T      582.1M       80.5T   1% /dfs/hydra[OST:28]
> hydra-OST001d_UUID         84.7T      583.2M       80.5T   1% /dfs/hydra[OST:29]
> hydra-OST001e_UUID         84.7T      583.7M       80.5T   1% /dfs/hydra[OST:30]
> hydra-OST001f_UUID         84.7T      587.7M       80.5T   1% /dfs/hydra[OST:31]
>
>
> I would suggest unmounting the OSTs from Lustre and mounting them via
> ldiskfs, then running "find $MOUNT/O -type f -ls" to see whether any
> in-use files are left.  It is likely that the ~580M used on each of the
> OSTs is just residual logs and large directories under O/*.  There might
> be some hundreds or thousands of zero-length object files that were
> precreated but never used; these will typically have an unusual file
> access mode (07666) and can be ignored.
>
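> Something along these lines, for example (the device and mountpoint paths
> are placeholders for your actual OST device):
>
>     # on the OSS, with the OST service stopped
>     umount /mnt/lustre/ost0010
>     mount -t ldiskfs -o ro /dev/mapper/ost0010 /mnt/ldiskfs
>     # list any remaining object files under O/
>     find /mnt/ldiskfs/O -type f -ls
>     umount /mnt/ldiskfs
>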
> 2) According to some comments, prior to deactivating the OSS/OSTs, we
> should add the 'lazystatfs' option to all of our client mounts so that
> they do not hang once we deactivate some of the OSTs.  Is that correct?
> If so, why would you not just always have that option set?  What are the
> ramifications of setting it well in advance of the OST deactivations?
>
>
> The lazystatfs feature has been enabled by default since Lustre 2.9, so I
> don't think you need to do anything with it anymore.  The "lfs df" command
> will automatically skip unconfigured OSTs.
>
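> If you want to confirm it on a client, the setting is exposed as a
> tunable (a quick check, assuming a 2.9 or later client):
>
>     # 1 means statfs calls skip unreachable OSTs instead of blocking
>     lctl get_param llite.*.lazystatfs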
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud
>