[lustre-discuss] Disabling max creates and migrating data doesn't seem to be reducing the usage on an OST

Kurt Strosahl strosahl at jlab.org
Tue Feb 16 08:46:18 PST 2021


During a maintenance window today I rebooted the OSS that OST had been mounted on; after it came back up the usage dropped significantly.


________________________________
From: Iannetti, Gabriele <G.Iannetti at gsi.de>
Sent: Tuesday, February 16, 2021 10:02 AM
To: Kurt Strosahl <strosahl at jlab.org>; lustre-discuss at lists.lustre.org <lustre-discuss at lists.lustre.org>
Subject: [EXTERNAL] Re: Disabling max creates and migrating data doesn't seem to be reducing the usage on an OST

Hi Kurt,

one more thing...

The inode count should not increase, since new files should no longer be created on that OST.
You can check the I-Node count with `lfs df -i | grep "\[OST:15\]"`.
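
For reference, the columns of `lfs df -i` are UUID, Inodes, IUsed, IFree, IUse% and the mount point, so the line you are after should look roughly like this (placeholder values, not real numbers):

lustre19-OST000f_UUID  <Inodes>  <IUsed>  <IFree>  <IUse%>  /lustre19[OST:15]

If IUsed stays flat over time, new object creation on that OST really has stopped.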

Modifications to existing files are still possible, e.g. a file can still grow in size...

~Gabriele

________________________________________
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Kurt Strosahl <strosahl at jlab.org>
Sent: Thursday, February 11, 2021 16:51
To: lustre-discuss at lists.lustre.org
Subject: [lustre-discuss] Disabling max creates and migrating data doesn't seem to be reducing the usage on an OST

Good Morning,

One of the OSTs in a lustre file system I manage is showing higher usage than the rest.  I attempted to stop writes to it by setting max_create_count to zero and then migrating data off of it, but that doesn't seem to be working.

> lfs df | grep OST:15
lustre19-OST000f_UUID 71145018368 62653382656  8491631616  89% /lustre19[OST:15]

MDS> lctl set_param osp.lustre19-OST000f*.max_create_count=0
MDS> lctl get_param osp.lustre19-OST000f*.max_create_count
osp.lustre19-OST000f-osc-MDT0000.max_create_count=0

lfs find /lustre19/expphy/volatile --ost lustre19-OST000f -size +50M | lfs_migrate -y
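
(As a rough progress check, something like the line below should report how many files under that tree still have objects on the OST; it only looks under /lustre19/expphy/volatile, so objects owned by files elsewhere in the filesystem wouldn't show up or be migrated:

lfs find /lustre19/expphy/volatile --ost lustre19-OST000f | wc -l

That count should trend toward zero as lfs_migrate works through the list.)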

I've been watching it, and the OST in question isn't shrinking.  I can see its usage go down a bit... and then tick back up.
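
(Roughly speaking, the watching is just periodic re-runs of the same lfs df filter, e.g. something along the lines of:

watch -n 600 'lfs df | grep "OST:15"'

with an arbitrary interval.)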

w/r,

Kurt J. Strosahl
System Administrator: Lustre, HPC
Scientific Computing Group, Thomas Jefferson National Accelerator Facility
