[lustre-discuss] Quota issue after OST removal

Daniel Szkola dszkola at fnal.gov
Wed Oct 26 07:52:39 PDT 2022


Hello all,

We recently removed an OSS/OST node that had been spontaneously shutting down,
so that hardware testing could be performed on it. I have no idea how long it
will be out, so I followed the procedure for permanent removal.
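
For reference, this was the standard permanent-removal sequence from the manual,
roughly as follows (the OST indices 0004/0005 are inferred from the quotactl
errors in the output below, and the commands are reconstructed from memory, so
treat them as approximate):

  # on the MDS: stop new object allocation on the two affected OSTs
  lctl set_param osp.lustrefs-OST0004-osc-MDT0000.max_create_count=0
  lctl set_param osp.lustrefs-OST0005-osc-MDT0000.max_create_count=0
  # after migrating remaining objects off (lfs find / lfs_migrate), mark the
  # OSTs permanently inactive in the configuration (run on the MGS)
  lctl conf_param lustrefs-OST0004.osc.active=0
  lctl conf_param lustrefs-OST0005.osc.active=0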

Since then, space usage is being calculated correctly, but 'lfs quota' shows
groups as exceeding quota even though they are under both the soft and hard
limits. A verbose listing shows that all of the per-OST limits are met, and I
have no idea how to reset the limits now that the two OSTs on the removed OSS
node are no longer part of the equation.

Due to the heavy usage of the Lustre filesystem, no clients have been
unmounted and no MDS or OST nodes have been restarted. The underlying
filesystem is ZFS.
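
I can still query the quota slave state on the running servers without
restarting anything, along these lines (parameter path assuming a reasonably
current release with the ZFS OSD), which at least shows whether the remaining
targets are still connected to the quota master:

  # on each OSS and on the MDS
  lctl get_param osd-zfs.lustrefs-*.quota_slave.info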

Looking for ideas on how to correct this.

Example:

# lfs quota -gh somegroup -v /lustre1
Disk quotas for grp somegroup (gid NNNN):
     Filesystem    used   quota   limit   grace   files   quota   limit   grace
       /lustre1  21.59T*    27T     30T 6d23h39m15s 2250592  2621440 3145728       -
lustrefs-MDT0000_UUID
                 1.961G       -  1.962G       - 2250592       - 2359296       -
lustrefs-OST0000_UUID
                 2.876T       -  2.876T       -       -       -       -       -
lustrefs-OST0001_UUID
                 2.611T*      -  2.611T       -       -       -       -       -
lustrefs-OST0002_UUID
                 4.794T       -  4.794T       -       -       -       -       -
lustrefs-OST0003_UUID
                 4.587T       -  4.587T       -       -       -       -       -
quotactl ost4 failed.
quotactl ost5 failed.
lustrefs-OST0006_UUID
                  3.21T       -   3.21T       -       -       -       -       -
lustrefs-OST0007_UUID
                 3.515T       -  3.515T       -       -       -       -       -
Total allocated inode limit: 2359296, total allocated block limit: 21.59T
Some errors happened when getting quota info. Some devices may be not
working or deactivated. The data in "[]" is inaccurate.
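
One idea I have considered, but not yet tried, is clearing the group limits and
then setting them back to the same values with lfs setquota, on the theory that
this forces the quota master to redistribute the per-OST grants across the
remaining targets. Something like the following (limits taken from the output
above, assuming setquota accepts unit suffixes; I have not verified this is
safe to do on a live filesystem):

  lfs setquota -g somegroup -b 0 -B 0 -i 0 -I 0 /lustre1
  lfs setquota -g somegroup -b 27T -B 30T -i 2621440 -I 3145728 /lustre1

I am not sure whether that would actually clear whatever was granted to the
removed OSTs, though.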

--
Dan Szkola
FNAL

