[Lustre-discuss] Questions concerning quotacheck.

Theodoros Stylianos Kondylis kondil at gmail.com
Wed Jan 23 02:09:36 PST 2013


Thank you for your answer.

Regarding the results that quotacheck gives:

In our production system we are running v1.8.4 with quotas enabled per
user and per group. I tried the following commands for a specific group
and its users ::

# lfs quota -g group_name /lustre/
Disk quotas for group group_name (gid 8863):
     Filesystem       kbytes         quota         limit  grace   files    quota    limit  grace
     /lustre/    44291610536  104857600000  104910028800      -  176573  2000000  2200000      -

# lfs quota -u user5 /lustre/
Disk quotas for user user5 (uid 7511):
     Filesystem       kbytes  quota  limit  grace   files  quota  limit  grace
     /lustre/    32131767752      0      0      -   65139      0      0      -

# lfs quota -u user2 /lustre/
Disk quotas for user user2 (uid 8874):
     Filesystem       kbytes  quota  limit  grace   files  quota  limit  grace
     /lustre/    12159883112      0      0      -  112054      0      0      -

# lfs quota -u user3 /lustre/
Disk quotas for user user3 (uid 8875):
     Filesystem  kbytes  quota  limit  grace  files  quota  limit  grace
     /lustre/       152      0      0      -      5      0      0      -

# lfs quota -u user1 /lustre/
Disk quotas for user user1 (uid 8873):
     Filesystem  kbytes  quota  limit  grace  files  quota  limit  grace
     /lustre/        68      0      0      -      5      0      0      -

# lfs quota -u user0 /lustre/
Disk quotas for user user0 (uid 8864):
     Filesystem  kbytes  quota  limit  grace  files  quota  limit  grace
     /lustre/         4      0      0      -      1      0      0      -

# lfs quota -u user4 /lustre/
Disk quotas for user user4 (uid 8890):
     Filesystem  kbytes  quota  limit  grace  files  quota  limit  grace
     /lustre/         4      0      0      -      1      0      0      -

# lfs quota -u user6 /lustre/
Disk quotas for user user6 (uid 8037):
     Filesystem  kbytes  quota  limit  grace  files  quota  limit  grace
     /lustre/         4      0      0      -      1      0      0      -

# lfs quota -u user7 /lustre/
Disk quotas for user user7 (uid 9319):
     Filesystem  kbytes  quota  limit  grace  files  quota  limit  grace
     /lustre/         4      0      0      -      1      0      0      -


But when I summed the file counts of all the users, the total came out
larger than the group's file count ::

176573 - (65139 + 112054 + 5 + 5 + 1 + 1 + 1 + 1) = -634

There is a similar discrepancy in the kbytes column: the per-user values
add up to 44291651100 kB, which is 40564 kB more than the group total of
44291610536 kB.
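For reference, this is roughly how I add up the per-user numbers (just a
quick sketch: it assumes the user names listed above, and that lfs quota
prints the /lustre/ line unwrapped with kbytes in column 2 and files in
column 6, so the awk fields may need adjusting) ::

# for u in user0 user1 user2 user3 user4 user5 user6 user7; do lfs quota -u $u /lustre/; done | awk '$1 == "/lustre/" {kb += $2; f += $6} END {print kb " kB, " f " files"}'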

Shouldn't the users' accumulated usage add up to the group's usage?

Does this mean that the quotas are not coherent and that I have to run
quotacheck again?
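(If a re-check does turn out to be necessary, I assume the way to do it on
1.8.4 is the command from the manual, run on a client while the filesystem
is otherwise idle ::

# lfs quotacheck -ug /lustre/

Please correct me if that is not right.)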

Thank you in advance for any reply/guidance,
Stelios.


On Wed, Jan 23, 2013 at 8:10 AM, Adrian Ulrich <adrian at blinkenlights.ch> wrote:

> Hi,
>
>
> > But apart from that, when else is quotacheck required? For example, is
> > there any case where the quotas will not be coherent, so that I have to
> > run quotacheck again in order to recheck them?
>
> Lustre versions prior to 1.6.5 required you to run quotacheck after server
> crashes.
> Recent versions (1.8, 2.x) use journaled quotas and will survive crashes
> just fine.
>
> We are running Lustre 1.8 and 2.2 servers and I never had to re-run
> quotacheck on them.
>
>
>
> > Secondly, the operations manual v1.8.4 (section 9.1.2) states that the
> > time quotacheck needs to complete is proportional to the number of files
> > in the filesystem. Is there a practical way to get an indication of how
> > long quotacheck will take?
>
> I can't give you exact timings (or a formula) but it's pretty fast:
>
> We enabled quotas on our 1.8.x system while about 100 TB (medium-sized
> files) were already in use. The initial `quotacheck' run finished within
> 30 minutes.
>
>
>
> Regards,
>  Adrian
>
>
> --
>  RFC 1925:
>    (11) Every old idea will be proposed again with a different name and
>         a different presentation, regardless of whether it works.
>
>

