[lustre-discuss] Any way to dump Lustre quota data?

Kevin M. Hildebrand kevin at umd.edu
Thu Sep 5 08:08:08 PDT 2019


Interesting.  The files under qmt don't appear to be useful for this, but
the ones under quota_slave do have what I want, though it looks like I'll
have to pull the data from every OST and sum it myself.  That actually
isn't too bad and can give me more useful information.
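
Here's a rough, untested sketch of the aggregation I have in mind, in case
it's useful to anyone else.  It's meant to run on each OSS, with the
per-server output merged afterwards.  The glob path and the record layout
(YAML-style "- id: ..." lines followed by "usage: { inodes: ..., kbytes: ... }")
are assumptions based on what the acct_user files look like here, so check
them against your own servers.  A similar sketch for the qmt glb-usr files
Jeff mentions is below the quoted thread.

#!/usr/bin/env python3
# Sketch: sum per-UID inode/block usage from the local quota_slave
# accounting files.  Run on each server and merge the per-server output
# in the collection pipeline.
# NOTE: the glob below and the record layout are assumptions; verify them
# on your own servers before relying on this.

import glob
import re
from collections import defaultdict

# acct_user lives under the osd-* proc tree on each server; this picks up
# every local target (OSTs, and the MDT if run on the MDS).  Adjust for
# osd-zfs vs. osd-ldiskfs and for your fsname as needed.
ACCT_GLOB = "/proc/fs/lustre/osd-*/*/quota_slave/acct_user"

# Expected records (assumed):
#   - id:      1000
#     usage:   { inodes: 12, kbytes: 1024 }
ID_RE = re.compile(r"-\s*id:\s*(\d+)")
USAGE_RE = re.compile(r"inodes:\s*(\d+),\s*kbytes:\s*(\d+)")

def collect_local_usage():
    """Return {uid: [inodes, kbytes]} summed over all local targets."""
    totals = defaultdict(lambda: [0, 0])
    for path in glob.glob(ACCT_GLOB):
        uid = None
        with open(path) as fh:
            for line in fh:
                m = ID_RE.search(line)
                if m:
                    uid = int(m.group(1))
                    continue
                m = USAGE_RE.search(line)
                if m and uid is not None:
                    totals[uid][0] += int(m.group(1))
                    totals[uid][1] += int(m.group(2))
                    uid = None
    return totals

if __name__ == "__main__":
    # One line per UID: "uid inodes kbytes", easy to feed into a grapher.
    for uid, (inodes, kbytes) in sorted(collect_local_usage().items()):
        print(uid, inodes, kbytes)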

Thanks!
Kevin

On Thu, Sep 5, 2019 at 10:00 AM Jeff Johnson <jeff.johnson at aeoncomputing.com>
wrote:

> Kevin,
>
> There are files in /proc/fs/lustre/qmt/yourfsname-QMT0000/ that you can
> pull it all from, broken out by UID and GID. Look for files like
> md-0x0/glb-usr and dt-0x0/glb-usr there, and the files in
> /proc/fs/lustre/osd-zfs/yourfsname-MDT0000/quota_slave.
>
> I’m not in front of a keyboard (I’m cooking breakfast), but I’ll follow
> up with the exact files. You can cat them and maybe find what you’re
> looking for.
>
> —Jeff
>
> On Thu, Sep 5, 2019 at 05:07 Kevin M. Hildebrand <kevin at umd.edu> wrote:
>
>> Is there any way to dump the Lustre quota data in its entirety, rather
>> than having to call 'lfs quota' individually for each user, group, and
>> project?
>>
>> I'm currently doing this on a regular basis so we can keep graphs of
>> per-user and per-group usage over time, but it's problematic for two
>> reasons:
>> 1.  Getting a comprehensive list of users and groups to iterate over is
>> difficult.  Sure, I can use the passwd/group files, but if a user has
>> been deleted there may still be files owned by a now-orphaned UID or GID
>> that I won't see.  We may also have thousands of users in the passwd
>> file who don't have files on a particular Lustre filesystem, and doing
>> lfs quota calls for those users wastes time.
>> 2.  Calling lfs quota hundreds of times, once for each user, group, and
>> project, takes a while, which limits how often I can collect the data.
>> Ideally I'd like to collect it every minute or so.
>>
>> I have two different Lustre installations, one running 2.8.0 with
>> ldiskfs, the other running 2.10.8 with ZFS.
>>
>> Thanks,
>> Kevin
>>
>> --
>> Kevin Hildebrand
>> University of Maryland
>> Division of IT
>>
>> _______________________________________________
>> lustre-discuss mailing list
>> lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
> --
> ------------------------------
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.johnson at aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001   f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>
> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>

