[Lustre-discuss] Quota not functioning on all OSTs

Michael Losapio mike.losapio at nyu.edu
Thu Feb 28 05:32:08 PST 2013


1.8.7 Whamcloud release...

lustre-1.8.7-wc1_2.6.18_274.3.1.el5



On Wed, Feb 27, 2013 at 8:53 PM, Colin Faber <colin_faber at xyratex.com> wrote:
> Hi,
>
> What version of lustre are you running?
>
> -cf
>
>
> Michael Losapio <mike.losapio at nyu.edu> wrote:
>
> Hey folks,
>
> I have a bit of an anomaly...
>
> Lustre quotas are only working on a portion of my OSTs despite having
> the correct parameters set...
>
> [root@balki ~]# lfs quota -u mjl19 -v /scratch
> Disk quotas for user mjl19 (uid 1552540):
>      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
>        /scratch 3095592*     10      20       -       3  1000000 1001000       -
> scratch-MDT0000_UUID
>                     264       -    1024       -       3       -    5120       -
> scratch-OST0000_UUID
>                       0       -    1024       -       -       -       -       -
> scratch-OST0001_UUID
>                    6936*      -    1024       -       -       -       -       -
> scratch-OST0002_UUID
>                       0       -    1024       -       -       -       -       -
> ....
> scratch-OST001d_UUID
>                 3095560       -       0       -       -       -       -       -
> scratch-OST001e_UUID
>                       0       -       0       -       -       -       -       -
> ....
>
> [root@oss2 ~]# tunefs.lustre --dryrun --param /dev/mapper/ost_scratch_29
> checking for existing Lustre data: found CONFIGS/mountdata
> Reading CONFIGS/mountdata
>
>    Read previous values:
> Target:     scratch-OST001d
> Index:      29
> Lustre FS:  scratch
> Mount type: ldiskfs
> Flags:      0x1002
>               (OST no_primnode )
> Persistent mount opts:
> errors=remount-ro,extents,mballoc,nodelalloc,nobarrier
> Parameters: mgsnode=10.0.1.240@o2ib mgsnode=10.0.1.239@o2ib
> failover.node=10.0.1.236@o2ib failover.node=10.0.1.235@o2ib
> ost.quota_type=ug
>
> I even tried resetting the user quota (thinking that perhaps this OSS
> was offline when the limits were originally set), but nothing changed.
> This isn't an anomaly for just this one user; it affects all users on
> those OSTs.
>
> Does this mean I have to run an lfs quotacheck across the entire
> filesystem? If so, how risky is it to run on a live system?
>
> Thanks,
>
> Mike
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
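
For reference, a minimal sketch of the commands under discussion, assuming the
standard Lustre 1.8 quota tools and reusing the mount point, device path, and
limits quoted above purely for illustration (adjust for your own cluster
before running anything):

    # Re-run quota accounting across the whole filesystem. This scans the
    # objects on the MDT and OSTs, so expect extra I/O load while it runs.
    lfs quotacheck -ug /scratch

    # Re-apply the user's limits afterwards: soft/hard block limits in KB,
    # then soft/hard inode limits.
    lfs setquota -u mjl19 -b 10 -B 20 -i 1000000 -I 1001000 /scratch

    # If an OST were missing the quota parameter, it could be set with the
    # target unmounted (the tunefs output above shows it is already present
    # on OST001d).
    tunefs.lustre --param ost.quota_type=ug /dev/mapper/ost_scratch_29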
