[Lustre-discuss] Question about quotas.

Θεόδωρος Στυλιανός Κονδύλης theodoros.stylianos.kondylis at gmail.com
Tue Jan 22 02:54:44 PST 2013


Thank you very much for your reply.

So the problem was that OST2 was inactive.

mds2# cat /proc/fs/lustre/lov/jtest1-mdtlov/target_obd
0: jtest1-OST0000_UUID ACTIVE
1: jtest1-OST0001_UUID ACTIVE
2: jtest1-OST0002_UUID INACTIVE
3: jtest1-OST0003_UUID ACTIVE

And I fixed it by ::

mds2# lctl dl
  0 UP mgs MGS MGS 15
  1 UP mgc MGCXX.XX.XX.XX at o2ib 6cf8b24c-b023-4593-7c82-efb04c681aaa 5
  2 UP mdt MDS MDS_uuid 3
  3 UP lov jtest1-mdtlov jtest1-mdtlov_UUID 4
  4 UP mds jtest1-MDT0000 jtest1-MDT0000_UUID 11
  5 UP osc jtest1-OST0001-osc jtest1-mdtlov_UUID 5
  6 UP osc jtest1-OST0003-osc jtest1-mdtlov_UUID 5
  7 UP osc jtest1-OST0000-osc jtest1-mdtlov_UUID 5
  8 IN osc jtest1-OST0002-osc jtest1-mdtlov_UUID 5

mds2# lctl --device 8 activate
mds2# cat /proc/fs/lustre/lov/jtest1-mdtlov/target_obd
0: jtest1-OST0000_UUID ACTIVE
1: jtest1-OST0001_UUID ACTIVE
2: jtest1-OST0002_UUID ACTIVE
3: jtest1-OST0003_UUID ACTIVE
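
As a more general pattern (a sketch only; the field positions assume the
lctl dl output format shown above), any osc device still marked IN on the
MDS could be reactivated in one pass ::

mds2# lctl dl | awk '$2 == "IN" && $3 == "osc" {print $1}' | \
        while read dev; do lctl --device $dev activate; done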

And finally I had to run a quotacheck from a client in order to get the
correct quota figures.

clie1# lfs quotacheck -ug /lustre/jtest1/
clie1# lfs quota /lustre/jtest1/
.....
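
To then set actual limits, lfs setquota can be used; the user name and the
limits below are purely illustrative (block limits are in kB) ::

clie1# lfs setquota -u someuser -b 1000000 -B 1100000 -i 10000 -I 11000 /lustre/jtest1/
clie1# lfs quota -u someuser /lustre/jtest1/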


On Tue, Jan 22, 2013 at 10:26 AM, Niu, Yawei <yawei.niu at intel.com> wrote:

>  Looks like your OST 2 is administratively disabled; you'd have to activate
> all your OSTs if you want to get accurate quota usage. Please refer to the
> manual to see how to activate/deactivate OSTs.
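>
>  For example (the device number is illustrative; take it from the lctl dl
> listing on the MDS) ::
>
> mds# lctl --device <devno> deactivate
> mds# lctl --device <devno> activate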
>
>  Thanks
> - Niu
>
>   From: Θεόδωρος Στυλιανός Κονδύλης <theodoros.stylianos.kondylis at gmail.com>
> Date: Friday, January 18, 2013 5:40 PM
> To: Yawei Niu <yawei.niu at intel.com>
> Subject: Re: [Lustre-discuss] Question about quotas.
>
>   Thank you all for the information.
>
>  I tried your suggestions and found the following ::
>
>
>  clie1# lfs check servers
> jtest1-MDT0000-mdc-ffff88033d143400 active.
> jtest1-OST0001-osc-ffff88033d143400 active.
> jtest1-OST0003-osc-ffff88033d143400 active.
> jtest1-OST0000-osc-ffff88033d143400 active.
> jtest1-OST0002-osc-ffff88033d143400 active.
>
>  But when I do a ::
>
>   # lfs quota /lustre/jtest1/
> Disk quotas for user root (uid 0):
>      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
> /lustre/jtest1/ [15751753680]  30786325577728 31555983717171       -   [1280]   10000   11000       -
> Some errors happened when getting quota info. Some devices may be not working or deactivated. The data in "[]" is inaccurate.
> Disk quotas for group root (gid 0):
>      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
> /lustre/jtest1/ [15751753680]  30786325577728 31555983717171       -   [1280]   10000   11000       -
> Some errors happened when getting quota info. Some devices may be not working or deactivated. The data in "[]" is inaccurate.
>
>   At the same time, on the MDS I see ::
>
>   mds2# dmesg | tail
> LustreError: 7116:0:(quota_ctl.c:463:lov_quota_ctl()) ost 2 is inactive
> LustreError: 7116:0:(quota_ctl.c:463:lov_quota_ctl()) Skipped 1 previous similar message
>
>  But on OSS1 (responsible for OST2), the OST2 md array is mounted.
>
>  How can I check further on the OST's status? Since the OST's md array is
> mounted, shouldn't it be active?
>
>  I even tried a failover from OSS1 to OSS2 through heartbeat, but OST2
> remained inactive (despite OST2's md array being mounted).
>
>  Any ideas, or pointers to relevant Lustre commands/files, would be very
> helpful.
>
> On Thu, Jan 17, 2013 at 10:01 AM, Niu, Yawei <yawei.niu at intel.com> wrote:
>
>>  Perhaps quota could not be turned on for some OSTs. Did you see any error
>> messages (in syslog) when starting the MDT & OSTs?
>>
>>   From: Θεόδωρος Στυλιανός Κονδύλης <theodoros.stylianos.kondylis at gmail.com>
>> Date: Wednesday, January 16, 2013 2:26 AM
>> To: "lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
>>
>> Subject: [Lustre-discuss] Question about quotas.
>>
>>   Hello to everyone.
>>
>>  We have a Lustre (v1.8.4) test cluster with 2 MDSs, 2 OSSs, and 4 clients.
>>
>>  I am experimenting with quotas, but something does not seem to work.
>>
>>  First I did ::
>>
>>   cli1# lfs quotacheck /lustre
>> cli1# lfs quotaon -ug /lustre
>> cli1# lfs quota /lustre
>>
>>  To get ::
>>
>>  Disk quotas for user root (uid 0):
>>      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
>> /lustre/jtest1/     [0]       0       0       -     [0]       0       0       -
>> Some errors happened when getting quota info. Some devices may be not working or deactivated. The data in "[]" is inaccurate.
>> Disk quotas for group root (gid 0):
>>      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
>> /lustre/jtest1/     [0]       0       0       -     [0]       0       0       -
>> Some errors happened when getting quota info. Some devices may be not working or deactivated. The data in "[]" is inaccurate.
>>
>>  So I tried ::
>>
>> mds2# cat /proc/fs/lustre/lquota/jtest1-MDT0000/quota_type
>> >> ug3
>> oss1# cat /proc/fs/lustre/lquota/jtest1-OST000{1,3}/quota_type
>> >> ug3
>> >> ug3
>> oss2# cat /proc/fs/lustre/lquota/jtest1-OST000{0,2}/quota_type
>> >> ug3
>> >> ug3
>>
>>  Then I tried ::
>>
>> cli1# lfs quotaon -ugf /lustre
>> cli1# lfs quota /lustre
>> >> user quotas are not enabled.
>> >> group quotas are not enabled.
>>  mds2# cat /proc/fs/lustre/lquota/jtest1-MDT0000/quota_type
>> >> 3
>> oss1# cat /proc/fs/lustre/lquota/jtest1-OST000{1,3}/quota_type
>> >> 3
>> >> 3
>> oss2# cat /proc/fs/lustre/lquota/jtest1-OST000{0,2}/quota_type
>> >> 3
>> >> 3
>>
>>  And then again ::
>>
>> cli1# lfs quotaon -ug /lustre
>> cli1# lfs quota /lustre
>> >> Disk quotas for user root (uid 0):
>> >>      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
>> >> /lustre/jtest1/     [0]       0       0       -     [0]       0       0       -
>> >> Some errors happened when getting quota info. Some devices may be not working or deactivated. The data in "[]" is inaccurate.
>> >> Disk quotas for group root (gid 0):
>> >>      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
>> >> /lustre/jtest1/     [0]       0       0       -     [0]       0       0       -
>> >> Some errors happened when getting quota info. Some devices may be not working or deactivated. The data in "[]" is inaccurate.
>> mds2# cat /proc/fs/lustre/lquota/jtest1-MDT0000/quota_type
>> >> ug3
>> oss1# cat /proc/fs/lustre/lquota/jtest1-OST000{1,3}/quota_type
>> >> ug3
>> >> ug3
>> oss2# cat /proc/fs/lustre/lquota/jtest1-OST000{0,2}/quota_type
>> >> ug3
>> >> ug3
>>
>>
>>  So I would like to ask a few questions, in case anyone knows.
>>
>>  First of all, is this a known issue, or am I doing something wrong here?
>> I am forcing the quotaon, and it reacts as if I had done a quotaoff.
>>
>>  Furthermore, I would like to ask about the number 3 in the quota_type
>> files, in case anyone knows what it means and why it is necessary.
>>
>>  Finally, if I am not doing something wrong here, is there a way to fix
>> this?
>>
>>  Thank you in advance for your time and any replies/guidance/directions.
>>
>>  Stelios.
>>
>>
>