[lustre-discuss] quota problem on lustre 2.5.3 when doing lfs_migrate
Philippe Weill
Philippe.Weill at latmos.ipsl.fr
Fri Apr 17 01:53:44 PDT 2015
Le 16/04/2015 08:41, Dilger, Andreas a écrit :
> On 2015/04/15, 11:38 PM, "Philippe Weill" <Philippe.Weill at latmos.ipsl.fr>
> wrote:
>> we have a problem with quota on a 2.5.3 filesystem (ldiskfs/ext4, on
>> Scientific Linux 6) when using lfs_migrate after deactivating an OST on
>> the MDS. The files seem to be migrated correctly, but the allocated space
>> stays counted in the quota on the deactivated device.
>
> The deactivated OST isn't able to update the quota. If you reactivate the
> OST and then restart the MDS, the objects on the OST will be destroyed and
> the quota will be updated.
>
> Cheers, Andreas
>
Thanks Andreas
OK, so we just unmounted the MDT and remounted it. After that, it seems to
be fine for the block (size) quota but not for the inode quota.
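For reference, the sequence follows Andreas' suggestion; this is a hedged
sketch only (the OSC device name, MDT mount point, and MDT block device are
illustrative placeholders from our setup, not logged commands):

```shell
# On the MDS: reactivate the deactivated OST so the MDS can destroy the
# leftover objects (device name for OST0005 is an example).
lctl dl | grep osc                                   # list MDS-side OST devices
lctl --device datafs-OST0005-osc-MDT0000 activate    # reactivate the OST

# Restart the MDS, here by remounting the MDT (/dev/mdtdev is a placeholder).
umount /mnt/mdt
mount -t lustre /dev/mdtdev /mnt/mdt

# On a client: re-check the per-target quota accounting afterwards.
lfs quota -uv oneuser /data
```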
For example, with another account that had 14288 files migrated, the size
quota is accurate after restarting the MDS:
Disk quotas for user oneuser (uid 901):
     Filesystem     kbytes      quota      limit  grace  files  quota  limit  grace
          /data  118429360  838860800 1048576000      -  29941 300000 310000      -
datafs-MDT0000_UUID    3768          -          0      -  29941      -  65538      -
datafs-OST0000_UUID       0          -   16075692      -      -      -      -      -
datafs-OST0001_UUID       0          -   16023952      -      -      -      -      -
datafs-OST0002_UUID       0          -   16020528      -      -      -      -      -
datafs-OST0003_UUID       0          -   15960772      -      -      -      -      -
datafs-OST0004_UUID       0          -   15933760      -      -      -      -      -
datafs-OST0005_UUID       0          -   15769424      -      -      -      -      -
datafs-OST0006_UUID 16997428         -   27068640      -      -      -      -      -
datafs-OST0007_UUID 29532356         -   43761424      -      -      -      -      -
datafs-OST0008_UUID 20113180         -   31903688      -      -      -      -      -
datafs-OST0009_UUID 51782628         -   67237844      -      -      -      -      -
After giving all the files to another user:
[root ~]# chown -R admtest /data/oneuser
[root at ciclad-ng ~]# lfs quota -uv oneuser
Disk quotas for user oneuser (uid 901):
     Filesystem     kbytes      quota      limit  grace  files  quota  limit  grace
          /data          0  838860800 1048576000      -  14288 300000 310000      -
datafs-MDT0000_UUID       0          -          0      -  14288      -  65538      -
datafs-OST0000_UUID       0          -   16075692      -      -      -      -      -
datafs-OST0001_UUID       0          -   16023952      -      -      -      -      -
datafs-OST0002_UUID       0          -   16020528      -      -      -      -      -
datafs-OST0003_UUID       0          -   15960772      -      -      -      -      -
datafs-OST0004_UUID       0          -   15933760      -      -      -      -      -
datafs-OST0005_UUID       0          -   15769424      -      -      -      -      -
datafs-OST0006_UUID       0          -   16143188      -      -      -      -      -
datafs-OST0007_UUID       0          -   16179636      -      -      -      -      -
datafs-OST0008_UUID       0          -   16480220      -      -      -      -      -
datafs-OST0009_UUID       0          -   16002064      -      -      -      -      -
So I still have 14288 files counted in the inode quota, but no blocks on any OST.
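To cross-check the reported inode count against the files actually owned by
the user, one can count them directly on a client (a hedged example; the
mount point and user name are from our setup):

```shell
# Count files owned by the user across the filesystem and compare the
# result with the "files" column of lfs quota.
lfs find /data --user oneuser | wc -l
lfs quota -u oneuser /data
```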
The servers run SL6 with the RHEL6 RPMs:
[root at mds2-ipsl ~]# rpm -qa|egrep 'lustre|e2fs'
lustre-modules-2.5.3-2.6.32_431.23.3.el6_lustre.x86_64.x86_64
e2fsprogs-1.42.12.wc1-7.el6.x86_64
lustre-osd-ldiskfs-2.5.3-2.6.32_431.23.3.el6_lustre.x86_64.x86_64
lustre-iokit-2.5.3-2.6.32_431.23.3.el6_lustre.x86_64.x86_64
kernel-2.6.32-431.23.3.el6_lustre.x86_64
e2fsprogs-libs-1.42.12.wc1-7.el6.x86_64
lustre-2.5.3-2.6.32_431.23.3.el6_lustre.x86_64.x86_64
lfs_migrate was executed on an SL6 client with RPMs from the download site:
[root at nfs-lustre ~]# rpm -qa|grep lustre
lustre-modules-2.5.3-2.6.32_431.23.3.el6_lustre.x86_64.x86_64
lustre-client-2.5.3-2.6.32_431.23.3.el6.x86_64.x86_64
kernel-2.6.32-431.23.3.el6_lustre.x86_64
lustre-client-modules-2.5.3-2.6.32_431.23.3.el6.x86_64.x86_64
The cluster client nodes are on 1.8.9wc1 because we still have some
filesystems running 1.8.9wc1.
--
Weill Philippe - Administrateur Systeme et Reseaux
CNRS/UPMC/IPSL LATMOS (UMR 8190)