[lustre-discuss] Re-format MDT for more inodes

Jérôme BECOT jerome.becot at inserm.fr
Wed May 11 12:15:13 PDT 2016


If the behavior is the same as in 2.x, I think that the df -i command
shows the OST inodes.

The "lfs df -i" command should show the per-target detail (MDT and OSTs).
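
For example, running it against the client mount point should break the
totals down per target (the /work mount point is taken from the df output
quoted below; adjust for your setup):

    # Per-target inode usage: one line per MDT and per OST, plus a
    # filesystem summary, which is what plain "df -i" reports on a client.
    lfs df -i /work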

On 09/05/2016 19:01, Ben Evans wrote:
> Could you post the output from "lfs df -i" from one of the clients?  If I
> had to guess you may have more MDT inodes than (total) OST inodes now.
>
> -Ben Evans
>
> On 5/9/16, 12:36 PM, "lustre-discuss on behalf of Tung-Han Hsieh"
> <lustre-discuss-bounces at lists.lustre.org on behalf of
> thhsieh at twcp1.phys.ntu.edu.tw> wrote:
>
>> Dear All,
>>
>> We are facing a strange situation, so we are asking for help here.
>> Any suggestions will be very much appreciated.
>>
>> Our Lustre file system (version 1.8.7) ran out of MDT inodes, so we
>> backed up the MDT data, reformatted the MDT with a larger partition
>> size and more inodes, and restored the MDT data. After that, the
>> whole file system works normally, but we found that the clients
>> cannot see as many inodes as the MDT server. Here are the details of
>> what we have done.
>>
>> 1. In the beginning, the MDT had a partition of about 200GB. It was
>>    formatted with default options, giving more than 48,660,000 inodes.
>>    We eventually exhausted all of those inodes, so we decided to
>>    reformat the MDT partition.
>>
>> 2. We shut down the Lustre file system and took the following steps
>>    to back up the MDT data:
>>
>>    - mount -t ldiskfs /dev/sda2 /mnt/mdt
>>    - cd /mnt/mdt
>>    - getfattr -R -d -m '.*' -P . > /tmp/ea.bak
>>    - tar -cf /tmp/mdt.tar .
>>    - cd /
>>    - umount /mnt/mdt
>>
>>    (Note: /dev/sda1 is the MGS partition. We did not change it at all)
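>>
>>    (As a sanity check, one might record how many entries the archive
>>    and the EA dump contain before reformatting, and compare again
>>    after the restore; a minimal sketch:
>>
>>    # number of entries captured in the tar archive
>>    tar -tf /tmp/mdt.tar | wc -l
>>    # number of "# file:" records in the extended-attribute dump
>>    grep -c '^# file:' /tmp/ea.bak
>>    )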
>>
>> 3. We used "fdisk" to re-partition the hard disk and enlarged the
>>    partition to 500GB (which is almost the whole disk size).
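>>
>>    (The repartitioning might look roughly like the interactive fdisk
>>    session sketched below; the exact sector numbers depend on the disk,
>>    and the grown partition must start at the same sector as the old
>>    /dev/sda2 so that the MGS partition /dev/sda1 stays untouched:
>>
>>    fdisk /dev/sda
>>      d                      # delete the old 200GB partition
>>      2
>>      n                      # recreate partition 2 ...
>>      p
>>      2
>>      <same start sector>    # ... starting where the old one started
>>      <new, larger end>      # ... but ending near the end of the disk
>>      w                      # write the new partition table
>>    )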
>>
>> 4. We reformatted this partition with:
>>
>>    - mkfs.lustre --fsname cfs --mdt --mgsnode=<my_host_name> \
>>                  --mkfsoptions="-i 1024" /dev/sda2
>>
>>    Since we use stripe count = 1 (the default), we would like to have a
>>    higher density of inodes on the MDT.
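>>
>>    (Rough check of what "-i 1024" implies: mke2fs creates roughly one
>>    inode per 1024 bytes of partition size, so
>>
>>        ~500 GB / 1024 bytes per inode  ~=  ~488 million inodes,
>>
>>    which is consistent with the 486,326,016 inodes shown below.)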
>>
>>    After reformatting, mounting it as an ldiskfs file system shows that
>>    we indeed got a large number of inodes:
>>
>> # df -i
>> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
>> tmpfs                1019522       4 1019518    1% /lib/init/rw
>> udev                 1019522    2847 1016675    1% /dev
>> tmpfs                1019522       1 1019521    1% /dev/shm
>> /dev/shm             1019522       1 1019521    1% /dev/shm
>> overflow             1019522       2 1019520    1% /tmp
>> /dev/sda1              61824      42   61782    1% /cfs/mgs
>> /dev/sda2            486326016     2 486326014  1% /mnt/mdt
>>
>> 5. Then we restored the MDT data to the new partition via:
>>
>>    - tune2fs -O dir_index /dev/sda2
>>    - mount -t ldiskfs /dev/sda2 /mnt/mdt
>>    - cd /mnt/mdt
>>    - tar xf /tmp/mdt.tar
>>    - setfattr --restore=/tmp/ea.bak
>>    - rm -f OBJECTS/* CATALOGS
>>    - cd /
>>    - umount /mnt/mdt
>>
>>    After that, we could mount the Lustre file system from the clients
>>    successfully.
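>>
>>    (To spot-check the restore, one could compare a file's striping EA
>>    before and after, while the MDT is still mounted as ldiskfs; the
>>    path below is just a placeholder:
>>
>>    # files live under ROOT/ on the ldiskfs-mounted MDT; trusted.lov
>>    # holds the striping information
>>    getfattr -d -m trusted.lov /mnt/mdt/ROOT/<some file>
>>    )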
>>
>> 6. However, the clients cannot see as many inodes as the MDT server
>>    reports. On the MDT server, "df -i" shows:
>>
>> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
>> tmpfs                1019522       4 1019518    1% /lib/init/rw
>> udev                 1019522    2847 1016675    1% /dev
>> tmpfs                1019522       1 1019521    1% /dev/shm
>> /dev/shm             1019522       1 1019521    1% /dev/shm
>> overflow             1019522       2 1019520    1% /tmp
>> /dev/sda1              61824      42   61782    1% /cfs/mgs
>> /dev/sda2            486326016 48661002 437665014   11% /cfs/mdt
>>
>>    But on a client, "df -i" shows:
>>
>> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
>> /dev/sda1            3055616  448561 2607055   15% /
>> tmpfs                1024174       6 1024168    1% /lib/init/rw
>> udev                 1024174    4969 1019205    1% /dev
>> tmpfs                1024174       1 1024173    1% /dev/shm
>> /dev/shm             1024174       1 1024173    1% /dev/shm
>> /dev/sda3            26501120 1203655 25297465    5% /home
>> dfs0:/cfs            90849681 48661011 42188670   54% /work
>>
>>    Please note that the number 90849681 is close to the inode count
>>    that the default inode density (--mkfsoptions="-i 4096") would give
>>    on a 500GB partition.
>>
>> Does anyone know what is going on here? Will this situation harm
>> the operation of the Lustre file system? Any suggestions are very
>> much appreciated.
>>
>> Thank you very much in advance.
>>
>>
>> Best Regards,
>>
>> T.H.Hsieh
>> _______________________________________________
>> lustre-discuss mailing list
>> lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


