[lustre-discuss] Lustre FS inodes getting full

Jérôme BECOT jerome.becot at inserm.fr
Fri Nov 6 06:10:23 PST 2015


Yes, that's what I understood. We don't use stripes.

What I don't know is what determines the inode limit on an OST. I guess 
the underlying filesystem (i.e. ldiskfs here) is the deciding factor, but 
even so, on a 15TB ldiskfs OST I didn't expect to hit a 17M inode limit.
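
With stripe count 1, each file consumes one object (and thus one ldiskfs 
inode) on some OST, so the two OSTs together cap us at roughly 
2 x 17,160,192 = ~34.3 million files, no matter how many inodes are free 
on the MDT. And 15TB divided by 17,160,192 is roughly one inode per 
megabyte, which I assume is the bytes-per-inode ratio mkfs.lustre picked 
by default for a large ldiskfs OST. A quick way to check, with /dev/sdX 
standing in for the real OST block device on each OSS:

# read the inode figures straight from the ldiskfs superblock
dumpe2fs -h /dev/sdX | grep -iE 'inode count|free inodes|inode size'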

We use programs that generate tons of small files, and now we are running 
out of inodes while using only 30% of the disk space.

Is there any way to increase the maximum number of inodes available on the OSTs?
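
As far as I know the ldiskfs inode count is fixed when the target is 
formatted, so I suppose the only option would be to drain each OST in turn 
(e.g. lfs find --ost <index> /scratch piped to lfs_migrate) and reformat it 
with a denser bytes-per-inode ratio. A rough sketch of what I have in mind, 
with the fsname, index and MGS NID taken from our setup and /dev/sdX as a 
placeholder, still to be double-checked:

# reformat a drained OST with one inode per 64KB instead of ~1MB;
# choose the -i value to match the expected average object size
mkfs.lustre --ost --reformat \
    --fsname=lustre --index=0 --mgsnode=10.0.1.60@tcp \
    --mkfsoptions="-i 65536" /dev/sdX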

Or here again, I guess I will probably have no choice but to switch to a 
ZFS backend?
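
From what I have read, ZFS allocates dnodes dynamically, so a ZFS-backed 
OST would have no fixed inode count to size up front. Presumably the format 
would look something like this (the pool name and the mirror devices are 
made up for the example):

# ZFS-backed OST: dnodes are allocated as objects are created,
# so there is no per-OST inode limit to choose at format time
mkfs.lustre --ost --backfstype=zfs \
    --fsname=lustre --index=0 --mgsnode=10.0.1.60@tcp \
    ost0pool/ost0 mirror /dev/sdc /dev/sdd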


On 06/11/2015 15:00, Mohr Jr, Richard Frank (Rick Mohr) wrote:
> Every Lustre file will use an inode on the MDS and at least one inode on an OST (more than one OST if the file stripe count is >1).  If your OSTs don't have free inodes, Lustre cannot allocate an object for the file's contents.
>
> The upper limit on the number of files will be the lesser of:
>
> 1) number of MDS inodes
> 2) sum of inodes across all OSTs
>
> But depending upon file size and stripe count, you could end up with fewer.
>
> -- Rick
>
>> On Nov 6, 2015, at 4:55 AM, Jérôme BECOT <jerome.becot at inserm.fr> wrote:
>>
>> Hi,
>>
>> We are facing a weird situation here, and I'd like to know if anything is wrong and what I can do to fix it.
>>
>> We have a 30TB filesystem running Lustre 2.6 (1 MDS / 2 OSSes). The inode usage is full, though:
>>
>> root@SlurmMaster:~# df -i
>> Filesystem                Inodes    IUsed      IFree IUse% Mounted on
>> /dev/sda5                      0        0          0     - /
>> udev                     8256017      390    8255627    1% /dev
>> tmpfs                    8258094      347    8257747    1% /run
>> tmpfs                    8258094        5    8258089    1% /run/lock
>> tmpfs                    8258094        2    8258092    1% /run/shm
>> /dev/sdb1                      0        0          0     - /home
>> 10.0.1.60@tcp:/lustre      37743327 37492361     250966  100% /scratch
>> cgroup                   8258094        8    8258086    1% /sys/fs/cgroup
>>
>> root@SlurmMaster:~# lfs df -i
>> UUID                      Inodes       IUsed       IFree IUse% Mounted on
>> lustre-MDT0000_UUID   1169686528    37413529  1132272999   3% /scratch[MDT:0]
>> lustre-OST0000_UUID     17160192    16996738      163454  99% /scratch[OST:0]
>> lustre-OST0001_UUID     17160192    16996308      163884  99% /scratch[OST:1]
>>
>> filesystem summary:     37740867    37413529      327338  99% /scratch
>>
>> What is happening here? I thought we would have a maximum of 4 billion files, not 16 million?
>>
>> Thanks
>>
>> -- 
>> Jérome BECOT
>>
>> Systems and Network Administrator
>>
>> Molécules à visée Thérapeutique par des approches in Silico (MTi)
>> Univ Paris Diderot, UMRS973 Inserm
>> Case 013
>> Bât. Lamarck A, porte 412
>> 35, rue Hélène Brion 75205 Paris Cedex 13
>> France
>>
>> Tel : 01 57 27 83 82
>>
>> _______________________________________________
>> lustre-discuss mailing list
>> lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

-- 
Jérome BECOT

Systems and Network Administrator

Molécules à visée Thérapeutique par des approches in Silico (MTi)
Univ Paris Diderot, UMRS973 Inserm
Case 013
Bât. Lamarck A, porte 412
35, rue Hélène Brion 75205 Paris Cedex 13
France

Tel : 01 57 27 83 82


