[lustre-discuss] Lustre FS inodes getting full

Jérôme BECOT jerome.becot at inserm.fr
Tue Nov 24 06:29:25 PST 2015


Thanks for your answer.

I am sorry I didn't thank you sooner.

Does reducing the average file size have an impact on performance?
Is there a reasonable size beyond which the filesystem may become
unstable?

We are thinking of an average 100KB file size.
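
As a rough back-of-the-envelope check (my own estimate, not a measured
figure): with a 100KB inode ratio on an OST of roughly 15TB, ldiskfs would
create well over 100 million inodes per OST, compared with the ~17M we
have today.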

Thank you again

On 07/11/2015 07:05, Dilger, Andreas wrote:
> On 2015/11/06, 07:10, "lustre-discuss on behalf of Jérôme BECOT"
> <lustre-discuss-bounces at lists.lustre.org on behalf of
> jerome.becot at inserm.fr> wrote:
>
>> Yes, that's what I understood. We don't use striping.
>>
>> What I don't know is what determines the inode limit on an OST. I
>> guess that the underlying filesystem (i.e. ldiskfs here) is the
>> culprit. But then, on a 15TB OST with ldiskfs, I didn't expect to hit a
>> 17M inode limit.
>>
>> We use programs that generate tons of small files, and now we're out of
>> inodes while using only 30% of the disk space.
> The default formatting of a 15TB OST assumes an average file size of 1MB,
> which is normally a safe assumption for Lustre.
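>
> (That is consistent with the numbers above: one inode per 1MB of OST
> space works out to roughly the 17,160,192 inodes per OST reported by
> "lfs df -i".)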
>
>> Is there any way to increase the max inode number available on the OSTs?
> This can be changed at format time by specifying the average file size
> (inode ratio) for the OSTs:
>
>      mkfs.lustre ... --mkfsoptions="-i <average_file_size>"
>
> But you may want to specify a slightly smaller average file size to give
> some safety margin.
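>
> For example, if files average around 100KB, a 64KB ratio leaves some
> margin. A sketch only (keep your other usual mkfs.lustre options; "-i"
> takes bytes per inode, so 65536 means one inode per 64KB of OST space):
>
>      mkfs.lustre ... --mkfsoptions="-i 65536"
>
> On a ~15TB OST that would yield on the order of 250 million inodes
> instead of ~17M.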
>
>> Here again, I guess I probably have no other choice than switching to
>> a ZFS backend?
> The best way to handle this would be to add one or two more OSTs to the
> filesystem that are formatted with the smaller inode ratio, and Lustre
> will choose these instead of the full ones.  You could then migrate files
> from the older OSTs to the new ones until they are empty, reformat them
> with the smaller inode ratio, and add them back into the filesystem.
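>
> As a sketch of that migration step (device names follow this
> filesystem's naming; check the manual for your Lustre version before
> running anything):
>
>      # on the MDS: stop new object allocation on the full OST
>      lctl --device lustre-OST0000-osc-MDT0000 deactivate
>
>      # on a client: move files with objects on that OST to other OSTs
>      lfs find /scratch --ost lustre-OST0000_UUID -type f | lfs_migrate -y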
>
> Cheers, Andreas
>
>> On 06/11/2015 15:00, Mohr Jr, Richard Frank (Rick Mohr) wrote:
>>> Every Lustre file will use an inode on the MDS and at least one inode
>>> on an OST (more than one OST if the file's stripe count is >1).  If your
>>> OSTs don't have free inodes, Lustre cannot allocate an object for the
>>> file's contents.
>>>
>>> The upper limit on the number of files will be the lesser of:
>>>
>>> 1) number of MDS inodes
>>> 2) sum of inodes across all OSTs
>>>
>>> But depending upon file size and stripe count, you could end up with
>>> fewer.
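>>>
>>> (As a rough check against the numbers below: each OST has 17,160,192
>>> inodes, so the two OSTs together can hold at most about 34.3 million
>>> objects, far fewer than the MDT's ~1.17 billion inodes. That is why
>>> the OSTs are the limiting factor here.)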
>>>
>>> -- Rick
>>>
>>>> On Nov 6, 2015, at 4:55 AM, Jérôme BECOT <jerome.becot at inserm.fr>
>>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> We are facing a weird situation here, and I'd like to know if there is
>>>> anything wrong and what I can do to fix it.
>>>>
>>>> We have a 30TB system running Lustre 2.6 (1 MDS / 2 OSS). The inode
>>>> usage is full, though:
>>>>
>>>> root at SlurmMaster:~# df -i
>>>> Filesystem                Inodes    IUsed      IFree IUse% Mounted on
>>>> /dev/sda5                      0        0          0     - /
>>>> udev                     8256017      390    8255627    1% /dev
>>>> tmpfs                    8258094      347    8257747    1% /run
>>>> tmpfs                    8258094        5    8258089    1% /run/lock
>>>> tmpfs                    8258094        2    8258092    1% /run/shm
>>>> /dev/sdb1                      0        0          0     - /home
>>>> 10.0.1.60@tcp:/lustre     37743327 37492361     250966  100% /scratch
>>>> cgroup                   8258094        8    8258086    1% /sys/fs/cgroup
>>>>
>>>> root at SlurmMaster:~# lfs df -i
>>>> UUID                     Inodes      IUsed       IFree IUse% Mounted on
>>>> lustre-MDT0000_UUID  1169686528   37413529  1132272999    3% /scratch[MDT:0]
>>>> lustre-OST0000_UUID    17160192   16996738      163454   99% /scratch[OST:0]
>>>> lustre-OST0001_UUID    17160192   16996308      163884   99% /scratch[OST:1]
>>>>
>>>> filesystem summary:    37740867   37413529      327338   99% /scratch
>>>>
>>>> What is happening here? I thought we would have a limit of 4 billion
>>>> files, not 16 million?
>>>>
>>>> Thanks
>>>>
>>>> -- 
>>>> Jérome BECOT

-- 
Jérome BECOT

Systems and Network Administrator

Molécules à visée Thérapeutique par des approches in Silico (MTi)
Univ Paris Diderot, UMRS973 Inserm
Case 013
Bât. Lamarck A, porte 412
35, rue Hélène Brion 75205 Paris Cedex 13
France

Tel : 01 57 27 83 82


