[lustre-discuss] Lustre FS inodes getting full
andreas.dilger at intel.com
Fri Nov 6 22:05:51 PST 2015
On 2015/11/06, 07:10, "lustre-discuss on behalf of Jérôme BECOT"
<lustre-discuss-bounces at lists.lustre.org on behalf of
jerome.becot at inserm.fr> wrote:
>Yes, that's what I understood. We don't use striping.
>What I don't know is what determines the inode limit on an OST. I
>guess that the underlying filesystem (i.e. ldiskfs here) is the
>culprit. But on a 15TB OST with ldiskfs, I didn't expect to hit a
>17M inode limit.
>We use programs that generate tons of small files, and now we're
>running out of inodes while only using 30% of the disk space.
The default formatting for a 15TB OST assumes an average file size of 1MB,
which is normally a safe assumption for Lustre.
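(Roughly: 15TB at one inode per 1MB of space works out to on the order of
16 million inodes per OST, which lines up with the 17,160,192 shown in your
"lfs df -i" output below.)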
>Is there any way to increase the max inode number available on the OSTs?
This can be changed at format time by specifying the average file size
(inode ratio) for the OSTs:
mkfs.lustre ... --mkfsoptions="-i <average_file_size>"
But you may want to specify a slightly smaller average file size to give
some safety margin.
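For example (the OST index, MGS NID, and device below are only placeholders,
and the 64KB average file size is just an illustration), a new OST could be
formatted along these lines:

mkfs.lustre --ost --fsname=lustre --index=2 --mgsnode=10.0.1.60@tcp \
    --mkfsoptions="-i 65536" /dev/sdc

The "-i 65536" tells ldiskfs to create one inode per 64KB of space, so a
15TB OST would end up with on the order of 230M inodes instead of ~17M.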
>Here again, I guess I will have no choice other than switching to a
>ZFS backend?
The best way to handle this would be to add one or two more OSTs to the
filesystem that are formatted with the smaller inode ratio, and Lustre
will choose these instead of the full ones. You could then migrate files
from the older OSTs to the new ones until they are empty, reformat them
with the smaller inode ratio, and add them back into the filesystem.
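A rough sketch of that drain, using the OST names from your "lfs df -i"
output (please double-check the exact parameter names against the manual
for your 2.6 release):

# on the MDS: stop new object allocation on the full OST
lctl set_param osp.lustre-OST0000-osc-MDT0000.max_create_count=0
# on a client: move existing files off that OST onto the new ones
lfs find --ost lustre-OST0000_UUID /scratch | lfs_migrate -y

Once the OST shows no used objects, it can be reformatted with the smaller
inode ratio and added back.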
>On 06/11/2015 at 15:00, Mohr Jr, Richard Frank (Rick Mohr) wrote:
>> Every Lustre file will use an inode on the MDS and at least one inode
>>on an OST (more than one if the file stripe count is >1). If your
>>OSTs don't have free inodes, Lustre cannot allocate an object for the
>>new file.
>> The upper limit on the number of files will be the lesser of:
>> 1) number of MDS inodes
>> 2) sum of inodes across all OSTs
>> But depending upon file size and stripe count, you could end up with
>>less than that.
>> -- Rick
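(Applying that here: your MDT has ~1.17 billion inodes, but the two OSTs
together provide only 2 x 17,160,192 ≈ 34 million object inodes, so the
OSTs are the limit you are hitting.)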
>>> On Nov 6, 2015, at 4:55 AM, Jérôme BECOT <jerome.becot at inserm.fr> wrote:
>>> We face a weird situation here, and I'd like to know if there is
>>>anything wrong and what I can do to fix it.
>>> We have a 30TB system with Lustre 2.6 (1 MDS / 2 OSS). The inode usage
>>>is full though:
>>> root@SlurmMaster:~# df -i
>>> Filesystem Inodes IUsed IFree IUse% Mounted on
>>> /dev/sda5 0 0 0 - /
>>> udev 8256017 390 8255627 1% /dev
>>> tmpfs 8258094 347 8257747 1% /run
>>> tmpfs 8258094 5 8258089 1% /run/lock
>>> tmpfs 8258094 2 8258092 1% /run/shm
>>> /dev/sdb1 0 0 0 - /home
>>> 10.0.1.60@tcp:/lustre 37743327 37492361 250966 100% /scratch
>>> cgroup 8258094 8 8258086 1%
>>> root@SlurmMaster:~# lfs df -i
>>> UUID Inodes IUsed IFree IUse% Mounted on
>>> lustre-MDT0000_UUID 1169686528 37413529 1132272999 3% /scratch[MDT:0]
>>> lustre-OST0000_UUID 17160192 16996738 163454 99% /scratch[OST:0]
>>> lustre-OST0001_UUID 17160192 16996308 163884 99% /scratch[OST:1]
>>> filesystem summary: 37740867 37413529 327338 99% /scratch
>>> What is happening here? I thought we would have a 4 billion file
>>>maximum, not 16 million?
>>> Jérome BECOT
Andreas Dilger
Lustre Software Architect
Intel High Performance Data Division