[lustre-discuss] Lustre FS inodes getting full

Jérôme BECOT jerome.becot@inserm.fr
Fri Nov 6 01:55:18 PST 2015


Hi,

We are facing a weird situation here, and I'd like to know if 
anything is wrong and what I can do to fix it.

We have a 30TB system running Lustre 2.6 (1 MDS / 2 OSS), yet the 
inode usage is full:

root@SlurmMaster:~# df -i
Filesystem                Inodes    IUsed      IFree IUse% Mounted on
/dev/sda5                      0        0          0     - /
udev                     8256017      390    8255627    1% /dev
tmpfs                    8258094      347    8257747    1% /run
tmpfs                    8258094        5    8258089    1% /run/lock
tmpfs                    8258094        2    8258092    1% /run/shm
/dev/sdb1                      0        0          0     - /home
10.0.1.60@tcp:/lustre   37743327 37492361     250966  100% /scratch
cgroup                   8258094        8    8258086    1% /sys/fs/cgroup

root@SlurmMaster:~# lfs df -i
UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID   1169686528    37413529  1132272999   3% /scratch[MDT:0]
lustre-OST0000_UUID     17160192    16996738      163454  99% /scratch[OST:0]
lustre-OST0001_UUID     17160192    16996308      163884  99% /scratch[OST:1]

filesystem summary:     37740867    37413529      327338  99% /scratch

What is happening here? I thought the maximum was 4 billion files, 
not 16 million?
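For reference, the summary line above does seem to follow from the per-target figures: the usable total appears to be the files already created (MDT IUsed) plus the object slots still free on the two OSTs, not the MDT's 1.17 billion inodes. A quick sanity check of that arithmetic (assuming the default stripe count of 1, one OST object per file):

```python
# Figures copied from the `lfs df -i` output above.
mdt_used = 37_413_529          # files that exist (one MDT inode each)
ost_free = [163_454, 163_884]  # object slots left on OST0000 / OST0001

# Assumed relationship: summary Inodes = MDT IUsed + free OST objects.
summary_inodes = mdt_used + sum(ost_free)
print(summary_inodes)  # 37740867 — matches the "filesystem summary" Inodes column
```

So the effective file limit is capped by the ~17 million objects formatted on each OST, not by the MDT inode count.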

Thanks

-- 
Jérôme BECOT

Systems and Network Administrator

Molécules à visée Thérapeutique par des approches in Silico (MTi)
Univ Paris Diderot, UMRS973 Inserm
Case 013
Lamarck A Building, Room 412
35, rue Hélène Brion 75205 Paris Cedex 13
France

Tel: 01 57 27 83 82
