[Lustre-discuss] lustre1.6.7: inodes problem

Kevin Van Maren Kevin.Vanmaren at Sun.COM
Wed Mar 25 11:59:54 PDT 2009


The MDT device can be up to 8TB in size.  With the default 
configuration you get one inode for every 4KB of MDT space, but you can 
double the inode count by formatting with one inode every 2KB (see the manual).
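As a rough sanity check on the numbers in the thread below, the inode count is approximately the MDT size divided by the bytes-per-inode ratio (a sketch only; real ldiskfs overhead makes the actual count somewhat lower):

```python
# Approximate MDT inode capacity for a given bytes-per-inode ratio.
# This ignores ldiskfs metadata overhead, so it is an upper-bound sketch.

def mdt_inodes(mdt_bytes, bytes_per_inode=4096):
    """Approximate number of inodes an MDT of the given size provides."""
    return mdt_bytes // bytes_per_inode

# A ~1.5 GiB MDT at the default 4096 bytes/inode (matches the 393216
# total shown in the poster's "lfs df -i" output):
current = mdt_inodes(int(1.5 * 2**30))
# The same device formatted at 2048 bytes/inode doubles the count:
doubled = mdt_inodes(int(1.5 * 2**30), 2048)

print(current, doubled)
```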

You are looking at either reformatting (ideally also with a 
larger MDT) or removing small files from the filesystem.
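For reference, the inode ratio is passed through at format time via --mkfsoptions. A hypothetical sketch of the reformat being suggested, reusing the device path and fsname from the thread below (check the exact invocation against the 1.6.x manual before running it):

```shell
# Reformat the combined MGS/MDT with one inode per 2048 bytes
# instead of the 4096-byte default, doubling the inode count.
# WARNING: --reformat destroys the existing MDT contents.
mkfs.lustre --mgs --mdt --fsname=user \
    --mkfsoptions="-i 2048" \
    --reformat /dev/VolGroup00/mdt
```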

Kevin


Shigeru Sugimoto wrote:
> Hello,
>
> So, to fix the situation below, I could switch the MGS's HDD from 29GB to
> 750GB, which I guess would simply give me about 25 times more space for
> inodes.
> But before going that route, I would like to make sure there is no
> other (better) solution or configuration that would fix my problem below.
>
> Thanks,
>
> Shigeru
>
> 2009/3/23 Shigeru Sugimoto <lustre.shigeru at gmail.com>
>   
>> Hello,
>>
>> I am having a problem regarding inodes. My storage has more than 36TB space and I tried to copy less than 1TB data(total size is less than 1TB but more than 200,000 files, each of those are around 1MB-10MB).
>> In the middle of the rsync process to copy the data, it suddenly started to display "No space left on device", though there are almost 99% of the storage is still free.
>> I have checked the inodes with df -i option and got the result below.
>>
>> [root@lustre-client1 mnt]# lfs df -i
>> UUID                    Inodes     IUsed     IFree IUse% Mounted on
>> user-MDT0000_UUID       393216    393216         0  100% /mnt/lustre[MDT:0]
>> user-OST0000_UUID     91570176     14999  91555177    0% /mnt/lustre[OST:0]
>> user-OST0001_UUID     91570176     14999  91555177    0% /mnt/lustre[OST:1]
>> user-OST0002_UUID     91570176     14999  91555177    0% /mnt/lustre[OST:2]
>> user-OST0003_UUID     91570176     14999  91555177    0% /mnt/lustre[OST:3]
>> user-OST0004_UUID     91570176     14999  91555177    0% /mnt/lustre[OST:4]
>> user-OST0005_UUID     91570176     14999  91555177    0% /mnt/lustre[OST:5]
>> user-OST0006_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:6]
>> user-OST0007_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:7]
>> user-OST0008_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:8]
>> user-OST0009_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:9]
>> user-OST000a_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:10]
>> user-OST000b_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:11]
>> user-OST000c_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:12]
>> user-OST000d_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:13]
>> user-OST000e_UUID     91570176     15000  91555176    0% /mnt/lustre[OST:14]
>> user-OST000f_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:15]
>> user-OST0010_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:16]
>> user-OST0011_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:17]
>> user-OST0012_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:18]
>> user-OST0013_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:19]
>> user-OST0014_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:20]
>> user-OST0015_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:21]
>> user-OST0016_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:22]
>> user-OST0017_UUID     91570176     14968  91555208    0% /mnt/lustre[OST:23]
>>
>> My configuration is below.
>> - Lustre version: 1.6.7
>> - One physical MGS and one physical OSS.
>> - The MGS also serves the MDT role, configured with "mkfs.lustre --mgs --mdt --fsname=user --reformat /dev/VolGroup00/mdt".
>> - The OSS hosts 24 OSTs, each configured with "mkfs.lustre --ost --mgsnode=servername@tcp --fsname=user --reformat /dev/sda" (the same command was run for /dev/sdb through /dev/sdx).
>> So I'm using the default inode setting (i.e. --mkfsoptions="-i 4096", 1 inode per 4096 bytes of filesystem space).
>> - The available storage on the MGS is 29GB, as shown below.
>>
>> [root@mgs ~]# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/mapper/VolGroup00-LogVol00
>>                        29G  2.0G   26G   8% /
>> /dev/cciss/c0d0p1      99M   16M   78M  17% /boot
>> none                 1014M     0 1014M   0% /dev/shm
>> /dev/VolGroup00/mdt   1.4G   63M  1.2G   5% /mnt/mdt
>>
>> I tried changing the inode ratio, but the minimum is 1024 bytes/inode, so with the current hardware that would only give me four times as many inodes as the current configuration.
>>
>> Would anyone here help me correct my configuration so that it can handle hundreds of thousands of files with the current MGS?
>>
>> Thanks for your time,
>>
>> Shigeru
>>
>>     
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>   



