Hello,

I am having a problem with inodes. My storage has more than 36TB of space, and I tried to copy less than 1TB of data (the total is under 1TB, but it consists of more than 200,000 files of roughly 1MB-10MB each).
In the middle of the rsync copy, the client suddenly started reporting "No space left on device", even though almost 99% of the storage is still free.
I checked the inode usage with lfs df -i and got the result below.

[root@lustre-client1 mnt]# lfs df -i
UUID                 Inodes     IUsed      IFree  IUse%  Mounted on
user-MDT0000_UUID    393216    393216          0   100%  /mnt/lustre[MDT:0]
user-OST0000_UUID  91570176     14999   91555177     0%  /mnt/lustre[OST:0]
user-OST0001_UUID  91570176     14999   91555177     0%  /mnt/lustre[OST:1]
user-OST0002_UUID  91570176     14999   91555177     0%  /mnt/lustre[OST:2]
user-OST0003_UUID  91570176     14999   91555177     0%  /mnt/lustre[OST:3]
user-OST0004_UUID  91570176     14999   91555177     0%  /mnt/lustre[OST:4]
user-OST0005_UUID  91570176     14999   91555177     0%  /mnt/lustre[OST:5]
user-OST0006_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:6]
user-OST0007_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:7]
user-OST0008_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:8]
user-OST0009_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:9]
user-OST000a_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:10]
user-OST000b_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:11]
user-OST000c_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:12]
user-OST000d_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:13]
user-OST000e_UUID  91570176     15000   91555176     0%  /mnt/lustre[OST:14]
user-OST000f_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:15]
user-OST0010_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:16]
user-OST0011_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:17]
user-OST0012_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:18]
user-OST0013_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:19]
user-OST0014_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:20]
user-OST0015_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:21]
user-OST0016_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:22]
user-OST0017_UUID  91570176     14968   91555208     0%  /mnt/lustre[OST:23]

My configuration is as follows.
- Lustre version: 1.6.7
- One physical MGS and one physical OSS.
- The MGS also acts as the MDT, configured with "mkfs.lustre --mgs --mdt --fsname=user --reformat /dev/VolGroup00/mdt".
- The OSS hosts 24 OSTs, each configured with "mkfs.lustre --ost --mgsnode=servername@tcp --fsname=user --reformat /dev/sda" (the same command was run for /dev/sdb through /dev/sdx). So I am using the default inode setting (i.e. --mkfsoptions=-i 4096, one inode per 4096 bytes of filesystem space).
- Available storage on the MGS is 29GB, as shown below.

[root@mgs ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       29G  2.0G   26G   8% /
/dev/cciss/c0d0p1      99M   16M   78M  17% /boot
none                 1014M     0 1014M   0% /dev/shm
/dev/VolGroup00/mdt   1.4G   63M  1.2G   5% /mnt/mdt

I tried lowering the bytes-per-inode ratio, but the minimum is 1024 bytes per inode, so with the current hardware that would only give four times as many inodes as the current configuration.
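
In case it helps, this is roughly what I was considering (I have not run it, since --reformat wipes the existing MDT, and I am not sure the numbers are sensible): grow the MDT logical volume, assuming VolGroup00 still has free extents, and reformat it with a smaller bytes-per-inode ratio.

# grow the MDT volume; 20G is only an example size
lvextend -L 20G /dev/VolGroup00/mdt

# reformat the MDT with 1024 bytes per inode instead of the default 4096
mkfs.lustre --mgs --mdt --fsname=user --reformat --mkfsoptions="-i 1024" /dev/VolGroup00/mdt

If my arithmetic is right, a 20GB MDT at 1024 bytes per inode would give on the order of 20 million inodes, compared with the 393,216 I have now.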

Could anyone help me correct my configuration so that it can handle hundreds of thousands of files with the current MGS?

Thanks for your time,

Shigeru