[Lustre-discuss] MDS inode allocation question

Gary Molenkamp gary at sharcnet.ca
Wed Apr 28 06:44:49 PDT 2010


Thanks for the details on the inode number. I'm still having an issue
where I'm not getting the number I expected from the MDS creation, and I
suspect it's not a reporting error from lfs.

When I created the MDS, I specified '-i 1024', and locally I can see
800M inodes, but only part of the available space is allocated.  Also,
when the client mounts the filesystem, the MDS shows only about 430M
1K blocks total:

gulfwork-MDT0000_UUID 430781784    500264 387274084    0% /gulfwork[MDT:0]
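
For reference, this is how I've been checking the geometry (a rough
sketch using my device and mount point; dumpe2fs should be run against
the unmounted target):

   # On the MDS: inode/block geometry as ldiskfs sees it
   dumpe2fs -h /dev/sda | egrep 'Inode count|Block count|Inode size|Block size|Free blocks|Free inodes'

   # From a client: block vs. inode usage as Lustre reports it
   lfs df /gulfwork
   lfs df -i /gulfwork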

As we were creating files for testing, I saw that each inode allocation
on the MDS consumed 4k of space, so even though there are 800M inodes
available on the actual MDS partition, it appears the available block
space would only allow about 100M inodes in the Lustre filesystem.  Am I
understanding that correctly?
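
Here is the rough arithmetic behind that estimate, assuming every
create really does consume a full 4k block out of the space shown
above:

   # ~430M 1K blocks total, at four 1K blocks consumed per created file
   echo '430781784 / 4' | bc
   107695446

So roughly 107M files, which is in line with the ~100M I'm seeing.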

I tried to force the MDS creation to use a smaller size per inode, but
that produced an error:

mkfs.lustre --fsname gulfwork --mdt --mgs --mkfsoptions='-i 1024 -I 1024' \
    --reformat --failnode=10.18.12.1 /dev/sda
...
   mke2fs: inode_size (1024) * inodes_count (860148736) too big for a
           filesystem with 215037184 blocks, specify higher inode_ratio
           (-i) or lower inode count (-N).
...

yet the actual drive appears to have many more blocks available:

SCSI device sda: 1720297472 512-byte hdwr sectors (880792 MB)

Is ext4 setting this block-size limit?
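
Doing the arithmetic, the two numbers actually agree if mke2fs is
counting 4096-byte filesystem blocks rather than 512-byte sectors:

   # 512-byte sectors converted to 4k filesystem blocks
   echo '1720297472 * 512 / 4096' | bc
   215037184

and 860148736 inodes at 1024 bytes each would consume the entire
880GB device, which is presumably why mke2fs complains.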


FYI, I am using:
  lustre-1.8.2-2.6.18_164.11.1.el5-ext4_lustre.1.8.2.x86_64.rpm
  lustre-ldiskfs-3.0.9-2.6.18_164.11.1.el5-ext4_lustre.1.8.2.x86_64.rpm
  e2fsprogs-1.41.6.sun1-0redhat.rhel5.x86_64.rpm




-- 
Gary Molenkamp			SHARCNET
Systems Administrator		University of Western Ontario
gary at sharcnet.ca		http://www.sharcnet.ca
(519) 661-2111 x88429		(519) 661-4000
