[Lustre-discuss] inode tuning on shared mdt/mgs

Andreas Dilger adilger at whamcloud.com
Tue Jul 5 18:35:00 PDT 2011


On 2011-07-05, at 12:16 PM, Aaron Everett <aeverett at forteds.com> wrote:

> Thank you both for the explanation. I have spent the morning populating our Lustre file system with test data, and monitoring the inode usage.

Sorry for my fragmented first email, and thanks to Kevin for finishing it. 

> Having reformatted with --mkfsoptions="-i 1536" I'm seeing roughly 8M IUsed for every 1M IFree decrease. If the ratio holds, this will meet my needs. 

This "ratio" is only an artifact of your observation and is not going to persist for the life of the filesystem.

In Lustre 2.1, this behavior of returning min(inodes, blocks) on the MDT has been removed. The mkfs.lustre inode ratio calculation has been improved, and the statfs code no longer needs to overcompensate for the danger of external xattr blocks. If you run out of blocks on the MDT, this will still show up in the normal "lfs df" output, separate from the "lfs df -i" output.
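
For example, on a client (assuming the filesystem is mounted at /mnt/fdfs; substitute your actual mount point):

  lfs df /mnt/fdfs      # free/used blocks per MDT and OST
  lfs df -i /mnt/fdfs   # free/used inodes per MDT and OST

If the MDT line in the first command shows blocks running out while the second still shows plenty of free inodes, it is the block limit you are hitting.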

In the meantime, you can check the total number of inodes created on the MDT filesystem at format time with:

dumpe2fs -h {dev} | grep "Inode count"
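
If you also want the effective bytes-per-inode ratio the filesystem was formatted with, compare the inode count against the block count (the numbers below are only illustrative):

  dumpe2fs -h {dev} | egrep "Inode count|Block count|Block size"
  # bytes-per-inode = Block count * Block size / Inode count
  # e.g. 45781760 blocks * 4096 bytes / 122084693 inodes =~ 1536 bytes per inode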

There is a patch for 1.8 to change this as well, but it didn't make it into 1.8.6. 
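
As for your earlier question about the -N format: it just takes an absolute number of inodes and is passed through --mkfsoptions the same way as -i, so something along these lines should work (the inode count below is only an example; size it for the number of files you expect, plus some headroom):

  mkfs.lustre --fsname fdfs --mdt --mgs --mkfsoptions="-N 120000000" --reformat /dev/sdb

Like -i, it only takes effect when the MDT is formatted.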

Cheers, Andreas

> On Sat, Jul 2, 2011 at 10:54 AM, Kevin Van Maren <kevin.van.maren at oracle.com> wrote:
> Andreas Dilger wrote:
> 
> On 2011-07-01, at 12:03 PM, Aaron Everett <aeverett at forteds.com <mailto:aeverett at forteds.com>> wrote:
> I'm trying to increase the number of inodes available on our shared mdt/mgs. I've tried reformatting using the following:
> 
>  mkfs.lustre --fsname fdfs --mdt --mgs --mkfsoptions="-i 2048" --reformat /dev/sdb
> 
> The number of inodes actually decreased when I specified -i 2048 vs. leaving the number at default. 
> 
> This is a bit of an anomaly in how 1.8 reports the inode count. You actually do have more inodes on the MDS, but because the MDS might need to use an external block to store the striping layout, it limits the returned inode count to the worst-case usage. As the filesystem fills and these external blocks
> 
> [trying to complete his sentence:]
> are not used, the free inode count keeps reporting the same number of free inodes, as the number of used inodes goes up.
> 
> It is pretty weird, but it was doing the same thing in v1.6
> 
> 
> We have a large number of smaller files, and we're nearing our inode limit on our mdt/mgs. I'm trying to find a solution before simply expanding the RAID on the server. Since there is plenty of disk space, changing the bytes per inode seemed like a simple solution. 
> From the docs:
> 
> Alternately, if you are specifying an absolute number of inodes, use the -N <number of inodes> option. You should not specify the -i option with an inode ratio below one inode per 1024 bytes in order to avoid unintentional mistakes. Instead, use the -N option.
> 
> What is the format of the -N flag, and how should I calculate the number to use? Thanks for your help!
> 
> Aaron
> 
> 
> 
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss

