[lustre-discuss] confused about mdt space

肖正刚 guru.novice at gmail.com
Wed Apr 1 00:55:34 PDT 2020


Hi,
Please disregard my first question; I made a mistake.
Regarding "recent Lustre versions use a 1KB inode size by default and the
default format options create one inode for every 2.5KB of MDT space":
I checked that the inode size is 1KB, and on my online systems, as you said,
about 40~41% of the MDT disk space is consumed by inodes.
But the manual says the default "inode ratio" is 2KB, so where does the
additional 0.5KB come from?

Thanks.
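
As a quick sanity check on those percentages, here is a minimal sketch in
Python, assuming the 1KB inode size and 2.5KB bytes-per-inode ratio quoted
above; this is back-of-the-envelope arithmetic, not real mkfs output:

# Fraction of MDT space consumed by inode tables, given the figures
# quoted in this thread (assumed, not verified against mkfs defaults).
inode_size = 1024        # bytes per on-disk inode (1KB)
bytes_per_inode = 2560   # one inode per 2.5KB of MDT space

print(f"inode overhead: {inode_size / bytes_per_inode:.0%}")   # 40%

# The manual's 2KB ratio would instead give 50%, so the observed
# 40~41% is consistent with a 2.5KB ratio, not a 2KB one:
print(f"implied ratio at 41% usage: {1024 / 0.41:.0f} bytes")  # ~2498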


肖正刚 <guru.novice at gmail.com> wrote on Wed, Apr 1, 2020 at 1:00 PM:

> Thanks a lot.
> I have two more questions:
> 1) Assume I estimate the MDT space using the method described in the
> Lustre manual, and the calculation gives 400GB of metadata space.
> After formatting (with default options), about 160GB (40% of 400GB) is
> preallocated for inodes, so the number of available inodes is less than
> estimated, right?
> 2) The MDS needs additional space for other uses, like logs, ACLs, and
> xattrs; how do I estimate that space?
>
> Thanks!
>
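
A hypothetical worked example for question 1) above, under the same
assumptions as before; the 400GB figure comes from the question, and the
output is illustrative, not real mkfs output:

# Inode count and inode-table space for a hypothetical 400GB MDT,
# assuming one 1KB inode per 2.5KB of MDT space as discussed above.
GiB = 2**30              # the thread says "GB"; binary units assumed
mdt_size = 400 * GiB

inodes = mdt_size // 2560       # one inode per 2.5KB
inode_tables = inodes * 1024    # 1KB per inode

print(f"inodes created:    {inodes:,}")                    # 167,772,160
print(f"inode table space: {inode_tables / GiB:.0f} GiB")  # 160 GiB (40%)
print(f"left over:         {(mdt_size - inode_tables) / GiB:.0f} GiB")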
> Mohr Jr, Richard Frank <rmohr at utk.edu> wrote on Tue, Mar 31, 2020 at 9:57 PM:
>
>>
>>
>> > On Mar 30, 2020, at 10:56 PM, 肖正刚 <guru.novice at gmail.com> wrote:
>> >
>> > Hello, I have some questions about metadata space.
>> > 1) I have ten 960GB SAS SSDs for the MDT; after doing RAID10, we have
>> > 4.7TB of space free. After formatting as an MDT, we only have 2.6TB
>> > free; so where did the 2.1TB of space go?
>> > 2) For the 2.6TB of space, what is it used for?
>>
>> That space is used by inodes.  I believe recent Lustre versions use a
>> 1KB inode size by default, and the default format options create one
>> inode for every 2.5KB of MDT space.  So about 40% of your disk space
>> will be consumed by inodes.
>>
>> --
>> Rick Mohr
>> Senior HPC System Administrator
>> Joint Institute for Computational Sciences
>> University of Tennessee
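
A final cross-check against the original numbers in this thread, again
just arithmetic: the assumption is that roughly 40% of the device goes to
inode tables, with the rest of the gap plausibly taken by the journal,
bitmaps, and reserved blocks, which this rough estimate ignores.

# Compare the expected inode-table overhead on the 4.7TB RAID10 MDT
# with the 2.1TB observed to disappear after formatting.
TB = 10**12
raw = 4.7 * TB

expected = 0.40 * raw      # ~40% of the device, per the reply above
observed = raw - 2.6 * TB  # raw capacity minus free space after format

print(f"expected inode-table overhead: {expected / TB:.2f} TB")  # 1.88 TB
print(f"observed overhead:             {observed / TB:.2f} TB")  # 2.10 TB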