[Lustre-discuss] 1.8.1 test setup achieved, what about maximum mdt size

Andreas Dilger adilger at sun.com
Fri Oct 23 03:57:45 PDT 2009


On 2009-10-23, at 03:51, Bernd Schubert wrote:
> On Tuesday 20 October 2009, Andreas Dilger wrote:
>> On 18-Oct-09, at 16:04, Piotr Wadas wrote:
>>> Now, I did a simple count of the MDT size as described in the
>>> Lustre 1.8.1 manual, and set up the MDT as recommended. The
>>> question is: whether or not I got the count right, what actually
>>> happens if the MDT partition runs out of space? Is there any
>>> chance to dump the whole combined MGS+MDT filesystem, supply a
>>> bigger block device, or extend the partition size with some
>>> e2fsprogs/tune2fs trick? This assumes that, no matter how big the
>>> MDT is, it will be exhausted someday.
>>
>> It is true that the MDT device can become full at some point, but
>> this happens fairly rarely given that most Lustre HPC users have
>> very large files, and the size of the MDT is MUCH smaller than the
>> space needed for the file data.  The maximum size of the MDT is
>> 8TB, and if you format the
>
> Is that still true with recent kernels such as the one from SLES11?
> I thought ldiskfs is based on ext4 there, so we should have at
> least 16TiB? And I'm not sure whether all of the e2fsprogs patches
> needed for 64-bit maximum sizes have landed yet.
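
To put a rough number on the sizing question quoted above: the figure
used below (about 4kB of MDT space per file, doubled for headroom) is
only an illustrative assumption; the 1.8.1 manual has the actual
formula. A minimal sketch in Python:

    # Back-of-the-envelope MDT sizing.  BYTES_PER_FILE is an assumed
    # planning figure, not an official number -- see the 1.8.1 manual
    # for the real calculation.
    BYTES_PER_FILE = 4096     # assumed MDT space consumed per file
    SAFETY_FACTOR = 2         # headroom so the MDT never quite fills

    def mdt_size_bytes(expected_files):
        """Padded MDT size estimate for an expected file count."""
        return expected_files * BYTES_PER_FILE * SAFETY_FACTOR

    for nfiles in (10 * 10**6, 100 * 10**6, 500 * 10**6):
        tib = mdt_size_bytes(nfiles) / 2.0**40
        print("%11d files -> ~%.2f TiB MDT" % (nfiles, tib))

Even at 500 million files this comes out under 4TiB, which is why the
MDT is so much smaller than the space needed for the file data.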


16TB LUN support is still under testing, so it isn't officially
supported yet.  The upstream e2fsprogs don't have 64-bit support
finished yet (also under testing), and when that is done there will
need to be additional testing with Lustre.  There is some question of
whether SLES11 will get all of the fixes needed for > 16TB support,
or whether it is better to get that from RHEL6 instead.
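
For anyone wondering where the 16TB figure comes from: without the
64-bit feature, ext3/ext4 (and thus ldiskfs) addresses blocks with
32-bit block numbers, so with the usual 4kB block size the filesystem
size tops out at:

    # 32-bit block numbers * 4 KiB blocks = the 16 TiB ceiling
    block_size = 4096           # bytes, the common ldiskfs block size
    max_blocks = 2 ** 32        # 32-bit block numbers
    print(block_size * max_blocks // 2 ** 40, "TiB")   # -> 16 TiB

Going past that needs the 64-bit e2fsprogs/ext4 work mentioned above.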

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



