[lustre-discuss] MDT size smaller than expected

Steve Barnet barnet at icecube.wisc.edu
Tue Jun 26 05:21:26 PDT 2018


Hi Andreas,


On 6/25/18 5:47 PM, Andreas Dilger wrote:
> On Jun 25, 2018, at 20:39, Steve Barnet <barnet at icecube.wisc.edu> wrote:
>>
>> Hi all,
>>
>>   I'm setting up a new Lustre filesystem with 2.10.4. Things are
>> looking OK so far. However, I noticed that when I mount
>> my MDT, df reports a smaller size than I expect. The volume is
>> 2.2TB, but the MDT reports 1.3TB:
>>
>>
>> icecube-lfs6-mds-1 ~ # df -h -t lustre
>> Filesystem                 Size  Used Avail Use% Mounted on
>> /dev/mapper/md3420-1-vd-0  1.3T   94M  1.2T   1% /mnt/lustre/lfs6-mdt0000
>>
>>
>> It appears that the host sees the correct size for the volume:
>>
>> icecube-lfs6-mds-1 ~ # fdisk -l /dev/mapper/md3420-1-vd-0
>>
>> Disk /dev/mapper/md3420-1-vd-0: 2388.7 GB, 2388672905216 bytes, 4665376768 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>
>>
>>   So I am a little confused. Seems to work OK, but I'd like
>> to understand what might be going on there.
> 
> About half of the MDT is consumed by the inode tables with the default formatting
> parameters, so that space will never show up as free space in the filesystem.  The
> statfs() interface is somewhat limited in what it can report, and the alternative is
> to show the total blocks as 2.2TB with 1.1TB of "Used" space, which would probably
> prompt even more questions on the flip side: "why is half of my MDT filesystem used,
> and how do I get rid of that space usage?"


Thanks much. So just to make sure I'm clear for when the
inevitable question arises: the total size is 2.2TB, but
roughly 1.1TB of that is reserved for inode tables, so df
reports only the remainder.
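
As a back-of-the-envelope check, using the -I 1024 -i 2560 values
from the mkfs output quoted below, the fraction of the device that
goes to inode tables is the inode size divided by bytes-per-inode:

icecube-lfs6-mds-1 ~ # echo 'scale=2; 1024 / 2560' | bc
.40

So roughly 40% of the 2278016 MB (~2.17 TiB) device, about 0.87 TiB,
holds inode tables; take that plus the 4 GiB journal (-J size=4096)
and other metadata off the top and about 1.3 TiB is left for df to
report, which matches what I see.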

That makes sense, and I can definitely see why that
choice would be made. I just wanted to be sure I
hadn't done something I might regret for the life of
the filesystem. :-)
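
If the question ever needs hard numbers, the superblock reports them
directly (a quick sanity check; dumpe2fs -h only prints the
superblock, so it should be safe even while the MDT is mounted):

icecube-lfs6-mds-1 ~ # dumpe2fs -h /dev/mapper/md3420-1-vd-0 | \
    grep -E 'Inode count|Inode size|Block count'

Inode count times Inode size gives the bytes set aside for inode
tables, and Block count times the 4k block size gives the total
filesystem size.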

Thanks again!

Best,

---Steve


> 
> Cheers, Andreas
> 
>>
>>
>> ---------------------------------------------------------------------
>>
>> mkfs.lustre --fsname=lfs6 --reformat --mgs --mdt --servicenode=lfs6-mds-1@tcp1 --servicenode=lfs6-mds-2@tcp1 --index=0 /dev/mapper/md3420-1-vd-0 >> mkfs.out
>>
>>
>> icecube-lfs6-mds-1 ~ # cat mkfs.out
>>
>>    Permanent disk data:
>> Target:     lfs6:MDT0000
>> Index:      0
>> Lustre FS:  lfs6
>> Mount type: ldiskfs
>> Flags:      0x1065
>>               (MDT MGS first_time update no_primnode )
>> Persistent mount opts: user_xattr,errors=remount-ro
>> Parameters:  failover.node=10.128.11.145@tcp1:10.128.11.144@tcp1
>>
>> device size = 2278016MB
>> formatting backing filesystem ldiskfs on /dev/mapper/md3420-1-vd-0
>> 	target name   lfs6:MDT0000
>> 	4k blocks     583172096
>> 	options        -J size=4096 -I 1024 -i 2560 -q -O dirdata,uninit_bg,^extents,mmp,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F
>> mkfs_cmd = mke2fs -j -b 4096 -L lfs6:MDT0000  -J size=4096 -I 1024 -i 2560 -q -O dirdata,uninit_bg,^extents,mmp,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F /dev/mapper/md3420-1-vd-0 583172096
>> Writing CONFIGS/mountdata
> 
> ---
> Andreas Dilger
> Principal Lustre Architect
> Whamcloud
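
One note for the archives: the inode ratio is fixed at format time
and cannot be changed afterwards. If a different trade-off were ever
wanted for a future filesystem, extra ldiskfs options can be passed
through to mke2fs via mkfs.lustre; a sketch only, with an
illustrative -i value (the default ratio is generally the right
choice for an MDT):

# illustrative: one inode per 4096 bytes instead of the 2560 default
mkfs.lustre --fsname=lfs6 --reformat --mgs --mdt --index=0 \
    --servicenode=lfs6-mds-1@tcp1 --servicenode=lfs6-mds-2@tcp1 \
    --mkfsoptions='-i 4096' /dev/mapper/md3420-1-vd-0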


