[Lustre-discuss] MGT of 128 MB - already out of space

Andreas Dilger adilger at sun.com
Fri Dec 18 22:25:30 PST 2009


On 2009-12-18, at 18:13, Jeffrey Bennett wrote:
> Scenario is the following:
>
> - Lustre 1.8.1.1
> - 3 Lustre filesystems, fully redundant (two networks, OSSs on  
> active/active, MDSs on active/passive)
> - 1 MGS, 1 MDT, 2 OSTs
> - For the MGT, 128MB were allocated, following the Lustre manual's
> recommendations
> - The MGT is already out of space, and an "ls" of the MGT shows the
> files are 8MB each, like:
>
> -rw-r--r-- 1 root root 8.0M Dec  2 15:11 devfs-client
> -rw-r--r-- 1 root root 8.0M Dec  2 15:11 devfs-MDT0000
> -rw-r--r-- 1 root root 8.0M Dec  2 16:42 devfs-OST0000

How many OSTs do you have?  Is this consuming all of the space?

> Other Lustre filesystems I have worked on show much smaller files. A
> "dumpe2fs" on this MGT does not show anything strange like huge
> block sizes, etc.

Are these files sparse by some chance?  What does "ls -ls" show?
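
For example (an illustrative sketch, not output from your system; adjust
the path to wherever your listing found the devfs-* files), a sparse 8MB
file reports only a handful of allocated 1KB blocks in the first column:

# ls -ls devfs-client
  16 -rw-r--r-- 1 root root 8.0M Dec  2 15:11 devfs-client

Here the apparent size is 8MB but only about 16KB is actually allocated,
which would mean the config logs are not what is filling the MGT.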

It may be that your journal is consuming a lot of space.  Try running:

debugfs -c -R "stat <8>" /dev/{MGTdev}
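
The fields to look at in the output are the journal inode's Size: and
Blockcount: values, e.g. (illustrative numbers only, yours will differ):

User:     0   Group:     0   Size: 33554432
Links: 1   Blockcount: 65568

A 32MB journal like that would by itself account for a quarter of a
128MB MGT.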

You really don't need more than the absolute minimum of space for the
MGT, which is 4MB.  You can remove the journal by running
"tune2fs -O ^has_journal" on the unmounted filesystem, then
"tune2fs -j -J size=4" to recreate it at the minimum size (maybe
"-J size=5" if it complains).
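
As a minimal sketch of the whole procedure (assuming /dev/sdX is the MGT
device and that it is not mounted anywhere; take a backup first):

tune2fs -O ^has_journal /dev/sdX   # drop the existing journal
e2fsck -f /dev/sdX                 # good practice after removing the journal
tune2fs -j -J size=4 /dev/sdX      # recreate a minimal 4MB journal ("-J size=5" if it complains)

Then remount the MGT and check the free space again.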

> Question is, why are these files so big and how can we "shrink" them?
> Is it possible to run --writeconf to fix this?

If all of the space is really consumed by the config files, are you
using a lot of "lctl conf_param" commands, OST pools, or something
else that would put a lot of records into the config logs?
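
If you want to see what is actually in those logs, one way (a sketch,
assuming /dev/sdX is the MGT device and /mnt/mgt is a scratch mount
point; adjust the log path to wherever your listing found the devfs-*
files, often under CONFIGS/) is to mount the MGT as ldiskfs and dump a
log with the llog_reader utility shipped with Lustre:

mount -t ldiskfs /dev/sdX /mnt/mgt
llog_reader /mnt/mgt/CONFIGS/devfs-client | less
umount /mnt/mgt

Every conf_param, pool command, etc. shows up as a record there, so
this will tell you whether the logs are genuinely full of records or
just sparsely-written 8MB files.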

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



