[lustre-discuss] FOLLOW UP: MDT filling up with 4 MB files

Pawel Dziekonski dzieko at wcss.pl
Sat Oct 15 14:49:46 PDT 2016


Hi,

we had the same problem on 2.5.3. Robinhood was supposed to
consume the changelog but it wasn't; we never found out why.
Simply disabling the changelog was not enough - we also had
to remount the MDT, which we did by failing over to the
other MDS node (HA pair).
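Something along these lines should show whether a changelog
consumer is registered and let you drop a stale one (the MDT
name and user ID below are only placeholders - adjust them
for your filesystem):

  # list registered changelog consumers and the current index
  lctl get_param mdd.*.changelog_users

  # deregister a stale consumer (here cl1) so that old
  # changelog records can be purged
  lctl --device lustre-MDT0000 changelog_deregister cl1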

The other issue we had with the MDT was inode size. Inodes
were (at that time, at least) created as 512 bytes by
default, and with a larger stripe count the striping layout
and other xattr data no longer fit inside that single inode,
so it spills into extra blocks and starts eating MDT disk
space. If you create the inodes with a proper size, all of
that data stays in the inode itself and does not occupy
additional blocks. AFAIK this has been a known issue since
2.x. Unfortunately the only solution for us was to reformat
the MDT offline.
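If you do end up reformatting, something like the following
would set a larger inode size (fsname, MGS nid, index and
device below are only placeholders; 1024 bytes is a common
choice when wide striping is used):

  # WARNING: --reformat destroys all data on the device -
  # make a file-level backup of the MDT first
  mkfs.lustre --reformat --mdt --fsname=lustre --index=0 \
      --mgsnode=10.0.0.1@tcp --mkfsoptions="-I 1024" /dev/sdb2

  # confirm the new inode size afterwards
  tune2fs -l /dev/sdb2 | grep "Inode size"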

P




On Fri, 14 Oct 2016 at 06:46:59 -0400, Jessica Otey wrote:
> All,
> My colleagues in Chile now believe that both of their 2.5.3 file
> systems are experiencing this same problem with the MDTs filling up
> with files. We have also come across a report from another user from
> early 2015 denoting the same issue, also with a 2.5.3 system.
> 
> See: https://www.mail-archive.com/search?l=lustre-discuss@lists.lustre.org&q=subject:%22Re%5C%3A+%5C%5Blustre%5C-discuss%5C%5D+MDT+partition+getting+full%22&o=newest
> 
> We are confident that these files are not related to the changelog feature.
> 
> Does anyone have any other suggestions as to what the cause of this
> problem could be?
> 
> I'm intrigued that the Lustre version involved in all 3 reports is
> 2.5.3. Could this be a bug?
> 
> Thanks,
> Jessica
> 
> 
> >On Thu, Sep 29, 2016 at 8:58 AM, Jessica Otey <jotey at nrao.edu
> ><mailto:jotey at nrao.edu>> wrote:
> >
> >    Hello all,
> >    I write on behalf of my colleagues in Chile, who are experiencing
> >    a bizarre problem with their MDT, namely, it is filling up with 4
> >    MB files. There is no issue with the number of inodes, of which
> >    there are hundreds of millions unused.
> >
> >    [root at jaopost-mds ~]# tune2fs -l /dev/sdb2 | grep -i inode
> >    device /dev/sdb2 mounted by lustre
> >    Filesystem features:      has_journal ext_attr resize_inode
> >    dir_index filetype needs_recovery flex_bg dirdata sparse_super
> >    large_file huge_file uninit_bg dir_nlink quota
> >    Inode count:              239730688
> >    Free inodes:              223553405
> >    Inodes per group:         32768
> >    Inode blocks per group:   4096
> >    First inode:              11
> >    Inode size:               512
> >    Journal inode:            8
> >    Journal backup:           inode blocks
> >    User quota inode:         3
> >    Group quota inode:        4
> >
> >    Has anyone ever encountered such a problem? The only thing unusual
> >    about this cluster is that it is using 2.5.3 MDS/OSSes while still
> >    using 1.8.9 clients—something I didn't actually believe was
> >    possible, as I thought the last version to work effectively with
> >    1.8.9 clients was 2.4.3. However, for all I know, the version gap
> >    may have nothing to do with this phenomenon.
> >
> >    Any and all advice is appreciated. Any general information on the
> >    structure of the MDT also welcome, as such info is in short supply
> >    on the internet.
> >
> >    Thanks,
> >    Jessica
> >

-- 
Pawel Dziekonski <pawel.dziekonski at wcss.pl>
Wroclaw Centre for Networking & Supercomputing, HPC Department
phone: +48 71 320 37 39, fax: +48 71 322 57 97, http://www.wcss.pl

