[lustre-discuss] Fwd: Re: MDT filling up with 4 MB files

Jessica Otey jotey at nrao.edu
Thu Sep 29 09:53:13 PDT 2016


[Sent on behalf of Maxs.Simmonds at alma.cl]

Colin,

We cleared the changelogs on the MDT, but no space has been freed.

Any idea how the 4 MB files are produced?

Thanks.
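[Editor's note: for readers following this thread, a minimal sketch of the
usual changelog-consumption workflow on a Lustre 2.x MDS. The MDT name
(lustre-MDT0000) and changelog user ID (cl1) below are placeholders; your
registered user IDs are shown by the first command.]

```shell
# List registered changelog users and the current/consumed record indexes.
# Records are only purged once every registered user has cleared past them.
lctl get_param mdd.*.changelog_users

# Clear all records for user cl1 on this MDT (endrec of 0 means "all").
lfs changelog_clear lustre-MDT0000 cl1 0

# If a changelog user is stale (e.g. a decommissioned robinhood instance),
# deregister it so it stops pinning old records on the MDT.
lctl --device lustre-MDT0000 changelog_deregister cl1
```

[Space on the MDT may only be reclaimed after every registered user has
cleared or been deregistered, since records are retained for the
slowest consumer.]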


On 29/09/16 13:25, Colin Faber wrote:
> Yes, if you're not consuming the records, you're going to see them eat 
> up space on the MDT.
>
> On Thu, Sep 29, 2016 at 10:04 AM, Jessica Otey <jotey at nrao.edu 
> <mailto:jotey at nrao.edu>> wrote:
>
>
>
>     On 9/29/16 12:36 PM, Colin Faber wrote:
>>     Is the changelogs feature enabled?
>>
>     Yes, and... the output of lfs changelog gives us 360,000 lines...
>     Do you think that is the source of all the 'extra' data?
>
>>     On Thu, Sep 29, 2016 at 8:58 AM, Jessica Otey <jotey at nrao.edu
>>     <mailto:jotey at nrao.edu>> wrote:
>>
>>         Hello all,
>>         I write on behalf of my colleagues in Chile, who are
>>         experiencing a bizarre problem with their MDT, namely, it is
>>         filling up with 4 MB files. There is no issue with the number
>>         of inodes, of which there are hundreds of millions unused.
>>
>>         [root at jaopost-mds ~]# tune2fs -l /dev/sdb2 | grep -i inode
>>         device /dev/sdb2 mounted by lustre
>>         Filesystem features:  has_journal ext_attr resize_inode
>>         dir_index filetype needs_recovery flex_bg dirdata
>>         sparse_super large_file huge_file uninit_bg dir_nlink quota
>>         Inode count:  239730688
>>         Free inodes:  223553405
>>         Inodes per group:         32768
>>         Inode blocks per group:   4096
>>         First inode:              11
>>         Inode size:       512
>>         Journal inode:            8
>>         Journal backup:           inode blocks
>>         User quota inode:         3
>>         Group quota inode:        4
>>
>>         Has anyone ever encountered such a problem? The only thing
>>         unusual about this cluster is that it is running 2.5.3
>>         MDS/OSSes while still using 1.8.9 clients, something I didn't
>>         actually believe was possible, as I thought the last server
>>         version to work effectively with 1.8.9 clients was 2.4.3.
>>         However, for all I know, the version gap may have nothing to
>>         do with this phenomenon.
>>
>>         Any and all advice is appreciated. Any general information on
>>         the structure of the MDT also welcome, as such info is in
>>         short supply on the internet.
>>
>>         Thanks,
>>         Jessica
>>
>>         Below is a look inside the O folder at the root of the MDT,
>>         where there are about 48,000 4MB files:
>>
>>         [root at jaopost-mds O]# pwd
>>         /lustrebackup/O
>>         [root at jaopost-mds O]# tree -L 1
>>         .
>>         ├── 1
>>         ├── 10
>>         └── 200000003
>>
>>         3 directories, 0 files
>>
>>         [root at jaopost-mds O]# ls -l 1
>>         total 2240
>>         drwx------ 2 root root 69632 sep 16 16:25 d0
>>         drwx------ 2 root root 69632 sep 16 16:25 d1
>>         drwx------ 2 root root 61440 sep 16 17:46 d10
>>         drwx------ 2 root root 69632 sep 16 17:46 d11
>>         drwx------ 2 root root 69632 sep 16 18:04 d12
>>         drwx------ 2 root root 65536 sep 16 18:04 d13
>>         drwx------ 2 root root 65536 sep 16 18:04 d14
>>         drwx------ 2 root root 69632 sep 16 18:04 d15
>>         drwx------ 2 root root 61440 sep 16 18:04 d16
>>         drwx------ 2 root root 61440 sep 16 18:04 d17
>>         drwx------ 2 root root 69632 sep 16 18:04 d18
>>         drwx------ 2 root root 69632 sep 16 18:04 d19
>>         drwx------ 2 root root 65536 sep 16 16:25 d2
>>         drwx------ 2 root root 69632 sep 16 18:04 d20
>>         drwx------ 2 root root 69632 sep 16 18:04 d21
>>         drwx------ 2 root root 61440 sep 16 18:04 d22
>>         drwx------ 2 root root 69632 sep 16 18:04 d23
>>         drwx------ 2 root root 61440 sep 16 16:11 d24
>>         drwx------ 2 root root 69632 sep 16 16:11 d25
>>         drwx------ 2 root root 69632 sep 16 16:11 d26
>>         drwx------ 2 root root 69632 sep 16 16:11 d27
>>         drwx------ 2 root root 69632 sep 16 16:25 d28
>>         drwx------ 2 root root 69632 sep 16 16:25 d29
>>         drwx------ 2 root root 69632 sep 16 16:25 d3
>>         drwx------ 2 root root 65536 sep 16 16:25 d30
>>         drwx------ 2 root root 65536 sep 16 16:25 d31
>>         drwx------ 2 root root 69632 sep 16 16:25 d4
>>         drwx------ 2 root root 61440 sep 16 16:25 d5
>>         drwx------ 2 root root 69632 sep 16 16:25 d6
>>         drwx------ 2 root root 73728 sep 16 16:25 d7
>>         drwx------ 2 root root 65536 sep 16 17:46 d8
>>         drwx------ 2 root root 69632 sep 16 17:46 d9
>>         -rw-r--r-- 1 root root     8 ene  4  2016 LAST_ID
>>
>>         [root at jaopost-mds d0]# ls -ltr | more
>>         total 5865240
>>         -rw-r--r-- 1 root root  252544 ene  4  2016 32
>>         -rw-r--r-- 1 root root 2396224 ene  9  2016 2720
>>         -rw-r--r-- 1 root root 4153280 ene  9  2016 2752
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 2784
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 2816
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 2848
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 2880
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 2944
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 2976
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3008
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3040
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3072
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3104
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3136
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3168
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3200
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3232
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3264
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3296
>>         -rw-r--r-- 1 root root 4153280 ene 10  2016 3328
>>
>>
>>
>>         _______________________________________________
>>         lustre-discuss mailing list
>>         lustre-discuss at lists.lustre.org
>>         <mailto:lustre-discuss at lists.lustre.org>
>>         http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org <http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org>
>>
>>
>
>



