<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<br>
<div class="moz-cite-prefix">On 9/29/16 12:36 PM, Colin Faber wrote:<br>
</div>
<blockquote
cite="mid:CAJcXmBmB2kUp84Bip2viBO=0ZFzvr9LqCPdGgRS67_eVgsTqZg@mail.gmail.com"
type="cite">
<div dir="ltr">Is the changelogs feature enabled?</div>
<div class="gmail_extra"><br>
</div>
</blockquote>
Yes, and the output of <tt>lfs changelog</tt> gives us 360,000 lines... Do
you think that is the source of all the 'extra' data?<br>
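If accumulated changelog records are what is eating the space, the usual way to check and reclaim it is via the changelog-user commands. A sketch only: the MDT name <tt>jaopost-MDT0000</tt> and user id <tt>cl1</tt> below are placeholders; substitute whatever <tt>changelog_users</tt> actually reports on your MDS.

```shell
# List registered changelog consumers and the current record index.
# Records are pinned on the MDT until every registered user clears them.
lctl get_param mdd.*.changelog_users

# Clear all records (endrec=0) on behalf of user cl1; space is only
# reclaimable once the slowest registered user has cleared its records.
lfs changelog_clear jaopost-MDT0000 cl1 0

# If a consumer is defunct (e.g. an old robinhood instance), deregister
# it entirely so records stop accumulating on its behalf.
lctl --device jaopost-MDT0000 changelog_deregister cl1
```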
<blockquote
cite="mid:CAJcXmBmB2kUp84Bip2viBO=0ZFzvr9LqCPdGgRS67_eVgsTqZg@mail.gmail.com"
type="cite">
<div class="gmail_extra">
<div class="gmail_quote">On Thu, Sep 29, 2016 at 8:58 AM,
Jessica Otey <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:jotey@nrao.edu" target="_blank">jotey@nrao.edu</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> Hello all,<br>
I write on behalf of my colleagues in Chile, who are
experiencing a bizarre problem with their MDT: it is
filling up with 4 MB files. There is no issue with the
number of inodes, of which there are hundreds of millions
unused. <br>
<br>
<div><tt>[root@jaopost-mds ~]# tune2fs -l /dev/sdb2 | grep
-i inode</tt></div>
<div><tt>device /dev/sdb2 mounted by lustre</tt></div>
<div><tt>Filesystem features: has_journal ext_attr
resize_inode dir_index filetype needs_recovery flex_bg
dirdata sparse_super large_file huge_file uninit_bg
dir_nlink quota</tt></div>
<div><tt>Inode count: 239730688</tt></div>
<div><tt>Free inodes: 223553405</tt></div>
<div><tt>Inodes per group: 32768</tt></div>
<div><tt>Inode blocks per group: 4096</tt></div>
<div><tt>First inode: 11</tt></div>
<div><tt>Inode size: 512</tt></div>
<div><tt>Journal inode: 8</tt></div>
<div><tt>Journal backup: inode blocks</tt></div>
<div><tt>User quota inode: 3</tt></div>
<div><tt>Group quota inode: 4</tt></div>
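A quick sanity check on the tune2fs figures above confirms that inode exhaustion can be ruled out; the problem is space, not inode count:

```python
# Figures taken directly from the tune2fs -l output above.
inode_count = 239730688
free_inodes = 223553405

used = inode_count - free_inodes
free_pct = 100 * free_inodes / inode_count

print("inodes used: %d (%.1f%%)" % (used, 100 - free_pct))
print("inodes free: %d (%.1f%%)" % (free_inodes, free_pct))
```

Roughly 93% of the inodes are still free, so whatever is filling the MDT is doing it with a comparatively small number of large files.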
<br>
Has anyone ever encountered such a problem? The only thing
unusual about this cluster is that it is using 2.5.3
MDS/OSSes while still using 1.8.9 clients—something I
didn't actually believe was possible, as I thought the
last version to work effectively with 1.8.9 clients was
2.4.3. However, for all I know, the version gap may have
nothing to do with this phenomenon. <br>
<br>
Any and all advice is appreciated. Any general information
on the structure of the MDT is also welcome, as such info is
in short supply on the internet.<br>
<br>
Thanks,<br>
Jessica<br>
<br>
Below is a look inside the O folder at the root of the
MDT, where there are about 48,000 4MB files:<br>
<pre>[root@jaopost-mds O]# pwd
/lustrebackup/O
[root@jaopost-mds O]# tree -L 1
.
├── 1
├── 10
└── 200000003
3 directories, 0 files
[root@jaopost-mds O]# ls -l 1
total 2240
drwx------ 2 root root 69632 sep 16 16:25 d0
drwx------ 2 root root 69632 sep 16 16:25 d1
drwx------ 2 root root 61440 sep 16 17:46 d10
drwx------ 2 root root 69632 sep 16 17:46 d11
drwx------ 2 root root 69632 sep 16 18:04 d12
drwx------ 2 root root 65536 sep 16 18:04 d13
drwx------ 2 root root 65536 sep 16 18:04 d14
drwx------ 2 root root 69632 sep 16 18:04 d15
drwx------ 2 root root 61440 sep 16 18:04 d16
drwx------ 2 root root 61440 sep 16 18:04 d17
drwx------ 2 root root 69632 sep 16 18:04 d18
drwx------ 2 root root 69632 sep 16 18:04 d19
drwx------ 2 root root 65536 sep 16 16:25 d2
drwx------ 2 root root 69632 sep 16 18:04 d20
drwx------ 2 root root 69632 sep 16 18:04 d21
drwx------ 2 root root 61440 sep 16 18:04 d22
drwx------ 2 root root 69632 sep 16 18:04 d23
drwx------ 2 root root 61440 sep 16 16:11 d24
drwx------ 2 root root 69632 sep 16 16:11 d25
drwx------ 2 root root 69632 sep 16 16:11 d26
drwx------ 2 root root 69632 sep 16 16:11 d27
drwx------ 2 root root 69632 sep 16 16:25 d28
drwx------ 2 root root 69632 sep 16 16:25 d29
drwx------ 2 root root 69632 sep 16 16:25 d3
drwx------ 2 root root 65536 sep 16 16:25 d30
drwx------ 2 root root 65536 sep 16 16:25 d31
drwx------ 2 root root 69632 sep 16 16:25 d4
drwx------ 2 root root 61440 sep 16 16:25 d5
drwx------ 2 root root 69632 sep 16 16:25 d6
drwx------ 2 root root 73728 sep 16 16:25 d7
drwx------ 2 root root 65536 sep 16 17:46 d8
drwx------ 2 root root 69632 sep 16 17:46 d9
-rw-r--r-- 1 root root 8 ene 4 2016 LAST_ID
[root@jaopost-mds d0]# ls -ltr | more
total 5865240
-rw-r--r-- 1 root root 252544 ene 4 2016 32
-rw-r--r-- 1 root root 2396224 ene 9 2016 2720
-rw-r--r-- 1 root root 4153280 ene 9 2016 2752
-rw-r--r-- 1 root root 4153280 ene 10 2016 2784
-rw-r--r-- 1 root root 4153280 ene 10 2016 2816
-rw-r--r-- 1 root root 4153280 ene 10 2016 2848
-rw-r--r-- 1 root root 4153280 ene 10 2016 2880
-rw-r--r-- 1 root root 4153280 ene 10 2016 2944
-rw-r--r-- 1 root root 4153280 ene 10 2016 2976
-rw-r--r-- 1 root root 4153280 ene 10 2016 3008
-rw-r--r-- 1 root root 4153280 ene 10 2016 3040
-rw-r--r-- 1 root root 4153280 ene 10 2016 3072
-rw-r--r-- 1 root root 4153280 ene 10 2016 3104
-rw-r--r-- 1 root root 4153280 ene 10 2016 3136
-rw-r--r-- 1 root root 4153280 ene 10 2016 3168
-rw-r--r-- 1 root root 4153280 ene 10 2016 3200
-rw-r--r-- 1 root root 4153280 ene 10 2016 3232
-rw-r--r-- 1 root root 4153280 ene 10 2016 3264
-rw-r--r-- 1 root root 4153280 ene 10 2016 3296
-rw-r--r-- 1 root root 4153280 ene 10 2016 3328
</pre>
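A back-of-the-envelope estimate of how much space such files consume, using the figures quoted above (about 48,000 files in <tt>O/</tt>, most at the 4,153,280-byte size that repeats throughout the <tt>ls -l</tt> listing; the file count is approximate):

```python
# Figures from the message above; the count is a rough estimate.
n_files = 48000
file_size = 4153280  # bytes, the size repeated in the ls -l output

total_bytes = n_files * file_size
print("approx. total: %.1f GiB" % (total_bytes / 2**30))
```

That works out to roughly 186 GiB, which is more than enough to account for an MDT filling up.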
<br>
</div>
<br>
_______________________________________________<br>
lustre-discuss mailing list<br>
<a moz-do-not-send="true"
href="mailto:lustre-discuss@lists.lustre.org">lustre-discuss@lists.lustre.org</a><br>
<a moz-do-not-send="true"
href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org"
rel="noreferrer" target="_blank">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a><br>
<br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</body>
</html>