<div><div dir="auto">ANS,</div></div><div dir="auto"><br></div><div dir="auto">Lustre on top of ZFS has to estimate capacities, and the estimate is fairly far off when the OSTs are new and empty. As objects are written to the OSTs and capacity is consumed, the estimate becomes more accurate. At the beginning it is so far off that it can appear to be an error. </div><div dir="auto"><br></div><div dir="auto">What version are you running? Some patches have been added to make this calculation more accurate. </div><div dir="auto"><br></div><div dir="auto">—Jeff</div><div><br><div class="gmail_quote"><div dir="ltr">On Mon, Dec 31, 2018 at 22:08 ANS <<a href="mailto:ans3456@gmail.com">ans3456@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr">Dear Team,<div><br></div><div>I am trying to configure Lustre with ZFS as the backend file system, with 2 servers in HA. After compiling and creating the ZFS pools:</div><div><div><br></div><div>zpool list</div><div>NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT</div><div>lustre-data 54.5T 25.8M 54.5T - 16.0E 0% 0% 1.00x ONLINE -</div><div>lustre-data1 54.5T 25.1M 54.5T - 16.0E 0% 0% 1.00x ONLINE -</div><div>lustre-data2 54.5T 25.8M 54.5T - 16.0E 0% 0% 1.00x ONLINE -</div><div>lustre-data3 54.5T 25.8M 54.5T - 16.0E 0% 0% 1.00x ONLINE -</div><div>lustre-meta 832G 3.50M 832G - 16.0E 0% 0% 1.00x ONLINE -</div><div><br></div><div>and when mounted on the client:</div><div><br></div><div>lfs df -h<br></div><div><div>UUID bytes Used Available Use% Mounted on</div><div>home-MDT0000_UUID 799.7G 3.2M 799.7G 0% /home[MDT:0]</div><div>home-OST0000_UUID 39.9T 18.0M 39.9T 0% /home[OST:0]</div><div>home-OST0001_UUID 39.9T 18.0M 39.9T 0% /home[OST:1]</div><div>home-OST0002_UUID 39.9T 18.0M 39.9T 0% /home[OST:2]</div><div>home-OST0003_UUID 39.9T 18.0M 39.9T 0% /home[OST:3]</div><div><br></div><div>filesystem_summary: 159.6T 72.0M 
159.6T 0% /home</div></div><div><br></div><div>So out of a total of 54.5T x 4 = 218 TB raw, I am getting only 159 TB usable. Can anyone explain this discrepancy?</div><div><br></div><div>Also, from a performance perspective, what ZFS and Lustre parameters should be tuned?</div></div></div></div></div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div><br></div>-- <br><div dir="ltr" class="m_2480666676409727911gmail_signature"><div dir="ltr">Thanks,<div>ANS.</div></div></div></div></div></div></div>
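<div><br></div><div>A quick sanity check on the quoted numbers may help here: zpool list reports raw pool size with parity included, while lfs df reports usable space after parity and filesystem reservations, so a gap of this size is expected rather than lost capacity. The pool layout is not shown in this thread, so the raidz2 figure in the sketch below is a hypothetical comparison, not the actual configuration:</div>

```python
# Sanity check of the raw-vs-usable gap using the numbers quoted above.
# The vdev layout isn't shown in the thread, so the parity figure below
# is a hypothetical raidz2 example, not the actual configuration.

RAW_PER_POOL_TIB = 54.5      # from `zpool list` (raw, parity included)
USABLE_PER_OST_TIB = 39.9    # from `lfs df -h` (usable, parity excluded)
NUM_OSTS = 4

usable_fraction = USABLE_PER_OST_TIB / RAW_PER_POOL_TIB
total_usable = USABLE_PER_OST_TIB * NUM_OSTS

print(f"usable fraction per pool: {usable_fraction:.0%}")   # ~73%
print(f"aggregate usable: {total_usable:.1f} TiB")          # ~159.6 TiB

# For comparison: a hypothetical 8-disk raidz2 vdev stores data on 6 of
# its 8 disks (75% efficiency) before ZFS slop space and Lustre's own
# reservations, which would land close to the ~73% observed above.
hypothetical_raidz2_efficiency = 6 / 8
```

<div>So the ~159.6 TB summary is simply the sum of the four OSTs' usable space; the "missing" ~58 TB is parity and reserved overhead counted in the raw zpool size.</div>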
_______________________________________________<br>
lustre-discuss mailing list<br>
<a href="mailto:lustre-discuss@lists.lustre.org" target="_blank">lustre-discuss@lists.lustre.org</a><br>
<a href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org" rel="noreferrer" target="_blank">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a><br>
</blockquote></div></div>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">------------------------------<br>Jeff Johnson<br>Co-Founder<br>Aeon Computing<br><br><a href="mailto:jeff.johnson@aeoncomputing.com" target="_blank">jeff.johnson@aeoncomputing.com</a><br><a href="http://www.aeoncomputing.com" target="_blank">www.aeoncomputing.com</a><br>t: 858-412-3810 x1001 f: 858-412-3845<br>m: 619-204-9061<br><br>4170 Morena Boulevard, Suite C - San Diego, CA 92117<div><br></div><div>High-Performance Computing / Lustre Filesystems / Scale-out Storage</div></div></div></div></div>