<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">I do not have always big file, I Also
have small files on Lustre, so I found out in my scenario that the
default 128K record size<br>
fits my needs better.<br>
In real life I do not expect to have direct I/O . But before
putting it in production I Was testing it<br>
and the Direct I/O performances were far lower than other similar
lustre partitions with ldiskfs.<br>
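For reference, this is roughly how the record size can be checked or
changed per dataset with the standard ZFS commands (the pool/dataset
names here are only placeholders for my OST datasets):<br>
&nbsp;&nbsp;# show the current recordsize of an OST dataset<br>
&nbsp;&nbsp;zfs get recordsize ostpool/ost0<br>
&nbsp;&nbsp;# a larger record size is sometimes used for large streaming files<br>
&nbsp;&nbsp;zfs set recordsize=1M ostpool/ost0<br>
(The new recordsize only affects blocks written after the change.)<br>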
<br>
<br>
On 17/10/16 08:59, PGabriele wrote:<br>
</div>
<blockquote
cite="mid:CADd4w=ggfLxd5RpUOvUrPNHmTF3P=ceit1Hib32Fxd9RJDMbDg@mail.gmail.com"
type="cite">
<div dir="ltr">you can have a better understanding of the gap from
this presentation: <a moz-do-not-send="true"
href="http://www.eofs.eu/_media/events/lad16/02_zfs_md_performance_improvements_zhuravlev.pdf">ZFS
metadata performance improvements</a></div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 14 October 2016 at 08:42, Dilger,
Andreas <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:andreas.dilger@intel.com" target="_blank">andreas.dilger@intel.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex"><span
class="">On Oct 13, 2016 19:02, Riccardo Veraldi <<a
moz-do-not-send="true"
href="mailto:Riccardo.Veraldi@cnaf.infn.it">Riccardo.Veraldi@cnaf.infn.it</a><wbr>>
wrote:<br>
><br>
> Hello,<br>
> will the Lustre 2.9.0 RPMs be released on the Intel
site?<br>
> Also, the latest RPM available for zfsonlinux is
0.6.5.8<br>
<br>
</span>The Lustre 2.9.0 packages will be released when the
release is complete.<br>
You are welcome to test the pre-release version from Git, if
you are<br>
interested.<br>
<br>
You are also correct that the ZoL 0.7.0 release is not yet
available.<br>
There are still improvements when using ZoL 0.6.5.8, but
some of these<br>
patches only made it into 0.7.0.<br>
<br>
Cheers, Andreas<br>
<div class="HOEnZb">
<div class="h5"><br>
> On 13/10/16 11:16, Dilger, Andreas wrote:<br>
>> On Oct 13, 2016, at 10:32, E.S. Rosenberg <<a
moz-do-not-send="true"
href="mailto:esr%2Blustre@mail.hebrew.edu">esr+lustre@mail.hebrew.edu</a>>
wrote:<br>
>>> On Fri, Oct 7, 2016 at 9:16 AM, Xiong,
Jinshan <<a moz-do-not-send="true"
href="mailto:jinshan.xiong@intel.com">jinshan.xiong@intel.com</a>>
wrote:<br>
>>><br>
>>>>> On Oct 6, 2016, at 2:04 AM, Phill
Harvey-Smith <<a moz-do-not-send="true"
href="mailto:p.harvey-smith@warwick.ac.uk">p.harvey-smith@warwick.ac.uk</a>>
wrote:<br>
>>>>><br>
>>>>> Having tested a simple setup for
Lustre / ZFS, I'd like to try and<br>
>>>>> replicate on the test system what
we currently have on the production<br>
>>>>> system, which uses a much older
version of Lustre (2.0 IIRC).<br>
>>>>><br>
>>>>> Currently we have a combined MGS /
MDS node and a single OSS node.<br>
>>>>> We have 3 filesystems: home,
storage and scratch.<br>
>>>>><br>
>>>>> The MGS/MDS node currently has the
MGT on a separate block device and<br>
>>>>> the 3 MDTs on a combined LVM volume.<br>
>>>>><br>
>>>>> The OSS has one OST each (on
separate disks) for scratch and home<br>
>>>>> and two OSTs for storage.<br>
>>>>><br>
>>>>> If we migrate this setup to a ZFS-based
one, will I need to create a<br>
>>>>> separate zpool for each MDT / MGT /
OST, or will I be able to create<br>
>>>>> a single zpool and split it up
between the individual MDT / OST blocks?<br>
>>>>> If so, how do I tell each filesystem
how big it should be?<br>
>>>> We strongly recommend creating
separate ZFS pools for OSTs; otherwise grant, which is a
Lustre-internal space reservation algorithm, won't work
properly.<br>
>>>><br>
>>>> It's possible to create a single zpool
for MDTs and MGS, and you can use 'zfs set
reservation=<space> <target>' to reserve
space for different targets.<br>
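A minimal illustration of such per-target reservations, with
hypothetical pool and dataset names:<br>
&nbsp;&nbsp;# reserve space for an MGT and an MDT sharing one pool<br>
&nbsp;&nbsp;zfs set reservation=10G metapool/mgt<br>
&nbsp;&nbsp;zfs set reservation=500G metapool/mdt0<br>
&nbsp;&nbsp;# verify the reservations across the pool<br>
&nbsp;&nbsp;zfs get -r reservation metapool<br>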
>>> I thought ZFS was only recommended for OSTs
and not for MDTs/MGS?<br>
>> The MGT/MDT can definitely be on ZFS. The
performance of ZFS has been<br>
>> trailing behind that of ldiskfs, but we've made
significant performance<br>
>> improvements with Lustre 2.9 and ZFS 0.7.0.
Many people use ZFS for the<br>
>> MDT backend because of the checksums and
integrated JBOD management, as<br>
>> well as the ability to create snapshots, use data
compression, etc.<br>
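As a sketch of those ZFS-level features (hypothetical pool/dataset
names, and not the full Lustre snapshot procedure):<br>
&nbsp;&nbsp;# enable lz4 compression on an MDT dataset<br>
&nbsp;&nbsp;zfs set compression=lz4 mdtpool/mdt0<br>
&nbsp;&nbsp;# take a ZFS snapshot of the same dataset<br>
&nbsp;&nbsp;zfs snapshot mdtpool/mdt0@pre-upgrade<br>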
>><br>
>> Cheers, Andreas<br>
>><br>
<br>
______________________________<wbr>_________________<br>
lustre-discuss mailing list<br>
<a moz-do-not-send="true"
href="mailto:lustre-discuss@lists.lustre.org">lustre-discuss@lists.lustre.<wbr>org</a><br>
<a moz-do-not-send="true"
href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org"
rel="noreferrer" target="_blank">http://lists.lustre.org/<wbr>listinfo.cgi/lustre-discuss-<wbr>lustre.org</a><br>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
<div class="gmail_signature" data-smartmail="gmail_signature">www:
<a moz-do-not-send="true" href="http://paciucci.blogspot.com"
target="_blank">http://paciucci.blogspot.com</a></div>
</div>
</blockquote>
<p><br>
</p>
</body>
</html>