[lustre-discuss] Quick ZFS pool question?
Riccardo Veraldi
Riccardo.Veraldi at cnaf.infn.it
Mon Oct 17 23:31:04 PDT 2016
I do not always have big files; I also have small files on Lustre, so I
found that in my scenario the default 128K record size fits my needs better.
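
For reference, the record size can be checked and changed per dataset with
the standard ZFS tools. The pool/dataset name here is just a placeholder,
and a new recordsize only applies to files written after the change:

    # inspect the current record size on a hypothetical OST dataset
    zfs get recordsize ostpool/ost0
    # the default is 128K; larger values such as 1M are sometimes used
    # for purely streaming workloads
    zfs set recordsize=1M ostpool/ost0
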
In real life I do not expect much direct I/O, but before putting it into
production I was testing it, and the direct I/O performance was far lower
than on other, similar Lustre partitions backed by ldiskfs.
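
A quick way to reproduce that kind of comparison from a client (the file
path and sizes are only placeholders) is to run the same write with and
without direct I/O:

    # buffered write through the client page cache
    dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=4096
    # direct I/O write, bypassing the page cache
    dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=4096 oflag=direct
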
On 17/10/16 08:59, PGabriele wrote:
> You can get a better understanding of the gap from this presentation:
> ZFS metadata performance improvements
> <http://www.eofs.eu/_media/events/lad16/02_zfs_md_performance_improvements_zhuravlev.pdf>
>
> On 14 October 2016 at 08:42, Dilger, Andreas <andreas.dilger at intel.com> wrote:
>
> On Oct 13, 2016 19:02, Riccardo Veraldi <Riccardo.Veraldi at cnaf.infn.it> wrote:
> >
> > Hello,
> > Will the Lustre 2.9.0 RPM be released on the Intel site?
> > Also, the latest RPM available for zfsonlinux is 0.6.5.8.
>
> The Lustre 2.9.0 packages will be released when the release is complete.
> You are welcome to test the pre-release version from Git, if you are
> interested.
>
> You are also correct that the ZoL 0.7.0 release is not yet available.
> There are still improvements when using ZoL 0.6.5.8, but some of these
> patches only made it into 0.7.0.
>
> Cheers, Andreas
>
> > On 13/10/16 11:16, Dilger, Andreas wrote:
> >> On Oct 13, 2016, at 10:32, E.S. Rosenberg <esr+lustre at mail.hebrew.edu> wrote:
> >>> On Fri, Oct 7, 2016 at 9:16 AM, Xiong, Jinshan <jinshan.xiong at intel.com> wrote:
> >>>
> >>>>> On Oct 6, 2016, at 2:04 AM, Phill Harvey-Smith <p.harvey-smith at warwick.ac.uk> wrote:
> >>>>>
> >>>>> Having tested a simple setup for Lustre / ZFS, I'd like to try to
> >>>>> replicate on the test system what we currently have on the production
> >>>>> system, which uses a much older version of Lustre (2.0 IIRC).
> >>>>>
> >>>>> Currently we have a combined MGS/MDS node and a single OSS node.
> >>>>> We have 3 filesystems: home, storage and scratch.
> >>>>>
> >>>>> The MGS/MDS node currently has the MGS on a separate block device
> >>>>> and the three MDTs on a combined LVM volume.
> >>>>>
> >>>>> The OSS has one OST each (on separate disks) for scratch and home,
> >>>>> and two OSTs for storage.
> >>>>>
> >>>>> If we migrate this setup to a ZFS-based one, will I need to create a
> >>>>> separate zpool for each MDT / MGT / OST, or will I be able to create
> >>>>> a single zpool and split it up between the individual MDT / OST
> >>>>> blocks? If so, how do I tell each filesystem how big it should be?
> >>>> We strongly recommend creating separate ZFS pools for OSTs;
> >>>> otherwise grant, Lustre's internal space-reservation algorithm,
> >>>> won't work properly.
> >>>>
> >>>> It’s possible to create a single zpool for the MDTs and MGS, and
> >>>> you can use ‘zfs set reservation=<space> <target>’ to reserve
> >>>> space for the different targets.
> >>> I thought ZFS was only recommended for OSTs and not for MDTs/MGS?
> >> The MGT/MDT can definitely be on ZFS. The performance of ZFS has been
> >> trailing behind that of ldiskfs, but we've made significant performance
> >> improvements with Lustre 2.9 and ZFS 0.7.0. Many people use ZFS for the
> >> MDT backend because of the checksums and integrated JBOD management, as
> >> well as the ability to create snapshots, data compression, etc.
> >>
> >> Cheers, Andreas
> >>
> --
> www: http://paciucci.blogspot.com
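
Regarding the pool layout discussed in the quoted thread (a separate pool
per OST, and optionally one pool shared by the MGT and the MDTs with
reservations), a minimal sketch could look like the following. This is only
an illustration: pool, dataset and device names, NIDs and sizes are all
placeholders.

    # one zpool per OST, so Lustre's grant accounting sees the real
    # free space of each target
    zpool create ost0pool mirror /dev/sdb /dev/sdc
    mkfs.lustre --backfstype=zfs --ost --fsname=home --index=0 \
        --mgsnode=192.168.1.1@tcp ost0pool/ost0

    # one shared zpool for the MGT and the MDTs
    zpool create mdtpool mirror /dev/sdd /dev/sde
    mkfs.lustre --backfstype=zfs --mgs mdtpool/mgt
    mkfs.lustre --backfstype=zfs --mdt --fsname=home --index=0 \
        --mgsnode=192.168.1.1@tcp mdtpool/mdt-home

    # reserve space for each target inside the shared pool
    zfs set reservation=10G mdtpool/mgt
    zfs set reservation=500G mdtpool/mdt-home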