[lustre-discuss] Quick ZFS pool question?

E.S. Rosenberg esr+lustre at mail.hebrew.edu
Thu Oct 13 09:32:17 PDT 2016


On Fri, Oct 7, 2016 at 9:16 AM, Xiong, Jinshan <jinshan.xiong at intel.com>
wrote:

>
> > On Oct 6, 2016, at 2:04 AM, Phill Harvey-Smith <
> p.harvey-smith at warwick.ac.uk> wrote:
> >
> > Hi all,
> >
> > Having tested a simple setup for Lustre / ZFS, I'd like to try to
> replicate on the test system what we currently have on the production
> system, which uses a much older version of Lustre (2.0 IIRC).
> >
> > Currently we have a combined MGS / MDS node and a single OSS node. We
> have 3 filesystems: home, storage and scratch.
> >
> > The MGS/MDS node currently has the MGT on a separate block device and
> the 3 MDTs on a combined LVM volume.
> >
> > The OSS has one OST each (on separate disks) for scratch and home, and
> two OSTs for storage.
> >
> > If we migrate this setup to a ZFS-based one, will I need to create a
> separate zpool for each MDT / MGT / OST, or will I be able to create a
> single zpool and split it up between the individual MDT / OST targets? If
> so, how do I tell each filesystem how big it should be?
>
> We strongly recommend creating separate ZFS pools for OSTs; otherwise
> grant, Lustre's internal space-reservation algorithm, won't work
> properly.
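>
> A minimal sketch of that per-OST layout, assuming hypothetical pool
> names, device paths, fsname and MGS NID (substitute your own):
>
>   # one pool per OST, each backed by its own disk(s)
>   zpool create ost0pool /dev/sdb
>   zpool create ost1pool /dev/sdc
>
>   # format each pool as its own Lustre OST for the "storage" filesystem
>   mkfs.lustre --ost --backfstype=zfs --fsname=storage \
>       --index=0 --mgsnode=mgs@tcp ost0pool/ost0
>   mkfs.lustre --ost --backfstype=zfs --fsname=storage \
>       --index=1 --mgsnode=mgs@tcp ost1pool/ost1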
>
> It’s possible to create a single zpool for the MDTs and MGS, and you can
> use ‘zfs set reservation=<space> <target>’ to reserve space for the
> different targets.
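>
> For example (again just a sketch; the pool layout, sizes and dataset
> names are placeholders):
>
>   # one pool shared by the MGT and an MDT
>   zpool create metapool mirror /dev/sdd /dev/sde
>   mkfs.lustre --mgs --backfstype=zfs metapool/mgt
>   mkfs.lustre --mdt --backfstype=zfs --fsname=home \
>       --index=0 --mgsnode=mgs@tcp metapool/home-mdt0
>
>   # guarantee each target a minimum share of the pool
>   zfs set reservation=10G metapool/mgt
>   zfs set reservation=500G metapool/home-mdt0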
>
I thought ZFS was only recommended for OSTs and not for MDTs/MGS?
Eli

>
> Jinshan
>
> >
> > Cheers.
> >
> > Phill.