[lustre-discuss] Quick ZFS pool question?

Hans Henrik Happe happe at nbi.ku.dk
Fri Oct 7 02:09:45 PDT 2016

Just curious: if you set a reservation on a ZFS OST filesystem, will the algorithm still work? Also, will it go totally crazy, or just be unable to make good decisions, because something external is grabbing the space?

Hans Henrik

On October 7, 2016 8:16:54 AM GMT+02:00, "Xiong, Jinshan" <jinshan.xiong at intel.com> wrote:
>> On Oct 6, 2016, at 2:04 AM, Phill Harvey-Smith
><p.harvey-smith at warwick.ac.uk> wrote:
>> Hi all,
>> Having tested a simple setup for Lustre / ZFS, I'd like to try to
>replicate on the test system what we currently have on the production
>system, which uses a much older version of Lustre (2.0 IIRC).
>> Currently we have a combined MGS/MDS node and a single OSS node. We
>have 3 filesystems: home, storage and scratch.
>> The MGS/MDS node currently has the MGT on a separate block device and
>the 3 MDTs on a combined LVM volume.
>> The OSS has one OST each (on separate disks) for scratch and home,
>and two OSTs for storage.
>> If we migrate this setup to a ZFS-based one, will I need to create a
>separate zpool for each MDT / MGT / OST, or will I be able to create a
>single zpool and split it up between the individual MDT / OST targets?
>If so, how do I tell each filesystem how big it should be?
>We strongly recommend creating separate ZFS pools for OSTs; otherwise
>grant, which is a Lustre-internal space reservation algorithm, won’t
>work. It’s possible to create a single zpool for the MDTs and MGS, and
>you can use ‘zfs set reservation=<space> <target>’ to reserve space
>for the different targets.
>> Cheers.
>> Phill.
>> _______________________________________________
>> lustre-discuss mailing list
>> lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
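For anyone following along, the layout Jinshan describes might be sketched as below. This is only an illustration of the recommended pool split; the pool names, device names, reservation sizes, and mkfs options are all hypothetical and need to be adapted to the actual hardware and Lustre version:

```shell
# One zpool per OST, so Lustre's grant algorithm sees the real free space
# (device names are hypothetical placeholders):
zpool create ost0pool mirror /dev/sdb /dev/sdc
zpool create ost1pool mirror /dev/sdd /dev/sde

# A single shared zpool is acceptable for the MGT and the MDTs:
zpool create metapool mirror /dev/sdf /dev/sdg

# mkfs.lustre then creates one dataset per target inside the pool, e.g.:
#   mkfs.lustre --mgs --backfstype=zfs metapool/mgt
#   mkfs.lustre --mdt --index=0 --fsname=home --backfstype=zfs \
#       --mgsnode=<mgs-nid> metapool/mdt-home

# Reserve space per dataset so no one target can starve the others:
zfs set reservation=10G  metapool/mgt
zfs set reservation=500G metapool/mdt-home
zfs set reservation=500G metapool/mdt-storage
zfs set reservation=500G metapool/mdt-scratch
```

A reservation guarantees a dataset its stated minimum; `zfs set quota=<space> <target>` could additionally cap a dataset's maximum size if hard upper bounds per filesystem are wanted.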

Sent from my Android device with K-9 Mail. Please excuse my brevity.
