[lustre-discuss] backup zfs MDT or migrate from ZFS back to ldiskfs

Stu Midgley sdm900 at gmail.com
Sat Jul 22 19:48:04 PDT 2017


Interesting, so my fears were well founded.  Basically, once you choose
ZFS, you are stuck with it.  The only way to migrate off ZFS is to create a
new Lustre file system and copy all the contents out.

I'll get to the bottom of the slow send/receive.  I am using ashift=12 for
both file systems.  I'll switch to using ashift=9 for the SSDs.  Though I
assume this only reduces the disk usage of the MDT and doesn't help latency
at all.  Thanks.
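As a quick sanity check before redoing the send/recv, the ashift actually in
use on each pool can be read with zdb (the pool names below are placeholders,
not the real MDT pools):

```shell
# Hypothetical pool names -- substitute the actual source and target pools.
# ashift is a per-vdev property fixed when the vdev is created; changing it
# requires recreating the pool.
zdb -C old-mdt-pool | grep ashift
zdb -C new-mdt-pool | grep ashift
```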


On Sun, Jul 23, 2017 at 7:14 AM, Dilger, Andreas <andreas.dilger at intel.com>
wrote:

> Using rsync or tar to backup/restore a ZFS MDT is not supported, because
> this changes the dnode numbering, and the ZFS OI Scrub needed to rebuild the
> Object Index mapping afterward is not yet implemented (there is a Jira
> ticket for this, and some work is underway there).
>
> Options include using zfs send/recv, as you were using, or just
> incrementally replacing the disks in the pool one at a time and letting
> them resilver to the SSDs (assuming they are larger than the HDDs they are
> replacing).
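A sketch of that disk-by-disk replacement, with hypothetical pool and device
names (each replace must finish resilvering before starting the next, and
each SSD must be at least as large as the disk it replaces):

```shell
# Hypothetical names: pool "mdt0", HDDs sdb..sde, SSDs nvme0n1..nvme3n1.
# zpool replace swaps a vdev member and triggers a resilver onto the new disk.
zpool replace mdt0 sdb nvme0n1
zpool status mdt0     # wait here until the resilver completes
zpool replace mdt0 sdc nvme1n1
zpool status mdt0
# ...repeat for the remaining disks, one at a time...
```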
>
> I'm not sure why send/recv is so slow and exploding the metadata size, but
> it might relate to the ashift=12 on the target and ashift=9 on the source?
> This can be particularly bad with RAIDz compared to mirrors, since small
> blocks (as typically used on the MDT) will always need to write 16KB
> instead of 8 or 12KB (with 2 or 3 mirrors).
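Back-of-the-envelope arithmetic for that claim, assuming ashift=12 means
4 KiB sectors and that a single-sector block on RAIDZ3 carries three parity
sectors:

```shell
sector=4096                          # ashift=12 => 4 KiB sectors
raidz3=$(( sector * (1 + 3) ))       # 1 data sector + 3 parity sectors
mirror2=$(( sector * 2 ))            # 2-way mirror: 2 full copies
mirror3=$(( sector * 3 ))            # 3-way mirror: 3 full copies
echo "raidz3=${raidz3} mirror2=${mirror2} mirror3=${mirror3}"
```

which gives 16 KiB for RAIDZ3 versus 8 or 12 KiB for 2- or 3-way mirrors.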
>
> Cheers, Andreas
>
> On Jul 22, 2017, at 07:48, Raj <rajgautam at gmail.com> wrote:
>
> Stu,
> Is there a reason why you picked RAIDZ3 rather than a 4-way mirror across 4
> disks?
> RAIDZ3 parity calculation might take more CPU resources than mirroring
> across disks, but the latency may also be higher with mirroring, since
> writes must sync across all the disks.  Wondering if you did some testing
> before deciding.
>
> On Fri, Jul 21, 2017 at 12:27 AM Stu Midgley <sdm900 at gmail.com> wrote:
>
>> we have been happily using 2.9.52+0.7.0-rc3 for a while now.
>>
>> The MDT is a raidz3 across 4 disks.
>>
>> On Fri, Jul 21, 2017 at 1:19 PM, Isaac Huang <he.huang at intel.com> wrote:
>>
>>> On Fri, Jul 21, 2017 at 12:54:15PM +0800, Stu Midgley wrote:
>>> > Afternoon
>>> >
>>> > I have an MDS running on spinning media and wish to migrate it to
>>> > SSDs.
>>> >
>>> >     Lustre 2.9.52
>>> >     ZFS 0.7.0-rc3
>>>
>>> This may not be a stable combination - I don't think Lustre officially
>>> supports 0.7.0-rc yet. Plus, there's a recent Lustre osd-zfs bug and
>>> its fix hasn't been back ported to 2.9 yet (to the best of my knowledge):
>>> https://jira.hpdd.intel.com/browse/LU-9305
>>>
>>> > How do I do it?
>>>
>>> Depends on how you've configured the MDT pool. If the disks are
>>> mirrored or just plain disks without any redundancy (i.e. not RAIDz),
>>> you can simply attach the SSDs to the hard drives to form or extend
>>> mirrors and then detach the hard drives - see zpool attach/detach.
>>>
>>> -Isaac
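A sketch of that attach/detach sequence, with hypothetical pool and device
names (this only works for mirror or single-disk vdevs, as Isaac notes):

```shell
# Hypothetical names: pool "mdt0", existing HDD sdb, new SSD nvme0n1.
zpool attach mdt0 sdb nvme0n1   # forms (or extends) a mirror with the SSD
zpool status mdt0               # wait for the resilver to finish
zpool detach mdt0 sdb           # drop the HDD, leaving the SSD in its place
```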
>>>
>>
>>
>>
>> --
>> Dr Stuart Midgley
>> sdm900 at gmail.com
>> _______________________________________________
>> lustre-discuss mailing list
>> lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>
>


-- 
Dr Stuart Midgley
sdm900 at gmail.com

