[lustre-discuss] Questions about migrate OSTs from ldiskfs to zfs

Fernando Pérez fperez at icm.csic.es
Mon Feb 29 11:29:45 PST 2016


Thanks Andreas.

I will follow your recommendations.

Regards.
=============================================
Fernando Pérez
Institut de Ciències del Mar (CMIMA-CSIC)
Departament Oceanografía Física i Tecnològica
Passeig Marítim de la Barceloneta,37-49
08003 Barcelona
Phone:  (+34) 93 230 96 35
=============================================

> On 27 Feb 2016, at 01:36, Dilger, Andreas <andreas.dilger at intel.com> wrote:
> 
> On Feb 23, 2016, at 05:24, Fernando Perez <fperez at icm.csic.es> wrote:
>> 
>> Hi all.
>> 
>> We have a small 230 TB Lustre system. We are running Lustre 2.4.1 with ZFS 0.6.2 installed on the OSSs. The Lustre architecture is the following:
>> 
>> 1 MDS + 1 MDT on the same server, plus 3 OSSs with 15 ldiskfs OSTs (external storage: some on fibre controllers with SAS disks, others on Coraid AoE cabinets with standard SATA disks).
>> 
>> We need to replace 9 OSTs with 2 ZFS OSTs due to hardware problems: the Coraid storage will be replaced by a Supermicro SAS JBOD with twenty disks, each disk 8 TB.
>> 
>> I have some questions that I hope you can help me to answer:
>> 
>> - What can I do with the inactive OST numbers? According to the Lustre manual I suppose I can't erase them.
> 
> If you are replacing the old OSTs with a different number of new OSTs, and since the new OSTs use a different back-end storage type, you will need to do file migration at the Lustre level.
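> 
> As a rough sketch, with the new OSTs online, emptying one old OST from a client could look like the following (the filesystem name "lustre", OST index 0003, and mount point are placeholders; check the actual device name with "lctl dl" on the MDS):
> 
> ```shell
> # On the MDS: deactivate the old OST for new object allocations only,
> # so existing data stays readable while migration runs
> lctl --device lustre-OST0003-osc-MDT0000 deactivate
> 
> # On a client: find files with objects on the old OST and migrate
> # them to the remaining OSTs
> lfs find --ost lustre-OST0003_UUID /mnt/lustre | lfs_migrate -y
> 
> # Verify the old OST is empty (should print nothing)
> lfs find --ost lustre-OST0003_UUID /mnt/lustre
> ```
> 
> Note that lfs_migrate in 2.4-era releases copies file data rather than doing an atomic layout swap, so it is only safe on files that are not being modified during the copy.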
> 
> That means you will have to add at least some of the new OSTs before removing the old ones, or completely empty at least one of the old OSTs if you want to add a new one in its place. The more of the new OSTs you have online when doing the migration, the faster it will finish, so this is a bit of a trade-off against having gaps in your OST config.  With some careful juggling of OST indices you could minimize the number of unused indices.
> 
> Note that you can totally remove old OSTs from your config by doing a writeconf to completely regenerate your config. There will still be gaps in your OST index if you didn't avoid this during reconfiguration, but the removed OSTs will no longer be listed.  If you are planning to add more new OSTs to replace the removed ones in the near future then this may not be worthwhile.
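> 
> The writeconf procedure regenerates the configuration logs from scratch. In outline, with all clients and targets unmounted first (device paths are examples):
> 
> ```shell
> # Run writeconf on every target, MGS/MDT first:
> tunefs.lustre --writeconf /dev/mdtdev    # on the MDS
> tunefs.lustre --writeconf /dev/ostdev    # on each OSS, for each OST
> 
> # Remount in order: MGS/MDT first, then OSTs, then clients
> mount -t lustre /dev/mdtdev /mnt/mdt
> mount -t lustre /dev/ostdev /mnt/ost0
> ```
> 
> Be aware that writeconf erases parameters previously set with "lctl conf_param", so those need to be reapplied afterward.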
> 
> We are looking at fixing this to allow complete removal of OSTs from the config, since permanently inactive OSTs are a common complaint.
> 
>> - I don't need to restore the OST configuration files when replacing ldiskfs OSTs with ZFS OSTs, do I?
> 
> If you have a newer Lustre version with "mkfs.lustre --replace" you don't need to do anything like this. 
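> 
> For illustration, formatting a new ZFS OST to take over an index freed by a removed ldiskfs OST might look like this (pool layout, index, and MGS node name are examples only):
> 
> ```shell
> # On the OSS: create the OST on a ZFS pool, reusing index 0003 held
> # by the removed OST; --replace avoids the "index already in use"
> # registration error for a previously used index
> mkfs.lustre --ost --backfstype=zfs --fsname=lustre --index=3 \
>     --replace --mgsnode=mgs@tcp0 \
>     ostpool/ost3 raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
> 
> mount -t lustre ostpool/ost3 /mnt/ost3
> ```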
> 
>> - Do you recommend doing a Lustre update before replacing the OSTs with the new ZFS OSTs?
> 
> Lustre 2.4.1 is very old.  It makes sense to use a newer version than this, especially if you are using ZFS. 
> 
> However, it is generally bad sysadmin practice to do major hardware and software updates at the same time, since it becomes very difficult to isolate any problems that appear afterward. I would recommend to upgrade to a newer version of Lustre (2.5.3, or what is recommended from your support provider) at least on the servers and run that for a week or two before doing the hardware upgrade. 
> 
>> - I have read on the list that there are problems with the latest zfsonlinux release and that Lustre only works with zfsonlinux 0.6.3. Is this right?
> 
> Newer versions of Lustre work well with ZFS 0.6.4.3; I don't remember anymore which ZFS versions were tested with 2.4.1. We had some problems under heavy load with 0.6.5.3 and have moved back to 0.6.4.3 for the Lustre 2.8.0 release until that is fixed.
> 
> Cheers, Andreas


