[lustre-discuss] Draining and replacing OSTs with larger volumes

Patrick Farrell pfarrell at whamcloud.com
Thu Feb 28 16:24:13 PST 2019


Scott,


This sounds great.  Slower, but safer.  You might want to integrate the pool suggestion Jongwoo made in the other recent thread in order to control allocations to your new OSTs (assuming you're trying to stay live during most of this).
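
Roughly, the pool approach looks like this (the fsname, pool name, and OST indices below are placeholders only; substitute your own):

    # on the MGS: define a pool containing only the new OSTs
    mgs# lctl pool_new lustre.newosts
    mgs# lctl pool_add lustre.newosts lustre-OST003c lustre-OST003d

    # on a client: direct new file creation in a directory tree at that pool
    client# lfs setstripe -p newosts /lustre/project

    # sanity-check the membership
    client# lctl pool_list lustre.newosts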


- Patrick

________________________________
From: Scott Wood <woodystrash at hotmail.com>
Sent: Thursday, February 28, 2019 6:15:54 PM
To: Patrick Farrell; Jongwoo Han
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Draining and replacing OSTs with larger volumes

My thanks to both Jongwoo and Patrick for your responses.

Great advice to do a practice run in a virtual environment, but I'm lucky enough to have a physical one. I have a testbed that runs the same versions of all the software, but with iSCSI targets as the OSTs rather than physical arrays, and not so many OSTs (8 in the testbed versus 60 in production). I do use it for test upgrades and fully intend to do a dry run there.

Jongwoo, to address your point, yes, the rolling migration is forced, as we only have two new arrays plus 10 existing arrays in which we can upgrade the drives.  You asked about OST sizes: OSTs are 29TB, six per array, two arrays per OSS pair, 5 OSS pairs.  I also expect the migrate-replace-migrate-replace cycle to be painfully slow, but with the hardware at hand it's the only option.  I figure it may take a few weeks to drain each pair of arrays.  As for the rolling upgrade, based on your and Patrick's responses, we'll skip that to keep things cleaner.

Taking your points into consideration, the amended plan will be:

1) Deploy a new HA pair of OSSs with arrays populated with OSTs that are twice the size of our current ones, but stick with the existing v2.10.3
2) Remove the 12 OSTs that are connected to my oldest HA pair of OSSs as described in 14.9.3, using 12 parallel migrate processes across 12 clients
3) Repopulate those arrays with the larger drives, make 12 new OSTs from scratch with fresh indices (see the rough mkfs sketch after this list), and bring them online
4) Repeat steps 2 and 3 for the four remaining original HA pairs of OSSs
5) Take a break and let the dust settle
6) At a later date, have a scheduled outage and upgrade from 2.10.3 to whatever the current maintenance release is
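
For step 3, I'm expecting the formatting of each new OST to look roughly like this (fsname, index, NIDs, and device path are illustrative only; the real values, including any failover MGS NIDs, will come from our config):

    # on the OSS that will host the rebuilt target
    oss# mkfs.lustre --ost --fsname=lustre --index=60 \
           --mgsnode=10.0.0.1@o2ib \
           --servicenode=10.0.0.11@o2ib --servicenode=10.0.0.12@o2ib \
           /dev/mapper/new_ost_lun
    oss# mount -t lustre /dev/mapper/new_ost_lun /mnt/lustre/ost60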

Again, your feedback is appreciated.

Cheers
Scott
________________________________
From: Patrick Farrell <pfarrell at whamcloud.com>
Sent: Thursday, 28 February 2019 11:06 PM
To: Jongwoo Han; Scott Wood
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Draining and replacing OSTs with larger volumes

Scott,

I’d like to strongly second all of Jongwoo’s advice, particularly that about adding new OSTs rather than replacing existing ones, if possible.  That procedure is so much simpler and involves a lot less messing around “under the hood”.  It takes you from a complex procedure with many steps to, essentially, copying a bunch of data around while your file system remains up, and adding and removing a few OSTs at either end.

It would also be non-destructive for your existing data.  One of the scary things about the original proposed process is that if something goes wrong partway through, the original data is already gone (or at least very hard to get).

Regards,
- Patrick
________________________________
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Jongwoo Han <jongwoohan at gmail.com>
Sent: Thursday, February 28, 2019 5:36:54 AM
To: Scott Wood
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Draining and replacing OSTs with larger volumes



On Thu, Feb 28, 2019 at 11:09 AM Scott Wood <woodystrash at hotmail.com> wrote:
Hi folks,

Big upgrade process in the works and I had some questions.  Our current infrastructure has 5 HA pairs of OSSs and arrays, plus an HA pair of management and metadata servers that also share an array, all running Lustre 2.10.3.  Pretty standard stuff.  Our upgrade plan is as follows:

1) Deploy a new HA pair of OSSs with arrays populated with OSTs that are twice the size of our originals.
2) Follow the process in section 14.9 of the Lustre docs to drain all OSTs in one of the existing HA pairs' arrays (rough command sketch after this list)
3) Repopulate the first old pair of deactivated and drained arrays with new larger drives
4) Upgrade the offline OSSs from 2.10.3 to 2.10.latest?
5) Return them to service
6) Repeat steps 2-4 for the other 4 old HA pairs of OSSs and OSTs
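
(For step 2, my reading of the 14.9.3 mechanics is roughly the following, with placeholder fsname/index values; I'll verify the exact parameter names against the 2.10 manual:)

    # on the MDS: stop new object allocation on the OST being drained
    mds# lctl set_param osp.lustre-OST0000-osc-MDT0000.max_create_count=0

    # once it is empty, permanently deactivate it from the MGS
    mgs# lctl conf_param lustre-OST0000.osc.active=0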

I'd expect this would be doable without downtime as we'd only be taking arrays offline that have no objects on them, and we've added new arrays and OSSs before with no issues.  I have a few questions before we begin the process:

1) My interpretation of the docs is that we're OK to install them with 2.10.6 (or 2.10.7, if it's out), as rolling upgrades within X.Y are supported.  Is that correct?

In theory, a rolling upgrade should work, but the generally recommended upgrade procedure is to stop the filesystem, unmount all MDSs and OSSs, upgrade the packages, and bring them back up. This avoids human errors during a repeated per-server upgrade.
When it is done correctly, it should take no more than 2 hours.
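
Roughly, the full-shutdown path looks like this (ordering and package names below are only a sketch; check the manual for the exact mount/unmount order recommended for your release):

    # unmount clients first, then the server targets
    client# umount /lustre
    oss#    umount -a -t lustre
    mds#    umount -a -t lustre
    mgs#    umount -a -t lustre

    # upgrade the server packages (example for an RPM-based build)
    server# yum update "lustre*" "kmod-lustre*"

    # remount: MGT first, then the other targets, then clients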

2) Until the whole process is complete, we'll have imbalanced OSTs.  I know that's not ideal, but is it all that big an issue?

A rolling upgrade will cause imbalance, but over the long run newly created files will be allocated evenly across the OSTs. There is no need to worry about it in a one-shot upgrade scenario.
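
You can keep an eye on the per-OST fill levels while it settles with something like:

    client# lfs df -h /lustre    # shows capacity and usage per OST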

3) When draining the OSTs of files, section 14.9.3, point 2.a states that the lfs find | lfs migrate pipeline can take multiple OSTs as arguments, but I thought it would be better to run one instance of that per OST and distribute them across multiple clients.  Is that reasonable (and faster)?

Parallel redistribution is generally faster than doing it one OST at a time. If the MDT can endure the scanning load, run multiple migrate processes, each against one OST.
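
For example, roughly (mount point and OST names are placeholders; note that the manual's example pipes into the lfs_migrate wrapper script):

    client01# lfs find --ost lustre-OST0000_UUID /lustre | lfs_migrate -y
    client02# lfs find --ost lustre-OST0001_UUID /lustre | lfs_migrate -y
    # ...and so on, one OST per client, for each OST in the arrays being drained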
4) When the drives are replaced with bigger ones, can the original OST configuration files be restored to them as described in docs section 14.9.5, or will that be a problem due to the size mismatch?

Since this process treats the objects as files, the configuration should apply the same regardless of the OST size.
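
The 14.9.5 procedure is roughly as follows, assuming ldiskfs-backed OSTs (device paths are placeholders, and the manual lists the exact set of files to save):

    # before pulling the old drives: mount as ldiskfs and save the config files
    oss# mount -t ldiskfs /dev/old_ost_dev /mnt/ost
    oss# tar cvf ost0000-conf.tar -C /mnt/ost last_rcvd CONFIGS/
    oss# umount /mnt/ost

    # after formatting the new, larger OST: restore them the same way
    oss# mount -t ldiskfs /dev/new_ost_dev /mnt/ost
    oss# tar xvf ost0000-conf.tar -C /mnt/ost
    oss# umount /mnt/ost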

5) What questions should I be asking that I haven't thought of?


I do not know the size of the OSTs you are dealing with, but I think a migrate(empty)-replace-migrate-replace cycle is a really painful process, as it will take a long time. If circumstances allow, I suggest adding all the new OST arrays to the OSSs with new OST indices, migrating the OST objects, then deactivating and removing the old OSTs.

If that all goes well, and we did upgrade the OSSs to a newer 2.10.x, we'd follow it up with a migration of the MGT and MDT onto one of the management servers, upgrade the other, fail them over to it, upgrade the remaining server, and rebalance the MDT and MGT services back across the two.  We'd expect the usual pause in services as those migrate, but other than that, fingers crossed, it should all be good.  Are we missing anything?


If this plan is forced, the rolling migrate and upgrade should be planned carefully. It would be better to build a correct procedure checklist by practicing in a virtual environment with identical versions.

Cheers
Scott
_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


--
Jongwoo Han
+82-505-227-6108