[lustre-discuss] lfs_migrate

Dilger, Andreas andreas.dilger at intel.com
Mon Mar 20 14:17:30 PDT 2017


If you mark the OSTs degraded, the MDS will still avoid them for new allocations, though you should make a second scanning pass to verify.
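
For example (testfs-OST0002_UUID is a hypothetical placeholder for the real OST UUID), the second pass could be as simple as re-scanning for files that still have objects on the drained OST:

   client# lfs find --obd testfs-OST0002_UUID /mnt/testfs

If that prints no paths, nothing left in the filesystem references objects on that OST.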

Deactivating the OSTs on the MDS will allow it to delete the (now unused) OST objects.

Cheers, Andreas

On Mar 20, 2017, at 17:03, E.S. Rosenberg <esr at cs.huji.ac.il> wrote:



On Mon, Mar 20, 2017 at 10:59 PM, Dilger, Andreas <andreas.dilger at intel.com> wrote:
If you've marked the OST inactive on the MDS then that is not surprising. See https://jira.hpdd.intel.com/browse/LU-4825 and the comments in my previous email.
Ah, OK.
But if I re-activate the OST, will lfs_migrate still move files off that device?

Cheers, Andreas

On Mar 20, 2017, at 16:56, E.S. Rosenberg <esr at cs.huji.ac.il> wrote:



On Mon, Mar 20, 2017 at 10:50 PM, E.S. Rosenberg <esr+lustre at mail.hebrew.edu> wrote:


On Mon, Mar 20, 2017 at 10:19 PM, Dilger, Andreas <andreas.dilger at intel.com> wrote:
The underlying "lfs migrate" command (not the "lfs_migrate" script) in newer Lustre versions (2.9) is capable of migrating files that are in use by using the "--block" option, which prevents other processes from accessing or modifying the file during migration.

Unfortunately, "lfs_migrate" doesn't pass that argument on, though it wouldn't be hard to change the script. Ideally, the "lfs_migrate" script would pass all unknown options to "lfs migrate".
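
As a rough sketch (the path and stripe count below are made up; check "lfs help migrate" on the installed version for the exact options), the underlying command can be run directly on a 2.9 client:

   client# lfs migrate --block -c 2 /mnt/testfs/path/to/busy-file

Here "--block" keeps other processes from reading or writing the file while its objects are copied to the new layout.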


The other item of note is that setting the OST inactive on the MDS will prevent the MDS from deleting objects on the OST (see https://jira.hpdd.intel.com/browse/LU-4825 for details).  In Lustre 2.9 and later it is possible to set on the MDS:

   mds# lctl set_param osp.<OST>.create_count=0

to stop MDS allocation of new objects on that OST. On older versions it is possible to set on the OSS:

  oss# lctl set_param obdfilter.<OST>.degraded=1

so that it tells the MDS to avoid it if possible, but this isn't a hard exclusion.
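
As a concrete illustration (testfs, OST0002, and MDT0000 are hypothetical placeholders for the real target names), the two settings above would look roughly like:

   mds# lctl set_param osp.testfs-OST0002-osc-MDT0000.create_count=0
   oss# lctl set_param obdfilter.testfs-OST0002.degraded=1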

It is also possible to use a testing hack to mark an OST as out of inodes, but that only works for one OST per OSS and it sounds like that won't be useful in this case.

Cheers, Andreas
You're making me want Lustre 2.9 more :) but for now I'm still stuck on 2.8, and since this is very much production these days I'm being more careful with the upgrade (hoping to finally get hardware allocated for a test environment soon so I can test it first).
Thanks,
Eli
Another related question:
The migration has been running for several hours now on one OST, but I have yet to see a single block freed from the OSS point of view. Is this not a mv but rather a cp as far as the original OST is concerned?
(Also, the lfs man page has no entry for "lfs migrate"; was that added in 2.9?)
Thanks,
Eli

On Mar 20, 2017, at 13:11, Brett Lee <brettlee.lustre at gmail.com> wrote:

Hi Eli,

I believe that is still the case with lfs_migrate.  If not, we'll probably hear about it soon.

You should be able to disable those OSTs while keeping the file system active, via a command run on the MDS(s) as well as on the clients.  My notes have the command as shown below, but please confirm it against the appropriate Lustre manual:

lctl set_param osc.<fsname>-<OST00xy>-*.active=0
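
For example (testfs and OST0002 are placeholders for the real names), deactivating, checking, and later re-activating would look something like:

lctl set_param osc.testfs-OST0002-*.active=0
lctl get_param osc.testfs-OST0002-*.active
lctl set_param osc.testfs-OST0002-*.active=1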

Brett
--
Protect Yourself Against Cybercrime
PDS Software Solutions LLC
https://www.TrustPDS.com

On Mon, Mar 20, 2017 at 10:43 AM, E.S. Rosenberg <esr+lustre at mail.hebrew.edu> wrote:
In the man page it says the following:

Because  lfs_migrate  is  not yet closely integrated with the MDS, it cannot determine whether a file is currently open and/or in-use by other applications or nodes.  That makes it UNSAFE for use on files that might be modified by other applications, since the migrated file is only a copy of the current file, and this will result in the old file becoming an open-unlinked file and any  modifications to that file will be lost.

Is this still the case?
Is there a better way to disable OSTs while keeping the filesystem live?

Background:
We need to take an OSS enclosure that hosts multiple OSTs offline for hardware maintenance, and I'd like to do this without bringing Lustre down as a whole. I made sure there is enough space on the other OSTs to hold the contents of the machine going offline, and am now about to move things.
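
The rough plan (testfs and OST0002 stand in for the real filesystem and OST names) is to find everything with objects on the OSTs in that enclosure and feed it to lfs_migrate, watching free space with lfs df as it goes:

   client# lfs find --obd testfs-OST0002_UUID /mnt/testfs | lfs_migrate -y
   client# lfs df /mnt/testfs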

Thanks,
Eli




_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

