[lustre-discuss] lfs_migrate

E.S. Rosenberg esr+lustre at mail.hebrew.edu
Mon Mar 20 13:50:59 PDT 2017


On Mon, Mar 20, 2017 at 10:19 PM, Dilger, Andreas <andreas.dilger at intel.com>
wrote:

> The underlying "lfs migrate" command (not the "lfs_migrate" script) in
> newer Lustre versions (2.9) is capable of migrating files that are in use
> by using the "--block" option, which prevents other processes from
> accessing or modifying the file during migration.
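>
> For example, to migrate one file with blocking enabled (the "-c 1" stripe
> count is just illustrative; "lfs migrate" accepts the usual "lfs
> setstripe" layout options):
>
>    client# lfs migrate --block -c 1 /mnt/lustre/path/to/file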
>
> Unfortunately, "lfs_migrate" doesn't pass that argument on, though it
> wouldn't be hard to change the script. Ideally, the "lfs_migrate" script
> would pass all unknown options to "lfs migrate".
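>
> As a workaround one could bypass the script and drive "lfs migrate"
> directly, e.g. to drain a single OST (the fsname "testfs", OST index, and
> mount point below are illustrative; check the real UUIDs with "lfs osts"):
>
>    client# lfs find /mnt/lustre --obd testfs-OST0002_UUID -type f |
>        while read f; do lfs migrate --block "$f"; done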
>
>
> The other item of note is that setting the OST inactive on the MDS will
> prevent the MDS from deleting objects on the OST (see
> https://jira.hpdd.intel.com/browse/LU-4825 for details).  In Lustre 2.9
> and later it is possible to set on the MDS:
>
>    mds# lctl set_param osp.<OST>.create_count=0
>
> to stop MDS allocation of new objects on that OST. On older versions it is
> possible to set on the OSS:
>
>   oss# lctl set_param obdfilter.<OST>.degraded=1
>
> so that it tells the MDS to avoid it if possible, but this isn't a hard
> exclusion.
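>
> For example, with an illustrative fsname "testfs" and OST index 2, the
> full parameter names would look like:
>
>    mds# lctl set_param osp.testfs-OST0002-osc-MDT0000.create_count=0
>    oss# lctl set_param obdfilter.testfs-OST0002.degraded=1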
>
> It is also possible to use a testing hack to mark an OST as out of inodes,
> but that only works for one OST per OSS and it sounds like that won't be
> useful in this case.
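>
> (For the curious: the hack in question is, if memory serves, the
> OBD_FAIL_OST_ENOINO fail_loc, set on the OSS along the lines of
> "lctl set_param fail_loc=0x229 fail_val=<OST index>" -- but verify the
> value against the Lustre source before relying on it.)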
>
> Cheers, Andreas
>
You're making me want Lustre 2.9 even more :) but for now I'm still stuck
on 2.8, and since this system is very much in production these days I'm
being more careful with the upgrade (hoping to finally get hardware
allocated for a test environment soon so the upgrade can be tested first).
Thanks,
Eli

>
> On Mar 20, 2017, at 13:11, Brett Lee <brettlee.lustre at gmail.com> wrote:
>
> Hi Eli,
>
> I believe that is still the case with lfs_migrate.  If it isn't, we'll
> probably hear about it soon.
>
> You should be able to disable those OSTs while keeping the file system
> active, via a command run on the MDS(s) as well as on the clients.  My notes
> have the command as shown below, but please confirm it against the
> appropriate Lustre manual:
>
> lctl set_param osc.<fsname>-<OST00xy>-*.active=0
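>
> For example, with an illustrative fsname "testfs" and OST index 2:
>
>    lctl set_param osc.testfs-OST0002-*.active=0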
>
> Brett
> --
> Protect Yourself Against Cybercrime
> PDS Software Solutions LLC
> https://www.TrustPDS.com
>
> On Mon, Mar 20, 2017 at 10:43 AM, E.S. Rosenberg <
> esr+lustre at mail.hebrew.edu> wrote:
>
>> In the man page it says the following:
>>
>> Because lfs_migrate is not yet closely integrated with the MDS, it
>> cannot determine whether a file is currently open and/or in-use by other
>> applications or nodes.  That makes it UNSAFE for use on files that might
>> be modified by other applications, since the migrated file is only a copy
>> of the current file, and this will result in the old file becoming an
>> open-unlinked file and any modifications to that file will be lost.
>>
>> Is this still the case?
>> Is there a better way to disable OSTs while keeping the filesystem live?
>>
>> Background:
>> We need to take an OSS enclosure that hosts multiple OSTs offline for
>> hardware maintenance, and I'd like to do this without bringing Lustre
>> down as a whole. I made sure there is enough space on the other OSTs to
>> hold the contents of the machine going offline, and I'm now about to
>> move things.
>>
>> Thanks,
>> Eli
>>