[lustre-discuss] lfs_migrate

E.S. Rosenberg esr+lustre at mail.hebrew.edu
Sun May 7 05:27:43 PDT 2017


Since we were pressed for time (the migration was to empty a disk enclosure
that was operating too close to failure for comfort), I mailed the
affected users a list of their files and instructions on how to take care of
things.
If I do have time in the future I may write that script.

Now that we again have a working enclosure, should I be taking action
myself to re-balance the files (with migrate), or should I just let time and
Lustre do their thing?

Thanks,
Eli

On Tue, May 2, 2017 at 3:51 AM, Dilger, Andreas <andreas.dilger at intel.com>
wrote:

> If your filesystem was created with Lustre 2.1 or later then you can use:
>
>    FID=$(lfs path2fid "/path/to/file")
>    lfs fid2path "/mount/point" "$FID"
>
> to find all the pathnames that are hard links to that file. There is a
> patch to add a "lfs path2links" option that does this in a single step, but
> it is not in any release yet.
>
> The number of pathnames should match the hard link count returned by "stat
> -c%h" if the files don't have too many hard links (i.e. below 140 or so)
> and then you can manually migrate the file and re-link the other pathnames
> to the new file with "ln -f".
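A minimal sketch of that re-link step, using plain POSIX files so it runs without a Lustre mount (on a real Lustre filesystem you would collect the pathnames with "lfs fid2path" and create the migrated copy with the desired striping via "lfs setstripe"; the data and paths below are made up):

```shell
# Toy demonstration of the re-link step: after the data has been copied
# to a new inode (the "migrated" file), "ln -f" repoints the remaining
# hard links at it. Plain files here; no Lustre needed.
set -e
d=$(mktemp -d)
echo "old data" > "$d/a"
ln "$d/a" "$d/b"                       # second hard link to the same inode
[ "$(stat -c%h "$d/a")" -eq 2 ]        # link count, as "stat -c%h" above
printf 'migrated data\n' > "$d/a.new"  # stand-in for the migrated copy
mv "$d/a.new" "$d/a"                   # replace the first pathname
ln -f "$d/a" "$d/b"                    # repoint the other hard link
result=$(cat "$d/b")
echo "$result"                         # -> migrated data
rm -r "$d"
```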
>
> That is something that has been on the todo list for lfs_migrate for a
> while, so if you wanted to implement that in the script and submit a patch
> to Gerrit it would be appreciated.
>
> Cheers, Andreas
>
> On May 1, 2017, at 06:59, E.S. Rosenberg <esr+lustre at mail.hebrew.edu>
> wrote:
>
> Now that we are close to the end of the migration process, we have a lot
> of files being skipped due to "multiple hard links", and I am not sure
> what my strategy should be for such files.
> Is there any migration automation possible on these? Or is my only route
> contacting the owners (who may just not have known how to use 'ln -s')?
>
> Any advice would be very welcome.
> Thanks,
> Eliyahu - אליהו
>
> On Wed, Apr 12, 2017 at 6:55 PM, Todd, Allen <Allen.Todd at sig.com> wrote:
>
>> Thanks Andreas -- good to know there is yet another reason to upgrade.
>> We are on 2.7.0.  I was trying to hold out for progressive file layout to
>> land.
>>
>> Allen
>>
>> -----Original Message-----
>> From: lustre-discuss [mailto:lustre-discuss-bounces at lists.lustre.org] On
>> Behalf Of Dilger, Andreas
>> Sent: Wednesday, April 12, 2017 8:19 AM
>> To: Todd, Allen <Allen.Todd at msx.bala.susq.com>
>> Cc: E.S. Rosenberg <esr at cs.huji.ac.il>; lustre-discuss at lists.lustre.org
>> Subject: Re: [lustre-discuss] lfs_migrate
>>
>> On Apr 10, 2017, at 14:53, Todd, Allen <Allen.Todd at sig.com> wrote:
>> >
>> > While everyone is talking about lfs migrate, I would like to point
>> > out that it appears to be missing an option to preserve file
>> > modification and access times, which makes it less useful for
>> > behind-the-scenes data management tasks.
>>
>> This should actually be the default, though there was a bug in older
>> versions of Lustre that didn't preserve the timestamps.  That was fixed in
>> Lustre 2.8.
>>
>> Cheers, Andreas
>>
>> > Allen
>> >
>> > -----Original Message-----
>> > From: lustre-discuss [mailto:lustre-discuss-bounces at lists.lustre.org]
>> > On Behalf Of Henri Doreau
>> > Sent: Tuesday, April 04, 2017 3:18 AM
>> > To: E.S. Rosenberg <esr at cs.huji.ac.il>
>> > Cc: lustre-discuss at lists.lustre.org
>> > Subject: Re: [lustre-discuss] lfs_migrate
>> >
>> > Hello,
>> >
>> > The manpage for lfs(1) lists the available options in 2.8:
>> > """
>> > lfs migrate -m <mdt_index> directory
>> > lfs migrate [-c | --stripe-count <stripe_count>]
>> >               [-i | --stripe-index <start_ost_idx>]
>> >               [-S | --stripe-size <stripe_size>]
>> >               [-p | --pool <pool_name>]
>> >               [-o | --ost-list <ost_indices>]
>> >               [-b | --block]
>> >               [-n | --non-block] file|directory """
>> >
>> > Although this is certainly terse, I guess that most parameters are
>> > intuitive.
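For illustration, a few hypothetical invocations matching those options (the paths, pool name, and striping values are made-up examples, and the commands are wrapped in a function so this sketch is inert without a Lustre mount):

```shell
# Hypothetical "lfs migrate" invocations; all names and values are
# examples, not recommendations.
restripe_examples() {
    lfs migrate -c 4 -S 4M /mnt/testfs/bigfile   # 4 stripes of 4 MiB each
    lfs migrate -p flash /mnt/testfs/hotfile     # move into the "flash" pool
    lfs migrate -n -o 1,3 /mnt/testfs/somefile   # OSTs 1 and 3, non-blocking
}
```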
>> >
>> > The command will open the file to restripe (blocking concurrent
>> > accesses or not, depending on -b/-n), create a special "volatile"
>> > (=unlinked) file with the requested striping parameters, and copy
>> > the source into the destination.
>> >
>> > If the copy succeeds, the two files are atomically swapped and the
>> > concurrent-access protection is released.
>> >
>> > In non-blocking mode, the process will detect if the source file was
>> > already open, or if an open occurs during the copy, and abort safely.
>> > It is then up to the admin to reschedule the migration later, maybe
>> > with -b.
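That behavior suggests a simple retry pattern (a sketch only: the function name is made up, and you would add whatever striping arguments you actually need):

```shell
# Try a non-blocking migration first; if the file is in use, the
# non-blocking attempt fails and we fall back to a blocking one.
migrate_gently() {
    local f=$1
    if lfs migrate -n "$f" 2>/dev/null; then
        echo "migrated (non-blocking): $f"
    else
        echo "in use, retrying with --block: $f"
        lfs migrate -b "$f"
    fi
}
```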
>> >
>> > HTH
>> >
>> > Henri
>> >
>> > On April 2 - 14:43, E.S. Rosenberg wrote:
>> >> Thanks for all the great replies!
>> >>
>> >> I may be wrong on this, but 'lfs migrate' does not seem to be
>> >> documented in the manpage (my local copy is from 2.8, so I would
>> >> expect that, but the same is true even of manpages I find online).
>> >>
>> >> Any pointers would be very welcome.
>> >>
>> >> On Thu, Mar 23, 2017 at 12:31 PM, Henri Doreau <henri.doreau at cea.fr>
>> wrote:
>> >>
>> >>> On March 20 - 22:50, E.S. Rosenberg wrote:
>> >>>> On Mon, Mar 20, 2017 at 10:19 PM, Dilger, Andreas <
>> >>> andreas.dilger at intel.com>
>> >>>> wrote:
>> >>>>
>> >>>>> The underlying "lfs migrate" command (not the "lfs_migrate"
>> >>>>> script) in newer Lustre versions (2.9) is capable of migrating
>> >>>>> files that are in use by using the "--block" option, which
>> >>>>> prevents other processes from accessing or modifying the file
>> >>>>> during migration.
>> >>>>>
>> >>>>> Unfortunately, "lfs_migrate" doesn't pass that argument on, though
>> >>>>> it wouldn't be hard to change the script. Ideally, the
>> >>>>> "lfs_migrate" script would pass all unknown options to
>> >>>>> "lfs migrate".
>> >>>>>
>> >>>>>
>> >>>>> The other item of note is that setting the OST inactive on the
>> >>>>> MDS will prevent the MDS from deleting objects on the OST (see
>> >>>>> https://jira.hpdd.intel.com/browse/LU-4825 for details).  In
>> >>>>> Lustre 2.9 and later it is possible to set on the MDS:
>> >>>>>
>> >>>>>   mds# lctl set_param osp.<OST>.create_count=0
>> >>>>>
>> >>>>> to stop MDS allocation of new objects on that OST. On older
>> >>>>> versions it is possible to set on the OSS:
>> >>>>>
>> >>>>>   oss# lctl set_param obdfilter.<OST>.degraded=1
>> >>>>>
>> >>>>> so that it tells the MDS to avoid it if possible, but this isn't
>> >>>>> a hard exclusion.
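Putting those pieces together, a hedged sketch of draining one OST. The filesystem name ("testfs"), OST index (2), and mount point are placeholders, and the exact osp parameter path varies between versions, so check "lctl list_param osp.*" on your MDS first:

```shell
# Drain sketch. The first command runs on the MDS, the second on the
# OSS, and the last on a client. Wrapped in a function so this file can
# be sourced without touching a live filesystem.
drain_ost() {
    # MDS, Lustre 2.9+: stop allocating new objects on the OST
    lctl set_param osp.testfs-OST0002-osc-MDT0000.create_count=0
    # OSS, pre-2.9 fallback: mark the OST degraded so the MDS avoids it
    lctl set_param obdfilter.testfs-OST0002.degraded=1
    # client: migrate existing files off OST index 2
    lfs find /mnt/testfs --ost 2 -type f | lfs_migrate -y
}
```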
>> >>>>>
>> >>>>> It is also possible to use a testing hack to mark an OST as out
>> >>>>> of inodes, but that only works for one OST per OSS, and it sounds
>> >>>>> like that won't be useful in this case.
>> >>>>>
>> >>>>> Cheers, Andreas
>> >>>>>
>> >>>> You're making me want Lustre 2.9 more :) but for now I'm still
>> >>>> stuck on 2.8, and because this is very much production these days
>> >>>> I'm more careful with the update (hoping to finally get hardware
>> >>>> allocated for a test env soon so I can test the update).
>> >>>> Thanks,
>> >>>> Eli
>> >>>>
>> >>>
>> >>> Hello,
>> >>>
>> >>> This safer version of `lfs migrate' (LU-4840) is actually available
>> >>> in 2.8.
>> >>>
>> >>> When used with the --non-block flag, a concurrent open of the file
>> >>> being migrated will cause the migration to fail. With --block (or
>> >>> nothing; it is the default behavior), as Andreas said, concurrent
>> >>> opens will block until the migration completes.
>> >>>
>> >>> Regards
>> >>>
>> >>> --
>> >>> Henri Doreau
>> >>>
>> >
>> > _______________________________________________
>> > lustre-discuss mailing list
>> > lustre-discuss at lists.lustre.org
>> > http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>> >
>> > ________________________________
>> >
>> > IMPORTANT: The information contained in this email and/or its
>> attachments is confidential. If you are not the intended recipient, please
>> notify the sender immediately by reply and immediately delete this message
>> and all its attachments. Any review, use, reproduction, disclosure or
>> dissemination of this message or any attachment by an unintended recipient
>> is strictly prohibited. Neither this message nor any attachment is intended
>> as or should be construed as an offer, solicitation or recommendation to
>> buy or sell any security or other financial instrument. Neither the sender,
>> his or her employer nor any of their respective affiliates makes any
>> warranties as to the completeness or accuracy of any of the information
>> contained herein or that this message or any of its attachments is free of
>> viruses.
>>
>> Cheers, Andreas
>> --
>> Andreas Dilger
>> Lustre Principal Architect
>> Intel Corporation
>>
>>
>>
>
>
>