[lustre-discuss] set OSTs read only ?

Raj rajgautam at gmail.com
Sun Jul 16 08:10:13 PDT 2017


No. Deactivating an OST prevents new objects (files) from being created on
it, but clients can still read AND modify existing objects (e.g. append to a
file). Also, space from deleted objects will not be freed until the OST is
activated again.
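
A quick way to check the state (a sketch, using the device name from Bob's
example below; a deactivated OSC should be listed as "IN" rather than "UP"):

  # on the MDS: confirm the OSC device for the OST is inactive
  lctl dl | grep OST0019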

On Sun, Jul 16, 2017, 9:29 AM E.S. Rosenberg <esr+lustre at mail.hebrew.edu>
wrote:

> On Thu, Jul 13, 2017 at 5:49 AM, Bob Ball <ball at umich.edu> wrote:
>
>> On the mgs/mdt do something like:
>> lctl --device <fsname>-OST0019-osc-MDT0000 deactivate
>>
>> No further files will be assigned to that OST.  Reverse with "activate",
>> or reboot the mgs/mdt, as this setting is not persistent.  "lctl dl" will
>> tell you exactly what that device name should be for you.
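>>
>> For example, a minimal sketch of the sequence (assuming fsname "lustre";
>> substitute the exact device name that "lctl dl" reports; the conf_param
>> form is the one the manual gives for a persistent change, run on the MGS):
>>
>>   # on the MDS: find the OSC device name for the OST in question
>>   lctl dl | grep osc
>>   # temporary (lost on reboot): stop new allocations to that OST
>>   lctl --device lustre-OST0019-osc-MDT0000 deactivate
>>   # persistent alternative, per the Lustre manual:
>>   lctl conf_param lustre-OST0019.osc.active=0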
>>
> Doesn't that also disable reads from the OST though?
>
>>
>> bob
>>
>>
>> On 7/12/2017 6:04 PM, Alexander I Kulyavtsev wrote:
>>
>> You may find advice from Andreas on this list (also attached below). I
>> did not try setting fail_loc myself.
>>
>> In 2.9 there is the setting osp.*.max_create_count=0, described in LUDOC-305.
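>>
>> A sketch of that setting (assuming Lustre 2.9+ and fsname "lustre"; run on
>> the MDS):
>>
>>   # stop new object creation on OST0019 while leaving it readable
>>   lctl set_param osp.lustre-OST0019-osc-MDT0000.max_create_count=0
>>   # restore later (20000 is the default value used in the manual's example)
>>   lctl set_param osp.lustre-OST0019-osc-MDT0000.max_create_count=20000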
>>
>> We used to set the OST "degraded" flag as described in the Lustre manual.
>> It works most of the time, but at some point I saw Lustre errors in the
>> logs for some operations. Sorry, I do not recall the details.
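>>
>> (For reference, the manual's degraded-OST mechanism looks like this,
>> assuming fsname "lustre"; run on the OSS that serves the OST:)
>>
>>   # hint the MDS to avoid this OST for new allocations
>>   lctl set_param obdfilter.lustre-OST0019.degraded=1
>>   # clear the flag when done
>>   lctl set_param obdfilter.lustre-OST0019.degraded=0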
>>
>> I am still not sure either of these approaches will work for you: setting
>> an OST degraded or setting fail_loc makes some OSTs be selected instead of
>> others. You may want to verify that these settings trigger a clean error
>> on the user side (instead of blocking) when all OSTs are degraded.
>>
>> The other, and also simpler, approach would be to enable Lustre quotas and
>> set the quota below the used space for all users (or groups).
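>>
>> A sketch of the quota route (assuming fsname "lustre" mounted at
>> /mnt/lustre and a hypothetical user "jdoe"; block limits are in kB):
>>
>>   # on the MGS: enable block-quota enforcement for users on the OSTs
>>   lctl conf_param lustre.quota.ost=u
>>   # on any client: set a hard block limit below the user's current usage
>>   lfs setquota -u jdoe -B 307200 /mnt/lustre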
>>
>> Alex.
>>
>> From: "Dilger, Andreas" <andreas.dilger at intel.com>
>> Subject: Re: [lustre-discuss] lustre 2.5.3 ost not draining
>> Date: July 28, 2015 at 11:51:38 PM CDT
>> Cc: "lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
>>
>> Setting it degraded means the MDS will avoid allocations on that OST
>> unless there aren't enough OSTs to meet the request (e.g. stripe_count =
>> -1), so it should work.
>>
>> That is actually a very interesting workaround for this problem, and it
>> will work for older versions of Lustre as well.  It doesn't disable the
>> OST completely, which is fine if you are doing space balancing (and may
>> even be desirable to allow apps that need more bandwidth for a widely
>> striped file), but it isn't good if you are trying to empty the OST
>> completely to remove it.
>>
>> It looks like another approach would be to mark the OST as having no free
>> space using OBD_FAIL_OST_ENOINO (0x229) fault injection on that OST:
>>
>>   lctl set_param fail_loc=0x229 fail_val=<ost_index>
>>
>> This would cause the OST to return 0 free inodes from OST_STATFS for the
>> specified OST index, and the MDT would skip this OST completely.  To
>> disable all of the OSTs on an OSS use <ost_index> = -1.  It isn't possible
>> to selectively disable a subset of OSTs using this method.  The
>> OBD_FAIL_OST_ENOINO fail_loc has been available since Lustre 2.2, which
>> covers all of the 2.4+ versions that are affected by this issue.
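>>
>> (A sketch of the all-OSTs variant and its cleanup, run on each OSS:)
>>
>>   # report zero free inodes for every OST on this OSS
>>   lctl set_param fail_loc=0x229 fail_val=-1
>>   # clear the fault injection when finished
>>   lctl set_param fail_loc=0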
>>
>> If this mechanism works for you (it should, as this fail_loc is used
>> during regular testing) I'd be obliged if someone could file an LUDOC bug
>> so the manual can be updated.
>>
>> Cheers, Andreas
>>
>>
>>
>> On Jul 12, 2017, at 4:20 PM, Riccardo Veraldi
>> <Riccardo.Veraldi at cnaf.infn.it> wrote:
>>
>> Hello,
>>
>> on one of my Lustre filesystems I need to find a solution so that users
>> can still access data on the FS but cannot write new files to it.
>> I have hundreds of clients accessing the FS, so remounting it read-only is
>> not really feasible.
>> Is there an option on the OSS side to allow the OSTs to be accessed only
>> for reading data and not for storing new data?
>> Could tunefs.lustre do that?
>> Thank you
>>
>> Rick
>>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>