[lustre-discuss] set OSTs read only ?

Dilger, Andreas andreas.dilger at intel.com
Sun Jul 16 21:00:32 PDT 2017


When you write "MGS", you really mean "MDS". The MGS would be the place for this if you were changing the config to permanently deactivate the OSTs via "lctl conf_param". To temporarily do this, the commands should be run on the MDS via "lctl set_param".  In most cases the MDS and MGS are co-located, so the distinction is irrelevant, but good to get it right for the record.

The problem of objects not being unlinked until after the MDS is restarted has been fixed.

Also, with 2.9 and later it is possible to use "lctl set_param osp.&lt;OST&gt;.max_create_count=0" to stop new file allocation on that OST without blocking unlinks at all, which is best for emptying old OSTs, rather than using "deactivate".
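
Again as a sketch with the example names from above, run on the MDS:

  # Stop new object allocation on this OST; reads and unlinks still proceed,
  # so the OST drains over time
  lctl set_param osp.testfs-OST0019-osc-MDT0000.max_create_count=0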

As for marking the OSTs read-only, neither of these solutions prevents clients from modifying existing objects on the OSTs; they only prevent the creation of new files (assuming all OSTs are set this way).

You might consider trying "mount -o remount,ro" on the MDT and OST filesystems on the servers to see if this works (I haven't tested this myself). One potential problem is that this would prevent new clients from mounting.
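
An untested sketch, assuming the backing filesystems are mounted on the servers at the hypothetical paths /mnt/mdt and /mnt/ost0:

  # Remount the server-side backing filesystems read-only
  mount -o remount,ro /mnt/mdt
  mount -o remount,ro /mnt/ost0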

It probably makes sense to add server-side read-only mounting as a feature. Could you please file a ticket in Jira about this?

Cheers, Andreas

On Jul 16, 2017, at 09:16, Bob Ball <ball at umich.edu> wrote:

I agree with Raj.  Also, I have noted with Lustre 2.7 that the space is not actually freed after re-activation of the OST until the mgs is restarted.  I don't recall the reason for this, or know whether this was fixed in later Lustre versions.

Remember, this is done on the mgs, not on the clients.  If you do it on a client, the behavior is as you thought.

bob

On 7/16/2017 11:10 AM, Raj wrote:

No. Deactivating an OST will not allow new objects (files) to be created. But clients can still read AND modify existing objects (e.g. append to a file). Also, it will not free any space from deleted objects until the OST is activated again.

On Sun, Jul 16, 2017, 9:29 AM E.S. Rosenberg <esr+lustre at mail.hebrew.edu> wrote:
On Thu, Jul 13, 2017 at 5:49 AM, Bob Ball <ball at umich.edu> wrote:
On the mgs/mdt do something like:
lctl --device <fsname>-OST0019-osc-MDT0000 deactivate

No further files will be assigned to that OST.  Reverse with "activate", or reboot the mgs/mdt, as this is not persistent.  "lctl dl" will tell you exactly what that device name should be on your system.
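
Putting it together, a sketch with the example fsname "testfs":

  # Find the exact MDT-side device name for the OST
  lctl dl | grep OST0019
  # Stop new file allocation to it (non-persistent)
  lctl --device testfs-OST0019-osc-MDT0000 deactivate
  # Re-enable it later
  lctl --device testfs-OST0019-osc-MDT0000 activate
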
Doesn't that also disable reads from the OST though?

bob


On 7/12/2017 6:04 PM, Alexander I Kulyavtsev wrote:
You may find advice from Andreas on this list (also attached below). I did not try setting fail_loc myself.

In 2.9 there is the setting osp.*.max_create_count=0, described in LUDOC-305.

We used to set the OST degraded, as described in the Lustre manual.
It works most of the time, but at some point I saw Lustre errors in the logs for some operations. Sorry, I do not recall the details.
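
The degraded flag is set on the OSS; a sketch with the example fsname "testfs":

  # On the OSS: ask the MDS to avoid this OST for new allocations
  lctl set_param obdfilter.testfs-OST0019.degraded=1
  # Clear it again later
  lctl set_param obdfilter.testfs-OST0019.degraded=0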

I am still not sure either of these approaches will work for you: setting an OST degraded or setting fail_loc makes some OSTs get selected instead of others.
You may want to verify that these settings trigger a clean error on the user side (instead of blocking) when all OSTs are degraded.

Another, simpler approach would be to enable Lustre quotas and set the quota below the used space for all users (or groups).
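
A sketch of the quota idea; the user name, limit (1 GiB, given in kbytes), and mount point are made-up examples:

  # Set block limits below the user's current usage so new writes fail
  lfs setquota -u someuser -b 1048576 -B 1048576 /mnt/lustre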

Alex.

From: "Dilger, Andreas" <andreas.dilger at intel.com<mailto:andreas.dilger at intel.com>>
Subject: Re: [lustre-discuss] lustre 2.5.3 ost not draining
Date: July 28, 2015 at 11:51:38 PM CDT
Cc: "lustre-discuss at lists.lustre.org<mailto:lustre-discuss at lists.lustre.org>" <lustre-discuss at lists.lustre.org<mailto:lustre-discuss at lists.lustre.org>>

Setting it degraded means the MDS will avoid allocations on that OST
unless there aren't enough OSTs to meet the request (e.g. stripe_count =
-1), so it should work.

That is actually a very interesting workaround for this problem, and it
will work for older versions of Lustre as well.  It doesn't disable the
OST completely, which is fine if you are doing space balancing (and may
even be desirable to allow apps that need more bandwidth for a widely
striped file), but it isn't good if you are trying to empty the OST
completely to remove it.

It looks like another approach would be to mark the OST as having no free
space using OBD_FAIL_OST_ENOINO (0x229) fault injection on that OST:

  lctl set_param fail_loc=0x229 fail_val=<ost_index>

This would cause the OST to return 0 free inodes from OST_STATFS for the
specified OST index, and the MDT would skip this OST completely.  To
disable all of the OSTs on an OSS use <ost_index> = -1.  It isn't possible
to selectively disable a subset of OSTs using this method.  The
OBD_FAIL_OST_ENOINO fail_loc has been available since Lustre 2.2, which
covers all of the 2.4+ versions that are affected by this issue.
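
For example, OST0019 has index 0x19, i.e. 25 decimal, so on the OSS serving it:

  # Report 0 free inodes for OST index 25 only
  lctl set_param fail_loc=0x229 fail_val=25
  # Or disable all OSTs on this OSS
  lctl set_param fail_loc=0x229 fail_val=-1
  # Clear the fault injection afterwards
  lctl set_param fail_loc=0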

If this mechanism works for you (it should, as this fail_loc is used
during regular testing) I'd be obliged if someone could file an LUDOC bug
so the manual can be updated.

Cheers, Andreas


On Jul 12, 2017, at 4:20 PM, Riccardo Veraldi <Riccardo.Veraldi at cnaf.infn.it> wrote:

Hello,

on one of my Lustre filesystems I need to find a solution so that users can
still access data on the FS but cannot write new files to it.
I have hundreds of clients accessing the FS, so remounting it read-only is
not really feasible.
Is there an option on the OSS side to allow the OSTs to be accessed just to
read data and not to store new data?
Could tunefs.lustre do that?
thank you
thank you

Rick


_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org