[Lustre-discuss] Lustre-discuss Digest, Vol 106, Issue 18

Ms. Megan Larko dobsonunit at gmail.com
Wed Jan 14 14:00:06 PST 2015


Greetings,

I concur with Mr. Ball that running a Lustre file system in excess of 90%
full can be problematic.  In my personal experience, usage above 92% has
caused slow response times for users, especially for write activity.
Depending upon your Lustre stripe set-up, the system takes longer with the
Lustre default stripe count of one to locate enough space to store a file.
If a larger stripe count is used, the problem does not go away, but it is
lessened.  I have had a few experiences in which one OST filled completely
to 100% and nothing more could be written anywhere on that
single-mount-point Lustre file system.  Man oh man, have I heard user
complaints about that!  "What do you mean, no more space?  A df shows me
another 800 GB free (on a 100 TB file system)."
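
A quick way to see the imbalance that a plain df hides is the per-OST view.
A minimal sketch, assuming the file system is mounted at /mnt/lustre (the
mount point and the OST index below are made up for the example):

    # Per-OST capacity and usage; one OST at 100% shows up here even
    # while the aggregate df still reports plenty of free space.
    lfs df -h /mnt/lustre

    # List files that have objects on the full OST (index 12 is just a
    # placeholder), so they can be cleaned up or migrated elsewhere.
    lfs find /mnt/lustre --ost 12

Once the full OST is identified, its files can be targeted for deletion or
migration instead of guessing from the aggregate numbers.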

That said, I have had success creating a folder with a specified stripe
count of two or so less than the total number of OSTs in the file system
and putting files into that striped folder until I can re-balance the file
system, either by a clean-up (deleting files, or moving them to tape or
some other archive system) or by adding more OSTs (I like that grow
feature!).
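
For anyone wanting to try that workaround, a rough sketch of the commands,
assuming a file system mounted at /scratch with 16 OSTs (both the path and
the OST count are invented for the example):

    # Make a directory whose new files stripe across 14 OSTs, i.e. two
    # fewer than the total, so writes are spread over the remaining space.
    mkdir /scratch/wide_stripe
    lfs setstripe -c 14 /scratch/wide_stripe

    # Confirm the default layout that new files in the directory inherit.
    lfs getstripe -d /scratch/wide_stripe

New files written under that directory pick up the wider stripe count;
files that already exist keep whatever layout they were created with.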

Someone once said that files will grow to consume all available space.  I
forget the attribution.

Cheers,
megan

On Wed, Jan 14, 2015 at 3:55 PM, <lustre-discuss-request at lists.lustre.org>
wrote:

> Today's Topics:
>
>    1. Performance dropoff for a nearly full Lustre file system
>       (Mike Selway)
>    2. Re: Performance dropoff for a nearly full Lustre file system
>       (Bob Ball)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 14 Jan 2015 19:43:02 +0000
> From: Mike Selway <mselway at cray.com>
> To: "lustre-discuss at lists.lustre.org"
>         <lustre-discuss at lists.lustre.org>
> Subject: [Lustre-discuss] Performance dropoff for a nearly full Lustre
>         file system
> Message-ID:
>         <5073651DB6C02643B8739403BE96A0E27BCE4D at CFWEX01.americas.cray.com>
> Content-Type: text/plain; charset="us-ascii"
>
> Hello,
>                I'm looking for accounts of what has been observed to
> happen (performance drop-offs, severity of the drops, partial or full
> failures, ...) when an operational Lustre file system is nearly full;
> the percentages of interest range from roughly 80% to 99%.  Multiple
> responses appreciated.
>
> Also, I would welcome comments from anyone who has implemented a Robin
> Hood approach: how did you avoid the performance drop-offs of a "near
> full" file system by archiving and releasing data blocks to reconstruct
> contiguous free areas?
>
> Thanks!
> Mike
>
> Mike Selway | Sr. Storage Architect (TAS) | Cray Inc.
> Work +1-301-332-4116 | mselway at cray.com
> 146 Castlemaine Ct,   Castle Rock,  CO  80104|   Check out Tiered Adaptive
> Storage (TAS)!<
> http://www.cray.com/Products/Storage/Tiered-Adaptive-Storage.aspx>
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 14 Jan 2015 15:55:13 -0500
> From: Bob Ball <ball at umich.edu>
> To: Mike Selway <mselway at cray.com>,     "lustre-discuss at lists.lustre.org"
>         <lustre-discuss at lists.lustre.org>
> Subject: Re: [Lustre-discuss] Performance dropoff for a nearly full
>         Lustre file system
> Message-ID: <54B6D7B1.7020403 at umich.edu>
> Content-Type: text/plain; charset="windows-1252"; Format="flowed"
>
> As I recall, it is not recommended to run Lustre more than 90% full.
>
> bob
>
>
> ------------------------------
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
>
> End of Lustre-discuss Digest, Vol 106, Issue 18
> ***********************************************
>