[Lustre-discuss] full ost

gvozden rovina gracisce at gmail.com
Fri Jan 29 02:48:58 PST 2010


Hi!

Thank you for your swift answer. I have just one more question. Is it
possible to
configure lustre system so that it writes not just the file but also the
copy of the same file
in the same time as you create it?

thx!

On Fri, Jan 29, 2010 at 11:07 AM, Johann Lombardi <johann at sun.com> wrote:

> On Fri, Jan 29, 2010 at 10:32:26AM +0100, gvozden rovina wrote:
> > OST. For instance, I copied a 2.5 GB file to Lustre, which had 120 GB of
> > storage space (I have 2 GB test OSTs), and it didn't automatically
> > recognize the full OST but simply stopped working with a "No space left
> > on device" error message. There was plenty of space left on the
> > filesystem (approx. 100 GB). I'm aware that I can stripe the file over
> > several OSTs, but this should be done automatically! If the system
> > detects that one of the OSTs is full, it should put it in an offline
> > state automatically. I just can't believe that I have to manually watch
> > which OST is getting full and put it offline, as described here:
>
> The MDS monitors OST disk usage by regularly sending OST_STATFS RPCs, and
> it won't allocate *new* files on OSTs that are full. This means that you
> don't need to put full OSTs offline on the MDS; those OSTs will be skipped
> automatically at file creation time.
>
> That being said, we do *not* migrate existing files stored on full OSTs or
> increase the stripe count dynamically. The default stripe count is 1,
> and since your OST size is 2 GB, this means that by default the maximum
> file size is limited to 2 GB (even less with metadata overhead). You can
> of course change the stripe count with lfs setstripe.
> Restriping files would require the layout lock feature, which
> is not available in any Lustre release yet.
>
> Johann
>
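
A minimal sketch of the commands discussed in the reply above (the mount
point /mnt/lustre and the directory and file names are illustrative, not
taken from the original thread):

    # Show per-OST usage, so you can see which OSTs are close to full
    lfs df -h /mnt/lustre

    # Stripe new files created in this directory across all available OSTs
    # (-c -1 means "use every OST"; a fixed count such as -c 4 also works)
    lfs setstripe -c -1 /mnt/lustre/bigfiles

    # Check the resulting layout of a file
    lfs getstripe /mnt/lustre/bigfiles/somefile

Note that lfs setstripe only affects files created after the layout is set;
as mentioned above, existing files are not restriped or migrated
automatically.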