[Lustre-discuss] MDT overloaded when writing small files in large numbers

Brian J. Murrell Brian.Murrell at Sun.COM
Mon Dec 8 06:17:30 PST 2008


On Mon, 2008-12-08 at 09:48 +0530, anil kumar wrote:
>  
> Example: 
> Case 1: If we write a 1 GB data set as 200 files, write,
> delete & read are fast.
> Case 2: If we write a 1 GB data set as 14000 files, write,
> delete & read are very slow.

Yes, this is not surprising for various values of "slow".  Lustre is
known to perform much better on large files as that is the typical HPC
workload.

> We have seen that most of the reported issues relate to small files,
> with a workaround of disabling debug mode to improve performance.  But
> in our case, disabling debug mode did not help.

There is a bug in our BZ tracking small file performance issues.  I'm
not sure if it's seen much action lately though.  I don't recall the
number but you might want to subscribe to that bug.
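For reference, disabling Lustre's debug logging (the workaround mentioned above) is typically done with lctl on the servers.  A minimal sketch, assuming root access; the exact default debug mask varies by Lustre version:

```shell
# Check the current debug mask on the MDS
lctl get_param debug

# Disable all debug logging to reduce logging overhead on the MDS
lctl set_param debug=0

# Later, restore a more useful mask (example values; check your
# version's defaults before copying these)
lctl set_param debug="ioctl neterror warning error emerg ha config console"
```

Note that this only removes logging overhead; it will not change the fundamental per-file metadata cost of small-file workloads.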
 
> Please let us know if there are any other options to improve
> performance when working with a large number of small files.

If you have already reviewed the archives of this list, applied all of
the various remedies for small files, and have plenty of memory in your
MDS, then there is not much else you can do, I'm afraid.
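One of the commonly cited remedies from the list archives is to keep small files on a single stripe, which avoids per-file multi-OST overhead.  A sketch, assuming a hypothetical directory for small files (the path is an example, and option syntax may differ slightly between Lustre releases):

```shell
# Set a stripe count of 1 on a directory that will hold small files;
# new files created there inherit the single-stripe layout
lfs setstripe -c 1 /mnt/lustre/small_files_dir

# Verify the layout that new files will inherit
lfs getstripe /mnt/lustre/small_files_dir
```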

We do recognize that small-file workloads are an area in which we don't
perform as well as we do on large files.  As/if demand for small-file
performance increases, it will bubble up our list of priorities.

b.



