[Lustre-discuss] Lustre Caching Read Files

Roger Spellman roger at terascala.com
Fri Mar 6 09:59:29 PST 2009


Hello,

 

I have a customer running an application that opens 24,000 files and
reads from all of them.  The application **leaves these files open**.
The average file size is 750KB, for a total of 18G of files.  Their
system has 16G of RAM plus 8G of swap.

 

When they run their application on a local drive, the application runs
fine.

 

When they run it under Lustre, the application fails at about 10,000
files.

 

I was able to reproduce this on my system with a very simple C program.
I first create 1,000 files on /mnt/lustre using dd.

 

My program then allocates 1,000 buffers of 500KB each, then reads 500KB
from each of the 1,000 files.  My system has 4G of RAM, and top
indicates that this program uses 37% of total memory.

 

If I run the same program with 1,000 files on a local disk, top
indicates that it uses 12.1% of RAM, about one third as much!

 

Why is that?  Is there anything I can do to change this behavior?

 

I did set max_read_ahead_mb to 1 and max_read_ahead_whole_mb to 0, but
that did not seem to help.

 

Thanks.

 

Roger Spellman

Staff Engineer

Terascala, Inc.

508-588-1501

http://www.terascala.com/

 


