[Lustre-discuss] lustre and small files overhead

Joe Barjo jobarjo78 at yahoo.fr
Mon Mar 10 04:27:51 PDT 2008


Andreas Dilger wrote:
> On Mar 07, 2008  12:49 +0100, Joe Barjo wrote:
>   
>> I ran some more tests and have set up a micro Lustre cluster on LVM
>> volumes.
>> node a: MDS
>> node b and c: OST
>> node a,b,c,d,e,f: clients
>> Gigabit ethernet network.
>> Applied these optimizations: lnet.debug=0, lru_size set to 10000,
>> max_dirty_mb set to 1024.
>>     
>
> For high RPC-rate operations using an interconnect like Infiniband is
> better than ethernet.
>
>   
Infiniband is not in our budget...
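
For reference, the tuning mentioned at the top was applied roughly as
follows (only a sketch: the exact parameter paths depend on the Lustre
version, and these assume 1.6-style lctl set_param on the clients):

    lctl set_param debug=0                            # or: sysctl -w lnet.debug=0
    lctl set_param ldlm.namespaces.*.lru_size=10000   # enlarge the DLM lock LRU on each client
    lctl set_param osc.*.max_dirty_mb=1024            # allow more dirty page cache per OSC
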
>> The svn checkout takes 50s (15s on a local disk, 25s on a local Lustre
>> demo with debug=0).
>> Watching gkrellm, a single svn checkout consumes about 20% of the MDS
>> system CPU with about 2.4 MB/s of ethernet traffic.
>>     
>
>   
>> About 6 MB/s of disk bandwidth on OST1 and up to 12-16 MB/s on OST2;
>> network bandwidth on the OSTs is about 10 to 20 times lower than the
>> disk bandwidth.
>> Why so much disk bandwidth on the OSTs? Is it a readahead problem?
>>     
>
> That does seem strange; I can't really say why.  There is some metadata
> overhead, and it is higher with small files, but I don't think it
> would be 10-20x overhead.
>
>   
The checked-out source is only 65 megabytes, so that much OST disk
bandwidth is probably not normal.
Maybe you should verify this point.
Are you sure there isn't an optimization for this? It looks like
readahead or something similar.
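
If it helps to narrow this down, I believe client readahead can be
inspected and limited with something like the following (again only a
sketch, assuming 1.6-style llite and obdfilter parameter names):

    lctl get_param llite.*.read_ahead_stats      # per-mount readahead statistics on a client
    lctl set_param llite.*.max_read_ahead_mb=1   # temporarily shrink the readahead window
    lctl get_param obdfilter.*.brw_stats         # on each OSS: size of disk I/Os actually issued
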
>> I launched a compilation distributed across the 6 clients:
>> MDS system CPU goes up to 60% (Athlon 64 3500+), with 12 MB/s on the
>> ethernet; the OSTs go up to the same level as in the previous test.
>>
>> Why is there so much network communication with the MDT?
>>     
>
> Because every metadata operation currently has to be done on the MDS.
> We are working toward having metadata writeback cache operations on
> the client, but that isn't available yet.  For operations like
> compilation the load is basically entirely metadata overhead.
>
>   
>> Since, as I understand it, the MDS cannot be load balanced, I don't
>> see how Lustre can scale to thousands of clients...
>>     
>
> Because in many HPC environments there are very few metadata operations
> in comparison to the amount of data being read/written.  Average file
> sizes are 20-30MB instead of 20-30kB.
>
>   
>> It looks like Lustre is not made for this kind of application.
>>     
>
> No, it definitely isn't tuned for small files.
>   
Could it be tuned for small files one day?
Which filesystem would you suggest for me?
I have already tried NFS and AFS.
I will now try GlusterFS.

Thanks for your support
