[Lustre-discuss] MDT overloaded when writing small files in large numbers
Balagopal Pillai
pillai at mathstat.dal.ca
Fri Dec 5 03:21:49 PST 2008
"OST - 13 ( also act as nfsserver)"
Then I am assuming that your OSS is also a Lustre client.
It might be useful to search through this
list to find out the potential pitfalls of mounting Lustre volumes on OSS.
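For scale, the timings quoted below imply very different rates for the
three tests. A quick back-of-the-envelope (a sketch only, assuming all
10 clients ran concurrently and "time taken" is wall-clock time):

```python
# Back-of-the-envelope rates from the three tests quoted below.
# Assumption: all 10 clients ran concurrently, so the cluster-wide file
# count is files-per-client x 10, and "time taken" is wall-clock time.
tests = [
    # (label, files per client, total bytes written, seconds)
    ("Test 1", 14000, 10 * 1024**3,  1390),
    ("Test 2", 4,     10 * 1024**3,  215),
    ("Test 3", 14000, 530 * 1024**2, 1027),
]

for label, files, nbytes, secs in tests:
    creates_per_s = files * 10 / secs   # file creates per second, cluster-wide
    mb_per_s = nbytes / secs / 1024**2  # aggregate write bandwidth
    print(f"{label}: {creates_per_s:6.1f} creates/s, {mb_per_s:6.1f} MB/s")
```

Test 2 moves the same 10 GB an order of magnitude faster than Test 1,
while Test 3 tops out around 136 file creates per second at under 1 MB/s,
which is consistent with metadata operations, not bandwidth, being the
bottleneck.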
siva murugan wrote:
> We are trying to adopt Lustre in one of our heavily read/write
> intensive infrastructures (daily writes: 8 million files, 1 TB). The
> average size of the files written is 1 KB (I know Lustre can't scale
> well for small files, but we wanted to analyze the possibility of
> adopting it).
>
> The following tests were conducted to compare writing large and
> small files.
>
> MDT - 1
> OST - 13 (also act as NFS servers)
> Clients access the Lustre filesystem via NFS (not as patchless clients)
>
> Test 1:
>
> Number of clients - 10
> Dataset size read/written - 971 MB (per client)
> Number of files in the dataset - 14000
> Total data written - 10 GB
> Time taken - 1390 s
>
> Test 2:
>
> Number of clients - 10
> Dataset size read/written - 1001 MB (per client)
> Number of files in the dataset - 4
> Total data written - 10 GB
> Time taken - 215 s
>
>
> Test 3:
>
> Number of clients - 10
> Dataset size read/written - 53 MB (per client)
> Number of files in the dataset - 14000
> Total data written - 530 MB
> Time taken - 1027 s
> The MDT was heavily loaded during Test 3 (load average > 25). Since
> the file size in Test 3 is small (1 KB) and the number of files
> written is very large (14000 x 10 clients), the MDT obviously gets
> loaded allocating inodes; the total data written in Test 3 is only
> 530 MB.
>
> Question: Is there any parameter I can tune on the MDT to increase
> performance when writing a large number of small files?
>
> Please help
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>