[Lustre-discuss] MDT overloaded when writing small files in large number

siva murugan siva.murugan at gmail.com
Thu Dec 4 21:28:59 PST 2008


We are evaluating Lustre for one of our heavily read/write-intensive
infrastructures (daily writes: 8 million files, ~1 TB). The average size of the
files written is 1 KB. (I know Lustre can't scale well for small files, but I
wanted to analyze the possibility of adopting it.)

Below are some of the tests we conducted to see the difference between writing
large and small files.

MDT - 1
OST - 13 (also act as NFS servers)
Clients access the Lustre filesystem via NFS (not patchless clients)

Test 1:

Number of clients - 10
Dataset size read/written - 971 MB (per client)
Number of files in the dataset - 14000
Total data written - 10 GB
Time taken - 1390 s

Test 2:

Number of clients - 10
Dataset size read/written - 1001 MB (per client)
Number of files in the dataset - 4
Total data written - 10 GB
Time taken - 215 s


Test 3:

Number of clients - 10
Dataset size read/written - 53 MB (per client)
Number of files in the dataset - 14000
Total data written - 530 MB
Time taken - 1027 s
The MDT was heavily loaded during Test 3 (load average > 25). Since the file
size in Test 3 is small (1 KB) and the number of files written is very large
(14000 x 10 clients), the MDT obviously gets loaded allocating inodes, even
though the total data written in Test 3 is only 530 MB.
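For reference, one client's share of the Test 3 workload can be sketched as
below (assumptions on my part: the original post does not name the tool used,
so this uses dd; the directory is a local stand-in for the Lustre-backed NFS
mount, and NFILES is scaled down from the 14000 files per client actually used):

```shell
# Sketch of one client's Test-3-style workload: many tiny files,
# created and written one at a time.
DIR=$(mktemp -d)   # stand-in for the Lustre/NFS mount point (assumption)
NFILES=1000        # Test 3 used 14000 files per client
for i in $(seq 1 "$NFILES"); do
    # each file is a single 1 KB block, matching the ~1 KB average file size
    dd if=/dev/zero of="$DIR/file_$i" bs=1k count=1 2>/dev/null
done
```

Each file creation costs the MDT an open/create RPC plus inode allocation, so
the per-file metadata overhead dominates the tiny 1 KB data payload.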

Question: Is there any parameter that I can tune on the MDT to increase
performance when writing a large number of small files?
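What I have looked at so far is below (the parameter and module option names
are taken from the 1.6-era documentation and are my assumptions; please correct
me if they are wrong for other versions):

```shell
# Watch MDS request stats while the small-file test runs
# (path is an assumption; verify with "lctl list_param mds.*"):
lctl get_param mds.*.stats

# Raise the MDS service thread count via a module option on the MDS node
# (in /etc/modprobe.conf, takes effect on the next mount; the value 512
# is only an example):
#   options mds mds_num_threads=512
```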

Please help