[Lustre-discuss] MDT overloaded when writing small files in large number

anil kumar anil.k.kv at gmail.com
Sun Dec 7 20:18:33 PST 2008


Hi,

Most of the reported issues relate to insufficient memory when running the
client and OSS on the same server. That does not apply in our case: more
than 50% of memory is always free on the OSS, as we have 32GB on the
MDT/OSS nodes.

We don't see performance issues when the file size is large and the number
of files is small; the problems start as the number of files increases. So
it may be the round-trip time that is causing the problem, since each
transaction has to go to the MGS/MDT and then back to the OSS (a rough
reproduction sketch follows the example below).

Example:
Case 1: writing a 1GB data set as 200 files; write, delete, and read are
fast.
Case 2: writing a 1GB data set as 14,000 files; write, delete, and read
are very slow.
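
A minimal sketch of the kind of comparison we ran (the mount point, paths,
and per-file size below are illustrative, not our exact harness):

    # Case 1: one large file -- few metadata round trips to the MDT
    time dd if=/dev/zero of=/mnt/lustre/big.dat bs=1M count=1024

    # Case 2: many small files -- one create/open round trip per file,
    # so MDT latency dominates the elapsed time
    mkdir -p /mnt/lustre/small
    time sh -c 'i=0; while [ $i -lt 14000 ]; do
        dd if=/dev/zero of=/mnt/lustre/small/f$i bs=64k count=1 2>/dev/null
        i=$((i+1))
    done'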

We also see that most of the reported issues relate to small files, with a
suggested workaround of disabling debug mode to improve performance. In
our case, however, disabling debug mode did not improve performance.
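
For reference, this is how we disabled it (a sketch; the exact interface
may vary with the Lustre release):

    # turn off all Lustre debug logging, on each server node
    echo 0 > /proc/sys/lnet/debug
    # or, with newer lctl releases:
    lctl set_param debug=0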

Please let us know if there are any other options to improve performance
when writing a large number of small files.
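
In case it helps with diagnosis, this is roughly how we watch the MDT
while the small-file runs are in progress (the /proc path is from our
setup and may differ on other releases):

    # per-operation request counters on the MDS
    cat /proc/fs/lustre/mds/*/stats
    # overall load on the MDT node
    uptime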

Thanks,
Anil





On Fri, Dec 5, 2008 at 4:51 PM, Balagopal Pillai <pillai at mathstat.dal.ca> wrote:

> "OST  - 13 ( also act as nfsserver)"
>
>              Then I am assuming that your OSS is also a Lustre client.
> It might be useful to search through this list to find out the
> potential pitfalls of mounting Lustre volumes on an OSS.
>
>
>
> siva murugan wrote:
> > We are trying to adopt Lustre in one of our heavily read/write-
> > intensive infrastructures (daily writes: 8 million files, 1TB). The
> > average size of the files written is 1KB (I know Lustre doesn't scale
> > well for small files, but we wanted to analyze the possibility of
> > adopting it).
> >
> > Following are some of the tests conducted to see the difference
> > between writing large and small files:
> >
> > MDT - 1
> > OSTs - 13 (also act as NFS servers)
> > Clients access the Lustre filesystem via NFS (not patchless clients)
> >
> > Test 1:
> >
> > Number of clients - 10
> > Dataset size read/written - 971MB (per client)
> > Number of files in the dataset - 14,000
> > Total data written - 10GB
> > Time taken - 1390s
> >
> > Test 2:
> >
> > Number of clients - 10
> > Dataset size read/written - 1001MB (per client)
> > Number of files in the dataset - 4
> > Total data written - 10GB
> > Time taken - 215s
> >
> >
> > Test 3:
> >
> > Number of clients - 10
> > Dataset size read/written - 53MB (per client)
> > Number of files in the dataset - 14,000
> > Total data written - 530MB
> > Time taken - 1027s
> > MDT was heavily loaded during Test 3 (load average > 25). Since the
> > file size in Test 3 is small (1KB) and the number of files written is
> > large (14,000 x 10 clients), the MDT obviously gets loaded allocating
> > inodes, even though the total data written in Test 3 is only 530MB.
> >
> > Question: Is there any parameter I can tune on the MDT to improve
> > performance when writing a large number of small files?
> >
> > Please help
> >
> > ------------------------------------------------------------------------
> >
> > _______________________________________________
> > Lustre-discuss mailing list
> > Lustre-discuss at lists.lustre.org
> > http://lists.lustre.org/mailman/listinfo/lustre-discuss
> >
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>