[Lustre-discuss] Lustre Performance Data for Simultaneous Reads and Writes from Multiple Clients

Klaus Steden klaus.steden at thomson.net
Fri Jul 25 14:32:26 PDT 2008


Hi Daniel,

I don't believe so.

Various people have posted informal results from their own tests in the
field, but those results have never been formally collated. There are some
rough numbers on the Wikipedia page for CFS covering GigE, InfiniBand, and
10GigE, but they assume particular configurations: disk throughput,
striping, OST counts, and so on.

Because Lustre is so versatile, information like this can be hard to nail
down: a GigE network with ATA drives is obviously not going to match the
performance of 8 Gb/s Fibre Channel, but both are equally valid Lustre
configurations.
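
That said, if nobody has the numbers handy, collecting the data Dan
describes below isn't much work: run a small timing harness on every client
at roughly the same moment and line the timestamps up afterwards. Here is a
minimal sketch (the mount point /mnt/lustre, the file size, and the script
name iotime.py are placeholders, not anything from a real test):

#!/usr/bin/env python
# Minimal per-client I/O timing sketch, not a tuned benchmark.
# Each client runs this at roughly the same time against its own file:
#
#   client1$ python iotime.py write /mnt/lustre/bench.client1
#   client2$ python iotime.py read  /mnt/lustre/bench.client2
#
# Lining up the printed start/stop wall-clock times then shows how the
# concurrent readers and writers overlap and affect one another.

import os
import sys
import time

CHUNK = 1 << 20          # 1 MB per I/O call
TOTAL = 512 << 20        # 512 MB file, within the 250 MB - 1 GB range

def timed_write(path):
    buf = b"\0" * CHUNK
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually hit the OSTs
    return start, time.time(), TOTAL

def timed_read(path):
    nbytes = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            nbytes += len(data)
    return start, time.time(), nbytes

if __name__ == "__main__":
    mode, path = sys.argv[1], sys.argv[2]
    if mode == "write":
        start, stop, nbytes = timed_write(path)
    else:
        start, stop, nbytes = timed_read(path)
    mb = nbytes / float(1 << 20)
    print("%s %s: start=%.3f stop=%.3f elapsed=%.2fs bandwidth=%.1f MB/s"
          % (mode, path, start, stop, stop - start, mb / (stop - start)))

The usual caveat applies: a client re-reading a file it just wrote is
measuring its own page cache, not Lustre, so drop caches first (or have
each client read a file written by a different client) before the read
pass.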

hth,
Klaus

On 7/25/08 9:03 AM, "Daniel Ferber" <Daniel.Ferber at Sun.COM> did etch on
stone tablets:

> 
> I'm working with someone who is modeling a customer system, and wants to
> partially model Lustre performance as part of that.
> 
> What they would like is the following data, or something similar, for a given
> network. I say "given" in the sense that you can pick any network config, any
> stripe size, and any file size (anywhere from 250 MB to 1 GB), and then supply
> the following data:
> 
> * From a single client, the read I/O start and stop time, or "bandwidth"
> * From a single client, the write I/O start and stop time, or "bandwidth"
> * Then introduce additional clients doing reads or writes and study the
> impact: for example, one client writing and four clients reading
> simultaneously, with their start/stop times or bandwidth, and then one
> client reading and four clients writing simultaneously, with their
> individual I/O bandwidths
> 
> The objective really is to know how concurrent reads and writes impact Lustre
> performance. 
> 
> Does this data exist, or would someone need to go and collect it?
> 
> Thanks,
> Dan
> 

