[Lustre-discuss] performance tuning

Andreas Dilger adilger at sun.com
Thu Jul 2 15:06:03 PDT 2009


On Jul 02, 2009  13:34 -0600, Martin Pokorny wrote:
> I'm in the process of evaluating Lustre for a project I'm working on, 
> and I'd like to ask for some advice on tuning my configuration for 
> better performance. For my evaluation work, I've got one MGS/MDS and 
> four OSSes each hosting one OST. This storage cluster was put together 
> using some spare nodes that we had from a small, currently unused 
> compute cluster, and the disks are all single scsi drives. All of the 
> Lustre servers are running 2.6.18-92.1.17.el5_lustre.1.8.0smp kernels, 
> and the clients are patchless. All networking is over 1Gb Ethernet.

Note that using single SCSI disks means you have no redundancy of your
data.  If any disk is lost, and you are striping your files over all
of the OSTs (as it seems from below), then all of your files will
lose data.  That might be fine if Lustre is just used as a scratch
filesystem, but it might also not be what you are expecting.
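
If a single disk failure taking out every file is not acceptable, one
mitigation is to stripe each file over a single OST, so that a failure
only affects the files resident on that OST.  As a rough sketch, a
file's layout can be set at creation time with liblustreapi (the path
is just an example, and depending on the release the header may be
installed as <lustre/lustreapi.h> instead; link with -llustreapi):

/* Create a file striped over a single OST: 1 MB stripe size,
 * default starting OST index (-1), stripe count 1, default
 * RAID0 pattern (0). */
#include <stdio.h>
#include <lustre/liblustreapi.h>

int main(void)
{
        int rc = llapi_file_create("/mnt/lustre/outfile",
                                   1048576, -1, 1, 0);

        if (rc != 0) {
                fprintf(stderr, "llapi_file_create failed: %d\n", rc);
                return 1;
        }
        return 0;
}

The same layout can also be set from the shell with "lfs setstripe"
and inspected with "lfs getstripe".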

> In our application we have an instrument streaming data to a (compute) 
> cluster, which then does some work and writes results to a file, all of 
> which generally has to occur in real time (that is, keep up with the 
> streaming data). The files are written by processes running on the 
> cluster concurrently; that is, for a particular data set, multiple 
> processes are writing to one file. Due to the way the instrument 
> distributes data to the cluster nodes, as well as the format of output 
> files, each cluster process generally writes a relatively small amount 
> of data in a block, but at a high frequency (about every 10-100ms). It 
> might be important to note that the blocks written by a single process 
> are not in general contiguous. The aggregate data rate being written to 
> the output files is approximately 100MB/s at this time, although this 
> may ramp up considerably at a later date.
> 
> While my brief testing with IOR showed acceptable write throughput to 
> the Lustre filesystem, I have been unable to achieve anywhere near that 
> figure with our application doing the writes --- I'm concerned that the 
> write pattern being used is a severely limiting factor. In this 
> situation, does anyone have any advice about what I ought to be looking 
> at to improve performance on Lustre?

Writing small file chunks from many clients to a single file is definitely
one way to have very bad IO performance with Lustre.

Some ways to improve this:
- have the application aggregate writes into larger chunks before
  submitting them to Lustre.  Lustre by default enforces POSIX coherency
  semantics, so small interleaved writes will result in lock ping-pong
  between client nodes if they are all writing to the same file at one
  time (see the aggregation sketch after this list)
- have the application do 4kB O_DIRECT-sized IOs to the file and disable
  locking on the output file.  That will avoid partial-page IO submissions,
  and by disabling locking you will at least avoid the contention between
  the clients (see the O_DIRECT sketch after this list)
- I thought there was also an option to have clients do lockless/uncached
  IO without changing the app, but I can't recall the details on how to
  activate it.  Possibly another of the Lustre engineers will recall.
- have the application write contiguous data?
- add more disks, or use SSDs for the OSTs.  This will improve your
  IOPS rate dramatically.  It probably makes sense to create larger OSTs
  rather than many smaller OSTs, due to lower per-OST overhead (journal,
  connections, etc).
- using MPI-IO might also help, since its collective IO routines can
  aggregate the small per-process blocks into larger requests (see the
  collective-write sketch below)
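
To illustrate the aggregation suggestion, here is a minimal sketch in C
(the 1 MB buffer size is an arbitrary assumption, and it assumes each
record fits in the buffer and that a process's records can be written
back-to-back):

/* Buffer small records in userspace and only call write(2) once a
 * larger chunk has accumulated.  Fewer, larger writes mean fewer
 * lock acquisitions and RPCs on the Lustre client. */
#include <string.h>
#include <unistd.h>

#define AGG_SIZE (1 << 20)              /* 1 MB aggregation buffer */

struct agg_writer {
        int    fd;                      /* open output file */
        size_t used;                    /* bytes buffered so far */
        char   buf[AGG_SIZE];
};

/* Queue one record, flushing the buffer first if it would overflow.
 * Assumes len <= AGG_SIZE. */
static int agg_write(struct agg_writer *w, const void *data, size_t len)
{
        if (w->used + len > AGG_SIZE) {
                if (write(w->fd, w->buf, w->used) < 0)
                        return -1;
                w->used = 0;
        }
        memcpy(w->buf + w->used, data, len);
        w->used += len;
        return 0;
}

/* Push out whatever is left at the end of a run. */
static int agg_flush(struct agg_writer *w)
{
        if (w->used > 0 && write(w->fd, w->buf, w->used) < 0)
                return -1;
        w->used = 0;
        return 0;
}

Since your blocks are not in general contiguous, a real implementation
would need one pwrite(2) per contiguous extent, but the same principle
applies.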
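
The O_DIRECT suggestion looks roughly like this (path and data are
placeholders; O_DIRECT requires the buffer, offset, and length to be
aligned, and 4kB page alignment satisfies that):

/* Write one aligned 4kB block with O_DIRECT, bypassing the client
 * page cache and avoiding partial-page read-modify-write. */
#define _GNU_SOURCE                     /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096

int main(void)
{
        void *buf;
        int fd;

        /* O_DIRECT needs an aligned buffer. */
        if (posix_memalign(&buf, BLK, BLK) != 0)
                return 1;
        memset(buf, 'x', BLK);

        fd = open("/mnt/lustre/outfile",
                  O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
                return 1;

        /* Exactly one 4kB block at a 4kB-aligned offset. */
        if (pwrite(fd, buf, BLK, 0) != BLK)
                return 1;

        close(fd);
        free(buf);
        return 0;
}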
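
Finally, a sketch of the MPI-IO route (again, the path is an example):

/* Collective write: every rank contributes one block to a shared
 * file, and the MPI-IO layer (e.g. ROMIO) is free to aggregate the
 * small blocks into a few large, well-aligned requests per node. */
#include <mpi.h>
#include <string.h>

#define BLK 4096

int main(int argc, char **argv)
{
        char buf[BLK];
        MPI_File fh;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        memset(buf, 'a' + (rank % 26), BLK);

        MPI_File_open(MPI_COMM_WORLD, "/mnt/lustre/outfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Rank i writes block i; the _all variant makes the write
         * collective so ranks can cooperate on aggregation. */
        MPI_File_write_at_all(fh, (MPI_Offset)rank * BLK, buf, BLK,
                              MPI_BYTE, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
}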

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



