[Lustre-discuss] very slow IO using o_direct write on RHEL 5.1

Alex Lee alee at datadirectnet.com
Mon Sep 8 00:51:45 PDT 2008


Does anyone know if there is something inherently slow about O_DIRECT
writes on Linux?

I am running Lustre 1.6.5.1 on a few Dell 2950s. Each OST is capable of
300MB/s and I have 8 OSTs on my FS.

Using buffered I/O I can max out the bandwidth fine, but as soon as I
try a single-file O_DIRECT write I get only 135MB/s, no matter what
stripe count, RPCs-in-flight setting, or Linux sector size I use.

I can get a little more bandwidth using a larger stripe size, but that
only takes me up to 200MB/s. I can't help wondering if something is
holding up the IOPS on a single-client, single-file write. I'm trying to
work out whether it's the Lustre client or just the way Linux handles the IO...
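For anyone wanting to reproduce the striping side of this, the layout can be set per file with lfs before writing (the mount point and values below are illustrative; check `lfs help setstripe` for the exact option spelling on 1.6.x):

```shell
# Stripe a new file across all 8 OSTs with a 4 MiB stripe size
lfs setstripe -c 8 -s 4M /mnt/lustre/testfile

# Verify the layout that was actually applied
lfs getstripe /mnt/lustre/testfile
```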

Does anyone have any settings I might be forgetting on the Linux
server/client? I have /sys/block/sd*/queue/max_sectors_kb set and the
elevator set to noop. I can't think of anything on the Lustre side, since
I'm not even using more than 1-2 RPCs in flight when running.
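For completeness, these are the block-layer tunables I mean (device name sda is illustrative; the sysfs paths are the usual locations, but verify on your kernel). This is a configuration fragment rather than a script:

```shell
# Current elevator: the bracketed entry is the active one
cat /sys/block/sda/queue/scheduler        # e.g. [noop] anticipatory deadline cfq

# Largest request size, in KiB, the kernel will issue to the device
cat /sys/block/sda/queue/max_sectors_kb

# Switch elevator and raise the request size cap (as root)
echo noop > /sys/block/sda/queue/scheduler
echo 1024 > /sys/block/sda/queue/max_sectors_kb
```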

Any help would be really appreciated,
-Alex
