[Lustre-discuss] small file performance
Robin Humble
rjh+lustre at cita.utoronto.ca
Sun Jan 6 02:13:07 PST 2008
On Sat, Jan 05, 2008 at 11:08:10AM -0500, Aaron Knister wrote:
> Striping is turned off. Are there any other optimizations you know of to
> increase the speed of metadata operations?
having blindingly fast disks as the metadata backing store improves
performance greatly. Lustre's MDS also benefits from as many fast cores
as you can throw at it.
as a temporary measure you could put the metadata backing store on a
ramdisk-backed loop device to see if that makes a difference to your
tests, e.g.:
mkdir -p /mnt/ramdisk
mount -t tmpfs -o rw,size=3072M,mode=755 none /mnt/ramdisk   # tmpfs honours size=; ramfs ignores it
dd if=/dev/zero of=/mnt/ramdisk/mds bs=1M count=3000
losetup /dev/loop0 /mnt/ramdisk/mds
mkfs.lustre --fsname=testfs --mdt --mgs --mkfsoptions="-i 1024" --reformat /dev/loop0
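to actually bring the resulting MDT up for a test run and tear it down
again, the steps could look roughly like this - a sketch only, the mount
point /mnt/mdt is just an example, it needs root, and everything on the
ramdisk vanishes at reboot:

```shell
# start the combined MDS/MGS on the ramdisk-backed loop device
mkdir -p /mnt/mdt
mount -t lustre /dev/loop0 /mnt/mdt

# ... run the metadata tests against the client mounts ...

# tear it all down again
umount /mnt/mdt
losetup -d /dev/loop0
umount /mnt/ramdisk
```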
hmmm... you say your other option is NFS - are you comparing to NFS in
sync mode or async?
here's a toy benchmark - untar, build and delete a Linux 2.6.23 kernel
tree (tar xfj, make -j 6, rm -rf). times in seconds (best of at least 2
runs):

           Lustre   NFS sync   NFS async   local disk   loopback on Lustre
  tar         377       1157          22            9                    9
  build       719        916         378          290                  291
  rm           24        449          15            1                    1
Note that the NFS server and the Lustre MDS and the local disk node are
the same machine in this test, but the NFS is over GigE to a SAS raid1
pair whilst Lustre is over IB to many disks, so it's not an
apples-to-apples comparison by any means.
It does however indicate the differences between NFS settings, and also
that Lustre can be faster than NFS when both are actually writing to
disk and not just caching in the server's ram - which is what NFS async
does. caching unwritten data in the server's ram leads to data loss if
the server crashes.
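the table's numbers make the point directly when expressed as (integer)
slowdowns relative to local disk, e.g. computed in plain shell:

```shell
# slowdown of Lustre and NFS sync vs. local disk, per operation,
# using the times (in seconds) from the table above
for row in "tar 377 1157 9" "build 719 916 290" "rm 24 449 1"; do
  set -- $row
  echo "$1: Lustre $(( $2 / $4 ))x, NFS sync $(( $3 / $4 ))x slower than local disk"
done
```

so for the metadata-heavy tar and rm phases Lustre is tens of times
slower than local disk, but NFS sync is far worse again, while the
compute-bound build phase only differs by small factors.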
no global filesystem is going to be as fast as local disk on metadata
operations, but the last column of the table shows a neat trick where
Lustre can give you a local filesystem that's even better than a local
disk - it has all the metadata speed of a local disk, but also the
bandwidth of striped Lustre :-)
I created a big striped file on Lustre, and mounted it on a node as a
loopback ext2 filesystem. Lustre sees no metadata traffic - just one
file open on one node. we're going to use this technique for 'local'
scratch disks on diskless nodes.
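a sketch of that trick, assuming the client mounts the Lustre
filesystem at /mnt/testfs (path, image size and mount point are all
illustrative, and the mkfs/mount steps need root):

```shell
# create a file striped across every OST (-c -1 = stripe over all OSTs)
lfs setstripe -c -1 /mnt/testfs/scratch.img

# allocate 10G of space for it
dd if=/dev/zero of=/mnt/testfs/scratch.img bs=1M count=10240

# put an ext2 filesystem inside the file (-F because it's not a block device)
mkfs.ext2 -F /mnt/testfs/scratch.img

# mount it via a loop device; metadata ops now stay local to this node,
# while the data still goes over striped Lustre
mkdir -p /scratch
mount -o loop /mnt/testfs/scratch.img /scratch
```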
cheers,
robin