[Lustre-discuss] ldiskfs performance vs. XFS performance

Bernd Schubert bs_lists at aakef.fastmail.fm
Mon Oct 18 05:42:47 PDT 2010


Hello Michael,

On Monday, October 18, 2010, Michael Kluge wrote:
> Hi list,
> 
> we have Lustre 1.8.3 running on a DDN 9900. One LUN (10 discs) formatted
> with XFS shows 400 MB/s if written with a single 'dd' and large block
> sizes. One LUN formatted and mounted with ldiskfs (the ext3-based one
> that is the default in 1.8.3) shows 110 MB/s. Is this the expected
> behaviour? It looks a bit low compared to XFS.

Yes, unfortunately that is not entirely unexpected with the upstream Oracle 
versions. Firstly, please send a mail to support at ddn.com and ask for the udev 
tuning rpm (please add [Lustre] in the subject line).

Then see this MMP issue here:
https://bugzilla.lustre.org/show_bug.cgi?id=23129

which requires 
https://bugzilla.lustre.org/show_bug.cgi?id=22882

(Lustre requires contributor agreements, and since self-signed agreements no 
longer work, this presently causes some headaches and legacy issues - as 
always with bureaucracy, it takes ages to sort out, so landing our patches is 
currently delayed.)

In order to prevent data corruption in case of controller failures, you should 
also disable the S2A write-back cache and instead enable async journals in 
Lustre (enabled by default in the DDN Lustre versions).
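On the Lustre side, async journal commits can be toggled at runtime through 
the obdfilter sync_journal parameter. A sketch, assuming the Lustre 1.8 proc 
parameter layout on the OSS:

```shell
# Enable asynchronous journal commits on all OSTs served by this OSS
# (0 = async commits, 1 = synchronous; DDN builds default to async).
lctl set_param obdfilter.*.sync_journal=0

# Verify the current setting
lctl get_param obdfilter.*.sync_journal
```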

> 
> We think with help from DDN we did everything we can from a hardware
> perspective. We formatted the LUN with the correct striping and stripe
> size, DDN adjusted some controller parameters and we even put the file
> system journal on a RAM disk. The LUN has 16 TB capacity. I formatted
> only 7 TB for the moment due to the 8 TB limit.

You should use the ext4-based ldiskfs to get more than 8 TiB. Our releases use 
that as the default.
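As a sketch only (the exact packages providing the ext4-based ldiskfs backend 
vary by distribution and release), once that backend is installed you could 
re-format without the --device-size cap and use the whole 16 TB LUN:

```shell
# Hypothetical re-format with the ext4-based ldiskfs backend installed;
# dropping --device-size lets mkfs use the full LUN capacity.
mkfs.lustre --ost --fsname=luram --mgsnode=$MDS_NID \
    --mkfsoptions="-E stride=32,stripe-width=256 -b 4096" \
    /dev/disk/by-path/...
```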

> 
> This is what I did:
> 
> MDS_NID=IP at SOMEHWERE
> RAM_DEV=/dev/ram1
> dd if=/dev/zero of=$RAM_DEV bs=1M count=1000
> mke2fs -O journal_dev -b 4096 $RAM_DEV
> 
> mkfs.lustre  --device-size=$((7*1024*1024*1024)) --ost --fsname=luram
> --mgsnode=$MDS_NID --mkfsoptions="-E stride=32,stripe-width=256 -b 4096
> -j -J device=$RAM_DEV" /dev/disk/by-path/...
> 
> mount -t ldiskfs /dev/disk/by-path/... /mnt/ost_1
> 
> Is there a way to push the bandwidth limit for a single data stream any
> further?
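As a side note, the stride=32,stripe-width=256 values in your mkfsoptions 
follow directly from the RAID geometry. A quick sanity check, assuming a 
128 KiB per-disc segment size and 8 data discs in the 10-disc (8+2) LUN:

```shell
# ext3/ext4 stride and stripe-width are counted in filesystem blocks.
block_size=4096                      # matches -b 4096
segment_size=$((128 * 1024))         # assumed 128 KiB per-disc chunk
data_discs=8                         # 10-disc LUN, 8+2 RAID6

stride=$((segment_size / block_size))    # blocks per disc chunk
stripe_width=$((stride * data_discs))    # blocks per full stripe

echo "stride=$stride stripe-width=$stripe_width"
```

If DDN configured a different segment size on the 9900, the same arithmetic 
gives the values to pass to mkfs.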

While it could complicate things with your existing support arrangements, you 
could use our DDN Lustre releases:

http://eu.ddn.com:8080/lustre/lustre/1.8.3/ddn3.3/


Hope it helps,
Bernd


-- 
Bernd Schubert
DataDirect Networks


