[Lustre-devel] LustreFS performance

Andreas Dilger adilger at sun.com
Thu Mar 5 13:27:00 PST 2009


On Mar 04, 2009  12:28 -0500, Jeff Darcy wrote:
> Oleg Drokin wrote:
>> On Mar 2, 2009, at 3:45 PM, Andreas Dilger wrote:
>>> Note that strictly speaking we need to use ldiskfs on a ramdisk, not
>>> tmpfs, because we don't have an fsfilt_tmpfs.
>>
>> The idea was loop device on tmpfs, I think.
>
> FYI, this is exactly what we do with our FabriCache feature - i.e. both
> the MDT and the OSTs are actually loopback files on tmpfs.

The problem with using a loop device instead of a ramdisk is that you now
have two extra layers of indirection - MDS->ldiskfs->loop->tmpfs->RAM
instead of MDS->ldiskfs->RAM.  The drawback (or possibly benefit) of a
ramdisk is that it consumes a fixed amount of RAM and is not "sparse"
(AFAIK; that may have changed since I last looked into this).  That said,
in the loop->tmpfs case, once a block has been written by mke2fs or by
ldiskfs its tmpfs page is never freed again either, so the sparseness
only buys you a marginal benefit.

> Modulo a few issues with preallocated write space eating all storage,
> leaving none for actual data, it works rather well, producing high
> performance numbers and giving LNDs a good workout.  BTW, the loopback
> driver does copies and is disturbingly single-threaded, which can
> create a bottleneck.  This can be worked around with multiple
> instances per node, though.

Even better, if you have some development skills, would be to implement
(or possibly resurrect) an fsfilt-tmpfs layer.  Since tmpfs isn't going
to be recoverable anyway (I assume you just reformat from scratch after
a crash), you can make all of the transaction handling no-ops and
implement only the minimal interfaces needed to work.  That would allow
unlinked files to release space back to tmpfs, and would also avoid the
fixed allocation overhead and journaling of ldiskfs, probably saving
you 5% of RAM (more on the MDS) and a LOT of memcpy() overhead.
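
To sketch the idea (the fsfilt_operations field names and prototypes
below are approximated from memory of the ldiskfs fsfilt module rather
than copied from the tree, so treat this as illustrative, not
definitive):

/* fsfilt-tmpfs sketch: since tmpfs is never recovered after a crash,
 * the journal/transaction hooks can all be no-ops.  Prototypes are
 * approximations of the fsfilt interface, for illustration only. */
#include <linux/module.h>
#include <linux/fs.h>
#include <lustre_fsfilt.h>

/* "start a transaction": nothing to journal on tmpfs, so just return
 * a non-NULL dummy handle that fsfilt_tmpfs_commit() will accept */
static void *fsfilt_tmpfs_start(struct inode *inode, int op,
                                void *desc_private, int logs)
{
        return (void *)1;
}

/* "commit": there is no journal, so there is nothing to flush */
static int fsfilt_tmpfs_commit(struct inode *inode, void *handle,
                               int force_sync)
{
        return 0;
}

static struct fsfilt_operations fsfilt_tmpfs_ops = {
        .fs_type   = "tmpfs",
        .fs_owner  = THIS_MODULE,
        .fs_start  = fsfilt_tmpfs_start,
        .fs_commit = fsfilt_tmpfs_commit,
        /* .fs_setattr, .fs_set_md, .fs_get_md, etc. would still need
         * real implementations on top of the tmpfs inode operations */
};

static int __init fsfilt_tmpfs_init(void)
{
        return fsfilt_register_ops(&fsfilt_tmpfs_ops);
}

static void __exit fsfilt_tmpfs_exit(void)
{
        fsfilt_unregister_ops(&fsfilt_tmpfs_ops);
}

module_init(fsfilt_tmpfs_init);
module_exit(fsfilt_tmpfs_exit);
MODULE_LICENSE("GPL");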

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



