[Lustre-discuss] Performance dropoff for a nearly full Lustre file system

Mike Selway mselway at cray.com
Thu Jan 15 08:40:45 PST 2015


Hello Kilian,
	Thanks for the link... good presentation. The summary, as I read it, is that a multi-OST system is just as encumbered by basic disk topology as any other type of file system. One thing that would help is a continuous monitoring capability that seeks out fragmented files or fragmented free space and shifts data around (a form of defrag without administrator involvement). That would cost some of the aggregate performance, but it would keep the file system cleaner over time.

	The other answer is to move past the idea that the file system is the only place for data, and to expand the architecture with Robin Hood and an out-of-band "repository" file system. Then, arguably, I can keep performance optimized through the combination of the active Lustre FS and the "holding" FS, and never need to let the Lustre side reach 99% full.
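
	For illustration only (not anything we run today), here is a rough Python sketch of the kind of monitoring loop I have in mind: it parses 'lfs df' output, flags any OST above a fill threshold, and marks the point where a real tool would select files (e.g. via 'lfs find --ost') and hand them to lfs_migrate. The 90% threshold and the migration hand-off are assumptions on my part, not a recommendation.

    #!/usr/bin/env python3
    # Sketch: watch OST fill levels and report candidates for rebalancing.
    # Assumes the usual 'lfs df' column layout (UUID, blocks, used, avail,
    # use%, mount point); the 90% threshold is an illustrative assumption.
    import subprocess

    FILL_THRESHOLD = 90  # percent used; hypothetical cutoff

    def ost_usage():
        """Return {ost_uuid: percent_used} parsed from 'lfs df'."""
        out = subprocess.run(["lfs", "df"], capture_output=True,
                             text=True, check=True)
        usage = {}
        for line in out.stdout.splitlines():
            fields = line.split()
            # OST lines end with something like: ... 87% /mnt/lustre[OST:3]
            if "OST" in line and len(fields) >= 5 and fields[4].endswith("%"):
                usage[fields[0]] = int(fields[4].rstrip("%"))
        return usage

    if __name__ == "__main__":
        for ost, pct in sorted(ost_usage().items()):
            if pct >= FILL_THRESHOLD:
                # A real tool would pick files on this OST here and feed
                # them to lfs_migrate to rebalance the free space.
                print(f"{ost} is {pct}% full -- candidate for migration")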

Thanks!
Mike

Mike Selway | Sr. Storage Architect (TAS) | Cray Inc.
Work +1-301-332-4116 | mselway at cray.com
146 Castlemaine Ct, Castle Rock, CO 80104 | Check out Tiered Adaptive Storage (TAS)!




> -----Original Message-----
> From: Kilian Cavalotti [mailto:kilian.cavalotti.work at gmail.com]
> Sent: Wednesday, January 14, 2015 8:56 PM
> To: Dilger, Andreas
> Cc: Mike Selway; lustre-discuss at lists.lustre.org
> Subject: Re: [Lustre-discuss] Performance dropoff for a nearly full Lustre file
> system
> 
> Hi all,
> 
> On Wed, Jan 14, 2015 at 7:27 PM, Dilger, Andreas <andreas.dilger at intel.com>
> wrote:
> > Of course, fragmentation also plays a role, which is why ldiskfs will reserve 5%
> of the disk by default to avoid permanent performance loss caused by
> fragmentation if the filesystem gets totally full.
> 
> Ashley Pittman gave a presentation at LAD'13 about the influence of
> fragmentation on performance.
> http://www.eofs.eu/fileadmin/lad2013/slides/03_Ashley_Pittman_Fragmentation_lad13.pdf
> 
> Cheers,
> --
> Kilian
