[lustre-discuss] billions of 50k files
Brian Andrus
toomuchit at gmail.com
Wed Nov 29 14:31:04 PST 2017
All,
I have always seen Lustre as a good solution for large files and not the
best for many small files.
Recently, I have seen a request for a small Lustre system (2 OSSes, 1
MDS) that would hold billions of files averaging 50k-100k each.
It seems to me that for this to be worthwhile, the block sizes on the
disks would need to be small, but even then, with TCP overhead and inode
limitations, it may still not perform all that well (compared to larger
files).
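To put rough numbers on it (the file count, average size, and per-inode
space below are just assumptions for illustration, not anything from the
request itself):

```python
# Back-of-the-envelope sizing for "billions of ~50k-100k files".
# All inputs are assumed example values, not known requirements.

n_files = 2_000_000_000      # "billions" -> assume 2 billion files
avg_size = 75 * 1024         # 50k-100k average -> assume 75 KiB
inode_size = 1024            # assumed MDT space consumed per inode

total_data_tib = n_files * avg_size / 2**40
mdt_inode_tib = n_files * inode_size / 2**40

print(f"total data:      {total_data_tib:.1f} TiB")
print(f"MDT inode space: {mdt_inode_tib:.1f} TiB")
```

Even under those assumptions, the MDT alone needs on the order of a
couple of TiB just for inodes, before any data is stored there.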
Am I off here? Have there been any developments in Lustre that help
this scenario (beyond small files being stored on the MDT directly)?
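For reference, the MDT-direct storage mentioned above is Data-on-MDT
(Lustre 2.11+), which can be enabled per directory with a PFL layout.
A sketch, where the 64 KiB threshold and the path are just example
values:

```shell
# Files up to 64 KiB live entirely on the MDT (avoiding OST RPCs);
# anything larger spills into a normal single-stripe OST component.
# Requires Lustre 2.11 or later; the directory is just an example.
lfs setstripe -E 64K -L mdt -E -1 -S 1M -c 1 /mnt/lustre/smallfiles
```

Whether that helps at this scale still depends on how much MDT space
you can afford, since the small-file data then consumes MDT capacity.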
Thanks for any insight,
Brian Andrus