[lustre-discuss] Building Best Practices Guide

Kyss Yousef yousefkyssit at gmail.com
Thu Sep 29 14:47:58 PDT 2022


Hello

I'm new to this list.  I've been contracted to develop a best-practices
guide for my client, who runs Lustre in its research lab.

They currently run a number of different file systems in-house, depending
on the use case, so I've created a comparison table and tried to fill in
some of the blanks, but on a few points I'm at a loss.

For the record, I am not a Lustre admin.  I know enough to be dangerous,
which is why I joined this list.  I am "filling in" until they can hire a
replacement storage admin, which is why I'm not able to go ask them these
presumably simple questions.

And before I'm told to go search the documentation: believe me, I have
spent the last two hours combing the internet and the documentation trying
to find answers, and cannot.

So here is my list of questions:

-What does a minimum configuration look like?
For example, what is the minimum MDS/MDT and OSS/OST layout (can the MGS
and MDT share one node, with a single OSS hosting a single OST?), and what
does that imply for minimum storage capacity?

-To what maximum capacity can a Lustre cluster scale?  (I have done some
math and arrived at roughly 1 EB of maximum storage capacity; my arithmetic
is sketched below this list.)

-What is a reasonable read/write throughput figure for a cluster?
-Same question for read/write IOPS.

-Does Lustre support snapshots?  (I know this and many of my other
questions are covered on the wiki hosted at wiki.lustre.org, but I keep
getting an error saying it is down.)

-What protocols does Lustre support?  I know it supports a POSIX client,
but how does it support NFS and SMB?  It looks as though the ZFS pool is
being shared via NFS, but again the wiki is down and I can't access it to
confirm.
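
For transparency, here is the back-of-envelope arithmetic behind my 1 EB
figure, written out as a few lines of Python.  Both limits are assumptions
I pulled from secondhand sources while the wiki was down, so please correct
them if they are wrong:

# Back-of-envelope Lustre capacity estimate.
# NOTE: both limits below are my assumptions, not confirmed figures.
max_osts = 8150         # assumed maximum number of OSTs per file system
max_ost_size_tb = 128   # assumed maximum size of a single OST, in TB

total_tb = max_osts * max_ost_size_tb
total_eb = total_tb / 1_000_000   # 1 EB = 10^6 TB (decimal units)
print(f"~{total_eb:.2f} EB")      # prints "~1.04 EB" with these assumptions

If either limit is out of date, I'd appreciate the current numbers.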

I think that's it.  I hope this guide will help users decide which file
system best suits their specific needs, and help my client reduce time to
insight.

Thanks for your help

Yousef

