[lustre-discuss] NFS Client Attributes caching - equivalent feature/config in Lustre
adilger at whamcloud.com
Wed May 20 13:54:04 PDT 2020
I just found this old email in my spam folder...
On Apr 21, 2020, at 14:54, Pinkesh Valdria <pinkesh.valdria at oracle.com> wrote:
Does Lustre have mount options to mimic the NFS mount option behavior listed below?
I know that in most cases Lustre would perform much better than NFS and can scale to support many clients in parallel. I have a use case where only a few clients access the filesystem, and the files are very small but number in the millions and are very infrequently updated. The files are stored on an NFS server and mounted on the clients with the mount options below, which causes file attributes/metadata to be cached on the client, reducing the number of calls to the metadata server and delivering better performance.
NFS mount options
type nfs (rw,nolock,nocto,actimeo=900,nfsvers=3,proto=tcp)
Lustre will already cache file attributes and data on the client, since it is totally coherent, and doesn't depend on random timeouts like NFS to decide whether the client should cache the data or not.
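For comparison with the NFS mount line above, a Lustre client mount needs no cache-related options at all, since caching is coherent and on by default. A minimal sketch (the MGS node name "mgs@tcp", filesystem name "testfs", and mount point are placeholders, not from the original post):

```shell
# Mount a Lustre filesystem on a client. There is no actimeo/ac/noac
# equivalent to set: attribute and data caching is enabled by default
# and kept coherent by the distributed lock manager (DLM).
mount -t lustre mgs@tcp:/testfs /mnt/testfs
```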
A custom proprietary application that compiles (via the make command) against some of these files takes 20-24 seconds to run. The same command, when run on the same files stored in a BeeGFS parallel filesystem, takes 80-90 seconds (about 4x slower), mainly because BeeGFS does no client-side caching and the client has to make many more metadata calls than NFS does with cached file attributes.
I already tried BeeGFS, and I am asking this question to determine whether Lustre performance would be better than NFS for very-small-file workloads (50-byte, 200-byte, 2KB files) with 5 million files spread across nested directories. Does Lustre have mount options to mimic the NFS mount option behavior listed below? Or is there an optional feature in Lustre to achieve this caching behavior?
Yes, Lustre will already/always have the desired caching behavior by default, no settings needed. Some tuning might be needed if the working set is so large (10,000s of files) that the locks protecting the data are cancelled because of the sheer volume of data or because the files are unused for a long time (i.e. over 1h).
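The lock-related tuning mentioned above can be done with the standard client-side DLM lock LRU tunables; a sketch with illustrative values (the values are not recommendations from this thread, and the lru_max_age unit is milliseconds on recent Lustre releases):

```shell
# Allow more locks (and therefore more cached attributes/data) to be
# held per namespace before the LRU starts cancelling them:
lctl set_param ldlm.namespaces.*.lru_size=10000

# Keep unused locks cached longer than the default (~65 minutes);
# 14400000 ms = 4 hours, an example value only:
lctl set_param ldlm.namespaces.*.lru_max_age=14400000
```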
Since Lustre can be downloaded for free, you could always give your application workload a test, to see what the performance is.
For very small files, you might want to consider using Data-on-MDT (DoM) by running "lfs setstripe -E 64k -L mdt -E 64M -c 1 -E eof -c -1 $dir" on the test directory (or on the root directory of the filesystem) to have it store these tiny files directly on the MDT. In that case you would need enough free space on the MDT to hold all of the files.
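Spelled out against a hypothetical mount point and test directory, the DoM layout command and a quick check of MDT free space might look like:

```shell
# Composite (PFL) layout: the first 64KiB of each file lives on the MDT
# (DoM component), the next component up to 64MiB uses a single OST
# stripe, and anything beyond that is striped across all OSTs.
lfs setstripe -E 64k -L mdt -E 64M -c 1 -E eof -c -1 /mnt/testfs/testdir

# Verify the MDT(s) have enough free space to hold the small files:
lfs df /mnt/testfs
```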
ac / noac
Selects whether the client may cache file attributes. If neither option is specified (or if ac is specified), the client caches file attributes.
For my custom applications, caching file attributes is fine (no negative impact), and it helps improve NFS performance.
Using actimeo sets all of acregmin, acregmax, acdirmin, and acdirmax to the same value. If this option is not specified, the NFS client uses the defaults for each of these options listed above.
For my applications it is okay to cache file attributes/metadata for a few minutes (e.g. 5 minutes) by setting this value; it can reduce the number of metadata calls made to the server. Especially with filesystems storing many small files, those calls are a huge performance penalty that can be avoided.
When mounting servers that do not support the NLM protocol, or when mounting an NFS server through a firewall that blocks the NLM service port, specify the nolock mount option. Specifying the nolock option may also be advised to improve the performance of a proprietary application which runs on a single client and uses file locks extensively.
Appreciate any guidance.
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
Principal Lustre Architect