[lustre-discuss] How does Lustre client side caching work?

Joakim Ziegler joakim at terminalmx.com
Tue Jul 25 11:09:52 PDT 2017


Hello, I'm pretty new to Lustre. We're looking at setting up a Lustre
cluster for storage of media assets (something in the 0.5-1PB range to
start with, maybe 6 OSSes in HA pairs, running on our existing FDR IB
network). It looks like a good match for our needs; however, there's an
area I've been unable to find details about. Note that I'm just
investigating for now and have no running Lustre setup.

There are plenty of references to Lustre using client-side caching, and to
how the Distributed Lock Manager makes this work. However, I can find
almost no information about how the client-side cache actually works. When
I first heard it mentioned, I imagined something like the ZFS L2ARC, where
you can add a device (say, a couple of SSDs) to the client and point Lustre
at it to use as a cache. But some references I've come across just talk
about the normal kernel page cache, which is probably smaller and less
persistent than what I'd like for our usage.
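
For what it's worth, the only client-side cache knobs I've found in the
manual so far seem to refer to that page cache. Something like the
following, I believe, though I can't verify it without a running system,
so the exact parameter names may be off:

  # read cache: how much file data the client may keep in its page cache
  lctl get_param llite.*.max_cached_mb
  lctl set_param llite.*.max_cached_mb=32768

  # write-back cache: dirty data buffered per OST before it must be flushed
  lctl get_param osc.*.max_dirty_mb
  lctl set_param osc.*.max_dirty_mb=512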

Could anyone enlighten me? I have a large dataset, but clients typically
use a small part of it at any given time, and use it quite intensively, so
a client-side cache (either a read cache or, ideally, a writeback cache)
would likely reduce network traffic and server load quite a bit. So far
we've been using NFS over RDMA and fscache on our existing file servers to
get a read cache that does roughly this, and it's been quite effective, so
I imagine we could also benefit from something similar as we move to Lustre.
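
For context, our current setup is roughly the following (hostname, export
and mount point are placeholders, and the exact options of course depend
on the kernel and NFS version):

  # cachefilesd provides the on-disk backing store for fscache,
  # with its cache directory sitting on a local SSD
  systemctl start cachefilesd

  # NFS mount over RDMA, with the fscache client enabled via the fsc option
  mount -t nfs -o vers=3,rdma,port=20049,fsc fileserver:/assets /mnt/assets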

-- 
Joakim Ziegler  -  Supervisor de postproducción  -  Terminal
joakim at terminalmx.com   -   044 55 2971 8514   -   5264 0864