[Lustre-discuss] Client directory entry caching

Oleg Drokin oleg.drokin at oracle.com
Mon Aug 2 21:21:27 PDT 2010


Hello!

On Jul 30, 2010, at 7:20 AM, Daire Byrne wrote:
> Ah yes... that makes sense. I recall the opencache gave a big boost in
> performance for NFS exporting but I wasn't sure if it had become the
> default. I haven't been keeping up to date with Lustre developments.

It has been the default for NFS export for quite some time.

> So even with the metadata going over NFS the opencache in the client
> seems to make quite a difference (I'm not sure how much the NFS client
> caches though). As expected I see no mdt activity for the NFS export
> once cached. I think it would be really nice to be able to enable the
> opencache on any lustre client. A couple of potential workloads that I

A simple workaround to enable opencache on a specific client would be
to add cr_flags |= MDS_OPEN_LOCK; in mdc/mdc_lib.c:mds_pack_open_flags(),
as sketched below.
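
For illustration only: the function name and the MDS_OPEN_LOCK flag are
from the actual tree, but the rest of the body below is just a sketch of
the surrounding code, not the literal source:

    /* mdc/mdc_lib.c -- sketch; the real function translates many
     * more VFS open flags than shown here. */
    static __u32 mds_pack_open_flags(__u32 flags)
    {
            __u32 cr_flags = 0;

            /* ... existing translation of the client's open flags ... */

            /* Always ask the MDS for the open lock, so this client
             * can cache opens (normally only done when NFS-exporting): */
            cr_flags |= MDS_OPEN_LOCK;

            return cr_flags;
    }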

Or, if you want it to be cluster-wide, in mds/mds_open.c:mds_open()
make every condition that checks for MDS_OPEN_LOCK evaluate to true;
see the sketch after this paragraph.
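
Again only a sketch: the file and function are real, but the field name
I use below (rec->ur_flags) is a guess at the shape of the surrounding
code, not the literal source:

    /* mds/mds_open.c:mds_open() -- wherever the client-requested
     * flags are tested for MDS_OPEN_LOCK, force the test to pass: */
    -       if (rec->ur_flags & MDS_OPEN_LOCK)
    +       if (1) /* was: rec->ur_flags & MDS_OPEN_LOCK */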

I guess we really need to have an option for this, but I am not sure
if we want it on the client, server, or both.

> can think of that would benefit are WAN clients and clients that need
> to do mainly metadata (e.g. scanning the filesystem, rsync --link-dest
> hardlink snapshot backups). For the WAN case I'd be quite interested

Open is a very narrow metadata case, so if your workload does metadata
operations but no opens, you would get zero benefit from the open cache.
Also, taking this extra lock puts some extra CPU load on the MDS. But if
we go this far, we can then somewhat simplify rep-ack and hold it for a
much shorter time in a lot of cases, which would greatly help WAN
workloads that happen to create files in the same directory from many
nodes, for example (see bug 20373, first patch).
Just be aware that testing with more than 16,000 clients at ORNL clearly
shows degradation at LAN latencies.

Bye,
    Oleg

