[lustre-discuss] Lustre and Elasticsearch
johnbent at gmail.com
Sun Nov 26 20:03:21 PST 2017
How does the lock manager avoid disk IO? Does that mean locks don’t survive an MDS0 failure?
> On Nov 26, 2017, at 8:29 PM, Dilger, Andreas <andreas.dilger at intel.com> wrote:
> The flock functionality only affects applications that are actually using it. It does not add any overhead for applications that do not use flock.
> There are two flock options:
> - localflock, which only keeps locking on the local client node and is sufficient for applications that only run on a single node
> - flock, which adds locking between applications on different clients mounted with this option. Use this if you have a distributed application running on multiple clients that controls its file access via flock (e.g. producer/consumer).
> The overhead itself depends on how much the application actually uses flock. The lock manager runs on MDS0 and uses Lustre RPCs (which can run at 100k/s or higher); it does not involve any disk IO.
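> As a rough illustration (just a sketch, not Elasticsearch's actual locking code; the lock-file path below is a placeholder), a producer/consumer style application coordinating through a lock file on a Lustre mount might do something like this in Java:
>
>   import java.nio.channels.FileChannel;
>   import java.nio.channels.FileLock;
>   import java.nio.file.Paths;
>   import java.nio.file.StandardOpenOption;
>
>   public class FlockExample {
>       public static void main(String[] args) throws Exception {
>           // Hypothetical lock file on a Lustre mount point
>           try (FileChannel ch = FileChannel.open(
>                   Paths.get("/mnt/lustre/shared/work.lock"),
>                   StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
>               // Advisory lock; on Lustre this needs the flock or localflock mount option
>               FileLock lock = ch.tryLock();
>               if (lock != null) {
>                   // ... work that must be exclusive across clients goes here ...
>                   lock.release();
>               }
>           }
>       }
>   }
>
> With localflock, tryLock() is only coherent among processes on the same client; with flock, it is coherent across all clients, at the cost of the lock RPCs described above.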
> Cheers, Andreas
> On Nov 26, 2017, at 12:03, E.S. Rosenberg <esr+lustre at mail.hebrew.edu> wrote:
>> Hi Torsten,
>> Thanks that worked!
>> Do you or anyone on the list know if/how flock affects Lustre performance?
>> Thanks again,
>>> On Tue, Nov 21, 2017 at 9:18 AM, Torsten Harenberg <torsten.harenberg at cern.ch> wrote:
>>> Hi Eli,
>>> On 21.11.17 at 01:26, E.S. Rosenberg wrote:
>>> > So I was wondering: would this issue be solved by Lustre bindings for
>>> > Java, or is this a way of locking that isn't supported by Lustre?
>>> I know nothing about Elasticsearch, but have you tried to mount Lustre
>>> with "flock" in the mount options?
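>>> For example (a sketch only; the MGS NID "mgs@tcp" and fsname "lustre" below are placeholders for your site's values):
>>>
>>>   mount -t lustre mgs@tcp:/lustre /mnt/lustre -o flock
>>>
>>> or the equivalent /etc/fstab entry:
>>>
>>>   mgs@tcp:/lustre  /mnt/lustre  lustre  flock,_netdev  0 0
>>>
>>> You can check whether an existing client mount already has flock or localflock enabled by looking at its options in /proc/mounts.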
>>> Dr. Torsten Harenberg      Torsten.Harenberg at cern.ch
>>> Bergische Universitaet
>>> Fakultät 4 - Physik        Tel.: +49 (0)202 439-3521
>>> Gaussstr. 20               Fax : +49 (0)202 439-2811
>>> 42097 Wuppertal            @CERN: Bat. 1-1-049
>>> Of course it runs NetBSD   http://www.netbsd.org