[lustre-discuss] Lustre and Elasticsearch

Dilger, Andreas andreas.dilger at intel.com
Sun Nov 26 19:29:41 PST 2017


The flock functionality only affects applications that are actually using it. It does not add any overhead for applications that do not use flock.

There are two flock options:

 - localflock, which keeps locking local to each client node and is sufficient for applications that only run on a single node
 - flock, which adds coherent locking between applications on different clients mounted with this option. Use this if you have a distributed application running on multiple clients that coordinates its file access via flock (e.g. producer/consumer); see the mount sketch below.
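
As a concrete illustration (the MGS NID, filesystem name, and mount point below are placeholders, not taken from this thread), the option is chosen when the client mounts the filesystem:

  # coherent locking across all clients that mount with this option
  mount -t lustre -o flock mgs@tcp0:/lustrefs /mnt/lustre

  # locking kept local to each client node
  mount -t lustre -o localflock mgs@tcp0:/lustrefs /mnt/lustre

Without either option (the default), flock() calls on the client simply fail rather than being silently ignored.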

How much overhead flock adds depends on how much the application actually uses it. The lock manager runs on MDS0 and uses Lustre RPCs (which can run at 100k/s or higher); no disk IO is involved.
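
For concreteness, since the original question concerned Elasticsearch (a Java application): below is a minimal Java sketch, assuming the node lock boils down to an advisory lock taken through java.nio's FileChannel (as Lucene's NativeFSLockFactory does). The path is a made-up placeholder. With the flock mount option this lock excludes processes on other clients too; with localflock it only excludes processes on the same node.

  import java.nio.channels.FileChannel;
  import java.nio.channels.FileLock;
  import java.nio.file.Path;
  import java.nio.file.Paths;
  import java.nio.file.StandardOpenOption;

  public class LustreLockSketch {
      public static void main(String[] args) throws Exception {
          // Hypothetical lock file on a Lustre mount; Elasticsearch keeps
          // a similar node.lock file under its data directory.
          Path lockFile = Paths.get("/mnt/lustre/es-data/node.lock");

          try (FileChannel channel = FileChannel.open(lockFile,
                  StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
              // On Linux this takes an advisory lock; on a client mounted
              // with -o flock, this is the point where Lustre's lock
              // manager gets involved, so any RPC cost is paid here.
              FileLock lock = channel.tryLock();
              if (lock == null) {
                  // Held by another process: on this node, or (with the
                  // flock mount option) on any other client node.
                  System.err.println("node.lock is held elsewhere");
                  return;
              }
              try {
                  System.out.println("lock acquired; data dir is ours");
                  // ... work with the data directory ...
              } finally {
                  lock.release();
              }
          }
      }
  }

Note that tryLock() returning null (lock already held by another process) is exactly the case a multi-node setup relies on the flock mount option to detect.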

Cheers, Andreas

On Nov 26, 2017, at 12:03, E.S. Rosenberg <esr+lustre at mail.hebrew.edu> wrote:

Hi Torsten,
Thanks that worked!

Do you or anyone on the list know if/how flock affects Lustre performance?

Thanks again,
Eli

On Tue, Nov 21, 2017 at 9:18 AM, Torsten Harenberg <torsten.harenberg at cern.ch> wrote:
Hi Eli,

On 21.11.17 at 01:26, E.S. Rosenberg wrote:
> So I was wondering: would this issue be solved by Lustre bindings for
> Java, or is this a way of locking that isn't supported by Lustre?

I know nothing about Elasticsearch, but have you tried mounting Lustre
with "flock" in the mount options?

Cheers

 Torsten

--
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
<>                                                              <>
<> Dr. Torsten Harenberg     Torsten.Harenberg at cern.ch          <>
<> Bergische Universitaet                                       <>
<> Fakultät 4 - Physik       Tel.: +49 (0)202 439-3521          <>
<> Gaussstr. 20              Fax : +49 (0)202 439-2811          <>
<> 42097 Wuppertal           @CERN: Bat. 1-1-049                <>
<>                                                              <>
<><><><><><><>< Of course it runs NetBSD http://www.netbsd.org ><>

_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org