[lustre-discuss] Lustre and Elasticsearch

E.S. Rosenberg esr+lustre at mail.hebrew.edu
Tue Nov 28 12:01:43 PST 2017


Thanks for all the great feedback and answers!

On Mon, Nov 27, 2017 at 7:04 AM, Mark Hahn <hahn at mcmaster.ca> wrote:

>> Do you or anyone on the list know if/how flock affects Lustre performance?
>
> I'm still puzzled by this: elasticsearch is specifically designed for each
> node to have a completely separate storage tree.
> why would there ever be any inter-node locking if your nodes happen to
> store onto Lustre?

I haven't dug into their source, but they do use flock.
IIRC they support the storage being a shared medium, which would be nice as
far as I'm concerned, because I don't see why I need to hold the same
dataset multiple times...
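
If they do, I'd expect the config side to be simple; something like this in
elasticsearch.yml on every node (the path is made up, and I haven't verified
that ES is actually happy running off a shared directory):

    # hypothetical: every node points at the same directory on the Lustre mount
    path.data: /mnt/lustre/es/data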

>   it seems like localflock would be perfect: not even pointless flock
> roundtrips to the MDS
> would take place.
>
If it doesn't support a shared datastore, I may do that.
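
For the archives: the difference is just a client-side mount option (fsname
and paths below are made up):

    # coherent cluster-wide flock, mediated by the MDS:
    mount -t lustre -o flock mgs@tcp:/lfs01 /mnt/lustre

    # flock semantics local to each client only -- no MDS round trips,
    # but two clients can "hold" the same lock at once:
    mount -t lustre -o localflock mgs@tcp:/lfs01 /mnt/lustre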

>
> I have no idea how well ES would work with Lustre - my ES clusters
> use local storage.

We're currently storing on an NFS filesystem, though if each Elasticsearch
instance has its own copy of the data, using local storage makes sense (we
just try to keep everything except the storage machines mostly diskless).

> ES uses mmap extensively, which is well-supported
> by Lustre.  I'm not so sure it would do well with Lustre's approach
> to IO (it doesn't do caching exactly like the normal Linux pagecache),
> and I wonder whether Lustre's proclivity for large block sizes might
> cause issues.  offhand, I'd guess it might be appropriate to direct
> each ES node to specific OSTs (lfs setstripe).
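
That's a good pointer; if we try it, I'd guess the striping setup would look
something like this (the directory and OST index are made up):

    # pin one ES node's data directory to a single OST, unstriped:
    lfs setstripe -c 1 -i 4 /mnt/lustre/es/node1

    # verify the layout new files will inherit:
    lfs getstripe /mnt/lustre/es/node1
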
>
> I note that ES tends to have a lot of small files.  even just in terms
> of space utilization, that can be problematic for some Lustre configs.
>
Doesn't that depend on the way you create your indexes?
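E.g., fewer shards plus merged segments should mean far fewer files on disk; a
sketch against a single node (the index name is made up, and I'm assuming the
5.x-era REST API we run):

    # create the index with one shard instead of the default five:
    curl -XPUT 'localhost:9200/logs-2017.11' \
         -H 'Content-Type: application/json' \
         -d '{"settings": {"number_of_shards": 1}}'

    # once the index goes read-only, collapse each shard to one segment:
    curl -XPOST 'localhost:9200/logs-2017.11/_forcemerge?max_num_segments=1'
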
Thanks again,
Eli

>
> regards, mark hahn
>