[lustre-discuss] Lustre on Ceph Block Devices

Brian Andrus toomuchit at gmail.com
Tue Feb 21 21:06:39 PST 2017


I had looked at it, but then, why?

There is no benefit to using object storage when you are putting Lustre 
on top of it; it would just bog down. Presumably you would want to use 
CephFS on top of the Ceph storage instead, since it talks directly to RADOS.
If you are able to carve out RADOS block devices, you should also be 
able to export the underlying storage directly as block devices (via 
iSCSI at least) so Lustre can manage where the data is stored and apply 
its own optimizations. Otherwise the placement can't be optimized: 
Lustre would THINK it knows where the data is, but the RADOS CRUSH map 
would have put it somewhere else.
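For what it's worth, here is a rough sketch of what the OST-on-RBD setup 
being discussed would look like, just to make the moving parts concrete. 
The pool/image names, filesystem name, MGS NID and mount point are all 
made up; adjust for a real cluster:

import subprocess

def run(cmd):
    # Echo each command before running it; stop on the first failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical names throughout: pool "lustre", image "ost0", Lustre
# filesystem "demo", MGS at 10.0.0.1@tcp, OST mounted at /mnt/ost0.
run(["rbd", "create", "lustre/ost0", "--size", "1T"])  # size suffixes need a reasonably recent rbd; plain MB otherwise
run(["rbd", "map", "lustre/ost0"])                     # typically shows up as /dev/rbd0
run(["mkfs.lustre", "--ost", "--fsname=demo", "--index=0",
     "--mgsnode=10.0.0.1@tcp", "/dev/rbd0"])
run(["mount", "-t", "lustre", "/dev/rbd0", "/mnt/ost0"])

The catch is exactly the one above: once /dev/rbd0 is an OST, every 
write Lustre "places" on that OST is scattered across the Ceph cluster 
by CRUSH anyway.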

Just my 2cents.

Brian

On 2/21/2017 3:08 PM, Brock Palen wrote:
> Has anyone ever run Lustre OSTs (and maybe MDTs) on Ceph RADOS
> Block Devices?
>
> In theory this would work just like a SAN-attached solution.  Has
> anyone done it before?  I know we are seeing decent performance
> from RBD on our system, but I don't have a way to test Lustre on it.
>
> I'm looking at a future system where both Ceph and Lustre might be
> needed (object storage and high-performance HPC) but without a huge
> budget for two full disk stacks.  So one idea was to have the Lustre
> servers consume Ceph block devices while that same cluster also
> serves object requests.
>
> Thoughts or prior art?  This probably isn't that different from the
> CloudFormation script that uses EBS volumes, if it works as intended.
>
> Thanks
>
> Brock Palen
> www.umich.edu/~brockp
> Director Advanced Research Computing - TS
> XSEDE Campus Champion
> brockp at umich.edu
> (734)936-1985
