[lustre-discuss] Lustre on Ceph Block Devices

Shinobu Kinjo shinobu.kj at gmail.com
Wed Feb 22 01:54:10 PST 2017


Yeah, that's interesting, but it doesn't really make sense to use Lustre that
way, and it shouldn't be used for any computations.

If anything goes wrong, troubleshooting would become a nightmare.

Have you ever thought of using Lustre on top of the GPFS native client?

Anyway, if you are going to build Lustre on top of any RADOS client and run
MPI jobs, please share the results. I'm really, really interested in them.
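
For comparison, something like an IOR run over MPI-IO would be a useful data
point. A minimal sketch only, assuming IOR and an MPI launcher are installed
and the Lustre client is mounted at /mnt/lustre (rank count, sizes, and the
test path are all illustrative):

    # 16 MPI ranks, file-per-process, 1 MiB transfers, 4 GiB per rank
    mpirun -np 16 ior -a MPIIO -F -t 1m -b 4g -o /mnt/lustre/ior_test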


On Wed, Feb 22, 2017 at 2:06 PM, Brian Andrus <toomuchit at gmail.com> wrote:

> I had looked at it, but then, why?
>
> There is no benefit to using object storage when you are putting Lustre over
> the top; it would bog down. Presumably you would want to use CephFS over the
> Ceph storage, since it talks directly to RADOS.
> If you are able to map the RADOS block devices, you should also be able to
> export them directly as block devices (iSCSI at least), so Lustre is able to
> manage where the data is stored and use its optimizations. Otherwise the
> data can't be optimized: Lustre would THINK it knows where the data is, but
> the RADOS CRUSH map would have put it somewhere else.
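>
> For example, a mapped RBD could be exported through the LIO target so that
> Lustre sees a plain block device. A rough sketch only, assuming targetcli is
> available (the pool, image, backstore, and IQN names are all made up):
>
>     # Map an RBD image on the gateway node; it appears as e.g. /dev/rbd0
>     rbd map lustre-pool/ost0
>
>     # Export the mapped device as an iSCSI LUN via LIO
>     targetcli /backstores/block create name=ost0 dev=/dev/rbd0
>     targetcli /iscsi create iqn.2017-02.org.example:ost0
>     targetcli /iscsi/iqn.2017-02.org.example:ost0/tpg1/luns create /backstores/block/ost0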
>
> Just my 2cents.
>
> Brian
> On 2/21/2017 3:08 PM, Brock Palen wrote:
>
> Has anyone ever run Lustre OSTs (and maybe MDTs) on Ceph RADOS Block
> Devices?
>
> In theory this would work just like a SAN-attached solution. Has anyone ever
> done it before? I know we are seeing decent performance from RBD on our
> system, but I don't have a way to test Lustre on it.
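>
> Roughly what I have in mind, as a sketch only (untested; the pool/image
> names, size, MGS NID, and mount point are made up):
>
>     # Create and map an RBD image to back one OST
>     rbd create lustre-pool/ost0 --size 4T
>     rbd map lustre-pool/ost0          # appears as e.g. /dev/rbd0
>
>     # Format it as a Lustre OST and mount it on the OSS
>     mkfs.lustre --fsname=testfs --ost --index=0 --mgsnode=mgs@tcp /dev/rbd0
>     mount -t lustre /dev/rbd0 /mnt/ost0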
>
> I'm looking at a future system where both Ceph and Lustre might be needed
> (object storage and high-performance HPC), but without a huge budget for two
> full disk stacks. So one idea was to have the Lustre servers consume Ceph
> block devices while that same cluster also serves object requests.
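>
> On the Ceph side that might just mean separate pools for the two workloads,
> e.g. (pool names and PG counts are made up):
>
>     # One pool for RBD images backing Lustre targets, one for object data
>     ceph osd pool create lustre-rbd 128
>     ceph osd pool create object-data 128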
>
> Thoughts or prior art? This probably isn't that different from the
> CloudFormation script that uses EBS volumes, if it works as intended.
>
> Thanks
>
> Brock Palen
> www.umich.edu/~brockp
> Director Advanced Research Computing - TS
> XSEDE Campus Champion
> brockp at umich.edu
> (734) 936-1985
>
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
>