[lustre-discuss] Lustre on Ceph Block Devices

Shinobu Kinjo shinobu.kj at gmail.com
Wed Feb 22 14:06:48 PST 2017


> If we do test this I'll let you know how it works.
>

Yes, please. I'm pretty curious about that.


>
> Why Lustre on GPFS?  Why not just run GPFS then, given it supports byte
> range locking / MPI-IO and POSIX (ignoring license costs)?
>

Sorry for the confusion. I was just asking. I've never seriously considered
it, because there would be no performance advantage and the stack would
become much more complex. Troubleshooting would be a nightmare.


> I'm trying to limit the number of disk systems to maintain in a system
> of modest size where both MPI-IO and Object is required.    I have
> dedicated Lustre today for larger systems and they will stay that way.
>

Are any other researchers interested in Lustre on Ceph?
Anyway, let me know once you are ready to test.
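
In case it helps anyone picture the setup being discussed, below is a
minimal sketch of what one Lustre OST on an RBD image could look like,
driving the stock rbd and Lustre command-line tools from Python. The pool
name, image name, filesystem name, OST index and MGS NID are placeholders I
made up, not anything from this thread, and I have not tried this against a
real cluster, so treat it as a sketch only.

    #!/usr/bin/env python3
    # Sketch only: carve an RBD image out of an existing Ceph pool, map it
    # on the OSS node, then format and mount it as a Lustre OST -- the same
    # steps you would follow for a SAN LUN.
    import subprocess

    POOL = "lustre-ost"          # assumed pre-existing RADOS pool
    IMAGE = "ost0000"            # placeholder image name
    SIZE_MB = "1048576"          # 1 TiB, given to 'rbd create' in MB
    FSNAME = "testfs"            # placeholder Lustre filesystem name
    MGS_NID = "10.0.0.1@tcp"     # placeholder MGS NID
    MOUNTPOINT = "/mnt/ost0000"

    def run(cmd):
        """Echo a command, run it, and fail loudly on any error."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create the image and map it; 'rbd map' prints the /dev/rbdN device.
    run(["rbd", "create", "{}/{}".format(POOL, IMAGE), "--size", SIZE_MB])
    dev = subprocess.run(["rbd", "map", "{}/{}".format(POOL, IMAGE)],
                         check=True, stdout=subprocess.PIPE,
                         universal_newlines=True).stdout.strip()

    # Format the mapped device as an OST and mount it on the OSS node.
    run(["mkfs.lustre", "--ost", "--fsname=" + FSNAME, "--index=0",
         "--mgsnode=" + MGS_NID, dev])
    run(["mkdir", "-p", MOUNTPOINT])
    run(["mount", "-t", "lustre", dev, MOUNTPOINT])

An MDT on RBD would follow the same pattern, with --mgs/--mdt in place of
--ost.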


> Was just curious if anyone tried this.
>
>
> Brock Palen
> www.umich.edu/~brockp
> Director Advanced Research Computing - TS
> XSEDE Campus Champion
> brockp at umich.edu
> (734)936-1985
>
> On Wed, Feb 22, 2017 at 4:54 AM, Shinobu Kinjo <shinobu.kj at gmail.com>
> wrote:
> >
> > Yeah, that's interesting, but it does not really make sense to use
> > Lustre that way, and it should not be used for any computations.
> >
> > If anything goes wrong, troubleshooting would become a nightmare.
> >
> > Have you ever thought of using Lustre on top of the GPFS native client?
> >
> > Anyway, if you are going to build Lustre on top of any RADOS client and
> > run MPI jobs, please share the results. I'm really interested in them.
> >
> >
> >
> > On Wed, Feb 22, 2017 at 2:06 PM, Brian Andrus <toomuchit at gmail.com>
> wrote:
> >>
> >> I had looked at it, but then, why?
> >>
> >> There is no benefit to using object storage when you are putting Lustre
> >> on top of it. It would bog down. Presumably you would want to use CephFS
> >> over the Ceph storage instead; it talks directly to RADOS.
> >>
> >> If you are able to provision the RADOS block devices, you should also be
> >> able to present the drives directly as block devices (iSCSI at least), so
> >> Lustre can manage where the data is stored and apply its own
> >> optimizations. Otherwise the data can't be optimized: Lustre would THINK
> >> it knows where the data is, but the RADOS CRUSH map would have put it
> >> somewhere else.
> >>
> >> Just my 2cents.
> >>
> >> Brian
> >>
> >> On 2/21/2017 3:08 PM, Brock Palen wrote:
> >>
> >> Has anyone ever run Lustre OSTs (and maybe MDTs) on Ceph RADOS Block
> >> Devices?
> >>
> >> In theory this would work just like a SAN-attached solution.  Has anyone
> >> ever done it before?  I know we are seeing decent performance from RBD on
> >> our system, but I don't have a way to test Lustre on it.
> >>
> >> I'm looking at a future system where both Ceph and Lustre might be needed
> >> (object storage and high-performance HPC), but without a huge budget for
> >> two full disk stacks.  So the idea was to have the Lustre servers consume
> >> Ceph block devices while that same cluster also serves object requests.
> >>
> >> Thoughts or prior art?  This probably isn't that different from the
> >> CloudFormation script that uses EBS volumes, if it works as intended.
> >>
> >> Thanks
> >>
> >> Brock Palen
> >> www.umich.edu/~brockp
> >> Director Advanced Research Computing - TS
> >> XSEDE Campus Champion
> >> brockp at umich.edu
> >> (734)936-1985
> >>
> >
>