<div dir="ltr"><br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">
If we do test this I'll let you know how it works.<br></div></div></blockquote><div><br></div><div>Yes, please. I'm pretty curious about that.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">
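For what it's worth, the first check I'd be curious about is a small MPI-IO collective write over the Lustre client mount on top of RBD. A minimal sketch with mpi4py follows; the mount point and block size are only placeholders, nothing I have actually run against such a setup:

#!/usr/bin/env python3
"""Minimal MPI-IO smoke test (mpi4py): every rank writes its own 1 MiB block
collectively into one shared file on the Lustre client mount.

The path below is a placeholder; run with e.g. `mpirun -n 8 python mpiio_smoke.py`.
"""
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

block = 1024 * 1024                          # 1 MiB per rank
buf = np.full(block, rank, dtype=np.uint8)   # fill with the rank id

# Collective, shared-file access: every rank writes at its own byte offset.
fh = MPI.File.Open(comm, "/mnt/lustre/mpiio_test.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
t0 = MPI.Wtime()
fh.Write_at_all(rank * block, buf)
fh.Close()
t1 = MPI.Wtime()

if rank == 0:
    print(f"wrote {comm.Get_size() * block} bytes in {t1 - t0:.3f} s")

Collective writes at per-rank offsets into one shared file are where the byte-range locking behaviour shows up, so that is what I would measure first.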
> Why Lustre on GPFS? Why not just run GPFS then, given it supports byte-range
> locking / MPI-IO and POSIX (ignoring license costs)?

Sorry for the confusion; I was just asking. I had never thought of that, because there is no performance advantage and it is going to be much more complex. Troubleshooting would be a nightmare.
> I'm trying to limit the number of disk systems to maintain in a system
> of modest size where both MPI-IO and object storage are required. I have
> dedicated Lustre today for larger systems and they will stay that way.

Are any other researchers also interested in Lustre on Ceph?
Anyway, let me know once you are ready.
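In case it is useful when you get there, here is roughly how I would expect one OSS to consume an RBD image. This is only a sketch, untested on my side, and the pool, image, filesystem name, MGS NID and size are all placeholders:

#!/usr/bin/env python3
"""Rough sketch: back a single Lustre OST with a Ceph RBD image on one OSS.

Assumes the rbd and Lustre utilities are already installed on the OSS node.
The pool, image, filesystem name, MGS NID and size below are placeholders,
not a tested recipe.
"""
import subprocess

POOL = "lustre-pool"      # hypothetical RBD pool
IMAGE = "ost0"            # hypothetical image name
FSNAME = "testfs"         # hypothetical Lustre filesystem name
MGS_NID = "10.0.0.1@tcp"  # hypothetical MGS node

def run(cmd, capture=False):
    print("+", " ".join(cmd))
    out = subprocess.run(cmd, check=True, capture_output=capture, text=True)
    return out.stdout.strip() if capture else None

# 1. Create the image that will back the OST and map it on the OSS node.
run(["rbd", "create", "--size", "102400", f"{POOL}/{IMAGE}"])  # size in MB
dev = run(["rbd", "map", f"{POOL}/{IMAGE}"], capture=True)     # e.g. /dev/rbd0

# 2. Format the mapped device as an OST and mount it as usual.
run(["mkfs.lustre", "--ost", f"--fsname={FSNAME}",
     f"--mgsnode={MGS_NID}", "--index=0", dev])
run(["mkdir", "-p", "/mnt/ost0"])
run(["mount", "-t", "lustre", dev, "/mnt/ost0"])

The interesting question is whether Lustre's striping still buys anything once CRUSH has already scattered the blocks underneath, which is Brian's point in the quoted thread below.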
> Was just curious if anyone tried this.
>
>
> Brock Palen
> www.umich.edu/~brockp
> Director Advanced Research Computing - TS
> XSEDE Campus Champion
> brockp@umich.edu
> (734) 936-1985
>
> On Wed, Feb 22, 2017 at 4:54 AM, Shinobu Kinjo <shinobu.kj@gmail.com> wrote:
> >
> > Yeah, that's interesting. But it does not really make sense to use Lustre that way, and it should not be used for any computations.
> >
> > If anything goes wrong, troubleshooting would become a nightmare.
> >
> > Have you ever thought of using Lustre on top of the GPFS native client?
> >
> > Anyway, if you are going to build Lustre on top of any RADOS client and run MPI jobs, please share the results. I'm really, really interested in them.
> >
> >
> >
> > On Wed, Feb 22, 2017 at 2:06 PM, Brian Andrus <toomuchit@gmail.com> wrote:
> >>
> >> I had looked at it, but then, why?
> >>
> >> There is no benefit to using object storage when you are putting Lustre on top of it; it would bog down. Presumably you would want to use CephFS over the Ceph storage, since it talks directly to RADOS.
> >> If you are able to enumerate the RADOS block devices, you should also be able to export them directly as block devices (iSCSI at least) so Lustre is able to manage where the data is stored and use its optimizations. Otherwise the placement can't be optimized: Lustre would THINK it knows where the data is, but the RADOS CRUSH map would have put it somewhere else.
> >>
> >> Just my 2 cents.
> >>
> >> Brian
> >>
> >> On 2/21/2017 3:08 PM, Brock Palen wrote:
> >>
> >> Has anyone ever run Lustre OSTs (and maybe MDTs) on Ceph RADOS block devices (RBD)?
> >>
> >> In theory this would work just like a SAN-attached solution. Has anyone ever done it before? I know we are seeing decent performance from RBD on our system, but I don't have a way to test Lustre on it.
> >>
> >> I'm looking at a future system where both Ceph and Lustre might be needed (object storage and high-performance HPC), but without a huge budget for two full disk stacks. So the idea was to have the Lustre servers consume Ceph block devices while that same cluster serves object requests.
> >>
> >> Thoughts or prior art? This probably isn't that different from the CloudFormation script that uses EBS volumes, if it works as intended.
> >>
> >> Thanks
> >>
> >> Brock Palen
> >> www.umich.edu/~brockp
> >> Director Advanced Research Computing - TS
> >> XSEDE Campus Champion
> >> brockp@umich.edu
> >> (734) 936-1985
> >>
> >>
> >> _______________________________________________
> >> lustre-discuss mailing list
> >> lustre-discuss@lists.lustre.org
> >> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org