[Lustre-discuss] lustre interoperability

Jeff Blasius jeff.blasius at yale.edu
Sun Oct 14 20:31:45 PDT 2007


Hi Jody,
Thank you for this information. Can you or someone else comment on what
it actually means to upgrade? I assume it's not enough to simply build
the software or update via rpm. After updating, should you go through
the process of having all of the OSSs rejoin the MGS/MDS and then verify
a successful client connection?

Just out of curiosity, is there a standard process the server
components go through during an upgrade?
Thank You,
                         jeff

On 10/12/07, Jody McIntyre <scjody at clusterfs.com> wrote:
> Hi Jeff,
>
> On Fri, Oct 12, 2007 at 12:01:04PM -0400, Jeff Blasius wrote:
>
> > I've seriously trekked through the lustre documentation and haven't
> > found an answer regarding this. Is there an official policy regarding
> > interoperability among different versions of various lustre
> > components?
>
> By coincidence, I just sent information about this to our documentation
> team.  It should eventually reach the manual.  Here it is:
>
> ---
> Our supported upgrades are from one minor version to another, for
> example 1.4.10 to 1.4.11 or 1.6.2 to 1.6.3, and also from the latest
> 1.4.x version to the latest 1.6.x version, so 1.4.11 to 1.6.3 is
> supported.
>
> We also support downgrades within the same ranges.  For example, if you
> upgrade from 1.6.2 to 1.6.3, you can also downgrade to 1.6.2 (but a
> fresh install of 1.6.3 is _not_ guaranteed to be downgradeable.)
>
> Note that other combinations may work, and we will support them in
> specific cases when requested by customers, but the ranges above will
> always work.
> ---
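
For anyone scripting around this, the ranges Jody describes can be
encoded as a small check. The sketch below is purely illustrative (the
function names, and the assumption that 1.4.11 and 1.6.3 were the latest
releases in each series at the time, are mine, not part of any Lustre
tooling):

    # Hypothetical sketch, not a CFS tool: the support matrix above as a
    # simple predicate.  "Latest" versions are assumptions for late 2007.
    LATEST_1_4 = (1, 4, 11)
    LATEST_1_6 = (1, 6, 3)

    def parse(v):
        # "1.6.2" -> (1, 6, 2)
        return tuple(int(x) for x in v.split("."))

    def is_supported_upgrade(old, new):
        o, n = parse(old), parse(new)
        # One minor version to the next within a series, e.g. 1.6.2 -> 1.6.3.
        if o[:2] == n[:2] and n[2] == o[2] + 1:
            return True
        # Latest 1.4.x to latest 1.6.x, e.g. 1.4.11 -> 1.6.3.
        if o == LATEST_1_4 and n == LATEST_1_6:
            return True
        return False

    # Downgrades are supported within the same ranges, with the caveat that
    # a fresh install of the newer version may not be downgradeable.
    def is_supported_downgrade(old, new):
        return is_supported_upgrade(new, old)

    print(is_supported_upgrade("1.6.2", "1.6.3"))   # True
    print(is_supported_upgrade("1.4.11", "1.6.3"))  # True
    print(is_supported_upgrade("1.6.0", "1.6.3"))   # False: not guaranteed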
>
> > For example (and I'm sure other groups are in the same boat here),
> > it's relatively painless to perform a rolling upgrade of the lustre
> > clients, but upgrading the OSS or MDS takes more convincing. Is it OK
> > for me to run a patched but 1.6.0-based OSS with a 1.6.3 client? In
> > this case all of the lustre components (kernel, lustre, ldiskfs) are
> > the same version on each host. Similarly, is it OK to run a lustre
> > kernel version out of sync with the userland tools, for example a
> > 1.6.0 kernel with a 1.6.3 lustre build on the same host?
>
> Not necessarily.  You should do a rolling upgrade to 1.6.1, then 1.6.2,
> then 1.6.3.  Upgrading will be easier if you stay more current - 1.6.0
> is fairly old at this point.
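
(That stepwise path is mechanical enough to script. Purely as an
illustration, and assuming only the third version component changes
within a series, the intermediate releases could be listed like this;
the helper name is mine, not a Lustre utility:)

    # Illustrative only: list the intermediate releases for a rolling
    # upgrade within one series, one minor version at a time.
    def rolling_upgrade_path(current, target):
        major, series, cur = (int(x) for x in current.split("."))
        _, _, tgt = (int(x) for x in target.split("."))
        return ["%d.%d.%d" % (major, series, m)
                for m in range(cur + 1, tgt + 1)]

    print(rolling_upgrade_path("1.6.0", "1.6.3"))  # ['1.6.1', '1.6.2', '1.6.3']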
>
> Having said that, I believe 1.6.0 and 1.6.3 is actually something that
> will work, but I'm not 100% certain of this, so I'll allow others to
> correct me.
>
> Cheers,
> Jody
>
> > I understand that many of these combinations do in fact work; I'm more
> > interested in whether they're likely to lead to data corruption or
> > client evictions. I'm not sure how often incompatibilities arise, but
> > if it's relatively rare, it would be useful if that were announced in
> > the change log. Of course, if there's a serious "Do at your own risk"
> > policy, that would also be useful to know.
> >
> > Thank You,
> >                      jeff
> >
> > --
> > Jeff Blasius / jeff.blasius at yale.edu
> > Phone: (203)432-9940  51 Prospect Rm. 011
> > High Performance Computing (HPC)
> > Linux Systems Design & Support (LSDS)
> > Yale University Information Technology Services (ITS)


-- 
Jeff Blasius / jeff.blasius at yale.edu
Phone: (203)432-9940  51 Prospect Rm. 011
High Performance Computing (HPC)
UNIX Systems Administrator, Linux Systems Design & Support (LSDS)
Yale University Information Technology Services (ITS)



