[Lustre-discuss] lustre interoperability

Jody McIntyre scjody at clusterfs.com
Mon Oct 15 20:53:16 PDT 2007


Hi Sheila,

On Fri, Oct 12, 2007 at 12:27:23PM -0600, Sheila Barthel wrote:

> Not a problem to add a section in the Lustre manual re: supported upgrades.
> However, as we do not currently sync Lustre releases with releases of the
> Lustre manual, this wouldn't help us to address the issue that Jeff poses.

I don't think we need to list specific versions or anything.  Just
include the general policies I have outlined below as well as some
examples - that way the information won't need to be revised regularly.

Cheers,
Jody

> For future Lustre releases, could we add a supported upgrade/interop
> statement to the change log? I'll also add this content to the manual (which
> will have more value once Lustre s/w and manual releases are synchronized).
> 
> Sheila
> 
> On 10/12/07, Jody McIntyre <scjody at clusterfs.com> wrote:
> >
> > Hi Jeff,
> >
> > On Fri, Oct 12, 2007 at 12:01:04PM -0400, Jeff Blasius wrote:
> >
> > > I've trekked seriously through the Lustre documentation and haven't
> > > found an answer to this. Is there an official policy regarding
> > > interoperability among different versions of the various Lustre
> > > components?
> >
> > By coincidence, I just sent information about this to our documentation
> > team.  It should eventually reach the manual.  Here it is:
> >
> > ---
> > Our supported upgrades are from one minor release to the next, for
> > example 1.4.10 to 1.4.11 or 1.6.2 to 1.6.3, and also from the latest
> > 1.4.x release to the latest 1.6.x release, so 1.4.11 to 1.6.3 is
> > supported.
> >
> > We also support downgrades within the same ranges.  For example, if you
> > upgrade from 1.6.2 to 1.6.3, you can also downgrade to 1.6.2 (but a
> > fresh install of 1.6.3 is _not_ guaranteed to be downgradeable).
> >
> > Note that other combinations may also work, and we support them in
> > specific cases when requested by customers, but only the ranges above
> > are guaranteed to work.
> > ---
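
For illustration, the ranges above could be expressed as a small check
along the following lines.  This is only a minimal Python sketch of the
policy as stated (not an official tool), and the "latest" releases shown,
1.4.11 and 1.6.3, are simply the ones current at the time of writing:

    def parse(version):
        """Split a version string such as '1.6.2' into a tuple of ints."""
        return tuple(int(part) for part in version.split("."))

    def supported_upgrade(old, new, latest_14="1.4.11", latest_16="1.6.3"):
        """True if the old -> new upgrade falls inside the supported
        ranges: one minor release to the next within the same series,
        or the latest 1.4.x release to the latest 1.6.x release."""
        o, n = parse(old), parse(new)
        # One minor release to the next within the same series.
        if o[:2] == n[:2] and n[2] == o[2] + 1:
            return True
        # Latest 1.4.x to latest 1.6.x (the defaults are only illustrative).
        if old == latest_14 and new == latest_16:
            return True
        return False

    for old, new in [("1.4.10", "1.4.11"), ("1.6.2", "1.6.3"),
                     ("1.4.11", "1.6.3"), ("1.6.0", "1.6.3")]:
        print("%s -> %s: %s" % (old, new, supported_upgrade(old, new)))

The same ranges, read in reverse, cover the supported downgrades (with
the fresh-install caveat above).
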
> >
> > > For example (and I'm sure other groups are in the same boat here),
> > > it's relatively painless to perform a rolling upgrade of the Lustre
> > > clients, but upgrading the OSS or MDS takes more convincing. Is it OK
> > > for me to run a patched but 1.6.0-based OSS with a 1.6.3 client? In
> > > this case, all of the Lustre components (kernel, lustre, ldiskfs) are
> > > the same version on each host. Similarly, is it OK to run a Lustre
> > > kernel version out of sync with the userland tools? For example, a
> > > 1.6.0 kernel with a 1.6.3 Lustre build on the same host?
> >
> > Not necessarily.  You should do a rolling upgrade to 1.6.1, then 1.6.2,
> > then 1.6.3.  Upgrading will be easier if you stay more current - 1.6.0
> > is fairly old at this point.
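
As a hedged illustration of that stepwise path (a minimal Python sketch,
assuming releases within a series are numbered consecutively; not a
shipped tool):

    def rolling_upgrade_path(current, target):
        """List each release to pass through, e.g. '1.6.0' -> '1.6.3'
        yields ['1.6.1', '1.6.2', '1.6.3']."""
        major, minor, patch = [int(p) for p in current.split(".")]
        last = int(target.split(".")[2])
        return ["%d.%d.%d" % (major, minor, p)
                for p in range(patch + 1, last + 1)]

    print(rolling_upgrade_path("1.6.0", "1.6.3"))
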
> >
> > Having said that, I believe 1.6.0 and 1.6.3 together will actually
> > work, but I'm not 100% certain of this, so I'll allow others to
> > correct me.
> >
> > Cheers,
> > Jody
> >
> > > I understand that many of these combinations do in fact work; I'm more
> > > interested in whether they're likely to lead to data corruption or
> > > client evictions. I'm not sure how often incompatibilities arise, but
> > > if it's relatively rare, it would be useful if that were announced in
> > > the change log. Of course, if there's a serious "do at your own risk"
> > > policy, that would also be useful to know.
> > >
> > > Thank You,
> > >                      jeff
> > >
> > > --
> > > Jeff Blasius / jeff.blasius at yale.edu
> > > Phone: (203)432-9940  51 Prospect Rm. 011
> > > High Performance Computing (HPC)
> > > Linux Systems Design & Support (LSDS)
> > > Yale University Information Technology Services (ITS)
> > >
