[Lustre-discuss] lustre interoperability

Jerome, Ron Ron.Jerome at nrc-cnrc.gc.ca
Thu Oct 18 07:49:51 PDT 2007


Could somebody (Andreas maybe :) give a definitive answer on this...

> > Having said that, I believe 1.6.0 and 1.6.3 is actually something
> > that will work, but I'm not 100% certain of this, so I'll allow
> > others to correct me.

... as I would like to do just that: go from 1.6.0.1 to 1.6.3 without
having to go through the intermediate versions.

Thanks in advance. 

Ron Jerome
National Research Council Canada


> -----Original Message-----
> From: lustre-discuss-bounces at clusterfs.com [mailto:lustre-discuss-bounces at clusterfs.com] On Behalf Of Jody McIntyre
> Sent: October 15, 2007 11:58 PM
> To: Jeff Blasius
> Cc: lustre-discuss at clusterfs.com
> Subject: Re: [Lustre-discuss] lustre interoperability
> 
> Hi Jeff,
> 
> On Sun, Oct 14, 2007 at 11:31:45PM -0400, Jeff Blasius wrote:
> 
> > Thank you for this information. Can you or someone else comment on
> > what it means to upgrade? I assume it's not enough to simply build
> > the software or update via rpm. After updating, should you go
> > through the process of having all of the OSSs rejoin the MGS/MDS
> > and verifying a successful client connection?
> 
> This part is in the manual :)  See:
> http://manual.lustre.org/manual/LustreManual16_HTML/DynamicHTML-13-1.html
> 
> > Just out of curiosity, is there a standard process the server
> > components undergo during the upgrade?
> 
> I don't understand what you mean here.  Can you explain your question?
> 
> Cheers,
> Jody
> 
> > Thank You,
> >                          jeff
> >
> > On 10/12/07, Jody McIntyre <scjody at clusterfs.com> wrote:
> > > Hi Jeff,
> > >
> > > On Fri, Oct 12, 2007 at 12:01:04PM -0400, Jeff Blasius wrote:
> > >
> > > > I've seriously trekked through the lustre documentation and
> > > > haven't found an answer regarding this. Is there an official
> > > > policy regarding interoperability among different versions of
> > > > various lustre components?
> > >
> > > By coincidence, I just sent information about this to our
> > > documentation team.  It should eventually reach the manual.  Here
> > > it is:
> > >
> > > ---
> > > Our supported upgrades are from one minor version to another, for
> > > example 1.4.10 to 1.4.11 or 1.6.2 to 1.6.3, and also from the
> > > latest 1.4.x version to the latest 1.6.x version, so 1.4.11 to
> > > 1.6.3 is supported.
> > >
> > > We also support downgrades within the same ranges.  For example,
> > > if you upgrade from 1.6.2 to 1.6.3, you can also downgrade to
> > > 1.6.2 (but a fresh install of 1.6.3 is _not_ guaranteed to be
> > > downgradeable).
> > >
> > > Note that other combinations will work, and we support them in
> > > specific cases when requested by customers, but the ranges above
> > > will always work.
> > > ---
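
As a rough illustration, the policy above can be sketched in a few
lines of Python. This is a hedged sketch, assuming plain "x.y.z"
version strings; is_supported_upgrade() and the LATEST_14/LATEST_16
constants are hypothetical helpers, not part of any Lustre tooling.

    # A sketch of the supported-upgrade rule stated above.  Assumptions:
    # versions are plain "x.y.z" strings, and the latest releases on each
    # branch at the time of writing are 1.4.11 and 1.6.3.
    LATEST_14 = (1, 4, 11)
    LATEST_16 = (1, 6, 3)

    def parse(version):
        # Split "x.y.z" into a tuple of integers, e.g. "1.6.3" -> (1, 6, 3).
        return tuple(int(part) for part in version.split("."))

    def is_supported_upgrade(old, new):
        old, new = parse(old), parse(new)
        # One maintenance release to the next on the same branch,
        # e.g. 1.4.10 -> 1.4.11 or 1.6.2 -> 1.6.3.
        if old[:2] == new[:2] and new[2] == old[2] + 1:
            return True
        # Latest 1.4.x to latest 1.6.x, e.g. 1.4.11 -> 1.6.3.
        return old == LATEST_14 and new == LATEST_16

    # Downgrades are supported within the same ranges, so the same check
    # applies with the arguments swapped.  (The "fresh install of 1.6.3
    # is not downgradeable" caveat is not modeled here.)
    print(is_supported_upgrade("1.6.2", "1.6.3"))   # True
    print(is_supported_upgrade("1.4.11", "1.6.3"))  # True
    print(is_supported_upgrade("1.6.0", "1.6.3"))   # False - not guaranteed
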
> > >
> > > > For example (and I'm sure other groups are in the same boat
> > > > here), it's relatively painless to perform a rolling upgrade of
> > > > the lustre clients, but upgrading the OSS or MDS takes more
> > > > convincing. Is it OK for me to run a patched but 1.6.0-based
> > > > OSS with a 1.6.3 client? In this case all of the lustre
> > > > components (kernel, lustre, ldiskfs) are the same version on
> > > > each host. Similarly, is it OK to run a lustre kernel version
> > > > out of sync with the userland tools? For example, a 1.6.0
> > > > kernel with a 1.6.3 lustre build on the same host?
> > >
> > > Not necessarily.  You should do a rolling upgrade to 1.6.1, then
> > > 1.6.2, then 1.6.3.  Upgrading will be easier if you stay more
> > > current - 1.6.0 is fairly old at this point.
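
To make that stepping concrete, here is a hedged sketch of the
intermediate releases to walk through; upgrade_path() is a
hypothetical helper, and it assumes every 1.6.x in between was
actually released.

    def upgrade_path(current, target):
        # List each 1.6.x maintenance release to step through, in order,
        # e.g. upgrade_path("1.6.0", "1.6.3") -> ["1.6.1", "1.6.2", "1.6.3"].
        cur = int(current.split(".")[2])
        tgt = int(target.split(".")[2])
        return ["1.6.%d" % n for n in range(cur + 1, tgt + 1)]

    print(upgrade_path("1.6.0", "1.6.3"))  # ['1.6.1', '1.6.2', '1.6.3']
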
> > >
> > > Having said that, I believe 1.6.0 and 1.6.3 is actually something
> > > that will work, but I'm not 100% certain of this, so I'll allow
> > > others to correct me.
> > >
> > > Cheers,
> > > Jody
> > >
> > > > I understand that many of these combinations do in fact work;
> > > > I'm more interested in whether they're likely to lead to data
> > > > corruption or client evictions. I'm not sure how often
> > > > incompatibilities arise, but if it's relatively rare, it would
> > > > be useful if that was announced in the change log. Of course,
> > > > if there's a serious "Do at your own risk" policy, that would
> > > > also be useful to know.
> > > >
> > > > Thank You,
> > > >                      jeff
> > > >
> > > > --
> > > > Jeff Blasius / jeff.blasius at yale.edu
> > > > Phone: (203)432-9940  51 Prospect Rm. 011
> > > > High Performance Computing (HPC)
> > > > Linux Systems Design & Support (LSDS)
> > > > Yale University Information Technology Services (ITS)
> > > >
> > >
> > > --
> > >
> >
> >
> > --
> > Jeff Blasius / jeff.blasius at yale.edu
> > Phone: (203)432-9940  51 Prospect Rm. 011
> > High Performance Computing (HPC)
> > UNIX Systems Administrator, Linux Systems Design & Support (LSDS)
> > Yale University Information Technology Services (ITS)
> >
> 
> --
> 
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss



