[Lustre-discuss] Question on upgrading Lustre 1.6.6 -> 1.8.0
Arden Wiebe
albert682 at yahoo.com
Sun May 17 09:07:07 PDT 2009
I concur that the upgrade from 1.6 to 1.8 was as simple as upgrading the packages on all the nodes, both clients and servers.
I searched the mailing list archives, found other posts describing the same upgrade procedure, and followed it myself. It was a snap: the upgrade of 4 machines was complete in under half an hour, minus the tune2fs of course.
[root at ns2 ~]# uname -rv
2.6.18-92.1.17.el5_lustre.1.8.0smp #1 SMP Thu Mar 5 17:41:12 MST 2009
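For reference, the package step on each node amounted to something like the following dry-run sketch. The node list and RPM file names here are assumptions for illustration, not taken from the post; check the actual package names for your distribution.

```shell
#!/bin/sh
# Hypothetical sketch of the per-node package upgrade; the node list
# and RPM file names are assumptions, not taken from the post.
NODES="qa1 qa2 qa3 qa4"

upgrade_cmds() {
    for n in $NODES; do
        echo "ssh $n rpm -Uvh kernel-lustre-smp-*.rpm lustre-modules-*.rpm lustre-*.rpm"
        echo "ssh $n reboot"
    done
}

# Dry run: print the commands instead of executing them.
upgrade_cmds
```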
--- On Sun, 5/17/09, thhsieh <thhsieh at piano.rcas.sinica.edu.tw> wrote:
> From: thhsieh <thhsieh at piano.rcas.sinica.edu.tw>
> Subject: [Lustre-discuss] Question on upgrading Lustre 1.6.6 -> 1.8.0
> To: lustre-discuss at lists.lustre.org
> Date: Sunday, May 17, 2009, 1:33 AM
> Dear All,
>
> I have read the description in the Lustre Operations Guide for
> version 1.8, but I am still not very sure about the exact procedure
> to upgrade from version 1.6.6 to version 1.8.0. I have written up
> an upgrade plan below. Please give me your kind comments. :)
>
> In our system we have three Lustre filesystems (all of them version
> 1.6.6, for the MGS, MDT, OSTs, and clients), configured as follows:
>
> 1. fsname="chome"
>    MGS: qa1:/dev/sda5
>    MDT: qa1:/dev/sda5 (i.e., exactly the same disk partition as the MGS)
>    OST: qaX:/dev/sdaX (distributed across several OST nodes)
>
> 2. fsname="cwork"
>    MGS: qa1:/dev/sda5 (shared with that of "chome")
>    MDT: qa1:/dev/sda6
>    OST: qaY:/dev/sdaY (distributed across several OST nodes)
>
> 3. fsname="cwork1"
>    MGS: qa1:/dev/sda5 (shared with that of "chome")
>    MDT: qa1:/dev/sda7
>    OST: qaZ:/dev/sdaZ (distributed across several OST nodes)
>
> We do not have failover configured on any of the filesystems.
>
> I am planning to shut down all the Lustre filesystems, perform the
> upgrade, and finally start them up again. I guess that would be
> simpler. The exact procedure I am going to follow is:
>
> 1. For each Lustre filesystem, perform the following shutdown
>    procedure ("chome" should be the last one to shut down, since it
>    shares its MDT and MGS on the same partition):
>    - umount all clients
>    - umount all OSTs
>    - umount the MDT
>
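One note on step 1: the order matters (clients first, then OSTs, then the MDT). A rough dry-run sketch for one filesystem, with hypothetical client and OST host names:

```shell
#!/bin/sh
# Hypothetical sketch of the shutdown order for one filesystem
# ("chome" here); client and OST host names are assumptions.
CLIENTS="client1 client2"
OSTS="qa2 qa3"
MDS="qa1"

shutdown_cmds() {
    for c in $CLIENTS; do echo "ssh $c umount /chome"; done
    for o in $OSTS;    do echo "ssh $o umount /cfs/chome_ost"; done
    echo "ssh $MDS umount /cfs/chome_mdt"
}

# Dry run: print the commands rather than running them.
shutdown_cmds
```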
> 2. Install the new Lustre 1.8 software and modules and reboot all
>    the nodes. Then upgrade "chome" first, then "cwork", and finally
>    "cwork1".
>
> 3. Upgrade the MGS and the MDT for "chome":
>
>    qa1# tunefs.lustre --mgs --mdt --fsname=chome /dev/sda5
>
> 4. Upgrade the OSTs for "chome":
>
>    qaX# tunefs.lustre --ost --fsname=chome --mgsnode=qa1 /dev/sdaX
>
>    Up to this point the "chome" part should be ready, I guess.
>
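Since step 4 has to run on every OST node, it may be worth scripting. A dry-run sketch, where the node:device pairs are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical sketch: run tunefs.lustre on each OST node for "chome".
# The node:device pairs are assumptions, not from the original post.
OSTS="qa2:/dev/sda8 qa3:/dev/sda8"

ost_upgrade_cmds() {
    for pair in $OSTS; do
        node=${pair%%:*}
        dev=${pair#*:}
        echo "ssh $node tunefs.lustre --ost --fsname=chome --mgsnode=qa1 $dev"
    done
}

# Dry run: print the commands instead of executing them.
ost_upgrade_cmds
```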
>
> 5. Now the MDT for "cwork". The manual says that we should copy the
>    MDT and client startup logs from the MDT to the MGS, so I guess
>    I should:
>
>    - Mount the MGS as ldiskfs:
>      qa1# mount -t ldiskfs /dev/sda5 /mnt
>
>    - Run the script "lustre_up14" on the MDT partition of "cwork":
>      qa1# lustre_up14 /dev/sda6 cwork
>
>      This produces the following files:
>        /tmp/logs/cwork-client
>        /tmp/logs/cwork-MDT0000
>
>    - Copy these log files to /mnt/CONFIGS/
>
>    - Umount the MGS:
>      qa1# umount /mnt
>
>    - Upgrade the MDT:
>      qa1# tunefs.lustre --mdt --nomgs --fsname=cwork --mgsnode=qa1 /dev/sda6
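The log-copy dance in step 5 can be consolidated into one short sequence. A dry-run sketch on the MDS (qa1), assuming lustre_up14 writes its logs under /tmp/logs as the steps describe:

```shell
#!/bin/sh
# Hypothetical consolidation of step 5 for "cwork"; the paths follow
# the steps above (assuming lustre_up14 writes under /tmp/logs).
mdt_upgrade_cmds() {
    echo "mount -t ldiskfs /dev/sda5 /mnt"
    echo "lustre_up14 /dev/sda6 cwork"
    echo "cp /tmp/logs/cwork-client /tmp/logs/cwork-MDT0000 /mnt/CONFIGS/"
    echo "umount /mnt"
    echo "tunefs.lustre --mdt --nomgs --fsname=cwork --mgsnode=qa1 /dev/sda6"
}

# Dry run: print the commands in order rather than executing them.
mdt_upgrade_cmds
```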
>
>
> 6. Now the OSTs for "cwork":
>
>    qaY# tunefs.lustre --ost --fsname=cwork --mgsnode=qa1 /dev/sdaY
>
>    Up to now the filesystem "cwork" should be ready.
>
>
> 7. For the MDT and OSTs of "cwork1", we can follow the same
>    procedure as steps 5 and 6.
>
> 8. Start up the new Lustre filesystems:
>
>    For "chome":
>      qa1# mount -t lustre /dev/sda5 /cfs/chome_mdt
>      qaX# mount -t lustre /dev/sdaX /cfs/chome_ostX
>      mount the clients
>
>    For "cwork":
>      qa1# mount -t lustre /dev/sda6 /cfs/cwork_mdt
>      qaY# mount -t lustre /dev/sdaY /cfs/cwork_ostY
>      mount the clients
>
>    For "cwork1":
>      qa1# mount -t lustre /dev/sda7 /cfs/cwork1_mdt
>      qaZ# mount -t lustre /dev/sdaZ /cfs/cwork1_ostZ
>      mount the clients
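For "mount the clients" in step 8, the client-side command would look something like the sketch below. The MGS NID (qa1@tcp) and the client mount points are assumptions; adjust the NID to your network type.

```shell
#!/bin/sh
# Hypothetical client mount commands; the MGS NID (qa1@tcp) and the
# client mount points are assumptions, not from the original post.
client_mount_cmds() {
    for fs in chome cwork cwork1; do
        echo "mount -t lustre qa1@tcp:/$fs /$fs"
    done
}

# Dry run: print the commands rather than executing them.
client_mount_cmds
```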
>
>
> Please kindly give me your comments. Thanks very much.
>
>
> Best Regards,
>
> T.H.Hsieh
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>