[Lustre-discuss] Question on upgrading Lustre 1.6.6 -> 1.8.0

Daire Byrne <Daire.Byrne at framestore.com>
Sun May 17 04:10:48 PDT 2009


I think the v1.8 manual is still referring to the upgrade of Lustre v1.4 -> v1.6. If you are upgrading from v1.6 to v1.8 then you should only need to install the newer packages and reboot. You may need to run tune2fs if you want to enable newer on-disk features, but I'm not 100% sure of that.
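If it does turn out to be needed, I'd expect it to look something like the following (uninit_bg is only an example of a 1.8-era ldiskfs feature; check the release notes for what actually applies, and treat this as a sketch rather than a tested recipe):

  # on the unmounted target: enable the feature, then force a full fsck
  tune2fs -O uninit_bg /dev/sdaX
  e2fsck -f /dev/sdaX

Verify against the manual before running anything like this on your MDTs/OSTs.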

Daire

----- "thhsieh" <thhsieh at piano.rcas.sinica.edu.tw> wrote:

> Dear All,
> 
> I have read the description in the Lustre Operations Manual for
> version 1.8, but I am still not very sure about the exact procedure
> for upgrading from version 1.6.6 to version 1.8.0. I have now tried
> to write up an upgrade plan. Please give me your kind comments on my
> procedure. :)
> 
> In our system we have three Lustre filesystems (all version 1.6.6,
> for the MGS, MDT, OSTs, and clients), configured as follows:
> 
> 1. fsname="chome"
>    MGS: qa1:/dev/sda5
>    MDT: qa1:/dev/sda5  (i.e., exactly same disk partition as MGS)
>    OST: qaX:/dev/sdaX  (distributed in several OST nodes)
> 
> 2. fsname="cwork"
>    MGS: qa1:/dev/sda5  (shared with that of "chome")
>    MDT: qa1:/dev/sda6
>    OST: qaY:/dev/sdaY  (distributed in several OST nodes)
> 
> 3. fsname="cwork1"
>    MGS: qa1:/dev/sda5  (shared with that of "chome")
>    MDT: qa1:/dev/sda7
>    OST: qaZ:/dev/sdaZ  (distributed in several OST nodes)
> 
> We do not have failover configured on any of the filesystems.
> 
> I am planning to shut down all the Lustre filesystems, then perform
> the upgrade, and finally start them up again. I guess that would be
> simpler. The exact procedure I am going to follow is:
> 
> 1. For each of the Lustre filesystems, I will perform the following
>    shutdown procedure ("chome" should be the last one to shut down,
>    since its MDT shares the same partition as the MGS):
>    - umount all clients
>    - umount all OSTs
>    - umount MDT
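> 
>    For example, for "cwork" (the client mount point /cfs/cwork is
>    just my guess; the server mount points are the ones I use in
>    step 8):
> 
>    client# umount /cfs/cwork
>    qaY#    umount /cfs/cwork_ostY
>    qa1#    umount /cfs/cwork_mdt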
> 
> 2. Install the new Lustre-1.8 software and modules and reboot all the
>    nodes. Then I will upgrade "chome" first, and then "cwork", and
>    finally "cwork1".
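> 
>    (For RPM-based nodes I guess this is something like the following;
>    the exact package names depend on the 1.8.0 packages we download,
>    so take this only as a sketch:)
> 
>    # rpm -Uvh kernel-lustre-* lustre-modules-1.8.0* \
>               lustre-ldiskfs-* lustre-1.8.0*
>    # reboot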
> 
> 3. Upgrade MGS and "MDT for chome":
>    
>    qa1# tunefs.lustre --mgs --mdt --fsname=chome /dev/sda5
> 
> 4. Upgrade OSTs for chome:
> 
>    qaX# tunefs.lustre --ost --fsname=chome --mgsnode=qa1 /dev/sdaX
> 
>    Up to this point the "chome" part should be ready, I guess.
> 
> 
> 5. Now the MDT for "cwork". The manual says that we should copy the
>    MDT and client startup logs from the MDT to the MGS, so I guess
>    that I should:
> 
>    - Mount MGS as ldiskfs:
>      qa1# mount -t ldiskfs /dev/sda5 /mnt
> 
>    - Run script "lustre_up14" on the MDT of "cwork" partition:
>      qa1# lustre_up14 /dev/sda6 cwork
> 
>      then I will get the following files:
>      /tmp/logs/cwork-client
>      /tmp/logs/cwork-MDT0000
> 
>    - Copy these log files to /mnt/CONFIGS/
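>      (I guess:)
>      qa1# cp /tmp/logs/cwork-client /tmp/logs/cwork-MDT0000 /mnt/CONFIGS/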
> 
>    - Umount MGS:
>      qa1# umount /mnt
> 
>    - Upgrade the MDT:
>      qa1# tunefs.lustre --mdt --nomgs --fsname=cwork --mgsnode=qa1 /dev/sda6
> 
> 
> 6. Now the OSTs for "cwork":
> 
>    qaY# tunefs.lustre --ost --fsname=cwork --mgsnode=qa1 /dev/sdaY
> 
>    At this point the filesystem "cwork" should be ready.
> 
> 
> 7. For the MDT and OSTs of "cwork1", I can follow the same
>    procedures as steps 5 and 6, which I guess amounts to:
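> 
>    qa1# mount -t ldiskfs /dev/sda5 /mnt
>    qa1# lustre_up14 /dev/sda7 cwork1
>    qa1# cp /tmp/logs/cwork1-client /tmp/logs/cwork1-MDT0000 /mnt/CONFIGS/
>    qa1# umount /mnt
>    qa1# tunefs.lustre --mdt --nomgs --fsname=cwork1 --mgsnode=qa1 /dev/sda7
>    qaZ# tunefs.lustre --ost --fsname=cwork1 --mgsnode=qa1 /dev/sdaZ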
> 
> 8. Start up the new Lustre filesystems:
> 
>    For chome:
>    qa1# mount -t lustre /dev/sda5 /cfs/chome_mdt
>    qaX# mount -t lustre /dev/sdaX /cfs/chome_ostX
>    mount the clients
> 
>    For cwork:
>    qa1# mount -t lustre /dev/sda6 /cfs/cwork_mdt
>    qaY# mount -t lustre /dev/sdaY /cfs/cwork_ostY
>    mount the clients
> 
>    For cwork1:
>    qa1# mount -t lustre /dev/sda7 /cfs/cwork1_mdt
>    qaZ# mount -t lustre /dev/sdaZ /cfs/cwork1_ostZ
>    mount the clients
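> 
>    where "mount the clients" means something like the following (the
>    client mount points are just my guesses):
> 
>    client# mount -t lustre qa1@tcp0:/chome  /cfs/chome
>    client# mount -t lustre qa1@tcp0:/cwork  /cfs/cwork
>    client# mount -t lustre qa1@tcp0:/cwork1 /cfs/cwork1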
> 
> 
> Please kindly give me your comments. Thanks very much.
> 
> 
> Best Regards,
> 
> T.H.Hsieh


