[lustre-discuss] Migrating an older MGS to a new server with new storage and new IP
Nehring, Shane R [ITS]
snehring at iastate.edu
Thu Oct 23 09:15:45 PDT 2025
I've actually been working on fleshing out the process for this on the
wiki: https://wiki.lustre.org/ZFS_Snapshots_for_MDT_backup
While the article is mostly about MDTs, the process should work for any
ZFS-backed target.
You should be able to take a snapshot and send the dataset with zfs
send, followed by a writeconf for the IP address change, but you need
to make sure you use the -p argument with send so that the dataset
properties are preserved (the most important being lustre:version,
lustre:svname, lustre:index, and lustre:flags). The same should hold
true for the MDTs.
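For example, something along these lines (an untested sketch using the
pool/dataset names from your zfs list output below; "new-mgs" and the
snapshot name are placeholders, and the exact tunefs.lustre flags for
re-pointing the MDTs/OSTs at the new MGS are worth checking against the
manual):

    # on the old MGS: snapshot the MGT and send it with its properties
    # (-p carries lustre:svname, lustre:index, lustre:flags, ...)
    zfs snapshot mgspool/mgt@migrate
    zfs send -p mgspool/mgt@migrate | ssh new-mgs zfs recv mgspool/mgt

    # on the new server: regenerate the config logs so targets and
    # clients register against the new MGS NID
    tunefs.lustre --writeconf mgspool/mgt

    # each MDT/OST then needs a writeconf pointing at the new MGS
    # before it is remounted, e.g.:
    tunefs.lustre --writeconf --erase-params --mgsnode=10.140.93.50@o2ib mdthome/home

Keep in mind the writeconf is done with the filesystem stopped, and the
clients then remount against the new MGS NID afterwards.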
If you run into issues, I'd fall back to the method described in the
manual.
Shane
On Thu, 2025-10-23 at 14:48 +0000, Andreas Dilger via lustre-discuss
wrote:
> This should be covered under "backup restore MDT" in the Lustre
> manual. Short answer is "tar --xattrs --xattrs-include='trusted.*' ...",
> and then run "writeconf" on all targets to regenerate the config with
> the new IP address.
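> Roughly, per target (an illustrative sketch rather than the manual's
> exact procedure; mount points and the archive path are placeholders):
>
>     # old server: back up the target's files with the Lustre xattrs
>     cd /mnt/mdt-old
>     tar czf /tmp/mdt.tgz --xattrs --xattrs-include='trusted.*' --sparse .
>
>     # new server: restore onto the freshly formatted replacement target
>     cd /mnt/mdt-new
>     tar xzpf /tmp/mdt.tgz --xattrs --xattrs-include='trusted.*'
>
>     # then regenerate the config logs with the new MGS IP
>     tunefs.lustre --writeconf <target device>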
>
> > On Oct 22, 2025, at 18:33, Sid Young via lustre-discuss
> > <lustre-discuss at lists.lustre.org> wrote:
> >
> > G'Day all,
> > I'm researching how best to move an MGS/MGT on ZFS from a CentOS
> > 7.9 platform (Lustre 2.12.6, old h/w and old storage) to a new
> > server running Oracle Linux 8.10 with different storage (Lustre
> > 2.15.5).
> >
> > The MGS box also hosts two MDTs, "mdt-home" (fsname=home) and
> > "mdt-lustre" (fsname=lustre), also on ZFS. After a successful MGS
> > migration, I plan to move the MDS functionality to two new servers
> > (one for /home and one for /lustre).
> >
> > The MGS IP 10.140.93.42 needs to change to 10.140.93.50, and the
> > MDS addresses will need to change later.
> >
> > So far I can't work out the best way to achieve an MGS migration
> > across platforms with an IP change. There are only 12 clients, so
> > remounting filesystems is not an issue.
> >
> > Does the OSS also need a config change when the MGS changes?
> >
> > Some Info
> >
> > [root@hpc-mds-02 ~]# zfs list
> > NAME               USED  AVAIL  REFER  MOUNTPOINT
> > mdthome           81.5G  4.12T    96K  /mdthome
> > mdthome/home      77.6G  4.12T  77.6G  /mdthome/home
> > mdtlustre         40.9G  5.00T    96K  /mdtlustre
> > mdtlustre/lustre  37.1G  5.00T  37.1G  /mdtlustre/lustre
> > mgspool           9.06M   860G    96K  /mgspool
> > mgspool/mgt       8.02M   860G  8.02M  /mgspool/mgt
> > [root@hpc-mds-02 ~]#
> >
> > [root@hpc-mds-02 ~]# lctl dl
> >   0 UP osd-zfs MGS-osd MGS-osd_UUID 4
> >   1 UP mgs MGS MGS 38
> >   2 UP mgc MGC10.140.93.42@o2ib a4723a3a-dd8a-667f-0128-71caf5cc56be 4
> >   3 UP osd-zfs home-MDT0000-osd home-MDT0000-osd_UUID 10
> >   4 UP mgc MGC10.140.93.41@o2ib 68dff2a2-29d9-1468-6ff0-6d99fa57d383 4
> >   5 UP mds MDS MDS_uuid 2
> >   6 UP lod home-MDT0000-mdtlov home-MDT0000-mdtlov_UUID 3
> >   7 UP mdt home-MDT0000 home-MDT0000_UUID 40
> >   8 UP mdd home-MDD0000 home-MDD0000_UUID 3
> >   9 UP qmt home-QMT0000 home-QMT0000_UUID 3
> >  10 UP osp home-OST0000-osc-MDT0000 home-MDT0000-mdtlov_UUID 4
> >  11 UP osp home-OST0001-osc-MDT0000 home-MDT0000-mdtlov_UUID 4
> >  12 UP osp home-OST0002-osc-MDT0000 home-MDT0000-mdtlov_UUID 4
> >  13 UP osp home-OST0003-osc-MDT0000 home-MDT0000-mdtlov_UUID 4
> >  14 UP lwp home-MDT0000-lwp-MDT0000 home-MDT0000-lwp-MDT0000_UUID 4
> >  15 UP osd-zfs lustre-MDT0000-osd lustre-MDT0000-osd_UUID 12
> >  16 UP lod lustre-MDT0000-mdtlov lustre-MDT0000-mdtlov_UUID 3
> >  17 UP mdt lustre-MDT0000 lustre-MDT0000_UUID 44
> >  18 UP mdd lustre-MDD0000 lustre-MDD0000_UUID 3
> >  19 UP qmt lustre-QMT0000 lustre-QMT0000_UUID 3
> >  20 UP osp lustre-OST0000-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
> >  21 UP osp lustre-OST0001-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
> >  22 UP osp lustre-OST0002-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
> >  23 UP osp lustre-OST0003-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
> >  24 UP osp lustre-OST0004-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
> >  25 UP osp lustre-OST0005-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
> >  26 UP lwp lustre-MDT0000-lwp-MDT0000 lustre-MDT0000-lwp-MDT0000_UUID 4
> > [root@hpc-mds-02 ~]#
> >
> > [root@hpc-mds-02 ~]# zpool list
> > NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > mdthome    4.34T  81.5G  4.26T         -    49%     1%  1.00x  ONLINE  -
> > mdtlustre  5.20T  40.9G  5.16T         -    47%     0%  1.00x  ONLINE  -
> > mgspool     888G  9.12M   888G         -     0%     0%  1.00x  ONLINE  -
> > [root@hpc-mds-02 ~]#
> >
> >
> >   pool: mgspool
> >  state: ONLINE
> >   scan: scrub repaired 0B in 0h0m with 0 errors on Mon Jun 17 13:18:44 2024
> > config:
> >
> >         NAME         STATE     READ WRITE CKSUM
> >         mgspool      ONLINE       0     0     0
> >           mirror-0   ONLINE       0     0     0
> >             d3710M0  ONLINE       0     0     0
> >             d3710M1  ONLINE       0     0     0
> >
> > errors: No known data errors
> > [root@hpc-mds-02 ~]#
> >
> > Sid Young
> > Brisbane, Australia
> >
>
> Cheers, Andreas
> —
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud/DDN