[lustre-discuss] backup restore docs not quite accurate?

Peter Grandi pg at lustre.list.sabi.co.UK
Wed Oct 18 08:08:02 PDT 2023


>> https://doc.lustre.org/lustre_manual.xhtml#backup_fs_level.restore
>> "Remove old OI and LFSCK files.[oss]# rm -rf oi.16* lfsck_* LFSCK
>> Remove old CATALOGS. [oss]# rm -f CATALOGS"

>> But I am getting a lot of errors when removing "oi.16*",

> Removing the OI files is for ldiskfs backup/restore (eg. after
> tar/untar) when the inode numbers are changed.

Thanks for this clarification.

BTW the documentation says to delete just "oi.16*", but I have a
lot of OI directories that don't begin with "oi.16", so perhaps
the example is not general enough.
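(Just to illustrate what I mean, not a tested recipe: a wider glob
would catch them all, e.g.

    [oss]# rm -rf oi.* lfsck_* LFSCK
    [oss]# rm -f CATALOGS

though whether every "oi.*" entry is safe to remove in that step is
something the manual should spell out.)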

> That is not needed for ZFS send/recv because the inode numbers
> stay the same after such an operation.

There was a mention of something related to this in an e-mail of
yours:

https://lustre-discuss.lustre.narkive.com/Dm5yfCV7/backup-zfs-mdt-or-migrate-from-zfs-back-to-ldiskfs#post5
"Using rsync or tar to backup/restore a ZFS MDT is not
supported, because this changes the dnode numbering, but ZFS OI
Scrub is not yet implemented (there is a Jira ticket for this,
and some work is underway there)."

> If that isn't clear in the manual it should be fixed.

Yes, there is no distinction between 'ldiskfs' and ZFS there IIRC.
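For contrast, the ZFS-level sequence I have in mind (the pool and
dataset names here are made up by me) is roughly:

    [mds]# zfs snapshot mdtpool/mdt0@backup
    [mds]# zfs send mdtpool/mdt0@backup | zfs recv backuppool/mdt0-backup

and since send/recv preserves the dnode numbers, there is no
OI/LFSCK cleanup step afterwards, as you say.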

>> of the "directory not empty" sort. For example "cannot remove
>> 'oi.16/0x200011b90:0xabe1:0x0': Directory not empty"

I have also found that the relevant directories look fine before
attempting to delete them, but apparently the ZFS structure is
a bit corrupted. This server had some RAM issues in the past.
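
I guess the next step on my side is a scrub, something like (pool
name made up):

    [mds]# zpool scrub mdtpool
    [mds]# zpool status -v mdtpool

to see whether ZFS itself reports the damage.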

