[lustre-discuss] [EXTERNAL] ran out of MDT inodes
Mohr, Rick
mohrrf at ornl.gov
Fri Sep 16 08:45:42 PDT 2022
Liam,
If you have another zpool configured somewhere, you could always take a snapshot of your MDT and then use send/receive to copy that snapshot to another zpool. I helped someone do this once in order to move an MDT to new hardware.
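For reference, a sketch of what that might look like, using the pool/dataset names from this thread; "backup-pool" is a hypothetical destination pool and the snapshot name is arbitrary. This assumes the MDT is stopped (unmounted or failed over) before the snapshot is taken so it is consistent:

```shell
# Take a snapshot of the MDT dataset (dataset name from this thread).
zfs snapshot digdug-meta/lustre2-mgt-mdt@pre-migration

# Stream the snapshot to another pool ("backup-pool" is hypothetical).
# Add -p to zfs send to preserve dataset properties if desired.
zfs send digdug-meta/lustre2-mgt-mdt@pre-migration | \
    zfs receive backup-pool/lustre2-mgt-mdt
```

The receive side can also run on a different host by piping the send stream through ssh.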
-Rick
On 9/14/22, 6:23 PM, "lustre-discuss on behalf of Liam Forbes via lustre-discuss" <lustre-discuss-bounces at lists.lustre.org on behalf of lustre-discuss at lists.lustre.org> wrote:
Today, in our Lustre 2.10.3 filesystem, the MDT ran out of inodes. We are using ZFS as the backing filesystem.
[loforbes at mds02 ~]$ df -i -t lustre
Filesystem                    Inodes    IUsed IFree IUse% Mounted on
digdug-meta/lustre2-mgt-mdt 83703636 83703636     0  100% /mnt/lustre/local/lustre2-MDT0000
[loforbes at mds02 ~]$ sudo zpool list -v
NAME                         SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
digdug-meta                  744G   721G  23.2G         -   86%  96%  1.00x  ONLINE  -
  mirror                     372G   368G  4.25G         -   84%  98%
    scsi-35000c5003017156b      -      -      -         -     -    -
    scsi-35000c500301715e7      -      -      -         -     -    -
  mirror                     372G   353G  19.0G         -   88%  94%
    scsi-35000c5003017155f      -      -      -         -     -    -
    scsi-35000c500301715a7      -      -      -         -     -    -
When we try to delete files, we get the error message:
rm: cannot remove XXXXX: No space left on device
Is there a way to unlink files and free up inodes?
Is it possible to expand the existing zpool and filesystem for the MDT?
Is it possible to do a backup of just our MDT? If so, how?
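[Editor's note: on the expansion question, one common way to grow a ZFS pool in place is to add another mirror vdev. A hedged sketch, assuming two spare disks are available (the device names below are hypothetical, not from this system):]

```shell
# Grow the pool by adding a third mirror vdev (device names hypothetical).
zpool add digdug-meta mirror scsi-DISK1 scsi-DISK2

# ZFS has no fixed inode table: the MDT's inode count reported by
# df -i is derived from free space, so it grows as the pool does.
zpool list digdug-meta
```
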
--
Regards,
-liam
-There are uncountably more irrational fears than rational ones. -P. Dolan
Liam Forbes loforbes at alaska.edu ph: 907.450.8618
UAF GI Research Computing Systems Manager
hxxps://calendly.com/ualoforbes/30min