[lustre-discuss] Issues draining OSTs for decommissioning
Andreas Dilger
adilger at whamcloud.com
Thu Mar 7 00:51:19 PST 2024
It's almost certainly just internal files. You could mount as ldiskfs and run "ls -lR" to check.
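For example, something along these lines (device path and mount point are placeholders, and this assumes an ldiskfs-backed OST rather than ZFS):

  # Mount the OST read-only as ldiskfs and look at what is left on it
  mount -t ldiskfs -o ro /dev/ost0060_device /mnt/ost_check
  # The remaining inodes are typically internal files (e.g. last_rcvd,
  # the CONFIGS/ and O/ directories, quota files) rather than user data
  ls -lR /mnt/ost_check
  umount /mnt/ost_check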
Cheers, Andreas
> On Mar 6, 2024, at 22:23, Scott Wood via lustre-discuss <lustre-discuss at lists.lustre.org> wrote:
>
> Hi folks,
>
> Time to empty some OSTs to shut down some old arrays. I've been following the docs from https://doc.lustre.org/lustre_manual.xhtml#lustremaint.remove_ost and am emptying with "lfs find /mnt/lustre/ -obd lustre-OST0060 | lfs_migrate -y" (for the various OSTs). It's looking pretty good, but I do have a few questions:
>
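For reference, the per-OST sequence from that manual section is roughly the following; the filesystem name, OST index, and MDT index are placeholders, so check the exact parameter names against your Lustre version:

  # On the MDS: stop new objects from being allocated on the OST being drained
  lctl set_param osp.lustre-OST0060-osc-MDT0000.max_create_count=0
  # On a client: move the existing objects off that OST
  lfs find /mnt/lustre -obd lustre-OST0060 | lfs_migrate -y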
> Q1) I've dealt with a few edge cases, missed files, etc., and now "lfs find" and "rbh-find" both show that the OSTs have nothing left on them, but they pretty much all have 236 inodes still allocated. Is this just overhead?
>
> Q2) Also, one OST shows 237 inodes (lustre-OST0074_UUID shown below) but, again, "lfs find" says it's empty. Is that a concern?
>
> Q3) Lastly, this file system is under load. Am I safe to deactivate the OSTs while we're running, or should I wait until our next maintenance outage?
>
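Once an OST really is empty, the manual's permanent deactivation step is along these lines (run on the MGS; the filesystem name and OST index are placeholders):

  # Mark the OST inactive so the MDS and clients stop using it
  lctl conf_param lustre-OST0060.osc.active=0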
> For reference:
> [root@hpcpbs02 ~]# lfs df -i | sed -e 's/qimrb/lustre/'
> UUID                     Inodes      IUsed      IFree IUse% Mounted on
> ...
> lustre-OST0060_UUID    61002112        236    61001876    1% /mnt/lustre[OST:96]
> lustre-OST0061_UUID    61002112        236    61001876    1% /mnt/lustre[OST:97]
> lustre-OST0062_UUID    61002112        236    61001876    1% /mnt/lustre[OST:98]
> lustre-OST0063_UUID    61002112        236    61001876    1% /mnt/lustre[OST:99]
> lustre-OST0064_UUID    61002112        236    61001876    1% /mnt/lustre[OST:100]
> lustre-OST0065_UUID    61002112        236    61001876    1% /mnt/lustre[OST:101]
> lustre-OST0066_UUID    61002112        236    61001876    1% /mnt/lustre[OST:102]
> lustre-OST0067_UUID    61002112        236    61001876    1% /mnt/lustre[OST:103]
> lustre-OST0068_UUID    61002112        236    61001876    1% /mnt/lustre[OST:104]
> lustre-OST0069_UUID    61002112        236    61001876    1% /mnt/lustre[OST:105]
> lustre-OST006a_UUID    61002112        236    61001876    1% /mnt/lustre[OST:106]
> lustre-OST006b_UUID    61002112        236    61001876    1% /mnt/lustre[OST:107]
> lustre-OST006c_UUID    61002112        236    61001876    1% /mnt/lustre[OST:108]
> lustre-OST006d_UUID    61002112        236    61001876    1% /mnt/lustre[OST:109]
> lustre-OST006e_UUID    61002112        236    61001876    1% /mnt/lustre[OST:110]
> lustre-OST006f_UUID    61002112        236    61001876    1% /mnt/lustre[OST:111]
> lustre-OST0070_UUID    61002112        236    61001876    1% /mnt/lustre[OST:112]
> lustre-OST0071_UUID    61002112        236    61001876    1% /mnt/lustre[OST:113]
> lustre-OST0072_UUID    61002112        236    61001876    1% /mnt/lustre[OST:114]
> lustre-OST0073_UUID    61002112        236    61001876    1% /mnt/lustre[OST:115]
> lustre-OST0074_UUID    61002112        237    61001875    1% /mnt/lustre[OST:116]
> lustre-OST0075_UUID    61002112        236    61001876    1% /mnt/lustre[OST:117]
> lustre-OST0076_UUID    61002112        236    61001876    1% /mnt/lustre[OST:118]
> lustre-OST0077_UUID    61002112        236    61001876    1% /mnt/lustre[OST:119]
> ...
>
> Cheers!
> Scott
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org