[Lustre-discuss] [wc-discuss] Bad reporting inodes free
Enrico Tagliavini
enrico.tagliavini at gmail.com
Thu Sep 27 05:26:39 PDT 2012
Disclaimer: I'm not 100% sure this guess is correct, so please correct me
if I'm wrong :).
The number of available inodes is not limited only by the MDT size. Your
data physically lives on the OSTs, which run ldiskfs (a modified
ext3/ext4), and ldiskfs has the same inode limits as regular ext4. It is
true that you cannot create more inodes than the MDT allows, but the OSTs
can also run out of inodes even while the MDT is far from full. In other
words, you have two limiting factors for inodes: the total number of
inodes supported by your MDT(s), and the sum of the inodes (objects)
across your OSTs.
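As a quick sanity check on the numbers in the quoted `lfs df -i` output
below, here is a back-of-the-envelope sketch (plain arithmetic, not Lustre
code) showing how lopsided the two limits are in this case:

```python
# Figures taken from the quoted `lfs df -i` output (a sketch, not Lustre code):
# the MDT has ~954M free inodes, while each OST has only ~1.25M free objects.

mdt_free = 954_521_369  # IFree of cetafs-MDT0000_UUID

ost_free = [  # IFree of cetafs-OST0000 .. cetafs-OST0011
    1251067, 1250748, 1250720, 1250658, 1251099, 1250511,
    1250902, 1251149, 1250861, 1251129, 1250386, 1250952,
    1250892, 1250944, 1251141, 1249829, 1250926, 1250604,
]

total_ost_free = sum(ost_free)  # ~22.5M free objects across 18 OSTs
print(f"MDT free inodes:        {mdt_free:>12,}")
print(f"Total free OST objects: {total_ost_free:>12,}")
print(f"Min free on one OST:    {min(ost_free):>12,}")
```

The MDT could still index hundreds of millions of new files, but the OSTs
collectively have only about 22.5 million free objects left, so the OST
side is clearly the binding constraint here.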
The solution is to add more OSTs, or to reformat the existing ones with
more inodes. I don't know whether the inode count can be changed without
reformatting; if I recall correctly, in ext4 it is not possible.
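For reference, the inode count an ldiskfs/ext4 OST gets at format time is
governed by the bytes-per-inode ratio (mke2fs' `-i` option, which can be
passed via `mkfs.lustre --mkfsoptions`), so a rough estimate is simply the
device size divided by that ratio. A sketch of the arithmetic; the 2 TiB
size and the ratios are illustrative assumptions, not this cluster's
actual values:

```python
def inode_count(device_bytes: int, bytes_per_inode: int) -> int:
    """Approximate ext4/ldiskfs inode count for a given mke2fs -i ratio."""
    return device_bytes // bytes_per_inode

# Illustrative only: a hypothetical 2 TiB OST at a few common -i ratios.
two_tib = 2 * 1024**4
for ratio in (16384, 65536, 1024 * 1024):
    print(f"-i {ratio:>7}: ~{inode_count(two_tib, ratio):,} inodes")
```

A smaller ratio yields more inodes at the cost of space overhead, which is
why an OST that mostly stores small files can exhaust its inodes long
before its blocks.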
Regards
On Thu, Sep 27, 2012 at 2:17 PM, Alfonso Pardo <alfonso.pardo at ciemat.es>wrote:
> Hello,
>
> When I run "df -i" on my clients, I get 95% inodes used (5% inodes free):
>
> Filesystem                           Inodes    IUsed   IFree IUse% Mounted on
> lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248   95% /mnt/data
>
>
>
> But if I run "lfs df -i" I get:
>
> UUID                   Inodes     IUsed     IFree IUse% Mounted on
> cetafs-MDT0000_UUID 975470592  20949223 954521369    2% /mnt/data[MDT:0]
> cetafs-OST0000_UUID  19073280  17822213   1251067   93% /mnt/data[OST:0]
> cetafs-OST0001_UUID  19073280  17822532   1250748   93% /mnt/data[OST:1]
> cetafs-OST0002_UUID  19073280  17822560   1250720   93% /mnt/data[OST:2]
> cetafs-OST0003_UUID  19073280  17822622   1250658   93% /mnt/data[OST:3]
> cetafs-OST0004_UUID  19073280  17822181   1251099   93% /mnt/data[OST:4]
> cetafs-OST0005_UUID  19073280  17822769   1250511   93% /mnt/data[OST:5]
> cetafs-OST0006_UUID  19073280  17822378   1250902   93% /mnt/data[OST:6]
> cetafs-OST0007_UUID  19073280  17822131   1251149   93% /mnt/data[OST:7]
> cetafs-OST0008_UUID  19073280  17822419   1250861   93% /mnt/data[OST:8]
> cetafs-OST0009_UUID  19073280  17822151   1251129   93% /mnt/data[OST:9]
> cetafs-OST000a_UUID  19073280  17822894   1250386   93% /mnt/data[OST:10]
> cetafs-OST000b_UUID  19073280  17822328   1250952   93% /mnt/data[OST:11]
> cetafs-OST000c_UUID  19073280  17822388   1250892   93% /mnt/data[OST:12]
> cetafs-OST000d_UUID  19073280  17822336   1250944   93% /mnt/data[OST:13]
> cetafs-OST000e_UUID  19073280  17822139   1251141   93% /mnt/data[OST:14]
> cetafs-OST000f_UUID  19073280  17823451   1249829   93% /mnt/data[OST:15]
> cetafs-OST0010_UUID  19073280  17822354   1250926   93% /mnt/data[OST:16]
> cetafs-OST0011_UUID  19073280  17822676   1250604   93% /mnt/data[OST:17]
>
> filesystem summary: 975470592  20949223 954521369    2% /mnt/data
>
> I have a 2 TB MDT, of which only 87 GB is used.
>
>
> Any suggestion?
>
> --
>
> Alfonso Pardo Díaz
> Researcher / System Administrator at CETA-Ciemat
> c/ Sola nº 1; 10200 Trujillo, ESPAÑA
> Tel: +34 927 65 93 17  Fax: +34 927 32 32 37
> <http://www.ceta-ciemat.es/>
> ---------------------------- Disclaimer: This message and its attached
> files are intended exclusively for their recipients and may contain
> confidential information. If you received this e-mail in error, you are
> hereby notified that any dissemination, copying, or disclosure of this
> communication is strictly prohibited and may be unlawful. In this case,
> please notify us by reply and delete this email and its contents
> immediately. ----------------------------
>