[lustre-discuss] df shows wrong size of lustre file system (on all nodes).

Sid Young sid.young at gmail.com
Mon Oct 18 20:40:41 PDT 2021


I have some stability in my Lustre installation after many days of testing;
however, df -h now reports the size of the /home filesystem incorrectly.

After mounting /home I get:
[root@n04 ~]# df -h
10.140.90.42@tcp:/lustre  286T   59T  228T  21% /lustre
10.140.90.42@tcp:/home    191T  153T   38T  81% /home

Doing it again straight after, I get:

[root@n04 ~]# df -h
10.140.90.42@tcp:/lustre  286T   59T  228T  21% /lustre
10.140.90.42@tcp:/home     48T   40T  7.8T  84% /home

The 4 OSTs report as active and present:

[root@n04 ~]# lfs df
....
UUID                   1K-blocks        Used   Available Use% Mounted on
home-MDT0000_UUID     4473805696    41784064  4432019584   1% /home[MDT:0]
home-OST0000_UUID    51097753600 40560842752 10536908800  80% /home[OST:0]
home-OST0001_UUID    51097896960 42786978816  8310916096  84% /home[OST:1]
home-OST0002_UUID    51097687040 38293322752 12804362240  75% /home[OST:2]
home-OST0003_UUID    51097765888 42293640192  8804123648  83% /home[OST:3]

filesystem_summary:  204391103488 163934784512 40456310784  81% /home

[root@n04 ~]#
[root@n04 ~]# lfs osts
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID ACTIVE
3: lustre-OST0003_UUID ACTIVE
4: lustre-OST0004_UUID ACTIVE
5: lustre-OST0005_UUID ACTIVE
OBDS:
0: home-OST0000_UUID ACTIVE
1: home-OST0001_UUID ACTIVE
2: home-OST0002_UUID ACTIVE
3: home-OST0003_UUID ACTIVE
[root@n04 ~]#
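
A rough cross-check (my own arithmetic, not a diagnosis): converting the 1K-block
figures from lfs df above into TiB, the filesystem_summary comes out to roughly
190 TiB, which matches the first df -h output (191T), while the odd second output
(48T / 40T / 7.8T) is very close to the figures for a single OST (home-OST0001).
A quick awk sketch with the numbers copied from the lfs df output:

[root@n04 ~]# awk 'BEGIN {
    tib = 1024 ^ 3                                            # 1K-blocks per TiB
    printf "summary size : %.1f TiB\n", 204391103488 / tib    # ~190.4 -> "191T"
    printf "OST0001 size : %.1f TiB\n", 51097896960 / tib     # ~47.6  -> "48T"
    printf "OST0001 used : %.1f TiB\n", 42786978816 / tib     # ~39.8  -> "40T"
    printf "OST0001 avail: %.1f TiB\n", 8310916096 / tib      # ~7.7   -> "7.8T"
}'

If that holds, the statfs that df sees sometimes appears to reflect a single OST
rather than the whole filesystem.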

Has anyone seen this before? Reboots and remounts do not appear to change the
values. The ZFS pool reports as online and a scrub returns 0 errors.
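
For reference, the raw per-OST values the client sees can also be dumped
directly; the parameter names below are what I would expect on a stock client
and may differ between versions:

[root@n04 ~]# lctl get_param osc.home-OST*.kbytestotal osc.home-OST*.kbytesfree
[root@n04 ~]# lfs df -h /home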

Sid Young