[lustre-discuss] ZFS and OST Space Difference

Makia Minich makia at systemfabricworks.com
Tue Apr 6 12:48:21 PDT 2021


I believe this was discussed a while ago, but I was unable to find a clear answer, so I’ll re-ask it, hopefully in a slightly different way.

On an OSS, I have 30 drives, each 7.6TB. I create three raidz2 zpools of 10 devices each (ashift=12):

[root@lustre47b ~]# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
oss55-0  69.9T  37.3M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
oss55-1  69.9T  37.3M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
oss55-2  69.9T  37.4M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
[root@lustre47b ~]#
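
For context, each pool was created along these lines (the device names here are placeholders, not the actual ones used):

zpool create -o ashift=12 oss55-0 raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj

and likewise for oss55-1 and oss55-2 with the remaining 20 drives.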

After running mkfs.lustre against these (and mounting them as Lustre OSTs), I see:

[root@lustre47b ~]# df -h | grep ost
oss55-0/ost165             52T   27M   52T   1% /lustre/ost165
oss55-1/ost166             52T   27M   52T   1% /lustre/ost166
oss55-2/ost167             52T   27M   52T   1% /lustre/ost167
[root@lustre47b ~]#
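
For reference, the OSTs were formatted and mounted with something along these lines (the fsname, index, and MGS NID below are placeholders):

mkfs.lustre --ost --backfstype=zfs --fsname=lustre --index=165 \
    --mgsnode=10.0.0.1@o2ib oss55-0/ost165
mount -t lustre oss55-0/ost165 /lustre/ost165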

Basically, we’re seeing a pretty dramatic loss in capacity (156T usable vs. 209.7T raw, a loss of roughly 54T). Is there any insight into where this capacity is disappearing to? Is there some mkfs.lustre or zpool option I missed when creating these? Or is something just reporting slightly off and that space really is there?
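
Spelling out the arithmetic from the numbers above (units as reported by the tools):

    zpool list (raw):   3 x 69.9T = 209.7T
    df -h (usable):     3 x 52T   = 156T
    difference:         209.7T - 156T = ~53.7T (about 26% of raw)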

Thanks.

—

Makia Minich
Chief Architect
System Fabric Works
"Fabric Computing that Works”

"Oh, I don't know. I think everything is just as it should be, y'know?”
- Frank Fairfield
