[lustre-discuss] [EXTERNAL] ZFS and OST Space Difference
Mohr, Rick
mohrrf at ornl.gov
Tue Apr 6 13:34:04 PDT 2021
Makia,
The drive sizes are 7.6 TB, which translates to about 6.9 TiB (the unit that zpool uses for "T"). So the zpool size is just 10 x 6.9T = 69T, since zpool reports the total raw disk space available to the pool, before parity. The usable space (which is what df is reporting) should be more like 0.8 x 69T = 55T, since raidz2 leaves 8 data drives out of 10. I am not sure about the remaining discrepancy of about 3T. Maybe that is due to some ZFS and/or Lustre overhead?
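For anyone who wants to check the arithmetic, here is a quick sketch (assumes GNU bc; the numbers are the same ones quoted above):

$ # one 7.6 TB (decimal) drive expressed in TiB (binary)
$ echo '7.6 * 10^12 / 2^40' | bc -l
6.9121...
$ # raidz2 usable fraction: 8 data drives out of 10
$ echo '10 * 6.912 * 0.8' | bc -l
55.296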
--Rick
On 4/6/21, 3:49 PM, "lustre-discuss on behalf of Makia Minich" <lustre-discuss-bounces at lists.lustre.org on behalf of makia at systemfabricworks.com> wrote:
I believe this was discussed a while ago, but I was unable to find clear answers, so I'll re-ask, hopefully in a slightly different way.
On an OST, I have 30 drives, each at 7.6TB. I create 3 raidz2 zpools of 10 devices (ashift=12):
[root at lustre47b ~]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
oss55-0 69.9T 37.3M 69.9T - - 0% 0% 1.00x ONLINE -
oss55-1 69.9T 37.3M 69.9T - - 0% 0% 1.00x ONLINE -
oss55-2 69.9T 37.4M 69.9T - - 0% 0% 1.00x ONLINE -
[root at lustre47b ~]#
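(For context, a pool laid out like that would come from something along these lines; this is a sketch only, and the device paths are placeholders rather than the ones actually used:

$ zpool create -o ashift=12 oss55-0 raidz2 /dev/mapper/d{0..9}

ashift=12 tells ZFS to assume 4 KiB physical sectors, which also affects raidz allocation overhead.)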
Running mkfs.lustre against these (and mounting them as Lustre targets), I see:
[root at lustre47b ~]# df -h | grep ost
oss55-0/ost165 52T 27M 52T 1% /lustre/ost165
oss55-1/ost166 52T 27M 52T 1% /lustre/ost166
oss55-2/ost167 52T 27M 52T 1% /lustre/ost167
[root at lustre47b ~]#
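(The OSTs above would have been created with something like the following; a sketch only, with a placeholder fsname and MGS NID:

$ mkfs.lustre --ost --backfstype=zfs --fsname=lfs --index=165 --mgsnode=mgs@tcp oss55-0/ost165
$ mount -t lustre oss55-0/ost165 /lustre/ost165

df then reports the ZFS dataset's usable, post-parity space, not the pool's raw size.)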
Basically, we’re seeing a pretty dramatic loss in capacity (156TB vs 209.7TB, so a loss of about 50TB). Is there any insight into where this capacity is disappearing to? Is there some mkfs.lustre or zpool option I missed when creating these pools? Or is something just reporting slightly off, and that space really is there?
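(A few commands that can help narrow down where the space goes; a sketch, assuming ZFS on Linux:

$ zfs list -o name,used,avail oss55-0                      # usable space after parity
$ zfs get -r used,usedbydataset,usedbychildren oss55-0     # per-dataset breakdown
$ cat /sys/module/zfs/parameters/spa_slop_shift            # slop-space reservation

zfs list already subtracts raidz2 parity, and ZFS additionally holds back a slop-space reserve, so the pool's raw size, the dataset's available space, and df will all disagree to some degree by design.)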
Thanks.
—
Makia Minich
Chief Architect
System Fabric Works
"Fabric Computing that Works”
"Oh, I don't know. I think everything is just as it should be, y'know?”
- Frank Fairfield