[lustre-discuss] [EXTERNAL] Re: ZFS and OST Space Difference

Makia Minich makia at systemfabricworks.com
Thu Apr 8 02:01:04 PDT 2021


Thanks to Rick, Raj, and Laura for helping me understand all of this a bit more.

—

Makia Minich
Chief Architect
System Fabric Works
"Fabric Computing that Works”

"Oh, I don't know. I think everything is just as it should be, y'know?”
- Frank Fairfield

> On Apr 6, 2021, at 5:48 PM, Mohr, Rick via lustre-discuss <lustre-discuss at lists.lustre.org> wrote:
> 
> That sounds about right.  69T x 0.76 = 52.44T
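> 
> A rough breakdown of where that ~0.76 factor comes from (back-of-the-envelope estimates only):
> 
>    echo "69.9 * 0.8" | bc -l        # ~55.9T left after raidz2 parity (2 of 10 disks)
>    echo "55.9 * 31 / 32" | bc -l    # ~54.2T after the default 1/32 slop reservation
>    # the remaining couple of TB is presumably the ZFS/Lustre overhead mentioned below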
> 
> Laura: Thanks for the info about SPA slop space.
> 
> Raj: Thanks for that URL.  It looks very handy.
> 
> --Rick
> 
> On 4/6/21, 5:19 PM, "lustre-discuss on behalf of Saravanaraj Ayyampalayam via lustre-discuss" <lustre-discuss-bounces at lists.lustre.org on behalf of lustre-discuss at lists.lustre.org> wrote:
> 
>    I think you are correct. ‘zpool list’ shows raw space; ‘zfs list’ shows the space left after parity, reservations, etc. In a 10-disk raidz2, roughly 24% of the raw space goes to parity and allocation overhead. This website helps with calculating ZFS capacity: https://wintelguy.com/zfs-calc.pl
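> 
>    For example, the two views can be compared side by side on one of the pools shown below:
> 
>        zpool list oss55-0    # raw pool space, parity included
>        zfs list oss55-0      # usable space, after parity and reservations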
> 
>    -Raj
> 
> 
>    On Apr 6, 2021, at 4:56 PM, Laura Hild via lustre-discuss <lustre-discuss at lists.lustre.org> wrote:
> 
>> I am not sure about the discrepancy of 3T.  Maybe that is due to some ZFS and/or Lustre overhead?
> 
>    Slop space?
> 
>       https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#spa-slop-shift
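> 
>    With the default spa_slop_shift of 5, ZFS holds back roughly 1/32 of the pool so it can never be filled completely; a quick sanity check (the 55.9T figure is an estimate from the numbers below):
> 
>        cat /sys/module/zfs/parameters/spa_slop_shift    # 5 by default -> 1/32 reserved
>        echo "55.9 / 32" | bc -l                         # ~1.75T of slop on a ~55.9T filesystem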
> 
>    -Laura
> 
> 
> 
>    ________________________________________
>    From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Mohr, Rick via lustre-discuss <lustre-discuss at lists.lustre.org>
>    Sent: Tuesday, 6 April 2021 16:34
>    To: Makia Minich <makia at systemfabricworks.com>; lustre-discuss at lists.lustre.org <lustre-discuss at lists.lustre.org>
>    Subject: Re: [lustre-discuss] [EXTERNAL] ZFS and OST Space Difference 
> 
>    Makia,
> 
>    The drive sizes are 7.6 TB, which translates to about 6.9 TiB (the unit zpool uses for "T").  So the zpool size is just 10 x 6.9T = 69T, since zpool shows the total amount of disk space available to the pool.  The usable space (which is what df is reporting) should be more like 0.8 x 69T = 55T.  I am not sure about the discrepancy of 3T.  Maybe that is due to some ZFS and/or Lustre overhead?
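> 
>    Spelled out with the numbers above (approximate):
> 
>        echo "7.6 * 10^12 / 2^40" | bc -l    # ~6.9 TiB per 7.6 TB drive (the "T" zpool reports)
>        echo "10 * 6.9" | bc -l              # ~69T of raw space per 10-disk pool
>        echo "0.8 * 69" | bc -l              # ~55T usable after raidz2 parity (8 data disks of 10)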
> 
>    --Rick
> 
>    On 4/6/21, 3:49 PM, "lustre-discuss on behalf of Makia Minich" <lustre-discuss-bounces at lists.lustre.org on behalf of makia at systemfabricworks.com> wrote:
> 
>        I believe this was discussed a while ago, but I was unable to find clear answers, so I’ll re-ask, hopefully in a slightly different way.
>        On an OSS, I have 30 drives, each at 7.6TB. I create 3 raidz2 zpools of 10 devices each (ashift=12):
> 
>        [root at lustre47b ~]# zpool list
>        NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
>        oss55-0  69.9T  37.3M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
>        oss55-1  69.9T  37.3M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
>        oss55-2  69.9T  37.4M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
>        [root at lustre47b ~]#
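> 
>        (For reference, each pool was created along roughly these lines; the device names here are just placeholders:
> 
>            zpool create -o ashift=12 oss55-0 raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj
> 
>        and likewise for oss55-1 and oss55-2.)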
> 
> 
>        Running mkfs.lustre against these (and mounting the Lustre filesystem), I see:
> 
>        [root at lustre47b ~]# df -h | grep ost
>        oss55-0/ost165             52T   27M   52T   1% /lustre/ost165
>        oss55-1/ost166             52T   27M   52T   1% /lustre/ost166
>        oss55-2/ost167             52T   27M   52T   1% /lustre/ost167
>        [root at lustre47b ~]#
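> 
>        (The mkfs.lustre step for each pool was roughly of this form; the fsname and MGS NID below are placeholders:
> 
>            mkfs.lustre --ost --backfstype=zfs --fsname=FSNAME --index=165 --mgsnode=MGSNID oss55-0/ost165
> 
>        and similarly for ost166 and ost167.)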
> 
> 
>        Basically, we’re seeing a pretty dramatic loss in capacity (156TB vs 209.7TB, so a loss of about 50TB). Is there any insight on where this capacity is disappearing to? Is there some mkfs.lustre or zpool option I missed when creating these? Or is something just reporting slightly off and that space really is there?
> 
>        Thanks.
> 
> 
>        Makia Minich
> 
>        Chief Architect
> 
>        System Fabric Works
>        "Fabric Computing that Works”
> 
>        "Oh, I don't know. I think everything is just as it should be, y'know?”
>        - Frank Fairfield
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


