[lustre-discuss] free space on ldiskfs vs. zfs

Christopher J. Morrone morrone2 at llnl.gov
Mon Aug 24 12:53:33 PDT 2015


If you provide the "zpool list -v" output it might give us a little 
clearer view of what you have going on.
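
For example (commands only; "lustre-ost1" is the pool name taken from
your zpool list output below):

  # zpool list -v lustre-ost1
  # zpool status lustre-ost1

zpool status shows how many disks are in each raidz2 vdev, and
"zpool list -v" breaks the raw capacity down per vdev, so we can see
how much of the 65T raw space is tied up in parity.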

Chris

On 08/19/2015 06:18 AM, Götz Waschk wrote:
> Dear Lustre experts,
>
> I have configured two different Lustre instances, both using Lustre
> 2.5.3, one with ldiskfs on RAID-6 hardware RAID and one using ZFS and
> RAID-Z2, using the same type of hardware. I was wondering why I have
> 24 TB less space available, when the same amount of space should be
> used for parity:
>
>   # lfs df
> UUID                   1K-blocks        Used   Available Use% Mounted on
> fs19-MDT0000_UUID       50322916      472696    46494784   1% /testlustre/fs19[MDT:0]
> fs19-OST0000_UUID    51923288320       12672 51923273600   0% /testlustre/fs19[OST:0]
> fs19-OST0001_UUID    51923288320       12672 51923273600   0% /testlustre/fs19[OST:1]
> fs19-OST0002_UUID    51923288320       12672 51923273600   0% /testlustre/fs19[OST:2]
> fs19-OST0003_UUID    51923288320       12672 51923273600   0% /testlustre/fs19[OST:3]
> filesystem summary:  207693153280       50688 207693094400   0% /testlustre/fs19
> UUID                   1K-blocks        Used   Available Use% Mounted on
> fs18-MDT0000_UUID       47177700      482152    43550028   1% /lustre/fs18[MDT:0]
> fs18-OST0000_UUID    58387106064  6014088200 49452733560  11% /lustre/fs18[OST:0]
> fs18-OST0001_UUID    58387106064  5919753028 49547068928  11% /lustre/fs18[OST:1]
> fs18-OST0002_UUID    58387106064  5944542316 49522279640  11% /lustre/fs18[OST:2]
> fs18-OST0003_UUID    58387106064  5906712004 49560109952  11% /lustre/fs18[OST:3]
> filesystem summary:  233548424256 23785095548 198082192080  11% /lustre/fs18
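
As a rough sanity check on those numbers (shell arithmetic only, using
the per-OST 1K-block counts from the two lfs df listings above):

  # echo $(( (58387106064 - 51923288320) / (1024 * 1024 * 1024) ))
  6
  # echo $(( 4 * (58387106064 - 51923288320) / (1024 * 1024 * 1024) ))
  24

so each ZFS OST reports roughly 6 TiB less than its ldiskfs counterpart,
and the four OSTs together account for the 24 TB difference.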
>
> fs18 is using ldiskfs, while fs19 is ZFS:
> # zpool list
> NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> lustre-ost1    65T  18,1M  65,0T     0%  1.00x  ONLINE  -
> # zfs list
> NAME               USED  AVAIL  REFER  MOUNTPOINT
> lustre-ost1       13,6M  48,7T   311K  /lustre-ost1
> lustre-ost1/ost1  12,4M  48,7T  12,4M  /lustre-ost1/ost1
>
>
> Any idea where my 6 TB per OST went?
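
One thing to keep in mind when comparing those two outputs: "zpool list"
reports raw pool capacity, including the space RAID-Z2 parity will
consume, while "zfs list" (and what Lustre shows through lfs df) reports
usable space after parity and ZFS's internal reservations. So 65T raw
versus 48.7T available is largely parity being subtracted rather than
space that has gone missing; how much of the remaining gap is allocation
overhead depends on the vdev layout, which is why the zpool list -v and
zpool status output would help. A couple of standard properties worth
checking as well (plain zpool/zfs commands, nothing Lustre-specific):

  # zpool get size,allocated,free lustre-ost1
  # zfs get used,available,referenced,reservation lustre-ost1/ost1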
>
> Regards, Götz Waschk
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>


