[lustre-discuss] Lustre Sizing

ANS ans3456 at gmail.com
Mon Dec 31 22:21:24 PST 2018


Thanks, Jeff. Currently I am running:

modinfo zfs | grep version
version:        0.8.0-rc2
rhelversion:    7.4

lfs --version
lfs 2.12.0

And this is a fresh install. Is there any other way to confirm that the
complete zpool LUN has been allocated to Lustre alone?
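
For reference, this is how I am cross-checking on the OSS. The osd-zfs
parameter names are standard Lustre; running the zfs commands against the
pool root is just an assumption, since the exact dataset layout depends on
how mkfs.lustre was invoked:

# raw pool capacity vs. what the datasets report
zpool list lustre-data
zfs list -o name,used,available,referenced -r lustre-data

# what the Lustre OSD layer itself reports (values in KB)
lctl get_param osd-zfs.*.kbytestotal osd-zfs.*.kbytesavail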

Thanks,
ANS



On Tue, Jan 1, 2019 at 11:44 AM Jeff Johnson <jeff.johnson at aeoncomputing.com>
wrote:

> ANS,
>
> Lustre on top of ZFS has to estimate capacities, and the estimate is
> fairly far off while the OSTs are new and empty. As objects are written
> to the OSTs and capacity is consumed, the reported capacity becomes more
> accurate (a quick way to watch this is sketched after the quoted thread).
> At the beginning the estimate is so far off that it can look like an
> error.
>
> What version are you running? Some patches have been added to make this
> calculation more accurate.
>
> —Jeff
>
> On Mon, Dec 31, 2018 at 22:08 ANS <ans3456 at gmail.com> wrote:
>
>> Dear Team,
>>
>> I am trying to configure Lustre with ZFS as the backend file system on
>> 2 servers in HA. After compiling and creating the zpools:
>>
>> zpool list
>> NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
>> lustre-data   54.5T  25.8M  54.5T        -     16.0E     0%     0%  1.00x    ONLINE  -
>> lustre-data1  54.5T  25.1M  54.5T        -     16.0E     0%     0%  1.00x    ONLINE  -
>> lustre-data2  54.5T  25.8M  54.5T        -     16.0E     0%     0%  1.00x    ONLINE  -
>> lustre-data3  54.5T  25.8M  54.5T        -     16.0E     0%     0%  1.00x    ONLINE  -
>> lustre-meta    832G  3.50M   832G        -     16.0E     0%     0%  1.00x    ONLINE  -
>>
>> and when mounted to client
>>
>> lfs df -h
>> UUID                       bytes        Used   Available Use% Mounted on
>> home-MDT0000_UUID         799.7G        3.2M      799.7G   0% /home[MDT:0]
>> home-OST0000_UUID          39.9T       18.0M       39.9T   0% /home[OST:0]
>> home-OST0001_UUID          39.9T       18.0M       39.9T   0% /home[OST:1]
>> home-OST0002_UUID          39.9T       18.0M       39.9T   0% /home[OST:2]
>> home-OST0003_UUID          39.9T       18.0M       39.9T   0% /home[OST:3]
>>
>> filesystem_summary:       159.6T       72.0M      159.6T   0% /home
>>
>> So out of a total of 54.5T x 4 = 218T of raw pool capacity, I am getting
>> only 159.6T usable (each 54.5T pool shows up as a 39.9T OST, roughly
>> 73%). Can anyone explain this discrepancy?
>>
>> Also, from a performance perspective, which ZFS and Lustre parameters
>> should be tuned? (Some starting points are sketched after the quoted
>> thread.)
>>
>> --
>> Thanks,
>> ANS.
>> _______________________________________________
>> lustre-discuss mailing list
>> lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
> --
> ------------------------------
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.johnson at aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001   f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>
> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>
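
Following Jeff's explanation in the quoted thread, a quick way to watch the
estimate converge as data lands on the OSTs (the file name and size below
are arbitrary):

# on a client: write some data, then re-check the reported capacity
dd if=/dev/zero of=/home/testfile bs=1M count=10240
lfs df -h /home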

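On the tuning question quoted above, some commonly cited ZFS-side starting
points for Lustre pools (illustrative only; each setting should be
validated against the actual workload and the Lustre/ZFS documentation):

# set on the pool root so child datasets inherit them
zfs set atime=off lustre-data
zfs set xattr=sa lustre-data
zfs set dnodesize=auto lustre-data
zfs set compression=lz4 lustre-data
zfs set recordsize=1M lustre-data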

-- 
Thanks,
ANS.