[lustre-discuss] ZFS-OST layout, number of OSTs

Mannthey, Keith keith.mannthey at intel.com
Thu Oct 26 09:13:00 PDT 2017


I have seen both small and large OSTs work; it just depends on what you want from the system (size/performance/manageability). Do benchmark both, as they will differ somewhat in overall performance.
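As a starting point, a file-per-process IOR run against each layout gives a quick streaming comparison; the rank count, sizes, and mount point below are only placeholders for whatever matches your hardware:

  # Compare layouts with a simple IOR streaming test (all values illustrative):
  # 32 ranks, 1 MiB transfers, 4 GiB per rank, one file per process.
  mpirun -np 32 ior -w -r -t 1m -b 4g -F -o /mnt/lustre/ior-test/testfile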

An L2ARC read cache can help some workloads. It takes multiple reads before data is promoted into the cache, so standard benchmarks (IOR and other streaming benchmarks) won't show much of a change.
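If you do want to test it, adding a cache device and watching the per-vdev I/O shows whether it is being used at all (pool and device names are just examples):

  # Attach an NVMe device to an existing pool as L2ARC (example names)
  zpool add ost0pool cache /dev/nvme0n1
  # Per-vdev I/O statistics, including the cache device, every 5 seconds
  zpool iostat -v ost0pool 5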

Thanks,
 Keith 

-----Original Message-----
From: lustre-discuss [mailto:lustre-discuss-bounces at lists.lustre.org] On Behalf Of Thomas Roth
Sent: Thursday, October 26, 2017 1:50 AM
To: Lustre Discuss <lustre-discuss at lists.lustre.org>
Subject: Re: [lustre-discuss] ZFS-OST layout, number of OSTs

On the other hand, if we gather three or four raidz2 vdevs into one zpool/OST, the loss of one raidz2 means the loss of a 120-160 TB OST, since ZFS cannot survive the loss of any top-level vdev.
Around here, this is usually the deciding argument. (Even temporarily taking down one OST for whatever repairs would take that much more data offline.)
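For the record, the layout in question looks roughly like this (pool name, disk names, and vdev width are placeholders; in practice mkfs.lustre --backfstype=zfs would handle the pool creation, but the vdev arithmetic is the same):

  # One zpool/OST built from three 10-disk raidz2 vdevs.
  # A third disk failure within any one raidz2 takes out that vdev,
  # and with it the whole pool/OST.
  zpool create ost0pool \
      raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
      raidz2 sdk sdl sdm sdn sdo sdp sdq sdr sds sdt \
      raidz2 sdu sdv sdw sdx sdy sdz sdaa sdab sdac sdad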


What is the general experience with having an L2ARC on additional disks?
In my test attempts I did not see much benefit under Lustre.
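In case others want to check the same thing: the kstat counters show directly whether the L2ARC ever gets hits under the workload (standard ZFS-on-Linux path):

  # L2ARC hit/miss/size counters under ZFS on Linux
  grep '^l2_' /proc/spl/kstat/zfs/arcstats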

With our type of hardware, we do not have room for one cache drive per (small) zpool; if there were only one or two zpools per box, this would be possible.

Regards
Thomas

On 10/24/2017 09:41 PM, Cory Spitz wrote:
> It’s also worth noting that with small OSTs it is much easier to bump into a full-OST situation. And specifically, if you singly stripe a file, the file size is limited by the size of that OST.
> 
> -Cory
> 
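On the quoted full-OST point, the usual tools apply; the mount point below is illustrative:

  # Free space per OST - small OSTs fill up individually
  lfs df -h /mnt/lustre
  # Stripe a file across all OSTs (-c -1) so its size is not
  # bounded by any single OST
  lfs setstripe -c -1 /mnt/lustre/bigfile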
_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

