[lustre-discuss] OST partition sizes

Christopher J. Morrone morrone2 at llnl.gov
Tue Apr 28 13:37:18 PDT 2015


We like to keep the OSTs as large as is reasonable, because each OST 
uses up a fixed amount of RAM on _every_ Lustre client node.  At our 
scale, we are talking hundreds of megabytes lost on every client when we 
mount Lustre.  With one older version of Lustre I think we lost over 
2GiB of RAM per client!  That memory usage was fixed though, and we went 
back to having merely hundreds of MB used.

At LLNL we currently use 72TB OSTs in most places.

Our basic building block is a 4U NetApp box with 60 drives and two 
hardware raid controllers.  To that we connect two active OSS nodes with 
failover capability between the two.  Under normal (non-failover) 
conditions, each OSS will serve half of the drives.

We have the NetApp controllers configured to export 6 LUNs, each 
consisting of 10 drives in a RAID6 configuration.

The drives are 3TB in size each.  With RAID6, we lose the equivalent of 
2 drives to redundancy overhead in each LUN.  Therefore each LUN is 3TB 
* 8 = 24TB.

We then have ZFS combine three of the LUNs into a single pool: 3 * 24TB 
= 72TB.
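The capacity math above can be sketched as a few lines of Python (a 
hypothetical illustration of the arithmetic described, not anything LLNL 
actually runs; the function and variable names are made up for this 
example):

```python
# Hypothetical sketch of the OST capacity math described above.
def raid6_lun_capacity_tb(drives_per_lun, drive_tb):
    """RAID6 loses the equivalent of 2 drives to parity per LUN."""
    return (drives_per_lun - 2) * drive_tb

# 10-drive RAID6 LUNs built from 3TB drives: 8 data drives * 3TB = 24TB
lun_tb = raid6_lun_capacity_tb(drives_per_lun=10, drive_tb=3)

# ZFS combines three LUNs into one pool, which becomes one OST: 72TB
ost_tb = 3 * lun_tb

print(lun_tb, ost_tb)  # 24 72
```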

We like to keep all of our hardware as uniform as possible, so we've 
been doing the 72TB OST size for over two years now.  It is likely that 
our next IO storage solution will have larger OSTs.

Chris

On 04/28/2015 01:01 PM, Andrus, Brian Contractor wrote:
> Quick question/survey:
>
> What is the partition size folks use for their OSTs and why?
>
> Brian Andrus
>
> ITACS/Research Computing
>
> Naval Postgraduate School
>
> Monterey, California
>
> voice: 831-656-6238
>
>
>
