[lustre-discuss] Lustre Server Sizing

Indivar Nair indivar.nair at techterra.in
Thu Jul 23 03:23:43 PDT 2015


Thanks for the input, everyone.

At another site (see my other mail, Subject: Speeding up Recovery), I have
two gateway nodes with Samba+CTDB; they are working fine and are quite
stable. I don't have NFS on them, though.

Andreas, I am considering RAID60 to balance space and speed (they currently
have RAID50).
I will consider adding more OSSs (though budget is a constraint), as you
suggested.
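
For reference, here is a quick sanity check of the arithmetic behind the two
RAID60 options quoted below. It assumes each 10-disk RAID6 leg is 8 data + 2
parity disks (my reading of the "10+10" / "10+10+10+10" notation), 720 disks
in total, and 3 OSSs:

# Sanity check of the RAID60 geometry in Option A / Option B below.
# Assumption: each 10-disk RAID6 leg holds 8 data + 2 parity disks.

def raid60(legs, chunk_kb, total_disks=720, oss_count=3):
    data_disks = legs * 8                    # 8 data disks per 10-disk leg
    disks_per_array = legs * 10
    arrays = total_disks // disks_per_array  # one OST per array
    return {
        "stripe_width_MB": data_disks * chunk_kb / 1024,
        "arrays": arrays,
        "osts_per_oss": arrays // oss_count,
        "osts_per_oss_on_failover": arrays // (oss_count - 1),
    }

print("Option A:", raid60(legs=2, chunk_kb=64))    # 1.0 MB, 36 arrays, 12 / 18
print("Option B:", raid60(legs=4, chunk_kb=128))   # 4.0 MB, 18 arrays, 6 / 9

With those assumptions, Option A gives a 1MB full-stripe write per OST and
Option B a 4MB one, which is why Option B pairs naturally with 4MB RPC and I/O.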

I was thinking of connecting all clients through the gateways to speed up
recovery after a failover.
In my other setup (MGS and MDT on the same server, 300 clients, Lustre 2.4),
recovery takes at least twenty minutes.

If I limit the clients to just the 3 gateway nodes, I expect the failover
time to drop to a couple of minutes. I do plan to keep the MGS and MDT on
separate boxes this time, but I want to be doubly sure that the failover
won't take too much time.

Do correct me if I am wrong, and let me know if there are other ways to
reduce the recovery time.
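
In case it is useful, below is a minimal sketch of how I plan to watch
recovery progress on the servers after a failover. It assumes the usual lctl
recovery_status parameters (mdt.*.recovery_status on the MDS,
obdfilter.*.recovery_status on the OSSs); do point out if those names are
different on 2.4.

# Minimal sketch: watch Lustre recovery progress on a server after failover.
# Assumes 'lctl get_param' and the standard recovery_status parameters
# (mdt.*.recovery_status on the MDS, obdfilter.*.recovery_status on an OSS).
import subprocess
import time

def recovery_status(pattern="*.*.recovery_status"):
    result = subprocess.run(["lctl", "get_param", pattern],
                            capture_output=True, text=True, check=False)
    return result.stdout

def wait_for_recovery(poll_seconds=30):
    while True:
        status = recovery_status()
        print(status)
        # Each target reports "status: RECOVERING" until enough clients have
        # reconnected and replayed their requests, then "status: COMPLETE".
        if "RECOVERING" not in status:
            print("No targets in recovery.")
            return
        time.sleep(poll_seconds)

if __name__ == "__main__":
    wait_for_recovery()

The idea is simply that with only the 3 gateway nodes as Lustre clients, the
recovery window should close as soon as those few clients reconnect, instead
of waiting on hundreds of workstations.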

Regards,


Indivar Nair


On Wed, Jul 22, 2015 at 8:03 AM, Patrick Farrell <paf at cray.com> wrote:

> Note the other email also seemed to suggest that multiple NFS exports of
> Lustre wouldn't work.  I don't think that's the case, as we have this sort
> of setup at a number of our customers without particular trouble.  In the
> abstract, I could see the possibility of some caching errors between
> different clients, but that would be only namespace stuff, not data.  And I
> think in practice that's ok.
>
> But regardless, as Andreas said, for the Linux clients, Lustre directly
> will give much better results.
> ________________________________________
> From: lustre-discuss [lustre-discuss-bounces at lists.lustre.org] on behalf
> of Dilger, Andreas [andreas.dilger at intel.com]
> Sent: Tuesday, July 21, 2015 6:59 PM
> To: Indivar Nair
> Cc: hpdd-discuss; lustre-discuss
> Subject: Re: [lustre-discuss] Lustre Server Sizing
>
> Having only 3 OSS will limit the performance you can get, and having so
> many OSTs on each OSS will give sub-optimal performance. 4-6 OSTs/OSS is
> more reasonable.
>
> It also isn't clear why you want RAID-60 instead of just RAID-10?
>
> Finally, for Linux clients it is much better to use direct Lustre access
> instead of NFS as mentioned in another email.
>
> Cheers, Andreas
>
> On Jul 21, 2015, at 08:58, Indivar Nair <indivar.nair at techterra.in> wrote:
>
> Hi ...,
>
> One of our customers has a 3 x 240 Disk SAN Storage Array and would like
> to convert it to Lustre.
>
> They have around 150 workstations and around 200 compute (render) nodes.
> The file sizes they generally work with are:
> 1 to 1.5 million files (images) of 10-20MB in size,
> and a few thousand files of 500-1000MB in size.
>
> Almost 50% of the infra is on MS Windows or Apple Macs.
>
> I was thinking of the following configuration -
> 1 MDS
> 1 Failover MDS
> 3 OSS (failover to each other)
> 3 NFS+CIFS Gateway Servers
> FDR Infiniband backend network (to connect the Gateways to Lustre)
> Each Gateway Server will have 8 x 10GbE Frontend Network (connecting the
> clients)
>
> Option A
>     10+10 Disk RAID60 Array with 64KB Chunk Size i.e. 1MB Stripe Width
>     720 Disks / (10+10) = 36 Arrays.
>     12 OSTs per OSS
>     18 OSTs per OSS in case of Failover
>
> Option B
>     10+10+10+10 Disk RAID60 Array with 128KB Chunk Size i.e. 4MB Stripe
> Width
>     720 Disks / (10+10+10+10) = 18 Arrays
>     6 OSTs per OSS
>     9 OSTs per OSS in case of Failover
>     4MB RPC and I/O
>
> Questions
> 1. Would it be better to let Lustre do most of the striping / file
> distribution (as in Option A), or would it be better to let the RAID
> controllers do it (as in Option B)?
>
> 2. Will Option B allow us to use less CPU/RAM than Option A?
>
> Regards,
>
>
> Indivar Nair
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>

