[lustre-discuss] [HPDD-discuss] Lustre Server Sizing
jeff.johnson at aeoncomputing.com
Tue Jul 21 12:53:36 PDT 2015
Since your CIFS or NFS gateways operate as Lustre clients, there can be
issues with running multiple NFS or CIFS gateway machines front-ending the
same Lustre filesystem. As Lustre clients they have no file-locking issues of
their own, but the NFS and CIFS caching and multi-client file-access
mechanics don't interface with Lustre's file-locking mechanics. Perhaps
that has changed recently; a developer on the list may be able to comment on
developments there. So while you could provide client access through
multiple NFS or CIFS gateway machines, there would not be much in the way of
file-locking protection. There is a way to configure pCIFS with CTDB and
get close to what you envision with Samba. I did that configuration once as
a proof of concept (no valuable data). It is a *very* complex configuration,
and based on the state of the software when I did it, I wouldn't call it a
production-grade environment.
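Roughly, the moving parts for the clustered Samba (CTDB) setup look like the
following. This is only a minimal sketch from memory, not a tested recipe; the
IPs and paths are illustrative, and it assumes the gateways mount Lustre with
-o flock so CTDB's recovery lock behaves:

    # /etc/ctdb/nodes -- one private IP per gateway node
    10.0.0.1
    10.0.0.2
    10.0.0.3

    # /etc/sysconfig/ctdb -- recovery lock must live on the shared Lustre fs
    CTDB_RECOVERY_LOCK=/lustre/.ctdb/reclock

    # smb.conf additions on every gateway
    [global]
        clustering = yes

The recovery-lock file is what ties CTDB's node coordination to the shared
filesystem, which is why the flock mount option matters here.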
As I said before, my understanding may be a year out of date and someone
else could speak to the current state of things. Hopefully someone with more
recent experience will chime in.
On Tue, Jul 21, 2015 at 10:26 AM, Indivar Nair <indivar.nair at techterra.in> wrote:
> Hi Scott,
> Each of the 3 SAN storages (240 disks each) has its own NAS header (NAS
> gateway).
> However, even with 240 10K RPM disks in RAID50, each NAS header only
> provides around 1.2 - 1.4GB/s.
> There is no clustered file system; each NAS header has its own file system.
> A custom mechanism presents the 3 file systems as a single namespace.
> But the directories have to be spread across them manually for
> load-balancing.
> As you can guess, this doesn't work most of the time.
> Often, most of the compute nodes end up accessing a single NAS header,
> overloading it.
> The customer wants *at least* 9GB/s throughput from a single file-system.
> But I think that if we architect the Lustre storage correctly, with this
> many disks, we should get at least 18GB/s throughput, if not more.
> Indivar Nair
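A rough sanity check on that 18GB/s figure, using guessed per-component
numbers rather than anything measured:

    3 OSS   x ~6 GB/s usable per FDR IB link        ~= 18 GB/s network ceiling
    36 OSTs x ~0.5 GB/s sustained per RAID60 array  ~= 18 GB/s back-end ceiling

So 18GB/s looks plausible on paper, but only if LNET, the clients and the
RAID controllers all hold up; the per-OST figure in particular is a guess for
10K disks behind RAID60.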
> On Tue, Jul 21, 2015 at 10:15 PM, Scott Nolin <scott.nolin at ssec.wisc.edu> wrote:
>> An important question is what performance they have now, and what they
>> expect after converting to Lustre. Or, more basically, what are they
>> looking for in general in making the change?
>> The performance requirements may help drive your OSS count, for example,
>> or the interconnect, and all kinds of other things.
>> Also, I don't have a lot of experience with NFS/CIFS gateways, but that is
>> perhaps its own topic and may need some close attention.
>> On 7/21/2015 10:57 AM, Indivar Nair wrote:
>>> Hi ...,
>>> One of our customers has a 3 x 240 Disk SAN Storage Array and would like
>>> to convert it to Lustre.
>>> They have around 150 Workstations and around 200 Compute (Render) nodes.
>>> The File Sizes they generally work with are -
>>> 1 to 1.5 million files (images) of 10-20MB in size.
>>> And a few thousand files of 500-1000MB in size.
>>> Almost 50% of the infrastructure is on MS Windows or Apple Macs.
>>> I was thinking of the following configuration -
>>> 1 MDS
>>> 1 Failover MDS
>>> 3 OSS (failover to each other)
>>> 3 NFS+CIFS Gateway Servers
>>> FDR Infiniband backend network (to connect the Gateways to Lustre)
>>> Each Gateway Server will have 8 x 10GbE frontend network ports
>>> (connecting the clients)
>>> *Option A*
>>> 10+10 Disk RAID60 Array with 64KB Chunk Size i.e. 1MB Stripe Width
>>> 720 Disks / (10+10) = 36 Arrays.
>>> 12 OSTs per OSS
>>> 18 OSTs per OSS in case of Failover
>>> *Option B*
>>> 10+10+10+10 Disk RAID60 Array with 128KB Chunk Size i.e. 4MB Stripe Width
>>> 720 Disks / (10+10+10+10) = 18 Arrays
>>> 6 OSTs per OSS
>>> 9 OSTs per OSS in case of Failover
>>> 4MB RPC and I/O
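For reference, the stripe-width arithmetic above works out if each RAID6 leg
is 8 data + 2 parity disks, and the Lustre-side settings that would match
Option B look roughly like this (illustrative values, not a tested recipe,
and assuming a Lustre version new enough for 4MB bulk RPCs):

    Option A: 2 legs x 8 data disks x 64KB chunk  = 1MB full stripe
    Option B: 4 legs x 8 data disks x 128KB chunk = 4MB full stripe

    lfs setstripe -S 4M -c 1 /lustre/projects      # match file stripe size to the RAID stripe
    lctl set_param osc.*.max_pages_per_rpc=1024    # 1024 x 4KB pages = 4MB client RPCs

/lustre/projects is just a placeholder directory; the same idea with -S 1M
would apply to Option A.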
>>> 1. Would it be better to let Lustre do most of the striping / file
>>> distribution (as in Option A), OR would it be better to let the RAID
>>> controllers do it (as in Option B)?
>>> 2. Will Option B allow us to use less CPU/RAM than Option A?
>>> Indivar Nair
jeff.johnson at aeoncomputing.com
t: 858-412-3810 x1001 f: 858-412-3845
4170 Morena Boulevard, Suite D - San Diego, CA 92117
High-Performance Computing / Lustre Filesystems / Scale-out Storage