[lustre-discuss] 1 MDS and 1 OSS

Jeff Johnson jeff.johnson at aeoncomputing.com
Mon Oct 30 17:02:03 PDT 2017


Amjad,

You might ask your vendor to propose a single MDT built from eight 500GB
2.5" disk drives, or better yet SSDs. With some bio applications you would
benefit from spreading the MDT I/O across more drives.
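
As a rough sketch (the file system name and device path below are made up,
and this assumes the MDS also hosts the MGS and the drives are presented as
a single RAID-10 array), formatting such an MDT would look something like:

    mkfs.lustre --fsname=lfs01 --mgs --mdt --index=0 /dev/md0
    mount -t lustre /dev/md0 /mnt/mdt0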

How many clients do you expect to mount the file system? A standard filer
(or ZFS/NFS server) will perform well compared to Lustre until you hit a
bottleneck somewhere in the server hardware (network, disk, CPU, etc.). With
Lustre you can simply add one or more OSS/OSTs to the file system, and the
performance potential increases with each additional OSS/OST server.
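
To illustrate how cheap that growth is (the NID and device below are
hypothetical), adding capacity later is just a matter of formatting a new
OST that points at the existing MGS and mounting it:

    mkfs.lustre --fsname=lfs01 --ost --index=2 \
        --mgsnode=10.0.0.10@tcp0 /dev/sdb
    mount -t lustre /dev/sdb /mnt/ost2

Clients pick up the new OST automatically; nothing that already exists gets
reformatted.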

High availability is nice to have, but it isn't necessary unless your
environment cannot tolerate any interruption or downtime. If your vendor
proposes quality hardware, outright server failures should be infrequent.
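
If you decide you want failover later, the hooks are cheap to declare at
format time. A minimal sketch (the NIDs are made up, and both OSS nodes
would need shared access to the target device):

    mkfs.lustre --fsname=lfs01 --ost --index=0 \
        --servicenode=10.0.0.11@tcp0 --servicenode=10.0.0.12@tcp0 \
        /dev/mapper/ost0

Clients will then retry the second NID if the primary OSS is unreachable.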

--Jeff

On Mon, Oct 30, 2017 at 12:04 PM, Amjad Syed <amjadcsu at gmail.com> wrote:

> The vendor has proposed a single MDT (4 * 1.2 TB) in a RAID 10
> configuration.
> The OSTs will be RAID 6, and 2 OSTs are proposed.
>
>
> On Mon, Oct 30, 2017 at 7:55 PM, Ben Evans <bevans at cray.com> wrote:
>
>> How many OSTs are behind that OSS?  How many MDTs behind the MDS?
>>
>> From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf
>> of Brian Andrus <toomuchit at gmail.com>
>> Date: Monday, October 30, 2017 at 12:24 PM
>> To: "lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
>> Subject: Re: [lustre-discuss] 1 MDS and 1 OSS
>>
>> Hmm. At first glance that is an odd configuration...
>>
>> However, IF you are planning on growing and adding OSSes/OSTs, this is
>> not a bad way to get started and get used to how everything works. It is
>> basically single-stripe storage.
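>> To put that in concrete terms (the paths below are made up): with a
>> single OST, every file effectively has a stripe count of one. Once more
>> OSTs exist you can stripe wide and check the resulting layout:
>>
>>     lfs setstripe -c -1 /mnt/lustre/bigdir     # stripe across all OSTs
>>     lfs getstripe /mnt/lustre/bigdir/somefile  # show the layout in use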
>>
>> If you are not planning on growing, I would lean towards Gluster on 2
>> boxes; I do that often, actually. A single MDS/OSS has zero redundancy
>> unless something is being done at the hardware level, and that would help
>> with availability.
>> NFS is quite viable too, but you would be splitting the available storage
>> across 2 boxes.
>>
>> Brian Andrus
>>
>>
>>
>> On 10/30/2017 12:47 AM, Amjad Syed wrote:
>>
>> Hello
>> We are in the process of procuring one small Lustre file system giving
>> us 120 TB of storage using Lustre 2.x.
>> The vendor has proposed only 1 MDS and 1 OSS as a solution.
>> Our query is: is this configuration enough, or do we need more
>> OSS servers?
>> The MDS and OSS servers are identical with regard to RAM (64 GB) and
>> HDD (300 GB).
>>
>> Thanks
>> Majid
>>
>>
>>
>>
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
>


-- 
------------------------------
Jeff Johnson
Co-Founder
Aeon Computing

jeff.johnson at aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite D - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage

