[lustre-discuss] 1 MDS and 1 OSS

Amjad Syed amjadcsu at gmail.com
Mon Oct 30 12:01:35 PDT 2017


Andreas,
Thank you for your email.
The interconnect proposed by the vendor is InfiniBand FDR (56 Gb/s). Each
MDS and OSS will have only one FDR card.
This Lustre filesystem will be used to run Life Sciences / Bioinformatics /
genomics applications.

Will a single OSS handle the FDR interconnect?
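For scale, a rough sketch of the FDR link rate against the single-OSS disk
estimates discussed below (the drive counts and per-drive rates are
assumptions from this thread, not measurements):

```python
# Rough comparison of an FDR InfiniBand link vs. one OSS's disk bandwidth.
# The 64b/66b encoding factor is a property of FDR; the disk numbers are
# the back-of-the-envelope assumptions from this thread.

FDR_SIGNALING_GBPS = 56                        # 4 lanes x 14 Gb/s
payload_gbps = FDR_SIGNALING_GBPS * 64 / 66    # ~54.3 Gb/s after encoding
payload_mbs = payload_gbps * 1000 / 8          # ~6788 MB/s

oss_disk_mbs = 24 * 40   # 24 data drives x 40 MB/s (conservative estimate)

print(f"FDR payload:           ~{payload_mbs:.0f} MB/s")
print(f"one OSS disk estimate: ~{oss_disk_mbs} MB/s")
print(f"link utilisation:      ~{100 * oss_disk_mbs / payload_mbs:.0f}%")
```

On these assumptions one OSS drives only a small fraction of an FDR link, so
the link itself is not the bottleneck; the disks are.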

On 30 Oct 2017 4:56 p.m., "Dilger, Andreas" <andreas.dilger at intel.com>
wrote:

> First, to answer Amjad's question - the number of OSS nodes you have
> depends on the capacity and performance you need.  For 120 TB of total
> storage (assume 30x4TB drives, or 20x6TB drives) a single OSS is
> definitely capable of handling this many drives.  I'd also assume you are
> using 10Gb Ethernet (~= 1 GB/s), which a single OSS should be able to
> saturate (at either 40 MB/s or 60 MB/s per data drive with RAID-6 8+2
> LUNs).  If you want more capacity or bandwidth, you can add more OSS
> nodes now or in the future.
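The sizing arithmetic above can be sketched as a quick back-of-the-envelope
calculation (drive counts, RAID layout, and per-drive rates are the
assumptions from this reply, not measured values):

```python
# Back-of-the-envelope OSS sizing, using the numbers from this thread.
# Assumptions: 30 x 4 TB drives arranged as RAID-6 8+2 LUNs, sustaining
# 40-60 MB/s per *data* drive, behind a ~1 GB/s (10GbE) link.

DRIVES = 30
LUN_WIDTH = 10           # RAID-6 8+2: 8 data + 2 parity drives per LUN
DATA_PER_LUN = 8

luns = DRIVES // LUN_WIDTH           # 3 LUNs
data_drives = luns * DATA_PER_LUN    # 24 data drives

low = data_drives * 40    # MB/s, conservative per-drive estimate
high = data_drives * 60   # MB/s, optimistic per-drive estimate

print(f"{luns} LUNs, {data_drives} data drives")
print(f"aggregate disk bandwidth: {low}-{high} MB/s")
print(f"can saturate a ~1000 MB/s 10GbE link: {high >= 1000}")
```

Even the conservative estimate (~960 MB/s) roughly matches a 10GbE link,
which is why a single OSS can keep that network busy.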
>
> As Ravi mentioned, with a single OSS and MDS, you will need to reboot the
> single server in case of failures instead of having automatic failover, but
> for some systems this is fine.
>
> Finally, as for whether Lustre on a single MDS+OSS is better than running
> NFS on a single server, that depends mostly on the application workload.
> NFS is easier to administer than Lustre, and will provide better small file
> performance than Lustre.  NFS also has the benefit that it works with every
> client available.
>
> Interestingly, there are some workloads that users have reported to us
> where a single Lustre OSS will perform better than NFS, because Lustre does
> proper data locking/caching, while NFS has only close-to-open consistency
> semantics, and cannot cache data on the client for a long time.  Any
> workloads where there are multiple writers/readers to the same file will
> just not function properly with NFS.  Lustre will handle a large number of
> clients better than NFS.  For streaming IO loads, Lustre is better able to
> saturate the network (though for slower networks this doesn't really make
> much difference).  Lustre can drive faster networks (e.g. IB) much better
> with LNet than NFS with IPoIB.
>
> And of course, if you think your performance/capacity needs will increase
> in the future, then Lustre can easily scale to virtually any size and
> performance you need, while NFS will not.
>
> In general I wouldn't necessarily recommend Lustre for a single MDS+OSS
> installation, but this depends on your workload and future plans.
>
> Cheers, Andreas
>
> On Oct 30, 2017, at 15:59, E.S. Rosenberg <esr+lustre at mail.hebrew.edu>
> wrote:
> >
> > Maybe someone can answer this in the context of this question: is there
> any performance gain over classic filers when you are using only a single
> OSS?
> >
> > On Mon, Oct 30, 2017 at 9:56 AM, Ravi Konila <ravibhatk at gmail.com>
> wrote:
> > Hi Majid
> >
> > It is better to go for HA for both OSS and MDS. You would need two MDS
> and two OSS servers (identical configuration).
> > Also use latest Lustre 2.10.1 release.
> >
> > Regards
> > Ravi Konila
> >
> >
> >> From: Amjad Syed
> >> Sent: Monday, October 30, 2017 1:17 PM
> >> To: lustre-discuss at lists.lustre.org
> >> Subject: [lustre-discuss] 1 MDS and 1 OSS
> >>
> >> Hello
> >> We are in the process of procuring a small Lustre filesystem giving us
> 120 TB of storage using Lustre 2.X.
> >> The vendor has proposed only 1 MDS and 1 OSS as a solution.
> >> The query we have is: is this configuration enough, or do we need
> more OSS?
> >> The MDS and OSS servers are identical with regard to RAM (64 GB) and
> HDD (300 GB).
> >>
> >> Thanks
> >> Majid
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Intel Corporation
>