[lustre-discuss] Frequency vs Cores for OSS/MDS processors

Simon Legrand simon.legrand at inria.fr
Fri Jul 5 08:04:12 PDT 2019


Thanks a lot for your advice. Beyond the fact that ZFS requires more resources, is it a general rule to favour frequency over cores for an MDS, and the reverse for an OSS?
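
From what I have read, the usual reasoning is that the OSS bulk I/O service
threads scale with the number of cores, while individual metadata RPCs on the
MDS are short and latency-sensitive, which would favour clock speed there. A
rough, illustrative way to look at those thread pools (parameter names are from
the Lustre manual; the value in the last line is only an example, not a
recommendation):

  # on an OSS: configured / currently running bulk I/O service threads
  lctl get_param ost.OSS.ost_io.threads_max ost.OSS.ost_io.threads_started
  # on the MDS: the metadata service thread pool
  lctl get_param mds.MDS.mdt.threads_max
  # with more cores available, the OSS pool can be raised, e.g.:
  lctl set_param ost.OSS.ost_io.threads_max=512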

Simon 

> De: "Jeff Johnson" <jeff.johnson at aeoncomputing.com>
> À: "Simon Legrand" <simon.legrand at inria.fr>
> Cc: "lustre-discuss" <lustre-discuss at lists.lustre.org>
> Envoyé: Jeudi 4 Juillet 2019 22:43:21
> Objet: Re: [lustre-discuss] Frequency vs Cores for OSS/MDS processors

> If you only have those two processor models to choose from, I'd do the 5217 for
> the MDS and the 5218 for the OSS. If you were using ZFS as a backend, definitely
> the 5218 for the OSS. With ZFS your processors are also your RAID controller, so
> you have the disk I/O, parity calculation, checksums and ZFS threads on top of
> the Lustre I/O and OS processes.
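
> As a concrete (untested) sketch of what that means for a ZFS-backed OST --
> the pool name, device paths and MGS NID below are placeholders, not a
> recommendation -- the raidz2 parity and checksumming here is exactly the
> work that lands on the OSS CPUs instead of a hardware RAID controller:
>
>   # build a raidz2 pool from the OST disks (CPU computes parity + checksums)
>   zpool create -o ashift=12 -O canmount=off ostpool raidz2 \
>       /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
>   # format a Lustre OST on top of that pool
>   mkfs.lustre --ost --backfstype=zfs --fsname=lustre0 --index=0 \
>       --mgsnode=mds01@tcp0 ostpool/ost0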

> —Jeff

> On Thu, Jul 4, 2019 at 13:30 Simon Legrand <simon.legrand at inria.fr> wrote:

>> Hello Jeff,

>> Thanks for your quick answer. We plan to use ldiskfs, but I would be interested
>> to know what would be a good fit for ZFS.
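
>> For reference, the ldiskfs layout we have in mind would be roughly the
>> following (device paths, filesystem name and MGS NID are placeholders):
>>
>>   # on the MDS: combined MGS + MDT on the SSD volume
>>   mkfs.lustre --mgs --mdt --fsname=lustre0 --index=0 /dev/mapper/mdt0
>>   # on each OSS: one OST per volume, registered against the MGS
>>   # (--index increments for each additional OST)
>>   mkfs.lustre --ost --fsname=lustre0 --index=0 \
>>       --mgsnode=mds01@tcp0 /dev/mapper/ost0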

>> Simon

>>> De: "Jeff Johnson" < [ mailto:jeff.johnson at aeoncomputing.com |
>>> jeff.johnson at aeoncomputing.com ] >
>>> À: "Simon Legrand" < [ mailto:simon.legrand at inria.fr | simon.legrand at inria.fr ]
>>> >
>>> Cc: "lustre-discuss" < [ mailto:lustre-discuss at lists.lustre.org |
>>> lustre-discuss at lists.lustre.org ] >
>>> Envoyé: Jeudi 4 Juillet 2019 20:40:40
>>> Objet: Re: [lustre-discuss] Frequency vs Cores for OSS/MDS processors

>>> Simon,

>>> Which backend do you plan on using? ldiskfs or zfs?

>>> —Jeff

>>> On Thu, Jul 4, 2019 at 10:41 Simon Legrand <simon.legrand at inria.fr> wrote:

>>>> Dear all,

>>>> We are currently configuring a Lustre filesystem and facing a dilemma. We have
>>>> a choice between two processor models for the OSS and the MDS:
>>>> - Intel Xeon Gold 5217: 3 GHz, 11 MB cache, 10.40 GT/s, 2 UPI, Turbo, HT,
>>>> 8C/16T (115 W), DDR4-2666
>>>> - Intel Xeon Gold 5218: 2.3 GHz, 22 MB cache, 10.40 GT/s, 2 UPI, Turbo, HT,
>>>> 16C/32T (105 W), DDR4-2666

>>>> Basically, we have to choose between frequency and number of cores.
>>>> Our current architecture is the following:
>>>> - 1 MDS with 11 TB of SSD
>>>> - 3 OSS/OST (~3 x 80 TB)
>>>> Our final target is 6 OSS/OST with a single MDS.
>>>> Could any of you help us choose, and explain the reasons?

>>>> Best regards,

>>>> Simon
>>>> _______________________________________________
>>>> lustre-discuss mailing list
>>>> lustre-discuss at lists.lustre.org
>>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

>>> --
>>> ------------------------------
>>> Jeff Johnson
>>> Co-Founder
>>> Aeon Computing

>>> jeff.johnson at aeoncomputing.com
>>> www.aeoncomputing.com
>>> t: 858-412-3810 x1001 f: 858-412-3845
>>> m: 619-204-9061

>>> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>>> High-Performance Computing / Lustre Filesystems / Scale-out Storage

> --
> ------------------------------
> Jeff Johnson
> Co-Founder
> Aeon Computing

> jeff.johnson at aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001 f: 858-412-3845
> m: 619-204-9061

> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
> High-Performance Computing / Lustre Filesystems / Scale-out Storage