[lustre-discuss] Frequency vs Cores for OSS/MDS processors

Jongwoo Han jongwoohan at gmail.com
Fri Jul 5 09:01:16 PDT 2019


The MDS always works on small, fragmented metadata, while the OSS generally
works with big streams of file contents. This leads to the general rule of a
fast clock for the MDS, and a large cache and many cores for the OSS.
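
As a back-of-envelope illustration of that rule, here is a rough Python
sketch comparing the two CPUs discussed further down in this thread
(8C/3.0 GHz vs 16C/2.3 GHz). Treating "per-operation latency tracks clock"
and "aggregate streaming capacity tracks cores x clock" as proxies is an
assumption for illustration, not a measurement.

    # Rough, simplified comparison of the two CPUs from this thread.
    # Assumption: a single metadata RPC is largely handled by one service
    # thread, so per-op latency tracks single-core speed; bulk OSS I/O is
    # spread across many service threads, so aggregate capacity tracks
    # cores x clock.

    cpus = {
        "Xeon Gold 5217": {"cores": 8,  "ghz": 3.0},
        "Xeon Gold 5218": {"cores": 16, "ghz": 2.3},
    }

    for name, c in cpus.items():
        single_thread = c["ghz"]            # proxy for MDS per-op latency
        aggregate = c["cores"] * c["ghz"]   # proxy for OSS streaming capacity
        print(f"{name}: single-thread ~{single_thread:.1f} GHz, "
              f"aggregate ~{aggregate:.1f} core-GHz")

    # 5217: single-thread ~3.0 GHz, aggregate ~24.0 core-GHz
    # 5218: single-thread ~2.3 GHz, aggregate ~36.8 core-GHz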

On Sat, Jul 6, 2019 at 12:04 AM, Simon Legrand <simon.legrand at inria.fr> wrote:

> Thanks a lot for your advice. Beyond the fact that ZFS requires more
> resources, is it a general rule to favor frequency over cores for an MDS
> and the inverse for an OSS?
>
> Simon
>
> ------------------------------
>
> *From: *"Jeff Johnson" <jeff.johnson at aeoncomputing.com>
> *To: *"Simon Legrand" <simon.legrand at inria.fr>
> *Cc: *"lustre-discuss" <lustre-discuss at lists.lustre.org>
> *Sent: *Thursday, July 4, 2019 22:43:21
> *Subject: *Re: [lustre-discuss] Frequency vs Cores for OSS/MDS processors
>
> If you only have those two processor models to choose from I’d do the 5217
> for the MDS and the 5218 for the OSS. If you were using ZFS for a backend,
> definitely the 5218 for the OSS. With ZFS your processors are also your RAID
> controller, so you have the disk I/O, parity calculation, checksums, and ZFS
> threads on top of the Lustre I/O and OS processes.
>
> —Jeff
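
To put rough numbers on that ZFS point, here is a small Python sketch of an
OSS CPU budget at a target bandwidth. The per-core checksum and parity rates
below are placeholder assumptions, to be replaced with figures measured on
the actual hardware.

    # Rough CPU budget for a ZFS-backed OSS: checksums and parity are
    # computed on the host CPU, on top of Lustre service threads and
    # normal OS work. All rates below are placeholder assumptions.

    target_bw_gbs = 10.0          # assumed aggregate OSS bandwidth (GB/s)
    checksum_gbs_per_core = 3.0   # assumed checksum throughput per core
    parity_gbs_per_core = 4.0     # assumed parity throughput per core

    cores_for_checksums = target_bw_gbs / checksum_gbs_per_core
    cores_for_parity = target_bw_gbs / parity_gbs_per_core

    print(f"~{cores_for_checksums:.1f} cores for checksums, "
          f"~{cores_for_parity:.1f} cores for parity, "
          f"before Lustre service threads and the OS")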
>
> On Thu, Jul 4, 2019 at 13:30 Simon Legrand <simon.legrand at inria.fr> wrote:
>
>> Hello Jeff,
>>
>> Thanks for your quick answer. We plan to use ldiskfs, but I would be
>> interested to know what would fit for ZFS.
>>
>> Simon
>>
>> ------------------------------
>>
>> *From: *"Jeff Johnson" <jeff.johnson at aeoncomputing.com>
>> *To: *"Simon Legrand" <simon.legrand at inria.fr>
>> *Cc: *"lustre-discuss" <lustre-discuss at lists.lustre.org>
>> *Sent: *Thursday, July 4, 2019 20:40:40
>> *Subject: *Re: [lustre-discuss] Frequency vs Cores for OSS/MDS processors
>>
>> Simon,
>>
>> Which backend do you plan on using? ldiskfs or zfs?
>>
>> —Jeff
>>
>> On Thu, Jul 4, 2019 at 10:41 Simon Legrand <simon.legrand at inria.fr>
>> wrote:
>>
>>> Dear all,
>>>
>>> We are currently configuring a Lustre filesystem and facing a dilemma.
>>> We have the choice between two types of processors for an OSS and an MDS:
>>> - Intel Xeon Gold 5217 3GHz, 11M Cache,10.40GT/s, 2UPI, Turbo, HT,8C/16T
>>> (115W) - DDR4-2666
>>> - Intel Xeon Gold 5218 2.3GHz, 22M Cache,10.40GT/s, 2UPI, Turbo,
>>> HT,16C/32T (105W) - DDR4-2666
>>>
>>> Basically, we have to choose between frequency and number of cores.
>>> Our current architecture is the following:
>>> - 1 MDS with 11 TB of SSD
>>> - 3 OSS/OST (~3 × 80 TB)
>>> Our final target is 6 OSS/OST with a single MDS.
>>> Could any of you help us choose, and explain the reasons?
>>>
>>> Best regards,
>>>
>>> Simon
>>> _______________________________________________
>>> lustre-discuss mailing list
>>> lustre-discuss at lists.lustre.org
>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>>
>> --
>> ------------------------------
>> Jeff Johnson
>> Co-Founder
>> Aeon Computing
>>
>> jeff.johnson at aeoncomputing.com
>> www.aeoncomputing.com
>> t: 858-412-3810 x1001   f: 858-412-3845
>> m: 619-204-9061
>>
>> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>>
>> --
> ------------------------------
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.johnson at aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001   f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>


-- 
Jongwoo Han
+82-505-227-6108