[Lustre-discuss] Same performance Infiniband and Ethernet

Sean Brisbane s.brisbane1 at physics.ox.ac.uk
Mon May 19 05:53:35 PDT 2014


I find that "dd" from /dev/zero maxes out my CPU, so you may want a few threads. You are probably benchmarking the CPU here, not the disks.
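
For example, a rough sketch with four parallel direct-I/O writers (paths and counts are illustrative):

for i in 1 2 3 4; do
    dd if=/dev/zero of=/mnt/lustre/test.$i bs=1M count=1000 oflag=direct &
done
wait    # then sum the four reported transfer rates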

From: lustre-discuss-bounces at lists.lustre.org On Behalf Of Andrew Holway
Sent: 19 May 2014 13:45
To: Pardo Diaz, Alfonso
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [Lustre-discuss] Same performance Infiniband and Ethernet

dd if=/dev/zero of=test.dat bs=1M count=1000 oflag=direct

oflag=direct forces direct I/O, which is effectively synchronous and bypasses the client-side cache.
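
Since each direct write must complete before the next is issued, throughput is sensitive to block size; a larger bs may be worth trying, e.g. (sizes illustrative):

dd if=/dev/zero of=test.dat bs=16M count=64 oflag=direct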

On 19 May 2014 14:41, Pardo Diaz, Alfonso <alfonso.pardo at ciemat.es> wrote:
Thanks for your ideas,


I have measured the OST RAID performance, and there isn't a bottleneck in the RAID disks. If I write directly to the RAID I get:

dd if=/dev/zero of=test.dat bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB) copied, 1,34852 s, 778 MB/s

And if I use /dev/urandom as the input file, I again get the same performance over both the InfiniBand and Ethernet connections.

How can I write directly, bypassing the cache?


Thanks again!




On 19/05/2014, at 13:24, Hammitt, Charles Allen <chammitt at email.unc.edu> wrote:

> Two things:
>
> 1)  Linux write cache is likely getting in the way; you'd be better off writing directly, bypassing the cache
> 2)  You need to write a much bigger file than 1 GB; try 50 GB
>
>
> Then, as the previous poster said, maybe your disks aren't up to snuff or are misconfigured.
> Also, it's very interesting, and impossible, to get 154 MB/s out of a single GbE link [~128 MB/s theoretical]; it should be more like 100-115 MB/s. Unless this is 10/40GbE... if so, again, start with #1 and #2.
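>
> Combining #1 and #2 might look like this (size and path are illustrative):
>
> dd if=/dev/zero of=/mnt/lustre/test.dat bs=1M count=50000 oflag=direct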
>
>
>
>
> Regards,
>
> Charles
>
>
>
>
> --
> ===========================================
> Charles Hammitt
> Storage Systems Specialist
> ITS Research Computing @
> The University of North Carolina-CH
> 211 Manning Drive
> Campus Box # 3420, ITS Manning, Room 2504
> Chapel Hill, NC 27599
> ===========================================
>
>
>
>
> -----Original Message-----
> From: lustre-discuss-bounces at lists.lustre.org On Behalf Of Vsevolod Nikonorov
> Sent: Monday, May 19, 2014 6:54 AM
> To: lustre-discuss at lists.lustre.org
> Subject: Re: [Lustre-discuss] Same performance Infiniband and Ethernet
>
> What disks do your OSTs have? Maybe you have reached your disks' performance limit, so InfiniBand gives only a very small speedup. Did you try to enable striping on your Lustre filesystem? For instance, you can type something like this: "lfs setstripe -c <count of stripes> /mnt/lustre/somefolder" and then copy a file into that folder.
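>
> For example (stripe count and paths are illustrative):
>
> lfs setstripe -c 10 /mnt/lustre/somefolder
> cp /path/to/bigfile /mnt/lustre/somefolder/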
>
> Also, there's an opinion that a sequence of zeros is not a good way to test performance, so maybe you should try using /dev/urandom (which is rather slow, so it's better to have a pre-generated "urandom" file in /ram, /dev/shm, or wherever your memory filesystem is mounted, and copy that file to the Lustre filesystem as a test).
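>
> For instance (sizes and paths are illustrative):
>
> dd if=/dev/urandom of=/dev/shm/random.dat bs=1M count=1000
> dd if=/dev/shm/random.dat of=/mnt/lustre/test.dat bs=1M oflag=direct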
>
>
>
> Pardo Diaz, Alfonso wrote on 2014-05-19 14:33:
>> Hi,
>>
>> I have migrated my Lustre 2.2 to 2.5.1 and have equipped my OSS/MDS
>> and clients with InfiniBand QDR interfaces.
>> I have compiled Lustre with OFED 3.2 and configured the lnet module
>> with:
>>
>> options lnet networks="o2ib(ib0),tcp(eth0)"
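>>
>> (As a sanity check, assuming the standard lctl utility: running "lctl
>> list_nids" on clients and servers shows which LNet NIDs actually came
>> up; if only tcp NIDs appear, the o2ib network was never configured.)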
>>
>>
>> But when I compare the Lustre performance across InfiniBand (o2ib), I
>> get the same performance as across Ethernet (tcp):
>>
>> INFINIBAND TEST:
>> dd if=/dev/zero of=test.dat bs=1M count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1,0 GB) copied, 5,88433 s, 178 MB/s
>>
>> ETHERNET TEST:
>> dd if=/dev/zero of=test.dat bs=1M count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1,0 GB) copied, 5,97423 s, 154 MB/s
>>
>>
>> And this is my scenario:
>>
>> - 1 MDS with SSD RAID10 MDT
>> - 10 OSS with 2 OSTs per OSS
>> - InfiniBand interface in connected mode
>> - CentOS 6.5
>> - Lustre 2.5.1
>> - Striped filesystem: "lfs setstripe -s 1M -c 10"
>>
>>
>> I know my InfiniBand is running correctly, because if I use iperf3
>> between client and servers I get 40 Gb/s over InfiniBand and 1 Gb/s
>> over the Ethernet connections.
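>>
>> (For reference, an iperf3 pair of the usual form, with the server's
>> IPoIB address as an illustrative placeholder:
>>
>> iperf3 -s                        # on the server
>> iperf3 -c <server-ib0-address>   # on the client)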
>>
>>
>>
>> Could you help me?
>>
>>
>> Regards,
>>
>>
>>
>>
>>
>> Alfonso Pardo Diaz
>> System Administrator / Researcher
>> c/ Sola nº 1; 10200 Trujillo, ESPAÑA
>> Tel: +34 927 65 93 17 Fax: +34 927 32 32 37
>>
>>
>>
>>
>> ----------------------------
>> Disclaimer:
>> This message and its attached files are intended exclusively for their
>> recipients and may contain confidential information. If you received
>> this e-mail in error you are hereby notified that any dissemination,
>> copy or disclosure of this communication is strictly prohibited and
>> may be unlawful. In this case, please notify us by a reply and delete
>> this email and its contents immediately.
>> ----------------------------
>>
>
> --
> Vsevolod D. Nikonorov, OITTiS, JSC NIKIET
>

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss at lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


