[Lustre-discuss] [HPDD-discuss] Same performance Infiniband and Ethernet

Mohr Jr, Richard Frank (Rick Mohr) rmohr at utk.edu
Mon May 19 08:37:37 PDT 2014


Alfonso,

Based on my attempts to benchmark single-client Lustre performance, here are some comments and advice.  (YMMV)

1) On the IB client, I recommend disabling checksums (lctl set_param osc.*.checksums=0).  Having checksums enabled sometimes results in a significant performance hit.  (See the example commands after point 5.)

2) Single-threaded tests (like dd) will usually bottleneck before you can max out the total client performance.  You need to use a multi-threaded tool (like xdd) and have several threads perform I/O at the same time in order to measure aggregate single-client performance.

3) When using a tool like xdd, set up the test to run for a fixed amount of time rather than having each thread write a fixed amount of data.  If all threads write a fixed amount of data (say 1 GB), and if any of the threads run slower than others, you might get skewed results for the aggregate throughput because of the stragglers.

4) In order to avoid contention at the OST level among the multiple threads on a single client, precreate the output files with stripe_count=1 and statically assign them evenly to the different OSTs.  Have each thread write to a different file so that no two processes write to the same OST.  If you don't have enough OSTs to saturate the client, you can always have two files per OST.  Going beyond that will likely hurt more than help, at least for an ldiskfs backend.

5) In my testing, I seem to get worse results using direct I/O for write tests, so I usually just use buffered I/O.  Based on my understanding, the max_dirty_mb parameter on the client (which defaults to 32 MB) limits the amount of dirty written data that can be cached for each OST.  Unless you have increased this to a very large number, that parameter will likely mitigate any effects of client caching on the test results.  (NOTE: This reasoning only applies to write tests.  Any written data can still be cached by the client, and a subsequent read test might very well pull data from cache unless you have taken steps to flush the cached data.)
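
To make points 1 and 5 concrete, here is a minimal sketch of the client-side commands.  The checksums and max_dirty_mb parameters are the ones named above; the cache flush at the end is just one common way to force a later read test to actually hit the OSTs:

  # Point 1: check and disable client-side checksums (not persistent across remounts)
  lctl get_param osc.*.checksums
  lctl set_param osc.*.checksums=0

  # Point 5: see how much dirty data each OSC is allowed to cache
  lctl get_param osc.*.max_dirty_mb

  # Before a read test, drop cached data on the client so reads go to the OSTs
  lctl set_param ldlm.namespaces.*.lru_size=clear
  echo 3 > /proc/sys/vm/drop_caches

Plain set_param changes are lost when the client remounts, so re-apply them before each benchmark run.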

If you have 10 OSS nodes and 20 OSTs in your file system, I would start by running a test with 10 threads and have each thread write to a single OST on a different server.  You can increase/decrease the number of threads as needed to see if the aggregate performance gets better/worse.  On my clients with QDR IB, I typically see aggregate write speeds in the range of 2.5-3.0 GB/s.
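
As a rough sketch of that layout (the /lustre/bench path is made up, and it assumes OST indices 0 and 1 live on the first OSS, 2 and 3 on the second, and so on, so the even indices land on different servers):

  # Precreate ten single-stripe files, one pinned to an OST on each OSS
  mkdir -p /lustre/bench
  for i in $(seq 0 9); do
      lfs setstripe -c 1 -i $((i * 2)) /lustre/bench/file.$i
  done

  # One writer per file, all running in parallel (dd shown for simplicity;
  # a time-limited multi-threaded tool like xdd is preferable, per points 2 and 3)
  for i in $(seq 0 9); do
      dd if=/dev/zero of=/lustre/bench/file.$i bs=1M count=10000 &
  done
  wait

Running lfs getstripe on the files afterwards will confirm which OST each one actually landed on.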

You are probably already aware of this, but just in case, make sure that the IB clients you use for testing don't also have ethernet connections to your OSS servers.  If the client has an ethernet and an IB path to the same server, it will choose one of the paths to use.  It could end up choosing ethernet instead of IB and mess up your results.
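
If you want to confirm which path the client is actually using, something like this should show it (the exact import output format varies a bit between Lustre versions):

  # On the client: both NIDs should be listed, e.g. x.x.x.x@o2ib and y.y.y.y@tcp
  lctl list_nids

  # The current_connection line for each OSC should show the server's @o2ib NID, not @tcp
  lctl get_param osc.*.import | grep current_connection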

-- 
Rick Mohr
Senior HPC System Administrator
National Institute for Computational Sciences
http://www.nics.tennessee.edu


On May 19, 2014, at 6:33 AM, "Pardo Diaz, Alfonso" <alfonso.pardo at ciemat.es>
 wrote:

> Hi,
> 
> I have migrated my Lustre 2.2 filesystem to 2.5.1 and equipped my OSS/MDS and clients with InfiniBand QDR interfaces.
> I have compiled Lustre against OFED 3.2 and configured the lnet module with:
> 
> options lnet networks="o2ib(ib0),tcp(eth0)"
> 
> 
> But when I compare the Lustre performance over InfiniBand (o2ib), I get the same performance as over Ethernet (tcp):
> 
> INFINIBAND TEST:
> dd if=/dev/zero of=test.dat bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1,0 GB) copied, 5,88433 s, 178 MB/s
> 
> ETHERNET TEST:
> dd if=/dev/zero of=test.dat bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1,0 GB) copied, 5,97423 s, 154 MB/s
> 
> 
> And this is my scenario:
> 
> - 1 MDS with SSD RAID10 MDT
> - 10 OSS with 2 OSTs per OSS
> - InfiniBand interfaces in connected mode
> - CentOS 6.5
> - Lustre 2.5.1
> - Striped filesystem: "lfs setstripe -s 1M -c 10"
> 
> 
> I know my InfiniBand is working correctly, because when I run iperf3 between the client and the servers I get 40 Gb/s over InfiniBand and 1 Gb/s over the Ethernet connections.
> 
> 
> 
> Could you help me?
> 
> 
> Regards,
> 
> 
> 
> 
> 
> Alfonso Pardo Diaz
> System Administrator / Researcher
> c/ Sola nº 1; 10200 Trujillo, ESPAÑA
> Tel: +34 927 65 93 17 Fax: +34 927 32 32 37
> 
> 
> 
> 
> 





