[Lustre-discuss] max write speed of QDR HCA card

Aaron Knister aaron.knister at gmail.com
Fri Jan 1 07:19:11 PST 2010


If you do indeed have DDR InfiniBand, then your ib_write_bw tests are
achieving fairly close to the maximum possible throughput of roughly
2 GB/s. As for where your bottleneck lies, my guess is that it is in the
storage connected to your OSSs. What are you using for back-end storage?
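
(For reference: a 4x DDR link signals at 20 Gbit/s; after 8b/10b encoding
that leaves 16 Gbit/s of data, i.e. roughly 2 GB/s, so the 1869 MB/s from
ib_write_bw is essentially line rate.) One way to rule the back-end storage
in or out is the obdfilter-survey script from lustre-iokit, which drives the
OSTs directly and bypasses the network; the sketch below is illustrative
only, and the OST target names are placeholders.

# Illustrative sketch (assumes lustre-iokit is installed on the OSS; the
# OST target names are placeholders -- substitute the obdfilter devices
# listed by "lctl dl"). case=disk exercises the OSTs locally, no network.
nobjhi=2 thrhi=32 size=1024 case=disk \
    targets="lustre-OST0000 lustre-OST0001" obdfilter-survey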

On Jan 1, 2010, at 1:02 AM, lakshmana swamy wrote:

> I am sorry,
>
> It is DDR, not QDR... but still, where is the bottleneck?
>
>
> Thank you
>
> laxman
>
> Date: Thu, 31 Dec 2009 12:12:40 -0500
> Subject: Re: [Lustre-discuss] max write speed of QDR HCA card
> From: erik.froese at gmail.com
> To: klakshman03 at hotmail.com
> CC: atul.vidwansa at sun.com; lustre-discuss at lists.lustre.org
>
> Are you sure the IB card is connected at QDR rate?
>
> ibstat will tell you. Look at the "Rate" line.
>
> What kind of machines are you using?
>
> ibstat
> CA 'mlx4_0'
>         CA type: MT26428
>         Number of ports: 2
>         Firmware version: 2.6.0
>         Hardware version: a0
>         Node GUID: 0x00212800013e5432
>         System image GUID: 0x00212800013e5435
>         Port 1:
>                 State: Active
>                 Physical state: LinkUp
>                 Rate: 40
>                 Base lid: 13
>                 LMC: 0
>                 SM lid: 2
>                 Capability mask: 0x02510868
>                 Port GUID: 0x00212800013e5433
>         Port 2:
>                 State: Active
>                 Physical state: LinkUp
>                 Rate: 40
>                 Base lid: 14
>                 LMC: 0
>                 SM lid: 2
>                 Capability mask: 0x02510868
>                 Port GUID: 0x00212800013e5434
>
>
> On Thu, Dec 31, 2009 at 2:04 AM, lakshmana swamy <klakshman03 at hotmail.com> wrote:
> Thank you, Atul.
>
> I have run the following tests. According to the ib_write_bw output  
> below, the link reaches 1869.74 MB/sec.
>
> This is the command I used for benchmarking.
>
> # mpiexec -machinefile machs1 -np 1024 IOR -N 1024 -a MPIIO -t 1024K  
> -b 1G -F -v -o /mnt/file1
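>
> (For readers unfamiliar with the options, an annotated reading of the
> same command; the interpretation follows standard IOR usage and the
> values are those from the run above.)
>
> # mpiexec -np 1024 : launch 1024 MPI processes from the machinefile
> # -N 1024          : IOR task count (matches -np)
> # -a MPIIO         : use the MPI-IO API (POSIX was also tested)
> # -t 1024K         : transfer size per I/O call
> # -b 1G            : data written per task (~1 TiB aggregate over 1024 tasks)
> # -F               : one file per process rather than a single shared file
> # -v               : verbose output
> # -o /mnt/file1    : test file path on the Lustre mount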
>
> Max Write: 421.31 MiB/sec (441.77 MB/sec)
> Max Read:  1029.68 MiB/sec (1079.70 MB/sec)
>
> ***
> 1. In the above command I used MPIIO with 1024 processes; I got almost  
> the same result when I ran the same command with 512 processes.
>
> 2. I tested both with and without striping, using both the MPIIO and  
> POSIX APIs (a setstripe sketch follows this list).
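>
> (A minimal sketch of how striping might be set for such a run, for
> illustration only; the directory path and stripe size are placeholders,
> and -s is the 1.8-era lfs flag for stripe size.)
>
> # Stripe files created in the test directory across all OSTs (-c -1)
> # with a 1 MB stripe size; the path is a placeholder.
> lfs setstripe -s 1M -c -1 /mnt/ior_test
> # Confirm the layout before running IOR:
> lfs getstripe /mnt/ior_test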
>
>
> **
> node0:~ # ib_write_bw
> ------------------------------------------------------------------
>                     RDMA_Write BW Test
> Number of qp's running 1
> Connection type : RC
> Each Qp will post up to 100 messages each time
> Inline data is used up to 1 bytes message
>   local address:  LID 0x100, QPN 0x28004e, PSN 0xc17c83 RKey  
> 0x8002d00 VAddr 0x002b5fd9964000
>   remote address: LID 0x101, QPN 0x240051, PSN 0x5e61e, RKey  
> 0x8002d00 VAddr 0x002ae797667000
> Mtu : 2048
> ------------------------------------------------------------------
>  #bytes #iterations    BW peak[MB/sec]    BW average[MB/sec]
> node0:~ #
> node1:~ # ib_write_bw node0
> ------------------------------------------------------------------
>                     RDMA_Write BW Test
> Number of qp's running 1
> Connection type : RC
> Each Qp will post up to 100 messages each time
> Inline data is used up to 1 bytes message
>   local address:  LID 0x101, QPN 0x240051, PSN 0x5e61e RKey  
> 0x8002d00 VAddr 0x002ae797667000
>   remote address: LID 0x100, QPN 0x28004e, PSN 0xc17c83, RKey  
> 0x8002d00 VAddr 0x002b5fd9964000
> Mtu : 2048
> ------------------------------------------------------------------
>  #bytes #iterations    BW peak[MB/sec]    BW average[MB/sec]
>   65536        5000            1869.76               1869.74
> ------------------------------------------------------------------
> node1:~ #
>
>
>
>
> Date: Thu, 31 Dec 2009 17:43:44 +1100
> From: Atul.Vidwansa at Sun.COM
> Subject: Re: [Lustre-discuss] max write speed of QDR HCA card
> To: klakshman03 at hotmail.com
> CC: lustre-discuss at lists.lustre.org
>
>
> Hi Lakshmana,
>
> You can use standard OFED tools like ib_rdma_bw to check InfiniBand  
> bandwidth between a pair of nodes.
>
> Lustre provides the LNET selftest tool for measuring IB bandwidth  
> between pairs or groups of nodes. Check the Lustre manual for the how-to.
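>
> (A minimal lnet_selftest sketch, for illustration only; the NIDs below
> are placeholders -- substitute the client and OSS NIDs reported by
> "lctl list_nids".)
>
> # lnet_selftest must be loaded on every node taking part.
> modprobe lnet_selftest
> export LST_SESSION=$$
> lst new_session rw_test
> lst add_group clients 192.168.1.10@o2ib    # placeholder client NID
> lst add_group servers 192.168.1.20@o2ib    # placeholder OSS NID
> lst add_batch bulk_rw
> lst add_test --batch bulk_rw --from clients --to servers brw write size=1M
> lst run bulk_rw
> lst stat servers        # watch bandwidth; interrupt with Ctrl-C when done
> lst stop bulk_rw
> lst end_session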
>
> Finally, can you post the IOR parameters you are using?
>
> Cheers,
> Atul
>
>
>
> Sent from my iPhone
>
> On Dec 31, 2009, at 5:13 PM, lakshmana swamy  
> <klakshman03 at hotmail.com> wrote:
>
>
>   Dear All,
>
>   What is the maximum write speed that can be achieved with an  
> InfiniBand QDR HCA card?
>
>   The question arises from my Lustre setup, where I am not able to  
> exceed 500 MB/s (write speed) with two OSSs and one MDS.
>
>   Each OSS has four OSTs and one HCA card. I would like to know where  
> the bottleneck is.
>
>   I have been using IOR for benchmarking.
>
> Thank you
>
> LakshmaN
>


