[Lustre-discuss] Another Infiniband Question

Peter Kjellstrom cap at nsc.liu.se
Fri Feb 12 02:09:08 PST 2010


On Thursday 11 February 2010, Jagga Soorma wrote:
> I have a QDR IB switch that should support up to 40Gbps.  After installing
> the kernel-ib and Lustre client RPMs on my SuSE nodes I see the following:
>
> hpc102:~ # ibstatus mlx4_0:1
> Infiniband device 'mlx4_0' port 1 status:
>     default gid:     fe80:0000:0000:0000:0002:c903:0006:de19
>     base lid:     0x7
>     sm lid:         0x1
>     state:         4: ACTIVE
>     phys state:     5: LinkUp
>     rate:         20 Gb/sec (4X DDR)
>
> Why is this only picking up 4X DDR at 20Gb/sec?  Do the Lustre RPMs not
> support QDR?  Is there something that I need to do on my side to force
> 40Gb/sec on these ports?

This is a bit off-topic, but a 20 Gb/s rate typically means you have a problem
with one of: switch, HCA, cable. The link negotiates down to the fastest rate
that all three support, so the Lustre RPMs are not involved in this at all.
Maybe your HCA is a DDR HCA? Maybe you need to upgrade the HCA firmware?
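A few checks with the standard OFED/infiniband-diags tools can narrow this
down. A hedged sketch, assuming those tools are installed; the device name
mlx4_0 and the base lid 0x7 / port 1 are taken from the ibstatus output above,
so adjust them for your fabric:

```shell
# Firmware version and board id reveal whether the HCA itself is DDR- or
# QDR-capable (a DDR-only ConnectX board cannot negotiate 40 Gb/s no
# matter what the switch supports).
ibv_devinfo -v 2>/dev/null | grep -Ei 'fw_ver|board_id|active_speed' \
    || echo "ibv_devinfo not available"

# Query the port's supported/enabled/active link speeds; lid 0x7 and
# port 1 come from the ibstatus output above.
ibportstate 0x7 1 2>/dev/null | grep -i speed \
    || echo "ibportstate not available"
```

If LinkSpeedSupported already tops out at DDR, the limit is the HCA or its
firmware; if the HCA supports QDR but LinkSpeedActive is DDR, suspect the
cable or the switch port.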

/Peter

> Thanks in advance,
> -J

