[Lustre-discuss] Another Infiniband Question

Jagga Soorma jagga13 at gmail.com
Thu Feb 11 09:28:15 PST 2010


I have a QDR IB switch that should support up to 40 Gb/sec.  After installing
the kernel-ib and Lustre client RPMs on my SuSE nodes I see the following:

hpc102:~ # ibstatus mlx4_0:1
Infiniband device 'mlx4_0' port 1 status:
    default gid:     fe80:0000:0000:0000:0002:c903:0006:de19
    base lid:     0x7
    sm lid:         0x1
    state:         4: ACTIVE
    phys state:     5: LinkUp
    rate:         20 Gb/sec (4X DDR)

Why is this only negotiating 4X DDR at 20 Gb/sec?  Do the Lustre RPMs not
support QDR?  Is there something I need to do on my side to force
40 Gb/sec on these ports?
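A quick way to spot a downtrained link is to pull the negotiated rate out of the `ibstatus` output and compare it against the QDR line rate. The sketch below is hypothetical and embeds the output shown above in a here-doc for illustration; on a live node you would pipe `ibstatus mlx4_0:1` in instead, and could follow up with `ibportstate` (from the standard `infiniband-diags` package) to see which speeds the port and its peer actually advertise:

```shell
# Extract the negotiated rate (second field of the "rate:" line) and warn
# if it is below the 40 Gb/sec QDR line rate. The here-doc reproduces the
# ibstatus output from above; replace it with `ibstatus mlx4_0:1` on a node.
rate=$(awk '/rate:/ {print $2}' <<'EOF'
Infiniband device 'mlx4_0' port 1 status:
    default gid:     fe80:0000:0000:0000:0002:c903:0006:de19
    base lid:     0x7
    sm lid:         0x1
    state:         4: ACTIVE
    phys state:     5: LinkUp
    rate:         20 Gb/sec (4X DDR)
EOF
)
if [ "$rate" -lt 40 ]; then
    echo "link negotiated ${rate} Gb/sec, below QDR (40 Gb/sec)"
fi
```

If the HCA, cable, and switch port are all QDR-capable, a DDR result usually points at one hop in the path (often the cable or the switch port configuration) limiting the negotiation rather than the Lustre RPMs, which sit above the verbs layer and do not set the link speed.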

Thanks in advance,
-J
