[lustre-discuss] Lustre with 100 Gbps Mellanox CX5 card

Pinkesh Valdria pinkesh.valdria at oracle.com
Wed Jan 22 01:06:12 PST 2020


Hello Lustre Community, 

 

I am trying to configure Lustre for a 100 Gbps Mellanox CX5 card. I first tried version 2.12.3, but it failed when I ran "lnetctl net add --net o2ib0 --if enp94s0f0", so I started looking at the Lustre binaries and found the repos below for InfiniBand.

Is the below a special build for Mellanox cards, or should I still be using the common Lustre binaries that are also used for tcp/ksocklnd networks?

 

 

[hpddLustreserver]
name=CentOS- - Lustre
baseurl=https://downloads.whamcloud.com/public/lustre/lustre-2.13.0-ib/MOFED-4.7-1.0.0.1/el7/server/
gpgcheck=0

[e2fsprogs]
name=CentOS- - Ldiskfs
baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el7/
gpgcheck=0

[hpddLustreclient]
name=CentOS- - Lustre
baseurl=https://downloads.whamcloud.com/public/lustre/lustre-2.13.0-ib/MOFED-4.7-1.0.0.1/el7/client/
gpgcheck=0

 

When I use the above repos, the command below returns success, but the options I passed are not taking effect. The NIC enp94s0f0 is my 100 Gbps card.

lnetctl net add --net o2ib0 --if enp94s0f0 --peer-timeout 100 --peer-credits 16 --credits 2560
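For reference, once the NI is added the running configuration can be made persistent by exporting it to /etc/lnet.conf, which the lnet systemd service re-imports at boot. This is a sketch assuming a stock el7 Lustre install; the file path and service name may differ on your system:

```shell
# Add the NI with explicit tunables (note: real double dashes --,
# not the en-dashes the mail archive shows; en-dashes will not parse).
lnetctl net add --net o2ib0 --if enp94s0f0 \
    --peer-timeout 100 --peer-credits 16 --credits 2560

# Save the running LNet configuration so it survives a reboot.
lnetctl export > /etc/lnet.conf

# Re-apply it at boot via the lnet service.
systemctl enable lnet
```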

 

Similarly, when I try to configure some options via the file /etc/modprobe.d/ko2iblnd.conf, they are not taking effect and are not applied when I run the command:

cat /etc/modprobe.d/ko2iblnd.conf
alias ko2iblnd ko2iblnd
options ko2iblnd map_on_demand=256 concurrent_sends=63 peercredits_hiw=31 fmr_pool_size=1280 fmr_flush_trigger=1024 fmr_cache=1
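One possible cause (an assumption, not a confirmed diagnosis): modprobe options are only read when ko2iblnd is loaded, so if LNet was already up when the file was edited, the old values stay in effect until the module stack is reloaded. A sketch of the reload order on a node where Lustre can be stopped:

```shell
# Unload the Lustre/LNet module stack (lustre_rmmod ships with Lustre).
lustre_rmmod

# Reload LNet; ko2iblnd now re-reads /etc/modprobe.d/ko2iblnd.conf
# when it is pulled in.
modprobe lnet
lnetctl lnet configure

# Re-add the o2ib NI and confirm the lnd tunables changed.
lnetctl net add --net o2ib0 --if enp94s0f0
lnetctl net show -v --net o2ib
```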

 

 

lnetctl net show -v --net o2ib
net:
    - net type: o2ib
      local NI(s):
        - nid: 192.168.1.2 at o2ib
          status: up
          interfaces:
              0: enp94s0f0
          statistics:
              send_count: 0
              recv_count: 0
              drop_count: 0
          tunables:
              peer_timeout: 100
              peer_credits: 16
              peer_buffer_credits: 0
              credits: 2560
              peercredits_hiw: 8
              map_on_demand: 0
              concurrent_sends: 16
              fmr_pool_size: 512
              fmr_flush_trigger: 384
              fmr_cache: 1
              ntx: 512
              conns_per_peer: 1
          lnd tunables:
          dev cpt: 0
          tcp bonding: 0
          CPT: "[0,1]"
[root at inst-ran1f-lustre ~]#
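A quick way to see which values the loaded ko2iblnd is actually using, independent of what lnetctl reports, is the standard module-parameter sysfs tree (a diagnostic sketch; run on the node while ko2iblnd is loaded, and note some parameters may not be exposed on every Lustre version):

```shell
# Compare the live module parameters against /etc/modprobe.d/ko2iblnd.conf.
for p in map_on_demand concurrent_sends peercredits_hiw \
         fmr_pool_size fmr_flush_trigger fmr_cache; do
    printf '%s=%s\n' "$p" "$(cat /sys/module/ko2iblnd/parameters/$p)"
done
```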

 

 
