[lustre-discuss] LNet NID down after something changed the NICs

腐朽银 woshifuxiuyin at gmail.com
Fri Feb 17 00:52:21 PST 2023


Hi,

I encountered a problem using the Lustre client on k8s with kubenet, and
I would be very happy if you could help me.

My LNet configuration is:

net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
    - net type: tcp
      local NI(s):
        - nid: 10.224.0.5@tcp
          status: up
          interfaces:
              0: eth0

It works, but after I deploy or delete a pod on the node, the NID goes
down:

        - nid: 10.224.0.5@tcp
          status: down
          interfaces:
              0: eth0
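
For reference, this is how I check the state (standard lnetctl commands;
the self-ping is just a quick way to exercise LNet from this node):

    # show the tcp net and its NI status
    lnetctl net show --net tcp
    # optionally ping our own NID (10.224.0.5@tcp from the output above)
    lnetctl ping 10.224.0.5@tcp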

k8s uses veth pairs, so it adds or deletes network interfaces whenever
pods are deployed or deleted, but it never touches the eth0 NIC itself.
I can fix the NID by deleting the tcp net with `lnetctl net del` and
re-adding it with `lnetctl net add` (see the sketch below), but I have
to do this every time a pod is scheduled to this node.
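
The manual workaround looks roughly like this (a sketch; the net type
and interface name match the configuration above):

    # tear down and re-create the tcp net so LNet re-binds to eth0
    lnetctl net del --net tcp
    lnetctl net add --net tcp --if eth0
    # confirm the NID is back up
    lnetctl net show --net tcp

I could script this against pod or link events, but that would only
treat the symptom.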

My node OS is Ubuntu 18.04 (kernel 5.4.0-1101-azure), and I built the
Lustre client myself from 2.15.1. Is this expected LNet behavior, or did
I get something wrong? I rebuilt and tested it several times and hit the
same problem every time.

Regards,
Chuanjun