[lustre-discuss] lustre 2.10.3 lnetctl configurations not persisting through reboot

Kurt Strosahl strosahl at jlab.org
Tue Apr 17 13:56:24 PDT 2018


OK, I was following http://wiki.lustre.org/LNet_Router_Config_Guide.

So what about all the peers that the export command generates?  Wouldn't that accumulate stale data over time if nodes are retired or IPs change for some reason?
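
I was thinking it might be cleaner to rebuild the file from just the sections that should persist, rather than keeping the full export, something along these lines (untested on my end, so just a sketch):

    # keep only the locally configured interfaces and routes;
    # leave the discovered peer entries out of the persistent file
    lnetctl net show > /etc/lnet.conf
    lnetctl route show >> /etc/lnet.conf

Routing could then presumably be re-enabled separately at boot with "lnetctl set routing 1".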



----- Original Message -----
From: "aik" <aik at fnal.gov>
To: "Kurt Strosahl" <strosahl at jlab.org>
Cc: lustre-discuss at lists.lustre.org
Sent: Tuesday, April 17, 2018 4:45:55 PM
Subject: Re: [lustre-discuss] lustre 2.10.3 lnetctl configurations not persisting through reboot

File /etc/lnet.conf is described on lustre wiki:

http://wiki.lustre.org/Dynamic_LNet_Configuration_and_lnetctl
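
The file uses the same YAML format that "lnetctl export" produces, so for a router like yours the non-peer part would look roughly like this (an untested sketch; check the keys against your own export output on 2.10.3):

    net:
        - net type: o2ib0
          local NI(s):
            - interfaces:
                  0: ib1
        - net type: o2ib1
          local NI(s):
            - interfaces:
                  0: ib0

Enabling routing can either be carried in the file (by copying the routing block from your own export output) or applied separately with "lnetctl set routing 1" after the file is imported.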



Alex.





On 4/17/18, 3:37 PM, "lustre-discuss on behalf of Kurt Strosahl" <lustre-discuss-bounces at lists.lustre.org on behalf of strosahl at jlab.org> wrote:



    I configured an LNet router today with Lustre 2.10.3 as the Lustre software.  I then configured the router using the following lnetctl commands:

    

    

    lnetctl lnet configure
    lnetctl net add --net o2ib0 --if ib1
    lnetctl net add --net o2ib1 --if ib0
    lnetctl set routing 1
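
    For reference, the running state can be checked before rebooting with lnetctl's show commands:

        lnetctl net show      # should list o2ib0 on ib1 and o2ib1 on ib0
        lnetctl route show    # lists any routes configured on this node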

    

    When I rebooted the router, the configuration didn't stick.  Is there a way to make this persist through a reboot?

    

    I also noticed that when I do an export of the lnetctl configuration it contains:

    

        - net type: o2ib1
          local NI(s):
            - nid: <ip of a compute node>@o2ib1
              status: up
              interfaces:
                  0: ib0
              statistics:
                  send_count: 2958318
                  recv_count: 2948077
                  drop_count: 0
              tunables:
                  peer_timeout: 180
                  peer_credits: 8
                  peer_buffer_credits: 0
                  credits: 256
              lnd tunables:
                  peercredits_hiw: 4
                  map_on_demand: 256
                  concurrent_sends: 8
                  fmr_pool_size: 512
                  fmr_flush_trigger: 384
                  fmr_cache: 1
                  ntx: 512
                  conns_per_peer: 1
              tcp bonding: 0
              dev cpt: 0
              CPT: "[0,1]"

    

    Is this expected behavior?

    

    w/r,

    Kurt J. Strosahl

    System Administrator: Lustre, HPC

    Scientific Computing Group, Thomas Jefferson National Accelerator Facility

    _______________________________________________

    lustre-discuss mailing list

    lustre-discuss at lists.lustre.org

    http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

