<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hello Ms. Megan,</p>
    <p><br>
    </p>
    <p>I am happy to report that it is resolved.</p>
    <p><br>
    </p>
    <p>It was a problem with the UUID.</p>
    <p><br>
    </p>
    <p>I will post the problem and its solution later.</p>
    <p><br>
    </p>
    <p>Cheers<br>
    </p>
    <div class="moz-cite-prefix">Le 19/05/2021 à 13:45, Abdeslam Tahari
      a écrit :<br>
    </div>
    <blockquote type="cite"
cite="mid:CA+LuYSLzPpz6XLs05ws6KPZCMtMU5tEA0nd_VVjiR=2c6JU5xQ@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">Hello Ms Megan
        <div><br>
        </div>
        <div>Thank you for the reply and your help.</div>
        <div><br>
        </div>
        <div>I have checked lctl ping,</div>
        <div>and the result seems to be OK:</div>
        <div> lctl ping 10.0.1.70<br>
          12345-0@lo<br>
          12345-10.0.1.70@tcp<br>
        </div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>The ping is good; it always succeeds.</div>
        <div><br>
        </div>
        <div>The problem occurs when I mount the Lustre file system:</div>
        <div><br>
        </div>
        <div>mount -t lustre /dev/sda /mds</div>
        <div><br>
        </div>
        <div>I get the following output:</div>
        <div> lctl dl<br>
            0 UP osd-ldiskfs lustre-MDT0000-osd lustre-MDT0000-osd_UUID
          3<br>
            2 UP mgc MGC10.0.1.70@tcp
          3ec79ce9-5167-9661-9bd6-0b897fcc42f2 4<br>
            3 UP mds MDS MDS_uuid 2<br>
        </div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>If I execute the command a second time, there is no
          output at all,</div>
        <div>and the file system is in fact not mounted.</div>
        <div><br>
        </div>
        <div>I think, although I am not sure, that it is complaining
          about the UUID of the MDT,</div>
        <div><br>
        </div>
        <div>judging from the output of lctl dk:</div>
        <div>00000100:00080000:78.0:1621365812.955564:0:84913:0:(pinger.c:413:ptlrpc_pinger_del_import())
          removing pingable import
          lustre-MDT0000-lwp-MDT0000_UUID->lustre-MDT0000_UUID<br>
00000100:00080000:78.0:1621365812.955567:0:84913:0:(import.c:86:import_set_state_nolock())
          ffff9b985701b800 lustre-MDT0000_UUID: changing import state
          from DISCONN to CLOSED<br>
          <b>00000100:00080000:78.0:1621365812.955571:0:84913:0:(import.c:157:ptlrpc_deactivate_import_nolock())
            setting import lustre-MDT0000_UUID INVALID</b><br>
10000000:01000000:78.0:1621365812.965420:0:84913:0:(mgc_request.c:151:config_log_put())
          dropping config log lustre-mdtir<br>
        </div>
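        <div><br>
        </div>
        <div>In case it helps narrow this down, here are two read-only
          checks (a sketch only; tunefs.lustre with --dryrun just reads
          back the on-disk labels, and /dev/sda is the device from the
          mount command above):</div>
        <pre>
# Read back the target configuration (fsname, index, flags,
# parameters) without changing anything on disk.
tunefs.lustre --dryrun /dev/sda

# List the NIDs that LNet is actually serving on this node.
lctl list_nids
</pre>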
        <div><br>
        </div>
        <div>Kind regards</div>
        <div><br>
        </div>
      </div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">Le mer. 19 mai 2021 à 03:15,
          Ms. Megan Larko via lustre-discuss <<a
            href="mailto:lustre-discuss@lists.lustre.org"
            moz-do-not-send="true">lustre-discuss@lists.lustre.org</a>>
          a écrit :<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px
          0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
          <div dir="ltr">
            <div dir="ltr">Hello Tahari,
              <div>What is the result of "lctl ping 10.0.1.70@tcp_0"
                from the box on which you are trying to mount the Lustre
                file system? Does the ping succeed at first and then fail
                after 3 seconds? If yes, you may wish to check the
                /etc/lnet.conf file for the Lustre LNet settings
                "discovery" (1 allows LNet discovery while 0 does not)
                and drop_asym_route (0 disallows asymmetrical routing
                while 1 permits it). I have worked with a few complex
                networks in which we chose to turn off LNet discovery and
                specify the routes via /etc/lnet.conf. On one system the
                asymmetrical routing (we have 16 LNet boxes between the
                system and the Lustre storage) seemed to be a problem,
                but we couldn't pin it to any particular box. On that
                system, disallowing asymmetrical routing seemed to help
                maintain LNet/Lustre connectivity.</div>
              <div><br>
              </div>
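              <div>For illustration, a minimal /etc/lnet.conf along those
                lines might look like the sketch below (an assumption on
                my part; the remote net and gateway NID are placeholders
                to replace with your own):</div>
              <pre>
# /etc/lnet.conf -- sketch: discovery off, asymmetrically routed
# messages dropped, routes declared statically.
global:
    discovery: 0            # 0 = do not dynamically discover peers
    drop_asym_route: 1      # 1 = drop asymmetrically routed messages
route:
    - net: tcp1                 # remote LNet network (placeholder)
      gateway: 10.0.1.1@tcp     # LNet router NID (placeholder)
</pre>
              <div><br>
              </div>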
              <div>One may use lctl ping to separate network
                connectivity from other possibilities.</div>
              <div><br>
              </div>
              <div>Cheers,</div>
              <div>megan</div>
            </div>
          </div>
          <br>
          <div class="gmail_quote">
            <div dir="ltr" class="gmail_attr">On Mon, May 17, 2021 at
              3:50 PM <<a
                href="mailto:lustre-discuss-request@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss-request@lists.lustre.org</a>>
              wrote:<br>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">Send lustre-discuss
              mailing list submissions to<br>
                      <a href="mailto:lustre-discuss@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss@lists.lustre.org</a><br>
              <br>
              To subscribe or unsubscribe via the World Wide Web, visit<br>
                      <a
                href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org"
                rel="noreferrer" target="_blank" moz-do-not-send="true">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a><br>
              or, via email, send a message with subject or body 'help'
              to<br>
                      <a
                href="mailto:lustre-discuss-request@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss-request@lists.lustre.org</a><br>
              <br>
              You can reach the person managing the list at<br>
                      <a
                href="mailto:lustre-discuss-owner@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss-owner@lists.lustre.org</a><br>
              <br>
              When replying, please edit your Subject line so it is more
              specific<br>
              than "Re: Contents of lustre-discuss digest..."<br>
              <br>
              <br>
              Today's Topics:<br>
              <br>
                 1. Re: problems to mount MDS and MDT (Abdeslam Tahari)<br>
                 2. Re: problems to mount MDS and MDT (Colin Faber)<br>
              <br>
              <br>
----------------------------------------------------------------------<br>
              <br>
              Message: 1<br>
              Date: Mon, 17 May 2021 21:35:34 +0200<br>
              From: Abdeslam Tahari <<a
                href="mailto:abeslam@gmail.com" target="_blank"
                moz-do-not-send="true">abeslam@gmail.com</a>><br>
              To: Colin Faber <<a href="mailto:cfaber@gmail.com"
                target="_blank" moz-do-not-send="true">cfaber@gmail.com</a>><br>
              Cc: lustre-discuss <<a
                href="mailto:lustre-discuss@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss@lists.lustre.org</a>><br>
              Subject: Re: [lustre-discuss] problems to mount MDS and
              MDT<br>
              Message-ID:<br>
                      <CA+LuYSL9_TTcHopwHYbFRosZNgUFK=bxeCePEn5DzZD+QXnwiQ@mail.gmail.com><br>
              Content-Type: text/plain; charset="utf-8"<br>
              <br>
              Thank you, Colin.<br>
              <br>
              No, I don't have iptables rules.<br>
              <br>
              firewalld is stopped and SELinux is disabled as well:<br>
               iptables -L<br>
              Chain INPUT (policy ACCEPT)<br>
              target     prot opt source               destination<br>
              <br>
              Chain FORWARD (policy ACCEPT)<br>
              target     prot opt source               destination<br>
              <br>
              Chain OUTPUT (policy ACCEPT)<br>
              target     prot opt source               destination<br>
              <br>
              <br>
              Regards<br>
              <br>
              On Mon, May 17, 2021 at 21:29, Colin Faber <<a
                href="mailto:cfaber@gmail.com" target="_blank"
                moz-do-not-send="true">cfaber@gmail.com</a>> wrote:<br>
              <br>
              > Firewall rules dealing with localhost?<br>
              ><br>
              > On Mon, May 17, 2021 at 11:33 AM Abdeslam Tahari via
              lustre-discuss <<br>
              > <a href="mailto:lustre-discuss@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss@lists.lustre.org</a>>
              wrote:<br>
              ><br>
              >> Hello<br>
              >><br>
              >> I have a problem mounting the Lustre MDS/MDT; it
              won't mount at all, and<br>
              >> there are no error messages at the console.<br>
              >><br>
              >> - It does not show errors or messages while
              mounting.<br>
              >><br>
              >> Here are some debug log entries.<br>
              >><br>
              >><br>
              >> Note that this is a new project I am setting up.<br>
              >><br>
              >> The Lustre version and packages installed:<br>
              >> kmod-lustre-2.12.5-1.el7.x86_64<br>
              >> kernel-devel-3.10.0-1127.8.2.el7_lustre.x86_64<br>
              >> lustre-2.12.5-1.el7.x86_64<br>
              >> lustre-resource-agents-2.12.5-1.el7.x86_64<br>
              >> kernel-3.10.0-1160.2.1.el7_lustre.x86_64<br>
              >>
              kernel-debuginfo-common-x86_64-3.10.0-1160.2.1.el7_lustre.x86_64<br>
              >> kmod-lustre-osd-ldiskfs-2.12.5-1.el7.x86_64<br>
              >> kernel-3.10.0-1127.8.2.el7_lustre.x86_64<br>
              >> lustre-osd-ldiskfs-mount-2.12.5-1.el7.x86_64<br>
              >><br>
              >><br>
              >><br>
              >> The system (OS): CentOS 7<br>
              >><br>
              >> The kernel:<br>
              >> Linux lustre-mds1
              3.10.0-1127.8.2.el7_lustre.x86_64<br>
              >>  cat /etc/redhat-release<br>
              >><br>
              >><br>
              >> When I mount the Lustre file system, it won't show
              up, and there are no errors:<br>
              >><br>
              >> mount -t lustre /dev/sda /mds<br>
              >><br>
              >> lctl dl does not show it<br>
              >><br>
              >> df -h shows no mount point for /dev/sda<br>
              >><br>
              >><br>
              >> lctl dl<br>
              >><br>
              >> shows this:<br>
              >> lctl dl<br>
              >>   0 UP osd-ldiskfs lustre-MDT0000-osd
              lustre-MDT0000-osd_UUID 3<br>
              >>   2 UP mgc MGC10.0.1.70@tcp
              57e06c2d-5294-f034-fd95-460cee4f92b7 4<br>
              >>   3 UP mds MDS MDS_uuid 2<br>
              >><br>
              >><br>
              >> but unfortunately it disappears after 3 seconds<br>
              >><br>
              >> lctl dl shows nothing<br>
              >><br>
              >> lctl dk<br>
              >><br>
              >> shows this debug output<br>
              >><br>
              >><br>
              >>
00000020:00000080:18.0:1621276062.004338:0:13403:0:(obd_config.c:1128:class_process_config())<br>
              >> processing cmd: cf006<br>
              >>
00000020:00000080:18.0:1621276062.004341:0:13403:0:(obd_config.c:1147:class_process_config())<br>
              >> removing mappings for uuid MGC10.0.1.70@tcp_0<br>
              >>
00000020:01000004:18.0:1621276062.004346:0:13403:0:(obd_mount.c:661:lustre_put_lsi())<br>
              >> put ffff9bbbf91d5800 1<br>
              >>
00000020:00000080:18.0:1621276062.004351:0:13403:0:(genops.c:1501:class_disconnect())<br>
              >> disconnect: cookie 0x256dd92fc5bf929c<br>
              >>
00000020:00000080:18.0:1621276062.004354:0:13403:0:(genops.c:1024:class_export_put())<br>
              >> final put
              ffff9bbf3e66a400/lustre-MDT0000-osd_UUID<br>
              >>
00000020:01000000:18.0:1621276062.004361:0:13403:0:(obd_config.c:2100:class_manual_cleanup())<br>
              >> Manual cleanup of lustre-MDT0000-osd (flags='')<br>
              >>
00000020:00000080:18.0:1621276062.004368:0:821:0:(genops.c:974:class_export_destroy())<br>
              >> destroying export
              ffff9bbf3e66a400/lustre-MDT0000-osd_UUID for<br>
              >> lustre-MDT0000-osd<br>
              >>
00000020:00000080:18.0:1621276062.004376:0:13403:0:(obd_config.c:1128:class_process_config())<br>
              >> processing cmd: cf004<br>
              >>
00000020:00000080:18.0:1621276062.004379:0:13403:0:(obd_config.c:659:class_cleanup())<br>
              >> lustre-MDT0000-osd: forcing exports to
              disconnect: 0/0<br>
              >>
00000020:00080000:18.0:1621276062.004382:0:13403:0:(genops.c:1590:class_disconnect_exports())<br>
              >> OBD device 0 (ffff9bbf47141080) has no exports<br>
              >>
00000020:00000080:18.0:1621276062.004788:0:13403:0:(obd_config.c:1128:class_process_config())<br>
              >> processing cmd: cf002<br>
              >>
00000020:00000080:18.0:1621276062.004791:0:13403:0:(obd_config.c:589:class_detach())<br>
              >> detach on obd lustre-MDT0000-osd (uuid
              lustre-MDT0000-osd_UUID)<br>
              >>
00000020:00000080:18.0:1621276062.004794:0:13403:0:(genops.c:1024:class_export_put())<br>
              >> final put
              ffff9bbf48800c00/lustre-MDT0000-osd_UUID<br>
              >>
00000020:00000080:18.0:1621276062.004796:0:13403:0:(genops.c:974:class_export_destroy())<br>
              >> destroying export
              ffff9bbf48800c00/lustre-MDT0000-osd_UUID for<br>
              >> lustre-MDT0000-osd<br>
              >>
00000020:01000000:18.0:1621276062.004799:0:13403:0:(genops.c:481:class_free_dev())<br>
              >> finishing cleanup of obd lustre-MDT0000-osd
              (lustre-MDT0000-osd_UUID)<br>
              >>
00000020:01000004:18.0:1621276062.450759:0:13403:0:(obd_mount.c:605:lustre_free_lsi())<br>
              >> Freeing lsi ffff9bbbf91d6800<br>
              >>
00000020:01000000:18.0:1621276062.450805:0:13403:0:(obd_config.c:2100:class_manual_cleanup())<br>
              >> Manual cleanup of MDS (flags='F')<br>
              >>
00000020:00000080:18.0:1621276062.450806:0:13403:0:(obd_config.c:1128:class_process_config())<br>
              >> processing cmd: cf004<br>
              >>
00000020:00000080:18.0:1621276062.450807:0:13403:0:(obd_config.c:659:class_cleanup())<br>
              >> MDS: forcing exports to disconnect: 0/0<br>
              >>
00000020:00080000:18.0:1621276062.450809:0:13403:0:(genops.c:1590:class_disconnect_exports())<br>
              >> OBD device 3 (ffff9bbf43fdd280) has no exports<br>
              >>
00000020:00000080:58.0F:1621276062.490781:0:13403:0:(obd_config.c:1128:class_process_config())<br>
              >> processing cmd: cf002<br>
              >>
00000020:00000080:58.0:1621276062.490787:0:13403:0:(obd_config.c:589:class_detach())<br>
              >> detach on obd MDS (uuid MDS_uuid)<br>
              >>
00000020:00000080:58.0:1621276062.490788:0:13403:0:(genops.c:1024:class_export_put())<br>
              >> final put ffff9bbf3e668800/MDS_uuid<br>
              >>
00000020:00000080:58.0:1621276062.490790:0:13403:0:(genops.c:974:class_export_destroy())<br>
              >> destroying export ffff9bbf3e668800/MDS_uuid for
              MDS<br>
              >>
00000020:01000000:58.0:1621276062.490791:0:13403:0:(genops.c:481:class_free_dev())<br>
              >> finishing cleanup of obd MDS (MDS_uuid)<br>
              >>
00000020:02000400:58.0:1621276062.490877:0:13403:0:(obd_mount_server.c:1642:server_put_super())<br>
              >> server umount lustre-MDT0000 complete<br>
              >>
00000400:02020000:42.0:1621276086.284109:0:5400:0:(acceptor.c:321:lnet_accept())<br>
              >> 120-3: Refusing connection from 127.0.0.1 for
              127.0.0.1@tcp: No matching<br>
              >> NI<br>
              >>
00000800:00020000:6.0:1621276086.284152:0:5383:0:(socklnd_cb.c:1817:ksocknal_recv_hello())<br>
              >> Error -104 reading HELLO from 127.0.0.1<br>
              >>
00000400:02020000:6.0:1621276086.284174:0:5383:0:(acceptor.c:127:lnet_connect_console_error())<br>
              >> 11b-b: Connection to 127.0.0.1@tcp at host
              127.0.0.1 on port 988 was<br>
              >> reset: is it running a compatible version of
              Lustre and is 127.0.0.1@tcp<br>
              >> one of its NIDs?<br>
              >>
00000800:00000100:6.0:1621276086.284189:0:5383:0:(socklnd_cb.c:438:ksocknal_txlist_done())<br>
              >> Deleting packet type 2 len 0
              10.0.1.70@tcp->127.0.0.1@tcp<br>
              >>
00000800:00000100:34.0:1621276136.363882:0:5401:0:(socklnd_cb.c:979:ksocknal_launch_packet())<br>
              >> No usable routes to 12345-127.0.0.1@tcp<br>
              >>
00000400:02020000:42.0:1621276186.440095:0:5400:0:(acceptor.c:321:lnet_accept())<br>
              >> 120-3: Refusing connection from 127.0.0.1 for
              127.0.0.1@tcp: No matching<br>
              >> NI<br>
              >>
00000800:00020000:44.0:1621276186.446533:0:5386:0:(socklnd_cb.c:1817:ksocknal_recv_hello())<br>
              >> Error -104 reading HELLO from 127.0.0.1<br>
              >>
00000400:02020000:44.0:1621276186.452996:0:5386:0:(acceptor.c:127:lnet_connect_console_error())<br>
              >> 11b-b: Connection to 127.0.0.1@tcp at host
              127.0.0.1 on port 988 was<br>
              >> reset: is it running a compatible version of
              Lustre and is 127.0.0.1@tcp<br>
              >> one of its NIDs?<br>
              >>
00000800:00000100:44.0:1621276186.461433:0:5386:0:(socklnd_cb.c:438:ksocknal_txlist_done())<br>
              >> Deleting packet type 2 len 0
              10.0.1.70@tcp->127.0.0.1@tcp<br>
              >> Debug log: 872 lines, 872 kept, 0 dropped, 0 bad.<br>
              >><br>
              >><br>
              >><br>
              >> I just can't figure it out; any help would be very
              much appreciated.<br>
              >><br>
              >><br>
              >> Thanks all<br>
              >><br>
              >><br>
              >><br>
              >><br>
              >><br>
              >><br>
              >> --<br>
              >> Tahari.Abdeslam<br>
              >> _______________________________________________<br>
              >> lustre-discuss mailing list<br>
              >> <a href="mailto:lustre-discuss@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss@lists.lustre.org</a><br>
              >> <a
                href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org"
                rel="noreferrer" target="_blank" moz-do-not-send="true">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a><br>
              >><br>
              ><br>
              <br>
              -- <br>
              Tahari.Abdeslam<br>
              <br>
              ------------------------------<br>
              <br>
              Message: 2<br>
              Date: Mon, 17 May 2021 13:50:03 -0600<br>
              From: Colin Faber <<a href="mailto:cfaber@gmail.com"
                target="_blank" moz-do-not-send="true">cfaber@gmail.com</a>><br>
              To: Abdeslam Tahari <<a href="mailto:abeslam@gmail.com"
                target="_blank" moz-do-not-send="true">abeslam@gmail.com</a>><br>
              Cc: lustre-discuss <<a
                href="mailto:lustre-discuss@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss@lists.lustre.org</a>><br>
              Subject: Re: [lustre-discuss] problems to mount MDS and
              MDT<br>
              Message-ID:<br>
                      <CAJcXmB=T884j=5N8nhWspFBvNS+nAOoMa9b8xJUdhXT-fBoysw@mail.gmail.com><br>
              Content-Type: text/plain; charset="utf-8"<br>
              <br>
              It appears part of the debug data is missing (the part
              before what you posted). Can you try again? Run lctl dk
              > /dev/null to clear the buffer, then try your mount and
              grab the debug output again.<br>
              <br>
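              A minimal version of that capture sequence might look like
              this (a sketch; the output file path is a placeholder):<br>
              <pre>
# Clear the Lustre kernel debug buffer so the capture is clean.
lctl dk > /dev/null

# Reproduce the failure.
mount -t lustre /dev/sda /mds

# Dump the debug buffer covering the mount attempt.
lctl dk > /tmp/lustre-mount-debug.log
</pre>
              <br>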
              On Mon, May 17, 2021 at 1:35 PM Abdeslam Tahari <<a
                href="mailto:abeslam@gmail.com" target="_blank"
                moz-do-not-send="true">abeslam@gmail.com</a>> wrote:<br>
              <br>
              > [snip -- quoted text identical to Message 1 above]<br>
              <br>
              ------------------------------<br>
              <br>
              Subject: Digest Footer<br>
              <br>
              _______________________________________________<br>
              lustre-discuss mailing list<br>
              <a href="mailto:lustre-discuss@lists.lustre.org"
                target="_blank" moz-do-not-send="true">lustre-discuss@lists.lustre.org</a><br>
              <a
                href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org"
                rel="noreferrer" target="_blank" moz-do-not-send="true">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a><br>
              <br>
              <br>
              ------------------------------<br>
              <br>
              End of lustre-discuss Digest, Vol 182, Issue 12<br>
              ***********************************************<br>
            </blockquote>
          </div>
          _______________________________________________<br>
          lustre-discuss mailing list<br>
          <a href="mailto:lustre-discuss@lists.lustre.org"
            target="_blank" moz-do-not-send="true">lustre-discuss@lists.lustre.org</a><br>
          <a
            href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org"
            rel="noreferrer" target="_blank" moz-do-not-send="true">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a><br>
        </blockquote>
      </div>
      <br clear="all">
      <div><br>
      </div>
      -- <br>
      <div dir="ltr" class="gmail_signature">Tahari.Abdeslam</div>
    </blockquote>
  </body>
</html>