<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Dear All, <br>
    </p>
    <p>We have a similar setup with Lustre on ZFS and make regular
      use of snapshots as the source for backups to tape. We would
      like to use robinhood in the future, and the question now is
      how best to do it. <br>
    </p>
    <p>Would it be a workaround to disable the robinhood daemon
      temporarily during the mount process?<br>
      Does the problem only occur when changelogs are consumed during
      the process of mounting a snapshot? Or is it also a problem when
      changelogs are consumed while the snapshot remains mounted (which
      is for us typically several hours)? <br>
      Is there already an LU-ticket about this issue?</p>
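    <p>For reference, the workaround we have in mind would look
      roughly like the sketch below (the systemd unit name
      <tt>robinhood</tt>, the fsname <tt>fsA</tt>, and the snapshot
      name are placeholders for our setup; whether pausing the
      consumer is actually sufficient is exactly the question
      above):</p>
    <pre># pause changelog consumption before touching the snapshot
systemctl stop robinhood
lctl snapshot_mount -F fsA -n fsA_AutoSS-Mon
# ... run the tape backup from the mounted snapshot ...
lctl snapshot_umount -F fsA -n fsA_AutoSS-Mon
# resume changelog consumption
systemctl start robinhood</pre>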
    <p>Thanks!<br>
      Robert<br>
    </p>
    -- <br>
    Dr. Robert Redl<br>
    Scientific Programmer, "Waves to Weather" (SFB/TRR165)<br>
    Meteorologisches Institut<br>
    Ludwig-Maximilians-Universität München<br>
    Theresienstr. 37, 80333 München, Germany<br>
    <br>
    <div class="moz-cite-prefix">On 03.09.2018 at 08:16, Yong,
      Fan wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:7FB055E0B36B6F4EB93E637E0640A56FCC03F958@FMSMSX125.amr.corp.intel.com">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <meta name="Generator" content="Microsoft Word 15 (filtered
        medium)">
      <style><!--
/* Font Definitions */
@font-face
        {font-family:宋体;
        panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
        {font-family:"\@宋体";
        panose-1:2 1 6 0 3 1 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0cm;
        margin-bottom:.0001pt;
        font-size:12.0pt;
        font-family:宋体;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:blue;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:purple;
        text-decoration:underline;}
span.EmailStyle17
        {mso-style-type:personal-reply;
        font-family:"Calibri",sans-serif;
        color:#1F497D;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}
@page WordSection1
        {size:612.0pt 792.0pt;
        margin:72.0pt 90.0pt 72.0pt 90.0pt;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
      <div class="WordSection1">
        <p class="MsoNormal"><span
style="font-size:10.5pt;font-family:"Calibri",sans-serif;color:#1F497D"
            lang="EN-US">I would say that it is not the order of your
            operations that caused the trouble. Instead, it is related
            to the snapshot mount logic. As mentioned in my earlier
            reply, we need a patch for the llog logic to avoid
            modifying the llog under snapshot mode.<o:p></o:p></span></p>
        <p class="MsoNormal"><span
style="font-size:10.5pt;font-family:"Calibri",sans-serif;color:#1F497D"
            lang="EN-US"><o:p> </o:p></span></p>
        <p class="MsoNormal"><span
style="font-size:10.5pt;font-family:"Calibri",sans-serif;color:#1F497D"
            lang="EN-US"><o:p> </o:p></span></p>
        <p class="MsoNormal"><span
style="font-size:10.5pt;font-family:"Calibri",sans-serif;color:#1F497D"
            lang="EN-US">--<o:p></o:p></span></p>
        <p class="MsoNormal"><span
style="font-size:10.5pt;font-family:"Calibri",sans-serif;color:#1F497D"
            lang="EN-US">Cheers,<o:p></o:p></span></p>
        <p class="MsoNormal"><span
style="font-size:10.5pt;font-family:"Calibri",sans-serif;color:#1F497D"
            lang="EN-US">Nasf<o:p></o:p></span></p>
        <div>
          <div style="border:none;border-top:solid #E1E1E1
            1.0pt;padding:3.0pt 0cm 0cm 0cm">
            <p class="MsoNormal"
              style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><b><span
style="font-size:11.0pt;font-family:"Calibri",sans-serif"
                  lang="EN-US">From:</span></b><span
                style="font-size:11.0pt;font-family:"Calibri",sans-serif"
                lang="EN-US"> Kirk, Benjamin (JSC-EG311)
                [<a class="moz-txt-link-freetext" href="mailto:benjamin.kirk@nasa.gov">mailto:benjamin.kirk@nasa.gov</a>] <br>
                <b>Sent:</b> Tuesday, August 28, 2018 7:53 PM<br>
                <b>To:</b> <a class="moz-txt-link-abbreviated" href="mailto:lustre-discuss@lists.lustre.org">lustre-discuss@lists.lustre.org</a><br>
                <b>Cc:</b> Andreas Dilger <a class="moz-txt-link-rfc2396E" href="mailto:adilger@whamcloud.com"><adilger@whamcloud.com></a>;
                Yong, Fan <a class="moz-txt-link-rfc2396E" href="mailto:fan.yong@intel.com"><fan.yong@intel.com></a><br>
                <b>Subject:</b> Re: [lustre-discuss] Lustre/ZFS
                snapshots mount error<o:p></o:p></span></p>
          </div>
        </div>
        <p class="MsoNormal"
          style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
            lang="EN-US"><o:p> </o:p></span></p>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US">The MDS situation is very basic:
              active/passive mds0/mds1 for both fsA & fsB.  fsA has
              the combined mgs/mdt in a single ZFS filesystem, and fsB
              has its own mdt in a separate ZFS filesystem.  mds0 is
              primary for all.<o:p></o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US"><o:p> </o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US">fsA & fsB DO both have changelogs enabled
              to feed robinhood databases.<o:p></o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US"><o:p> </o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US">What’s the recommended procedure we should
              follow here before mounting the snapshots?<o:p></o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US"><o:p> </o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US">1) disable changelogs on the active MDTs
              (this will compromise robinhood, requiring a rescan…), or
               <o:p></o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US">2) temporarily halt changelog consumption /
              cleanup (e.g. stop robinhood in our case) and then mount
              the snapshot?<o:p></o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US"><o:p> </o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US">Thanks for the help!
              <o:p></o:p></span></p>
          <div>
            <p class="MsoNormal"
              style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                lang="EN-US"><o:p> </o:p></span></p>
          </div>
        </div>
        <div>
          <p class="MsoNormal"
            style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
              lang="EN-US">--<o:p></o:p></span></p>
          <div>
            <div>
              <div>
                <div>
                  <p class="MsoNormal"
                    style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                      style="color:black" lang="EN-US">Benjamin S. Kirk,
                      Ph.D.<o:p></o:p></span></p>
                </div>
                <div>
                  <p class="MsoNormal"
                    style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                      style="color:black" lang="EN-US">NASA Lyndon B.
                      Johnson Space Center<o:p></o:p></span></p>
                </div>
                <div>
                  <p class="MsoNormal"
                    style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                      style="color:black" lang="EN-US">Acting Chief,
                      Aeroscience & Flight Mechanics Division<o:p></o:p></span></p>
                </div>
                <div>
                  <p class="MsoNormal"
                    style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                      style="color:black" lang="EN-US"><o:p> </o:p></span></p>
                </div>
              </div>
            </div>
            <div>
              <blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
                <div>
                  <p class="MsoNormal"
                    style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                      lang="EN-US">On Aug 27, 2018, at 7:33 PM, Yong,
                      Fan <<a href="mailto:fan.yong@intel.com"
                        moz-do-not-send="true">fan.yong@intel.com</a>>
                      wrote:<o:p></o:p></span></p>
                </div>
                <p class="MsoNormal"
                  style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                    lang="EN-US"><o:p> </o:p></span></p>
                <div>
                  <div>
                    <p class="MsoNormal"
                      style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                        lang="EN-US">According to the stack trace,
                        someone was trying to clean up old empty llogs
                        while mounting the snapshot. We do NOT allow
                        any modification while mounting a snapshot;
                        otherwise, it would trigger a ZFS backend
                        BUG(). That is why we added the LASSERT() when
                        starting the transaction. One possible solution
                        is to add a check in the llog logic to avoid
                        modifying the llog under snapshot mode.<br>
                        <br>
                        <br>
                        --<br>
                        Cheers,<br>
                        Nasf<br>
                        <br>
                        -----Original Message-----<br>
                        From: lustre-discuss [<a
                          href="mailto:lustre-discuss-bounces@lists.lustre.org"
                          moz-do-not-send="true">mailto:lustre-discuss-bounces@lists.lustre.org</a>]
                        On Behalf Of Andreas Dilger<br>
                        Sent: Tuesday, August 28, 2018 5:57 AM<br>
                        To: Kirk, Benjamin (JSC-EG311) <<a
                          href="mailto:benjamin.kirk@nasa.gov"
                          moz-do-not-send="true">benjamin.kirk@nasa.gov</a>><br>
                        Cc: <a
                          href="mailto:lustre-discuss@lists.lustre.org"
                          moz-do-not-send="true">lustre-discuss@lists.lustre.org</a><br>
                        Subject: Re: [lustre-discuss] Lustre/ZFS
                        snapshots mount error<br>
                        <br>
                        It's probably best to file an LU ticket for this
                        issue.<br>
                        <br>
                        It looks like there is something with the log
                        processing at mount that is trying to modify the
                        configuration files.  I'm not sure whether that
                        should be allowed or not.<br>
                        <br>
                        Does fsB have the same MGS as fsA?  Does it have
                        the same MDS node as fsA?<br>
                        If it has a different MDS, you might consider
                        giving it its own MGS as well.<br>
                        That doesn't have to be a separate MGS node,
                        just a separate filesystem (ZFS fileset in the
                        same zpool) on the MDS node.<br>
                        <br>
                        Cheers, Andreas<br>
                        <br>
                        <br>
                        <o:p></o:p></span></p>
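                    <p>(A rough sketch of that suggestion, assuming the
                      MDT lives in a zpool named <tt>metadata</tt> as in
                      the log below; the fileset name <tt>mgs</tt> and
                      the mount point are placeholders:)</p>
                    <pre># create the MGS as its own ZFS fileset in the existing pool
mkfs.lustre --mgs --backfstype=zfs metadata/mgs
mount -t lustre metadata/mgs /mnt/mgs</pre>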
                    <blockquote
                      style="margin-top:5.0pt;margin-bottom:5.0pt">
                      <p class="MsoNormal"
                        style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                          lang="EN-US">On Aug 27, 2018, at 10:18, Kirk,
                          Benjamin (JSC-EG311) <<a
                            href="mailto:benjamin.kirk@nasa.gov"
                            moz-do-not-send="true">benjamin.kirk@nasa.gov</a>>
                          wrote:<br>
                          <br>
                          Hi all,<br>
                          <br>
                          We have two filesystems, fsA &amp; fsB (eadc
                          below), both of which get snapshots taken
                          daily, rotated over a week.  It’s a beautiful
                          feature we’ve been using in production ever
                          since it was introduced with 2.10.<br>
                          <br>
                          -) We’ve got Lustre/ZFS 2.10.4 on CentOS 7.5.<br>
                          -) Both fsA & fsB have changelogs active.<br>
                          -) fsA has combined mgt/mdt on a single ZFS
                          filesystem.<br>
                          -) fsB has a single mdt on a single ZFS
                          filesystem.<br>
                          -) for fsA, I have no issues mounting any of
                          the snapshots via lctl.<br>
                          -) for fsB, I can mount the three most recent
                          snapshots, then encounter errors:<br>
                          <br>
                          [root@hpfs-fsl-mds0 ~]# lctl snapshot_mount -F
                          eadc -n eadc_AutoSS-Mon <br>
                          mounted the snapshot eadc_AutoSS-Mon with
                          fsname 3d40bbc<br>
                          [root@hpfs-fsl-mds0 ~]# lctl snapshot_umount
                          -F eadc -n <br>
                          eadc_AutoSS-Mon<br>
                          [root@hpfs-fsl-mds0 ~]# lctl snapshot_mount -F
                          eadc -n eadc_AutoSS-Sun <br>
                          mounted the snapshot eadc_AutoSS-Sun with
                          fsname 584c07a<br>
                          [root@hpfs-fsl-mds0 ~]# lctl snapshot_umount
                          -F eadc -n <br>
                          eadc_AutoSS-Sun<br>
                          [root@hpfs-fsl-mds0 ~]# lctl snapshot_mount -F
                          eadc -n eadc_AutoSS-Sat <br>
                          mounted the snapshot eadc_AutoSS-Sat with
                          fsname 4e646fe<br>
                          [root@hpfs-fsl-mds0 ~]# lctl snapshot_umount
                          -F eadc -n <br>
                          eadc_AutoSS-Sat<br>
                          [root@hpfs-fsl-mds0 ~]# lctl snapshot_mount -F
                          eadc -n eadc_AutoSS-Fri<br>
                          mount.lustre: mount
                          metadata/meta-eadc@eadc_AutoSS-Fri at <br>
                          /mnt/eadc_AutoSS-Fri_MDT0000 failed: Read-only
                          file system Can't mount <br>
                          the snapshot eadc_AutoSS-Fri: Read-only file
                          system<br>
                          <br>
                          The relevant bits from dmesg are<br>
                          [1353434.417762] Lustre: 3d40bbc-MDT0000: set
                          dev_rdonly on this <br>
                          device [1353434.417765] Lustre: Skipped 3
                          previous similar messages <br>
                          [1353434.649480] Lustre: 3d40bbc-MDT0000:
                          Imperative Recovery enabled, <br>
                          recovery window shrunk from 300-900 down to
                          150-900 [1353434.649484] <br>
                          Lustre: Skipped 3 previous similar messages
                          [1353434.866228] Lustre: <br>
                          3d40bbc-MDD0000: changelog on [1353434.866233]
                          Lustre: Skipped 1 <br>
                          previous similar message [1353435.427744]
                          Lustre: 3d40bbc-MDT0000: <br>
                          Connection restored to <a
                            href="mailto:...@tcp" moz-do-not-send="true">...@tcp</a>
                          (at <a href="mailto:...@tcp"
                            moz-do-not-send="true">
                            ...@tcp</a>) [1353435.427747] Lustre: <br>
                          Skipped 23 previous similar messages
                          [1353445.255899] Lustre: Failing <br>
                          over 3d40bbc-MDT0000 [1353445.255903] Lustre:
                          Skipped 3 previous <br>
                          similar messages [1353445.256150] LustreError:
                          11-0: <br>
                          3d40bbc-OST0000-osc-MDT0000: operation
                          ost_disconnect to node <a
                            href="mailto:...@tcp" moz-do-not-send="true">
                            ...@tcp</a> <br>
                          failed: rc = -107 [1353445.257896]
                          LustreError: Skipped 23 previous <br>
                          similar messages [1353445.353874] Lustre:
                          server umount <br>
                          3d40bbc-MDT0000 complete [1353445.353877]
                          Lustre: Skipped 3 previous <br>
                          similar messages [1353475.302224] Lustre:
                          4e646fe-MDD0000: changelog <br>
                          on [1353475.302228] Lustre: Skipped 1 previous
                          similar message [1353498.964016] LustreError:
                          25582:0:(osd_handler.c:341:osd_trans_create())
                          36ca26b-MDT0000-osd: someone try to start
                          transaction under readonly mode, should be
                          disabled.<br>
                          [1353498.967260] LustreError:
                          25582:0:(osd_handler.c:341:osd_trans_create())
                          Skipped 1 previous similar message<br>
                          [1353498.968829] CPU: 6 PID: 25582 Comm:
                          mount.lustre Kdump: loaded Tainted: P
                                    OE  ------------
                            3.10.0-862.6.3.el7.x86_64 #1<br>
                          [1353498.968830] Hardware name: Supermicro
                          SYS-6027TR-D71FRF/X9DRT, <br>
                          BIOS 3.2a 08/04/2015 [1353498.968832] Call
                          Trace:<br>
                          [1353498.968841]  [<ffffffffb5b0e80e>]
                          dump_stack+0x19/0x1b <br>
                          [1353498.968851]  [<ffffffffc0cbe5db>]
                          osd_trans_create+0x38b/0x3d0 <br>
                          [osd_zfs] [1353498.968876]
                           [<ffffffffc1116044>] <br>
                          llog_destroy+0x1f4/0x3f0 [obdclass]
                          [1353498.968887]  <br>
                          [<ffffffffc111f0f6>]
                          llog_cat_reverse_process_cb+0x246/0x3f0 <br>
                          [obdclass] [1353498.968897]
                           [<ffffffffc111a32c>] <br>
                          llog_reverse_process+0x38c/0xaa0 [obdclass]
                          [1353498.968910]  <br>
                          [<ffffffffc111eeb0>] ?
                          llog_cat_process_cb+0x4e0/0x4e0 [obdclass] <br>
                          [1353498.968922]  [<ffffffffc111af69>] <br>
                          llog_cat_reverse_process+0x179/0x270
                          [obdclass] [1353498.968932]  <br>
                          [<ffffffffc1115585>] ?
                          llog_init_handle+0xd5/0x9a0 [obdclass] <br>
                          [1353498.968943]  [<ffffffffc1116e78>] ?
                          llog_open_create+0x78/0x320 <br>
                          [obdclass] [1353498.968949]
                           [<ffffffffc12e55f0>] ? <br>
                          mdd_root_get+0xf0/0xf0 [mdd] [1353498.968954]
                           [<ffffffffc12ec7af>] <br>
                          mdd_prepare+0x13ff/0x1c70 [mdd]
                          [1353498.968966]  [<ffffffffc166b037>] <br>
                          mdt_prepare+0x57/0x3b0 [mdt] [1353498.968983]
                           [<ffffffffc1183afd>] <br>
                          server_start_targets+0x234d/0x2bd0 [obdclass]
                          [1353498.968999]  <br>
                          [<ffffffffc1153500>] ?
                          class_config_dump_handler+0x7e0/0x7e0 <br>
                          [obdclass] [1353498.969012]
                           [<ffffffffc118541d>] <br>
                          server_fill_super+0x109d/0x185a [obdclass]
                          [1353498.969025]  <br>
                          [<ffffffffc115cef8>]
                          lustre_fill_super+0x328/0x950 [obdclass] <br>
                          [1353498.969038]  [<ffffffffc115cbd0>] ?
                          <br>
                          lustre_common_put_super+0x270/0x270 [obdclass]
                          [1353498.969041]  <br>
                          [<ffffffffb561f3bf>]
                          mount_nodev+0x4f/0xb0 [1353498.969053]  <br>
                          [<ffffffffc1154f18>]
                          lustre_mount+0x38/0x60 [obdclass] <br>
                          [1353498.969055]  [<ffffffffb561ff3e>]
                          mount_fs+0x3e/0x1b0 [1353498.969060]
                           [<ffffffffb563d4b7>]
                          vfs_kern_mount+0x67/0x110 [1353498.969062]
                           [<ffffffffb563fadf>]
                          do_mount+0x1ef/0xce0 [1353498.969066]
                           [<ffffffffb55f7c2c>] ?
                          kmem_cache_alloc_trace+0x3c/0x200
                          [1353498.969069]  [<ffffffffb5640913>]
                          SyS_mount+0x83/0xd0 [1353498.969074]
                           [<ffffffffb5b20795>]
                          system_call_fastpath+0x1c/0x21
                          [1353498.969079] LustreError:
                          25582:0:(llog_cat.c:1027:llog_cat_reverse_process_cb())
                          36ca26b-MDD0000: fail to destroy empty log: rc
                          = -30<br>
                          [1353498.970785] CPU: 6 PID: 25582 Comm:
                          mount.lustre Kdump: loaded Tainted: P
                                    OE  ------------
                            3.10.0-862.6.3.el7.x86_64 #1<br>
                          [1353498.970786] Hardware name: Supermicro
                          SYS-6027TR-D71FRF/X9DRT, <br>
                          BIOS 3.2a 08/04/2015 [1353498.970787] Call
                          Trace:<br>
                          [1353498.970790]  [<ffffffffb5b0e80e>]
                          dump_stack+0x19/0x1b <br>
                          [1353498.970795]  [<ffffffffc0cbe5db>]
                          osd_trans_create+0x38b/0x3d0 <br>
                          [osd_zfs] [1353498.970807]
                           [<ffffffffc1117921>] <br>
                          llog_cancel_rec+0xc1/0x880 [obdclass]
                          [1353498.970817]  <br>
                          [<ffffffffc111e13b>]
                          llog_cat_cleanup+0xdb/0x380 [obdclass] <br>
                          [1353498.970827]  [<ffffffffc111f14d>] <br>
                          llog_cat_reverse_process_cb+0x29d/0x3f0
                          [obdclass] [1353498.970838]  <br>
                          [<ffffffffc111a32c>]
                          llog_reverse_process+0x38c/0xaa0 [obdclass] <br>
                          [1353498.970848]  [<ffffffffc111eeb0>] ?
                          <br>
                          llog_cat_process_cb+0x4e0/0x4e0 [obdclass]
                          [1353498.970858]  <br>
                          [<ffffffffc111af69>]
                          llog_cat_reverse_process+0x179/0x270
                          [obdclass] <br>
                          [1353498.970868]  [<ffffffffc1115585>] ?
                          llog_init_handle+0xd5/0x9a0 <br>
                          [obdclass] [1353498.970878]
                           [<ffffffffc1116e78>] ? <br>
                          llog_open_create+0x78/0x320 [obdclass]
                          [1353498.970883]  <br>
                          [<ffffffffc12e55f0>] ?
                          mdd_root_get+0xf0/0xf0 [mdd] [1353498.970887]
                           <br>
                          [<ffffffffc12ec7af>]
                          mdd_prepare+0x13ff/0x1c70 [mdd]
                          [1353498.970894]  <br>
                          [<ffffffffc166b037>]
                          mdt_prepare+0x57/0x3b0 [mdt] [1353498.970908]
                           <br>
                          [<ffffffffc1183afd>]
                          server_start_targets+0x234d/0x2bd0 [obdclass]
                          <br>
                          [1353498.970924]  [<ffffffffc1153500>] ?
                          <br>
                          class_config_dump_handler+0x7e0/0x7e0
                          [obdclass] [1353498.970938]  <br>
                          [<ffffffffc118541d>]
                          server_fill_super+0x109d/0x185a [obdclass] <br>
                          [1353498.970950]  [<ffffffffc115cef8>]
                          lustre_fill_super+0x328/0x950 <br>
                          [obdclass] [1353498.970962]
                           [<ffffffffc115cbd0>] ? <br>
                          lustre_common_put_super+0x270/0x270 [obdclass]
                          [1353498.970964]  <br>
                          [<ffffffffb561f3bf>]
                          mount_nodev+0x4f/0xb0 [1353498.970976]  <br>
                          [<ffffffffc1154f18>]
                          lustre_mount+0x38/0x60 [obdclass] <br>
                          [1353498.970978]  [<ffffffffb561ff3e>]
                          mount_fs+0x3e/0x1b0 <br>
                          [1353498.970980]  [<ffffffffb563d4b7>]
                          vfs_kern_mount+0x67/0x110 <br>
                          [1353498.970982]  [<ffffffffb563fadf>]
                          do_mount+0x1ef/0xce0 <br>
                          [1353498.970984]  [<ffffffffb55f7c2c>] ?
                          <br>
                          kmem_cache_alloc_trace+0x3c/0x200
                          [1353498.970986]  <br>
                          [<ffffffffb5640913>] SyS_mount+0x83/0xd0
                          [1353498.970989]  <br>
                          [<ffffffffb5b20795>]
                          system_call_fastpath+0x1c/0x21
                          [1353498.970996] <br>
                          LustreError:
                          25582:0:(mdd_device.c:354:mdd_changelog_llog_init())
                          <br>
                          36ca26b-MDD0000: changelog init failed: rc =
                          -30 [1353498.972790] <br>
                          LustreError:
                          25582:0:(mdd_device.c:427:mdd_changelog_init())
                          <br>
                          36ca26b-MDD0000: changelog setup during init
                          failed: rc = -30 <br>
                          [1353498.974525] LustreError: <br>
                          25582:0:(mdd_device.c:1061:mdd_prepare())
                          36ca26b-MDD0000: failed to <br>
                          initialize changelog: rc = -30
                          [1353498.976229] LustreError: <br>
25582:0:(obd_mount_server.c:1879:server_fill_super()) Unable to start <br>
                          targets: -30 [1353499.072002] LustreError: <br>
                          25582:0:(obd_mount.c:1582:lustre_fill_super())
                          Unable to mount  (-30)<br>
                          <br>
                          <br>
                          I’m hoping those traces mean something to
                          someone - any ideas?<br>
                          <br>
                          Thanks!<br>
                          <br>
                          --<br>
                          Benjamin S. Kirk<br>
                          <br>
_______________________________________________<br>
                          lustre-discuss mailing list<br>
                          <a
                            href="mailto:lustre-discuss@lists.lustre.org"
                            moz-do-not-send="true">lustre-discuss@lists.lustre.org</a><br>
                          <a
                            href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org"
                            moz-do-not-send="true">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a><o:p></o:p></span></p>
                    </blockquote>
                    <p class="MsoNormal"
style="mso-margin-top-alt:0cm;margin-right:0cm;margin-bottom:12.0pt;margin-left:21.0pt;mso-margin-top-alt:0cm;mso-para-margin-right:0cm;mso-para-margin-bottom:12.0pt;mso-para-margin-left:1.75gd"><span
                        lang="EN-US"><br>
                        Cheers, Andreas<br>
                        ---<br>
                        Andreas Dilger<br>
                        CTO Whamcloud<br>
                        <br>
                        <br>
                        <br>
                        <o:p></o:p></span></p>
                  </div>
                </div>
              </blockquote>
            </div>
            <p class="MsoNormal"
              style="margin-left:21.0pt;mso-para-margin-left:1.75gd"><span
                lang="EN-US"><o:p> </o:p></span></p>
          </div>
        </div>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
lustre-discuss mailing list
<a class="moz-txt-link-abbreviated" href="mailto:lustre-discuss@lists.lustre.org">lustre-discuss@lists.lustre.org</a>
<a class="moz-txt-link-freetext" href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>