<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hello again,</p>
    <p><br>
    </p>
    <p>Yesterday the MDS server crashed twice (the whole machine). <br>
    </p>
    <p>The first one was before 22:57. The second one was at 00:15
      today. <br>
    </p>
    <p>Here you can see the Lustre-related logs. The server was manually
      rebooted after the first hang at 22:57 and Lustre started the MDT
      recovery. After recovery, the whole system was working properly
      until around 23:00, when the data started to become inaccessible to
      the clients. Finally, the server hung at 00:15, but the last Lustre
      log entry is at 23:26.</p>
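    <p>While the MDT is recovering, its progress can be checked directly
      on the MDS. A minimal read-only sketch (the target name
      LUSTRE-MDT0000 is taken from the logs below):</p>
    <pre>
# Recovery state, connected/recovered client counts and time remaining
lctl get_param mdt.LUSTRE-MDT0000.recovery_status

# Overall health of the Lustre devices on this node
lctl get_param health_check
</pre>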
    <p><br>
    </p>
    <p>Here I can see a line I have not seen before: "<i>$$$
        failed to release quota space on glimpse 0!=60826269226353608</i>"</p>
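    <p>To see whether the quota master still holds a sane record for the
      project id 2949 mentioned in that message, something like the
      following could be used. This is only a sketch: the mount point
      /lustre is an assumption, and the exact qmt parameter path may vary
      slightly between releases:</p>
    <pre>
# On a client: project quota usage/limits as reported by the quota master
lfs quota -p 2949 /lustre

# On the MDS: dump the global project quota index kept by the QMT
# (parameter path is an assumption; adjust to your release)
lctl get_param qmt.*.dt-0x0.glb-prj
</pre>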
    <p><br>
    </p>
    <p><i>Jan 24 22:57:08 srv-lustre11 kernel: Lustre: LUSTRE-MDT0000:
        Imperative Recovery not enabled, recovery window 300-900<br>
        Jan 24 22:57:08 srv-lustre11 kernel: Lustre: LUSTRE-MDT0000: in
        recovery but waiting for the first client to connect<br>
        Jan 24 22:57:08 srv-lustre11 kernel: Lustre: LUSTRE-MDT0000:
        Will be in recovery for at least 5:00, or until 125 clients
        reconnect<br>
        Jan 24 22:57:13 srv-lustre11 kernel: LustreError:
        3949134:0:(tgt_handler.c:539:tgt_filter_recovery_request()) @@@
        not permitted during recovery  req@000000003892d67b
        x1812509961058304/t0(0)
        o601->LUSTRE-MDT0000-lwp-OST0c20_UUID@10.5.33.243@o2ib1:274/0
        lens 336/0 e 0 to 0 dl 1737755839 ref 1 fl Interpret:/0/ffffffff
        rc 0/-1 job:'lquota_wb_LUSTR.0'<br>
        Jan 24 22:57:13 srv-lustre11 kernel: LustreError:
        3949134:0:(tgt_handler.c:539:tgt_filter_recovery_request())
        Skipped 1 previous similar message<br>
        Jan 24 22:57:20 srv-lustre11 kernel: LustreError:
        3949407:0:(tgt_handler.c:539:tgt_filter_recovery_request()) @@@
        not permitted during recovery  req@000000009a279624
        x1812509773308160/t0(0)
        o601->LUSTRE-MDT0000-lwp-OST0fa7_UUID@10.5.33.244@o2ib1:281/0
        lens 336/0 e 0 to 0 dl 1737755846 ref 1 fl Interpret:/0/ffffffff
        rc 0/-1 job:'lquota_wb_LUSTR.0'<br>
        Jan 24 22:57:20 srv-lustre11 kernel: LustreError:
        3949407:0:(tgt_handler.c:539:tgt_filter_recovery_request())
        Skipped 9 previous similar messages<br>
        Jan 24 22:57:21 srv-lustre11 kernel: LustreError:
        3949413:0:(tgt_handler.c:539:tgt_filter_recovery_request()) @@@
        not permitted during recovery  req@000000000db38b1b
        x1812509961083456/t0(0)
        o601->LUSTRE-MDT0000-lwp-OST0c1e_UUID@10.5.33.243@o2ib1:282/0
        lens 336/0 e 0 to 0 dl 1737755847 ref 1 fl Interpret:/0/ffffffff
        rc 0/-1 job:'lquota_wb_LUSTR.0'<br>
        Jan 24 22:57:21 srv-lustre11 kernel: LustreError:
        3949413:0:(tgt_handler.c:539:tgt_filter_recovery_request())
        Skipped 12 previous similar messages<br>
        Jan 24 22:57:24 srv-lustre11 kernel: LustreError:
        3949411:0:(tgt_handler.c:539:tgt_filter_recovery_request()) @@@
        not permitted during recovery  req@0000000034e830d1
        x1812509773318336/t0(0)
        o601->LUSTRE-MDT0000-lwp-OST0fa1_UUID@10.5.33.244@o2ib1:285/0
        lens 336/0 e 0 to 0 dl 1737755850 ref 1 fl Interpret:/0/ffffffff
        rc 0/-1 job:'lquota_wb_LUSTR.0'<br>
        Jan 24 22:57:24 srv-lustre11 kernel: LustreError:
        3949411:0:(tgt_handler.c:539:tgt_filter_recovery_request())
        Skipped 8 previous similar messages<br>
        Jan 24 22:57:30 srv-lustre11 kernel: LustreError:
        3949406:0:(tgt_handler.c:539:tgt_filter_recovery_request()) @@@
        not permitted during recovery  req@00000000e40a36e5
        x1812509961108224/t0(0)
        o601->LUSTRE-MDT0000-lwp-OST0bbc_UUID@10.5.33.243@o2ib1:291/0
        lens 336/0 e 0 to 0 dl 1737755856 ref 1 fl Interpret:/0/ffffffff
        rc 0/-1 job:'lquota_wb_LUSTR.0'<br>
        Jan 24 22:57:30 srv-lustre11 kernel: LustreError:
        3949406:0:(tgt_handler.c:539:tgt_filter_recovery_request())
        Skipped 24 previous similar messages<br>
        Jan 24 22:57:38 srv-lustre11 kernel: LustreError:
        3949413:0:(tgt_handler.c:539:tgt_filter_recovery_request()) @@@
        not permitted during recovery  req@000000004a78941b
        x1812509961124480/t0(0)
        o601->LUSTRE-MDT0000-lwp-OST0c1d_UUID@10.5.33.243@o2ib1:299/0
        lens 336/0 e 0 to 0 dl 1737755864 ref 1 fl Interpret:/0/ffffffff
        rc 0/-1 job:'lquota_wb_LUSTR.0'<br>
        Jan 24 22:57:38 srv-lustre11 kernel: LustreError:
        3949413:0:(tgt_handler.c:539:tgt_filter_recovery_request())
        Skipped 57 previous similar messages<br>
        Jan 24 22:57:57 srv-lustre11 kernel: LustreError:
        3949482:0:(tgt_handler.c:539:tgt_filter_recovery_request()) @@@
        not permitted during recovery  req@000000002220d707
        x1812509773390720/t0(0)
        o601->LUSTRE-MDT0000-lwp-OST139c_UUID@10.5.33.244@o2ib1:318/0
        lens 336/0 e 0 to 0 dl 1737755883 ref 1 fl Interpret:/0/ffffffff
        rc 0/-1 job:'lquota_wb_LUSTR.0'<br>
        Jan 24 22:57:57 srv-lustre11 kernel: LustreError:
        3949482:0:(tgt_handler.c:539:tgt_filter_recovery_request())
        Skipped 99 previous similar messages<br>
        Jan 24 22:58:15 srv-lustre11 kernel: Lustre: LUSTRE-MDT0000:
        Recovery over after 1:07, of 125 clients 125 recovered and 0
        were evicted.<br>
        Jan 24 22:58:50 srv-lustre11 kernel: LustreError:
        3949159:0:(qmt_handler.c:798:qmt_dqacq0()) $$$ Release too much!
        uuid:LUSTRE-MDT0000-lwp-OST0bc4_UUID release: 60826269226353608
        granted:66040, total:13781524  qmt:LUSTRE-QMT0000 pool:dt-0x0
        id:2949 enforced:1 hard:62914560 soft:52428800 granted:13781524
        time:0 qunit: 262144 edquot:0 may_rel:0 revoke:0 default:yes<br>
        Jan 24 22:58:50 srv-lustre11 kernel: LustreError:
        3949159:0:(qmt_lock.c:425:qmt_lvbo_update()) <b>$$$ failed to
          release quota space on glimpse 0!=60826269226353608</b> : rc =
        -22#012  qmt:LUSTRE-QMT0000 pool:dt-0x0 id:2949 enforced:1
        hard:62914560 soft:52428800 granted:13781524 time:0 qunit:
        262144 edquot:0 may_rel:0 revoke:0 default:yes<br>
        Jan 24 23:08:52 srv-lustre11 kernel: Lustre:
        LUSTRE-OST1389-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:09:39 srv-lustre11 kernel: Lustre:
        LUSTRE-OST138b-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:10:24 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST138d-osc-MDT0000: operation ost_connect to node
        10.5.33.245@o2ib1 failed: rc = -19<br>
        Jan 24 23:10:32 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST13ef-osc-MDT0000: operation ost_connect to node
        10.5.33.245@o2ib1 failed: rc = -19<br>
        Jan 24 23:10:32 srv-lustre11 kernel: LustreError: Skipped 5
        previous similar messages<br>
        Jan 24 23:11:18 srv-lustre11 kernel: Lustre:
        LUSTRE-OST138d-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:11:25 srv-lustre11 kernel: Lustre:
        LUSTRE-OST138a-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:12:09 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST1390-osc-MDT0000: operation ost_connect to node
        10.5.33.244@o2ib1 failed: rc = -19<br>
        Jan 24 23:12:09 srv-lustre11 kernel: LustreError: Skipped 3
        previous similar messages<br>
        Jan 24 23:12:09 srv-lustre11 kernel: Lustre:
        LUSTRE-OST138e-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:12:23 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST13ef-osc-MDT0000: operation ost_connect to node
        10.5.33.245@o2ib1 failed: rc = -19<br>
        Jan 24 23:12:23 srv-lustre11 kernel: LustreError: Skipped 3
        previous similar messages<br>
        Jan 24 23:12:58 srv-lustre11 kernel: Lustre:
        LUSTRE-OST138f-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:13:46 srv-lustre11 kernel: Lustre:
        LUSTRE-OST1390-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:13:46 srv-lustre11 kernel: Lustre: Skipped 1 previous
        similar message<br>
        Jan 24 23:14:35 srv-lustre11 kernel: Lustre:
        LUSTRE-OST1391-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:14:36 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST1392-osc-MDT0000: operation ost_connect to node
        10.5.33.245@o2ib1 failed: rc = -19<br>
        Jan 24 23:14:36 srv-lustre11 kernel: LustreError: Skipped 3
        previous similar messages<br>
        Jan 24 23:16:48 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST13ef-osc-MDT0000: operation ost_connect to node
        10.5.33.244@o2ib1 failed: rc = -19<br>
        Jan 24 23:16:48 srv-lustre11 kernel: LustreError: Skipped 4
        previous similar messages<br>
        Jan 24 23:17:02 srv-lustre11 kernel: Lustre:
        LUSTRE-OST03f3-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:17:02 srv-lustre11 kernel: Lustre: Skipped 1 previous
        similar message<br>
        Jan 24 23:19:33 srv-lustre11 kernel: Lustre:
        LUSTRE-OST03f6-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:19:33 srv-lustre11 kernel: Lustre: Skipped 2 previous
        similar messages<br>
        Jan 24 23:19:41 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST13ed-osc-MDT0000: operation ost_connect to node
        10.5.33.244@o2ib1 failed: rc = -19<br>
        Jan 24 23:19:41 srv-lustre11 kernel: LustreError: Skipped 3
        previous similar messages<br>
        Jan 24 23:22:11 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST13ed-osc-MDT0000: operation ost_connect to node
        10.5.33.245@o2ib1 failed: rc = -19<br>
        Jan 24 23:22:11 srv-lustre11 kernel: LustreError: Skipped 3
        previous similar messages<br>
        Jan 24 23:23:59 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST13ed-osc-MDT0000: operation ost_connect to node
        10.5.33.245@o2ib1 failed: rc = -19<br>
        Jan 24 23:23:59 srv-lustre11 kernel: LustreError: Skipped 3
        previous similar messages<br>
        Jan 24 23:24:29 srv-lustre11 kernel: Lustre:
        LUSTRE-OST03fc-osc-MDT0000: Connection restored to
        10.5.33.245@o2ib1 (at 10.5.33.245@o2ib1)<br>
        Jan 24 23:24:29 srv-lustre11 kernel: Lustre: Skipped 5 previous
        similar messages<br>
        Jan 24 23:26:33 srv-lustre11 kernel: LustreError: 11-0:
        LUSTRE-OST13f0-osc-MDT0000: operation ost_connect to node
        10.5.33.244@o2ib1 failed: rc = -19<br>
        Jan 24 23:26:33 srv-lustre11 kernel: LustreError: Skipped 5
        previous similar messages</i><br>
    </p>
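    <p>The repeated "operation ost_connect ... failed: rc = -19" (-ENODEV)
      entries above suggest the MDS could not reach some OSTs on
      10.5.33.244/245 during that window. A quick, read-only way to check
      how the MDS currently sees those targets and the OSS nodes (sketch,
      using the NIDs from the log above):</p>
    <pre>
# List configured devices and their current state as seen from the MDS
lctl dl

# Check LNet reachability of the OSS nodes
lctl ping 10.5.33.244@o2ib1
lctl ping 10.5.33.245@o2ib1
</pre>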
    <p><br>
    </p>
    <p><br>
    </p>
    <p>Thanks.</p>
    <p>Jose.<br>
    </p>
    <p><br>
    </p>
    <div class="moz-cite-prefix">El 21/01/2025 a las 10:34, Jose Manuel
      Martínez García escribió:<br>
    </div>
    <blockquote type="cite"
      cite="mid:e06d82c1-06b2-4ef6-a2d5-9bdc78f7593e@scayle.es">
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
      <p>Hello everybody.<br>
      </p>
      <p><br>
      </p>
      <p>I am dealing with an issue with a relatively new Lustre
        installation. The Metadata Server (MDS) hangs randomly without
        any common pattern. It can take anywhere from 30 minutes to 30
        days, but it always ends up hanging without a consistent pattern
        (at least, I haven't found one). The logs don't show anything
        unusual at the time of the failure. The only thing I
        continuously see are these messages:<br>
        <br>
        <i>[lun ene 20 14:17:10 2025] LustreError:
          7068:0:(qsd_handler.c:340:qsd_req_completion()) $$$ DQACQ
          failed with -22, flags:0x4  qsd:LUSTRE-OST138f qtype:prj
          id:2325 enforced:1 granted: 16304159618662232032 pending:0
          waiting:0 req:1 usage: 114636 qunit:262144 qtune:65536
          edquot:0 default:yes<br>
          [lun ene 20 14:17:10 2025] LustreError:
          7068:0:(qsd_handler.c:340:qsd_req_completion()) Skipped 39
          previous similar messages<br>
          [lun ene 20 14:21:52 2025] LustreError:
          1895328:0:(qmt_handler.c:798:qmt_dqacq0()) $$$ Release too
          much! uuid:LUSTRE-MDT0000-lwp-OST0c1f_UUID release:
          15476132855418716160 granted:262144, total:14257500 
          qmt:LUSTRE-QMT0000 pool:dt-0x0 id:2582 enforced:1
          hard:62914560 soft:52428800 granted:14257500 time:0 qunit:
          262144 edquot:0 may_rel:0 revoke:0 default:yes<br>
          [lun ene 20 14:21:52 2025] LustreError:
          1947381:0:(qmt_handler.c:798:qmt_dqacq0()) $$$ Release too
          much! uuid:LUSTRE-MDT0000-lwp-OST0fb2_UUID release:
          13809297465413342331 granted:66568, total:14179564 
          qmt:LUSTRE-QMT0000 pool:dt-0x0 id:2325 enforced:1
          hard:62914560 soft:52428800 granted:14179564 time:0 qunit:
          262144 edquot:0 may_rel:0 revoke:0 default:yes<br>
          [lun ene 20 14:21:52 2025] LustreError:
          1947381:0:(qmt_handler.c:798:qmt_dqacq0()) Skipped 802
          previous similar messages<br>
          [lun ene 20 14:21:52 2025] LustreError:
          1895328:0:(qmt_handler.c:798:qmt_dqacq0()) Skipped 802
          previous similar messages<br>
          [lun ene 20 14:27:24 2025] LustreError:
          7047:0:(qsd_handler.c:340:qsd_req_completion()) $$$ DQACQ
          failed with -22, flags:0x4  qsd:LUSTRE-OST138f qtype:prj
          id:2325 enforced:1 granted: 16304159618662232032 pending:0
          waiting:0 req:1 usage: 114636 qunit:262144 qtune:65536
          edquot:0 default:yes<br>
          [lun ene 20 14:27:24 2025] LustreError:
          7047:0:(qsd_handler.c:340:qsd_req_completion()) Skipped 39
          previous similar messages<br>
          [lun ene 20 14:31:52 2025] LustreError:
          1844354:0:(qmt_handler.c:798:qmt_dqacq0()) $$$ Release too
          much! uuid:LUSTRE-MDT0000-lwp-OST1399_UUID release:
          12882711387029922688 granted:66116, total:14078012 
          qmt:LUSTRE-QMT0000 pool:dt-0x0 id:2586 enforced:1
          hard:62914560 soft:52428800 granted:14078012 time:0 qunit:
          262144 edquot:0 may_rel:0 revoke:0 default:yes<br>
          [lun ene 20 14:31:52 2025] LustreError:
          1844354:0:(qmt_handler.c:798:qmt_dqacq0()) Skipped 785
          previous similar messages<br>
          [lun ene 20 14:37:39 2025] LustreError:
          7054:0:(qsd_handler.c:340:qsd_req_completion()) $$$ DQACQ
          failed with -22, flags:0x4  qsd:LUSTRE-OST138f qtype:prj
          id:2325 enforced:1 granted: 16304159618662232032 pending:0
          waiting:0 req:1 usage: 114636 qunit:262144 qtune:65536
          edquot:0 default:yes<br>
          [lun ene 20 14:37:39 2025] LustreError:
          7054:0:(qsd_handler.c:340:qsd_req_completion()) Skipped 39
          previous similar messages<br>
          [lun ene 20 14:41:54 2025] LustreError:
          1895328:0:(qmt_handler.c:798:qmt_dqacq0()) $$$ Release too
          much! uuid:LUSTRE-MDT0000-lwp-OST0faa_UUID release:
          13811459193234480169 granted:65632, total:14179564 
          qmt:LUSTRE-QMT0000 pool:dt-0x0 id:2325 enforced:1
          hard:62914560 soft:52428800 granted:14179564 time:0 qunit:
          262144 edquot:0 may_rel:0 revoke:0 default:yes<br>
          [lun ene 20 14:41:54 2025] LustreError:
          1895328:0:(qmt_handler.c:798:qmt_dqacq0()) Skipped 798
          previous similar messages<br>
          [lun ene 20 14:47:53 2025] LustreError:
          7052:0:(qsd_handler.c:340:qsd_req_completion()) $$$ DQACQ
          failed with -22, flags:0x4  qsd:LUSTRE-OST138f qtype:prj
          id:2325 enforced:1 granted: 16304159618662232032 pending:0
          waiting:0 req:1 usage: 114636 qunit:262144 qtune:65536
          edquot:0 default:yes<br>
          [lun ene 20 14:47:53 2025] LustreError:
          7052:0:(qsd_handler.c:340:qsd_req_completion()) Skipped 39
          previous similar messages<br>
        </i><br>
        I have ruled out hardware failure since the MDS service has been
        moved between different servers, and it happens with all of
        them.<br>
        <br>
        Linux distribution: AlmaLinux release 8.10 (Cerulean Leopard)<br>
        Kernel: Linux srv-lustre15 4.18.0-553.5.1.el8_lustre.x86_64 #1
        SMP Fri Jun 28 18:44:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux<br>
        Lustre release: lustre-2.15.5-1.el8.x86_64<br>
        Not using ZFS.<br>
        <br>
        Any ideas on where to continue investigating?<br>
        Is the error appearing in dmesg a bug, or is it a corruption in
        the quota database?<br>
        <br>
        The possible bugs affecting quotas that might be related seem to
        be fixed in version 2.15.</p>
      <p><br>
      </p>
      <p>Thanks in advance.<br>
      </p>
      <div class="moz-signature">-- <br>
        <!--?xml version="1.0" encoding="UTF-8"?-->
        <!--This file was converted to xhtml by LibreOffice - see https://cgit.freedesktop.org/libreoffice/core/tree/filter/source/xslt for the code.-->
        <title xml:lang="en-US
">- no title specified</title>
        <meta name="DCTERMS.title" content="" xml:lang="en-US
">
        <meta name="DCTERMS.language" content="en-US
" scheme="DCTERMS.RFC4646">
        <meta name="DCTERMS.source"
          content="http://xml.openoffice.org/odf2xhtml">
        <meta name="DCTERMS.issued" content="2024-07-04T11:24:00"
          scheme="DCTERMS.W3CDTF">
        <meta name="DCTERMS.modified" content="2024-07-04T11:24:00"
          scheme="DCTERMS.W3CDTF">
        <meta name="DCTERMS.provenance" content="
" xml:lang="en-US
">
        <meta name="xsl:vendor" content="libxslt">
        <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/"
          hreflang="en">
        <link rel="schema.DCTERMS" href="http://purl.org/dc/terms/"
          hreflang="en">
        <link rel="schema.DCTYPE" href="http://purl.org/dc/dcmitype/"
          hreflang="en">
        <link rel="schema.DCAM" href="http://purl.org/dc/dcam/"
          hreflang="en">
        <style>table { border-collapse:collapse; border-spacing:0; empty-cells:show }td, th { vertical-align:top; font-size:12pt;}h1, h2, h3, h4, h5, h6 { clear:both;}ol, ul { margin:0; padding:0;}li { list-style: none; margin:0; padding:0;}span.footnodeNumber { padding-right:1em; }span.annotation_style_by_filter { font-size:95%; font-family:Arial; background-color:#fff000;  margin:0; border:0; padding:0;  }span.heading_numbering { margin-right: 0.8rem; }* { margin:0;}.fr1 { font-size:11pt; font-family:Calibri; text-align:center; vertical-align:top; writing-mode:horizontal-tb; direction:ltr; border-top-style:none; border-left-style:none; border-bottom-style:none; border-right-style:none; margin-left:0in; margin-right:0in; margin-top:0in; margin-bottom:0in; background-color:transparent; padding:0in; }.P1 { font-size:12pt; line-height:100%; margin-bottom:0in; margin-top:0in; text-align:left ! important; font-family:'Times New Roman'; writing-mode:horizontal-tb; direction:ltr; }.P2 { font-size:9pt; line-height:100%; margin-bottom:0in; margin-top:0in; text-align:left ! important; font-family:'AvenirNext LT Pro Regular'; writing-mode:horizontal-tb; direction:ltr; color:#999999; }.P4 { font-size:11pt; line-height:100%; margin-bottom:0in; margin-top:0in; text-align:left ! important; font-family:Calibri; writing-mode:horizontal-tb; direction:ltr; }.P5 { font-size:11pt; line-height:100%; margin-bottom:0in; margin-top:0in; text-align:left ! important; font-family:Calibri; writing-mode:horizontal-tb; direction:ltr; }.P6 { font-size:11pt; line-height:100%; margin-bottom:0in; margin-top:0in; text-align:left ! important; font-family:Calibri; writing-mode:horizontal-tb; direction:ltr; }.P7 { font-size:11pt; line-height:100%; margin-bottom:0in; margin-top:0in; text-align:justify ! important; font-family:Calibri; writing-mode:horizontal-tb; direction:ltr; }.Standard { font-size:11pt; font-family:Calibri; writing-mode:horizontal-tb; direction:ltr; margin-top:0in; margin-bottom:0.111in; line-height:108%; text-align:left ! important; }.Table1 { width:7.3799in; margin-left:0in; margin-top:0in; margin-bottom:0in; margin-right:auto;writing-mode:horizontal-tb; direction:ltr; }.Table1_A1 { border-top-style:none; border-left-style:none; border-bottom-style:none; border-right-style:none; padding-left:0in; padding-right:0.075in; padding-top:0in; padding-bottom:0in; writing-mode:horizontal-tb; direction:ltr; }.Table1_B2 { border-top-style:none; border-left-style:none; border-bottom-style:none; border-right-style:none; vertical-align:middle; padding-left:0in; padding-right:0.075in; padding-top:0in; padding-bottom:0in; writing-mode:horizontal-tb; direction:ltr; }.Table1_A { width:4.2326in; }.Table1_B { width:0.1938in; }.Table1_C { width:2.9535in; }.Internet_20_link { color:#0563c1; text-decoration:underline; }.ListLabel_20_1 { letter-spacing:-0.0071in; }.ListLabel_20_4 { color:#000000; letter-spacing:normal; font-style:normal; text-decoration:none ! 
important; font-weight:normal; display:true; }.T1 { color:#304452; font-family:'AvenirNext LT Pro Regular'; font-size:9pt; font-weight:bold; }.T10 { font-family:'AvenirNext LT Pro Regular'; font-size:8pt; }.T11 { font-family:'AvenirNext LT Pro Regular'; font-size:8pt; font-weight:bold; background-color:#ffffff; }.T2 { color:#304452; font-family:'AvenirNext LT Pro Regular'; font-size:9pt; font-weight:bold; }.T4 { color:#304452; font-family:'Times New Roman'; font-size:12pt; }.T5 { color:#666666; font-family:'AvenirNext LT Pro Regular'; font-size:9pt; }.T6 { color:#666666; font-family:'AvenirNext LT Pro Regular'; font-size:8pt; background-color:#ffffff; }.T7 { color:#999999; font-family:'AvenirNext LT Pro Regular'; font-size:9pt; }.T8 { font-family:'AvenirNext LT Pro Regular'; font-size:9pt; }.T9 { font-family:'AvenirNext LT Pro Regular'; font-size:9pt; }</style>
        <table border="0" cellspacing="0" cellpadding="0" class="Table1">
          <colgroup><col width="470"><col width="22"><col width="328"></colgroup><tbody>
            <tr class="Table11">
              <td style="text-align:left;width:4.2326in; "
                class="Table1_A1">
                <p class="P4"><span class="T1">Jose Manuel Martínez
                    García</span><span class="T4"></span></p>
                <p class="P5"><span class="T2">Coordinador de Sistemas</span><span
                    class="T2"></span></p>
                <p class="P5"><span class="T2">Supercomputación de
                    Castilla y León</span><span class="T2"></span></p>
                <p class="P5"><span class="T5">Tel: 987 293 174</span><span
                    class="T7"></span></p>
              </td>
              <td rowspan="2" style="text-align:left;width:0.1938in; "
                class="Table1_A1">
                <p class="P2"> </p>
              </td>
              <td rowspan="2" style="text-align:left;width:2.9535in; "
                class="Table1_A1"><!--Next 'div' was a 'text:p'.-->
                <div class="P6"><!--Next '
            span' is a draw:frame.
        --><span style="height:0.622in;width:2.8957in; padding:0; "
                    class="fr1" id="Imagen_9"><img
                      style="height:1.5799cm;width:7.3551cm;" alt=""
                      src="cid:part1.SNnSYTN8.TPkR4tjz@scayle.es"
                      class=""></span><span class="T8"></span></div>
                <div
style="clear:both; line-height:0; width:0; height:0; margin:0; padding:0;"> </div>
              </td>
            </tr>
            <tr class="Table12">
              <td style="text-align:left;width:4.2326in; "
                class="Table1_A1">
                <p class="P5"><span class="T5">Edificio CRAI-TIC, Campus
                    de Vegazana, s/n Universidad de León - 24071 León,
                    España</span><span class="T5"></span></p>
              </td>
            </tr>
            <tr class="Table13">
              <td colspan="3" style="text-align:left;width:4.2326in; "
                class="Table1_A1">
                <div class="P1"><a href="https://www.scayle.es/"
                    moz-do-not-send="true"><!--Next '
            span' is a draw:frame.
        --><span style="height:0.2398in;width:7.3047in; padding:0; "
                      class="fr1" id="Imagen_11"><img
                        style="height:0.6091cm;width:18.5539cm;" alt=""
                        src="cid:part2.pShmpYpN.DHhAOgq0@scayle.es"
                        class=""></span></a><span class="T9"></span></div>
              </td>
            </tr>
            <tr class="Table14">
              <td colspan="3" style="text-align:left;width:4.2326in; "
                class="Table1_B2">
                <p class="P7"><span class="T6">Le informamos, como
                    destinatario de este mensaje, que el correo
                    electrónico y las comunicaciones por medio de
                    Internet no permiten asegurar ni garantizar la
                    confidencialidad de los mensajes transmitidos, así
                    como tampoco su integridad o su correcta recepción,
                    por lo que SCAYLE no asume responsabilidad alguna
                    por tales circunstancias. Si no consintiese en la
                    utilización del correo electrónico o de las
                    comunicaciones vía Internet le rogamos nos lo
                    comunique y ponga en nuestro conocimiento de manera
                    inmediata. Para más información visite nuestro </span><a
                    href="https://www.scayle.es/aviso-legal/"
                    class="Internet_20_link" moz-do-not-send="true"><span
                      class="Internet_20_link"><span class="T11">Aviso
                        Legal</span></span></a><span class="T6">.</span><span
                    class="T10"></span></p>
              </td>
            </tr>
          </tbody>
        </table>
        <p class="Standard"> </p>
      </div>
      <br>
      <fieldset class="moz-mime-attachment-header"></fieldset>
      <pre wrap="" class="moz-quote-pre">_______________________________________________
lustre-discuss mailing list
<a class="moz-txt-link-abbreviated" href="mailto:lustre-discuss@lists.lustre.org">lustre-discuss@lists.lustre.org</a>
<a class="moz-txt-link-freetext" href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a>
</pre>
    </blockquote>
  </body>
</html>