[Lustre-discuss] IO-Node issue

Wojciech Turek wjt27 at cam.ac.uk
Mon Jul 18 16:06:54 PDT 2011


OK, good, so at least you now have the OSTs mounting. I would still run fsck,
even though it said that LDISKFS was recovered correctly.
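For example, something along these lines (just a sketch; run it with the OST
unmounted, device path taken from your log):

# read-only check first, makes no changes
e2fsck -fn /dev/mpath/lun_11
# if that reports problems, run a fixing pass
e2fsck -fy /dev/mpath/lun_11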

As for your Heartbeat problems: if you use the old V1 Heartbeat configuration
with the haresources file, then I don't think STONITH has anything to do
with your filesystem resources not starting.
From your logs it looks like the STONITH device is not configured properly, so
first you need to test your STONITH config as follows:

# stonith -t external/ipmi -n
HOSTNAME  IP_ADDR  IPMI_USER  IPMI_PASSWD_FILE


# stonith -t external/ipmi -p "oss02 10.145.245.2 root
/etc/ha.d/ipmitool.passwd" -lS
stonith: external/ipmi device OK.

As you can see, with my config the stonith command returns OK, so you need to
look at your config and tweak it until it also returns OK.
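Once the command-line test passes, your V1 config should use the same
parameters. Roughly like this (my example hostnames and paths, adjust to
yours):

# /etc/ha.d/ha.cf: stonith_host <node allowed to fence> <plugin> <params>
stonith_host oss01 external/ipmi oss02 10.145.245.2 root /etc/ha.d/ipmitool.passwd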

regards

Wojciech

On 18 July 2011 23:02, DaMiri Young <damiri at unt.edu> wrote:

> So you were right about the I/O node losing contact with the OST. In short,
> after enabling Lustre debugging and restarting the opensmd and openibd
> services on the troubled node, the OSTs were remounted and Lustre entered
> recovery:
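> (Roughly the following, from memory -- the exact debug mask and mount
> points are approximate:)
>
> sysctl -w lnet.debug=-1    # enable full Lustre/LNET debug logging
> service opensmd restart    # restart the IB subnet manager
> service openibd restart    # restart the IB stack
> # then remount each OST, e.g.:
> mount -t lustre /dev/mpath/lun_11 <ost mount point>
>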
> --------------------------- messages ----------------------------------
> Jul 18 10:02:56 IO-10 kernel: ib_ipath 0000:05:00.0: We got a lid: 0x75
> Jul 18 10:02:56 IO-10 kernel: ib_srp: ASYNC event= 11 on device= ipath0
> Jul 18 10:02:56 IO-10 kernel: ib_srp: ASYNC event= 13 on device= ipath0
> Jul 18 10:02:56 IO-10 kernel: ib_srp: ASYNC event= 17 on device= ipath0
> Jul 18 10:02:56 IO-10 kernel: ib_srp: ASYNC event= 9 on device= ipath0
> Jul 18 10:02:59 IO-10 kernel: ADDRCONF(NETDEV_CHANGE): ib0: link becomes
> ready
> Jul 18 10:03:01 IO-10 avahi-daemon[24939]: New relevant interface ib0.IPv6
> for mDNS.
> Jul 18 10:03:01 IO-10 avahi-daemon[24939]: Joining mDNS multicast group on
> interface ib0.IPv6 with address fe80::211:7500:ff:7bf6.
> Jul 18 10:03:01 IO-10 avahi-daemon[24939]: Registering new address record
> for fe80::211:7500:ff:7bf6 on ib0.
> Jul 18 10:03:15 IO-10 ntpd[23084]: synchronized to 10.0.0.1, stratum 3
> Jul 18 11:41:40 IO-10 kernel: megasas: 00.00.03.15-RH1 Wed Nov. 21 10:29:45
> PST 2007
> Jul 18 11:41:41 IO-10 kernel: Lustre: OBD class driver,
> http://www.lustre.org/
> Jul 18 11:41:41 IO-10 kernel:         Lustre Version: 1.6.6
> Jul 18 11:41:41 IO-10 kernel:         Build Version: 1.6.6-1.6.6-ddn3.1-20090527173746
> Jul 18 11:41:41 IO-10 kernel: Lustre: 28686:0:(o2iblnd_modparams.c:324:kiblnd_tunables_init())
> Concurrent sends 7 is lower than message queue size: 8, performance may drop
> slightly.
> Jul 18 11:41:41 IO-10 kernel: Lustre: Added LNI 10.1.0.229 at o2ib [8/64]
> Jul 18 11:41:41 IO-10 kernel: Lustre: Lustre Client File System;
> http://www.lustre.org/
> Jul 18 11:42:07 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:42:07 IO-10 kernel: LDISKFS-fs warning: checktime reached,
> running e2fsck is recommended
> Jul 18 11:42:07 IO-10 kernel: LDISKFS FS on dm-11, internal journal
> Jul 18 11:42:07 IO-10 kernel: LDISKFS-fs: recovery complete.
> Jul 18 11:42:07 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:42:07 IO-10 multipathd: dm-11: umount map (uevent)
> Jul 18 11:42:18 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:42:18 IO-10 kernel: LDISKFS-fs warning: checktime reached,
> running e2fsck is recommended
> Jul 18 11:42:18 IO-10 kernel: LDISKFS FS on dm-11, internal journal
> Jul 18 11:42:18 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:42:18 IO-10 kernel: LDISKFS-fs: file extents enabled
> Jul 18 11:42:18 IO-10 kernel: LDISKFS-fs: mballoc enabled
> Jul 18 11:42:18 IO-10 kernel: fsfilt_ldiskfs: no version for
> "ldiskfs_free_blocks" found: kernel tainted.
> Jul 18 11:42:18 IO-10 kernel: Lustre: Filtering OBD driver;
> http://www.lustre.org/
> Jul 18 11:42:18 IO-10 kernel: Lustre: 29999:0:(filter.c:868:filter_init_server_data())
> RECOVERY: service es1-OST000a, 249 recoverable clients, last_rcvd 469628325
> Jul 18 11:42:18 IO-10 kernel: Lustre: OST es1-OST000a now serving dev
> (es1-OST000a/15fae56a-7dae-ba24-4423-347c0a118367), but will be in
> recovery for at least 5:00, or until 249 clients reconnect. During this time
> new clients will not be allowed to connect. Recovery progress can be
> monitored by watching /proc/fs/lustre/obdfilter/es1-OST000a/recovery_status.
> Jul 18 11:42:18 IO-10 kernel: Lustre: es1-OST000a.ost: set parameter
> quota_type=ug
> Jul 18 11:42:18 IO-10 kernel: Lustre: Server es1-OST000a on device
> /dev/mpath/lun_11 has started
> Jul 18 11:42:19 IO-10 kernel: Lustre: 28952:0:(ldlm_lib.c:1226:check_and_start_recovery_timer()) es1-OST000a: starting recovery timer
> Jul 18 11:42:19 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000c_UUID'
> is not available  for connect (no target)
> Jul 18 11:42:19 IO-10 kernel: LustreError: 28957:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff810311f9f400 x36077513/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007439 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:42:19 IO-10 kernel: LustreError: Skipped 3 previous similar
> messages
> Jul 18 11:42:19 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000b_UUID'
> is not available  for connect (no target)
> Jul 18 11:42:19 IO-10 kernel: LustreError: 28985:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff8102f81ce800 x8649866/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007439 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:42:19 IO-10 kernel: LustreError: 28985:0:(ldlm_lib.c:1619:target_send_reply_msg())
> Skipped 3 previous similar messages
> Jul 18 11:42:19 IO-10 kernel: LustreError: Skipped 3 previous similar
> messages
> Jul 18 11:42:19 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000b_UUID'
> is not available  for connect (no target)
> Jul 18 11:42:19 IO-10 kernel: Lustre: 29068:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000a: 248 recoverable clients
> remain
> Jul 18 11:42:19 IO-10 kernel: LustreError: 29010:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff8102f81f2c00 x368697/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007439 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:42:19 IO-10 kernel: LustreError: 29010:0:(ldlm_lib.c:1619:target_send_reply_msg())
> Skipped 19 previous similar messages
> Jul 18 11:42:19 IO-10 kernel: LustreError: Skipped 19 previous similar
> messages
> Jul 18 11:42:19 IO-10 kernel: Lustre: 29012:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000a: 247 recoverable clients
> remain
> Jul 18 11:42:20 IO-10 kernel: Lustre: 29106:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000a: 240 recoverable clients
> remain
> Jul 18 11:42:20 IO-10 kernel: Lustre: 29106:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) Skipped 6 previous similar messages
> Jul 18 11:42:20 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000b_UUID'
> is not available  for connect (no target)
> Jul 18 11:42:20 IO-10 kernel: LustreError: 29149:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff81030eff2850 x68565826/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007440 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:42:20 IO-10 kernel: LustreError: 29149:0:(ldlm_lib.c:1619:target_send_reply_msg())
> Skipped 31 previous similar messages
> Jul 18 11:42:20 IO-10 kernel: LustreError: Skipped 31 previous similar
> messages
> Jul 18 11:42:21 IO-10 kernel: Lustre: 29196:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000a: 232 recoverable clients
> remain
> Jul 18 11:42:21 IO-10 kernel: Lustre: 29196:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) Skipped 7 previous similar messages
> Jul 18 11:42:22 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000b_UUID'
> is not available  for connect (no target)
> Jul 18 11:42:22 IO-10 kernel: LustreError: 29275:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff810302713c50 x519337/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007442 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:42:22 IO-10 kernel: LustreError: 29275:0:(ldlm_lib.c:1619:target_send_reply_msg())
> Skipped 47 previous similar messages
> Jul 18 11:42:22 IO-10 kernel: LustreError: Skipped 47 previous similar
> messages
> Jul 18 11:42:23 IO-10 kernel: Lustre: 29320:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000a: 221 recoverable clients
> remain
> Jul 18 11:42:23 IO-10 kernel: Lustre: 29320:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) Skipped 10 previous similar messages
> Jul 18 11:42:27 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000c_UUID'
> is not available  for connect (no target)
> Jul 18 11:42:27 IO-10 kernel: LustreError: 29030:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff8102f87bac00 x435304948/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007447 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:42:27 IO-10 kernel: LustreError: 29030:0:(ldlm_lib.c:1619:target_send_reply_msg())
> Skipped 91 previous similar messages
> Jul 18 11:42:27 IO-10 kernel: LustreError: Skipped 91 previous similar
> messages
> Jul 18 11:42:27 IO-10 kernel: Lustre: 29182:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000a: 196 recoverable clients
> remain
> Jul 18 11:42:27 IO-10 kernel: Lustre: 29182:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) Skipped 24 previous similar messages
> Jul 18 11:42:46 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:42:46 IO-10 kernel: LDISKFS-fs warning: checktime reached,
> running e2fsck is recommended
> Jul 18 11:42:46 IO-10 kernel: LDISKFS FS on dm-10, internal journal
> Jul 18 11:42:46 IO-10 kernel: LDISKFS-fs: recovery complete.
> Jul 18 11:42:46 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:42:46 IO-10 multipathd: dm-10: umount map (uevent)
> Jul 18 11:42:58 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:42:58 IO-10 kernel: LDISKFS-fs warning: checktime reached,
> running e2fsck is recommended
> Jul 18 11:42:58 IO-10 kernel: LDISKFS FS on dm-10, internal journal
> Jul 18 11:42:58 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:42:58 IO-10 kernel: LDISKFS-fs: file extents enabled
> Jul 18 11:42:58 IO-10 kernel: LDISKFS-fs: mballoc enabled
> Jul 18 11:42:58 IO-10 kernel: Lustre: 30227:0:(filter.c:868:filter_init_server_data())
> RECOVERY: service es1-OST000b, 249 recoverable clients, last_rcvd 608808684
> Jul 18 11:42:58 IO-10 kernel: Lustre: OST es1-OST000b now serving dev
> (es1-OST000b/1f38b48f-9a67-b3a6-4374-b25762e71391), but will be in
> recovery for at least 5:00, or until 249 clients reconnect. During this time
> new clients will not be allowed to connect. Recovery progress can be
> monitored by watching /proc/fs/lustre/obdfilter/es1-OST000b/recovery_status.
> Jul 18 11:42:58 IO-10 kernel: Lustre: es1-OST000b.ost: set parameter
> quota_type=ug
> Jul 18 11:42:58 IO-10 kernel: Lustre: Server es1-OST000b on device
> /dev/mpath/lun_12 has started
> Jul 18 11:43:09 IO-10 kernel: Lustre: 28975:0:(ldlm_lib.c:1226:check_and_start_recovery_timer()) es1-OST000b: starting recovery timer
> Jul 18 11:43:09 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000c_UUID'
> is not available  for connect (no target)
> Jul 18 11:43:09 IO-10 kernel: LustreError: Skipped 111 previous similar
> messages
> Jul 18 11:43:09 IO-10 kernel: LustreError: 29079:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff8102eb3cb000 x36077574/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007489 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:43:09 IO-10 kernel: LustreError: 29079:0:(ldlm_lib.c:1619:target_send_reply_msg())
> Skipped 114 previous similar messages
> Jul 18 11:43:09 IO-10 kernel: Lustre: 28999:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000b: 248 recoverable clients
> remain
> Jul 18 11:43:09 IO-10 kernel: Lustre: 28999:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) Skipped 25 previous similar messages
> Jul 18 11:43:21 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:43:21 IO-10 kernel: LDISKFS-fs warning: maximal mount count
> reached, running e2fsck is recommended
> Jul 18 11:43:21 IO-10 kernel: LDISKFS FS on dm-12, internal journal
> Jul 18 11:43:21 IO-10 kernel: LDISKFS-fs: recovery complete.
> Jul 18 11:43:21 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:43:21 IO-10 multipathd: dm-12: umount map (uevent)
> Jul 18 11:43:32 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:43:32 IO-10 kernel: LDISKFS-fs warning: maximal mount count
> reached, running e2fsck is recommended
> Jul 18 11:43:32 IO-10 kernel: LDISKFS FS on dm-12, internal journal
> Jul 18 11:43:32 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:43:32 IO-10 kernel: LDISKFS-fs: file extents enabled
> Jul 18 11:43:32 IO-10 kernel: LDISKFS-fs: mballoc enabled
> Jul 18 11:43:32 IO-10 kernel: Lustre: 30436:0:(filter.c:868:filter_init_server_data())
> RECOVERY: service es1-OST000c, 249 recoverable clients, last_rcvd 370809064
> Jul 18 11:43:32 IO-10 kernel: Lustre: OST es1-OST000c now serving dev
> (es1-OST000c/f8c1bf77-11b3-88be-4438-016f059a91b5), but will be in
> recovery for at least 5:00, or until 249 clients reconnect. During this time
> new clients will not be allowed to connect. Recovery progress can be
> monitored by watching /proc/fs/lustre/obdfilter/es1-OST000c/recovery_status.
> Jul 18 11:43:32 IO-10 kernel: Lustre: es1-OST000c.ost: set parameter
> quota_type=ug
> Jul 18 11:43:32 IO-10 kernel: Lustre: Server es1-OST000c on device
> /dev/mpath/lun_13 has started
> Jul 18 11:43:46 IO-10 kernel: Lustre: 29050:0:(ldlm_lib.c:1226:check_and_start_recovery_timer()) es1-OST000c: starting recovery timer
> Jul 18 11:43:46 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000d_UUID'
> is not available  for connect (no target)
> Jul 18 11:43:46 IO-10 kernel: LustreError: Skipped 229 previous similar
> messages
> Jul 18 11:43:46 IO-10 kernel: LustreError: 29123:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff8102f6e36000 x36721236/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007526 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:43:46 IO-10 kernel: LustreError: 29123:0:(ldlm_lib.c:1619:target_send_reply_msg())
> Skipped 229 previous similar messages
> Jul 18 11:43:46 IO-10 kernel: Lustre: 28982:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000b: 171 recoverable clients
> remain
> Jul 18 11:43:46 IO-10 kernel: Lustre: 28982:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) Skipped 76 previous similar messages
> Jul 18 11:43:54 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:43:54 IO-10 kernel: LDISKFS-fs warning: maximal mount count
> reached, running e2fsck is recommended
> Jul 18 11:43:54 IO-10 kernel: LDISKFS FS on dm-13, internal journal
> Jul 18 11:43:54 IO-10 kernel: LDISKFS-fs: recovery complete.
> Jul 18 11:43:54 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:43:55 IO-10 multipathd: dm-13: umount map (uevent)
> Jul 18 11:44:06 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:44:06 IO-10 kernel: LDISKFS-fs warning: maximal mount count
> reached, running e2fsck is recommended
> Jul 18 11:44:06 IO-10 kernel: LDISKFS FS on dm-13, internal journal
> Jul 18 11:44:06 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:44:06 IO-10 kernel: LDISKFS-fs: file extents enabled
> Jul 18 11:44:06 IO-10 kernel: LDISKFS-fs: mballoc enabled
> Jul 18 11:44:06 IO-10 kernel: Lustre: 30686:0:(filter.c:868:filter_init_server_data())
> RECOVERY: service es1-OST000d, 249 recoverable clients, last_rcvd 694562245
> Jul 18 11:44:06 IO-10 kernel: Lustre: OST es1-OST000d now serving dev
> (es1-OST000d/cf608dbd-accd-89b7-471a-f4487e9f8ba3), but will be in
> recovery for at least 5:00, or until 249 clients reconnect. During this time
> new clients will not be allowed to connect. Recovery progress can be
> monitored by watching /proc/fs/lustre/obdfilter/es1-OST000d/recovery_status.
> Jul 18 11:44:06 IO-10 kernel: Lustre: es1-OST000d.ost: set parameter
> quota_type=ug
> Jul 18 11:44:06 IO-10 kernel: Lustre: Server es1-OST000d on device
> /dev/mpath/lun_14 has started
> Jul 18 11:44:06 IO-10 kernel: Lustre: 29293:0:(ldlm_lib.c:1226:check_and_start_recovery_timer()) es1-OST000d: starting recovery timer
> Jul 18 11:44:18 IO-10 kernel: LustreError: 137-5: UUID 'es1-OST000e_UUID'
> is not available  for connect (no target)
> Jul 18 11:44:18 IO-10 kernel: LustreError: Skipped 199 previous similar
> messages
> Jul 18 11:44:18 IO-10 kernel: Lustre: 29068:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) es1-OST000d: 175 recoverable clients
> remain
> Jul 18 11:44:18 IO-10 kernel: LustreError: 29135:0:(ldlm_lib.c:1619:target_send_reply_msg())
> @@@ processing error (-19)  req at ffff8102f4c1cc00 x56000488/t0
> o8-><?>@<?>:0/0 lens 304/0 e 0 to 0 dl 1311007558 ref 1 fl Interpret:/0/0 rc
> -19/0
> Jul 18 11:44:18 IO-10 kernel: LustreError: 29135:0:(ldlm_lib.c:1619:target_send_reply_msg())
> Skipped 199 previous similar messages
> Jul 18 11:44:18 IO-10 kernel: Lustre: 29068:0:(ldlm_lib.c:1567:target_queue_last_replay_reply()) Skipped 331 previous similar messages
> Jul 18 11:44:28 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:44:28 IO-10 kernel: LDISKFS-fs warning: maximal mount count
> reached, running e2fsck is recommended
> Jul 18 11:44:28 IO-10 kernel: LDISKFS FS on dm-14, internal journal
> Jul 18 11:44:28 IO-10 kernel: LDISKFS-fs: recovery complete.
> Jul 18 11:44:28 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:44:28 IO-10 multipathd: dm-14: umount map (uevent)
> Jul 18 11:44:39 IO-10 kernel: kjournald starting.  Commit interval 5
> seconds
> Jul 18 11:44:39 IO-10 kernel: LDISKFS-fs warning: maximal mount count
> reached, running e2fsck is recommended
> Jul 18 11:44:39 IO-10 kernel: LDISKFS FS on dm-14, internal journal
> Jul 18 11:44:39 IO-10 kernel: LDISKFS-fs: mounted filesystem with ordered
> data mode.
> Jul 18 11:44:39 IO-10 kernel: LDISKFS-fs: file extents enabled
> Jul 18 11:44:39 IO-10 kernel: LDISKFS-fs: mballoc enabled
> Jul 18 11:44:39 IO-10 kernel: Lustre: 30893:0:(filter.c:868:filter_init_server_data())
> RECOVERY: service es1-OST000e, 249 recoverable clients, last_rcvd 613643608
> Jul 18 11:44:39 IO-10 kernel: Lustre: OST es1-OST000e now serving dev
> (es1-OST000e/478c7dc4-4936-bfe2-45ac-2fb7a2e69f62), but will be in
> recovery for at least 5:00, or until 249 clients reconnect. During this time
> new clients will not be allowed to connect. Recovery progress can be
> monitored by watching /proc/fs/lustre/obdfilter/es1-OST000e/recovery_status.
> Jul 18 11:44:39 IO-10 kernel: Lustre: es1-OST000e.ost: set parameter
> quota_type=ug
> Jul 18 11:44:39 IO-10 kernel: Lustre: Server es1-OST000e on device
> /dev/mpath/lun_15 has started
> Jul 18 11:44:40 IO-10 kernel: Lustre: 29214:0:(ldlm_lib.c:1226:check_and_start_recovery_timer()) es1-OST000e: starting recovery timer
> Jul 18 11:44:49 IO-10 kernel: Lustre: 29236:0:(service.c:939:ptlrpc_server_handle_req_in())
> @@@ Slow req_in handling 6s  req at ffff8102f4419c00 x738214853/t0
> o101-><?>@<?>:0/0 lens 232/0 e 0 to 0 dl 0 ref 1 fl New:/0/0 rc 0/0
> Jul 18 11:44:49 IO-10 kernel: Lustre: 28992:0:(service.c:939:ptlrpc_server_handle_req_in())
> @@@ Slow req_in handling 6s  req at ffff8102f4419400 x738214855/t0
> o101-><?>@<?>:0/0 lens 232/0 e 0 to 0 dl 0 ref 1 fl New:/0/0 rc 0/0
> ---------------------- end messages -----------------------------
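>
> (For reference, I watched recovery with something along these lines:)
>
> # poll one OST's recovery status until it reports COMPLETE
> watch -n 5 cat /proc/fs/lustre/obdfilter/es1-OST000a/recovery_status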
>
> It mentioned completing the recovery, so I didn't bother running another
> fsck; should I? The problem now seems to be that STONITH on the troubled
> node's failover partner can't reset the node. It tries and fails
> incessantly:
> ------------------------ messages -------------------------------
> Jul 18 16:45:17 IO-11 heartbeat: [25037]: info: Resetting node
> io-10.internal.acs.unt.prv with [IPMI STONITH device ]
> Jul 18 16:45:18 IO-11 heartbeat: [25037]: info: glib: external_run_cmd:
> Calling '/usr/lib64/stonith/plugins/external/ipmi reset
> io-10.internal.acs.unt.prv' returned 256
> Jul 18 16:45:18 IO-11 heartbeat: [25037]: ERROR: glib: external_reset_req:
> 'ipmi reset' for host io-10.internal.acs.unt.prv failed with rc 256
> Jul 18 16:45:18 IO-11 heartbeat: [25037]: ERROR: Host
> io-10.internal.acs.unt.prv not reset!
> Jul 18 16:45:18 IO-11 heartbeat: [15803]: WARN: Managed STONITH
> io-10.internal.acs.unt.prv process 25037 exited with return code 1.
> Jul 18 16:45:18 IO-11 heartbeat: [15803]: ERROR: STONITH of
> io-10.internal.acs.unt.prv failed.  Retrying...
> ---------------------- end messages ---------------------------------
>
> I've checked the logic in /usr/lib64/stonith/plugins/external/ipmi, which
> doesn't seem to be using the correct address for the BMC controller. It's
> possible that the HA facilities could prevent mounting of the final OSTs,
> isn't it?
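>
> (To confirm whether the BMC address is the problem, I'll test the fencing
> path by hand, roughly like this -- <bmc-address> is a placeholder for the
> real BMC address:)
>
> # query power state through the BMC, with the same credentials the plugin uses
> ipmitool -I lan -H <bmc-address> -U root -f /etc/ha.d/ipmitool.passwd chassis power status
> # if the status query works, a reset should too
> ipmitool -I lan -H <bmc-address> -U root -f /etc/ha.d/ipmitool.passwd chassis power reset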
>
>
>
> Wojciech Turek wrote:
>
>> Hi Damiri,
>>
>> From the logs you have provided, it looks like you have a problem with
>> your back-end storage. First of all, we can see that your SRP connection to
>> the back-end storage reports abort and reset (I guess your back-end storage
>> hardware is connected via InfiniBand if you are using SRP). Then Lustre
>> reports slow messages, and eventually the kernel reports SCSI errors. The
>> device mapper reports that both paths to the device have failed, and Lustre
>> remounts the filesystem read-only due to an I/O error. All this means that
>> your I/O node lost contact with the OST due to errors either on the IB
>> network connecting your host to the storage hardware or on the storage
>> hardware itself. From the first part of the log we can see that the device
>> in trouble is OST es1-OST000b (dm-11). In the second part of your log I
>> cannot see that device being mounted. From your log I can see that only
>> OST es1-OST000a (dm-10) is mounted and enters recovery.
>>
>
>
> --
> DaMiri Young
> HPC System Engineer
> High Performance Computing Team | ACUS/CITC | UNT
>