[lustre-discuss] Lustre Timeouts/Filesystem Hanging

Louis Allen louisallen at live.co.uk
Mon Oct 28 10:16:12 PDT 2019


Hello,

Lustre (2.12) seems to be hanging quite frequently (5+ times a day) for us. One of the OSS servers (out of 4) is reporting an extremely high load average (150+), but the CPU usage on that server is actually very low, so the load must be coming from something other than CPU work - most likely threads stuck in I/O wait (CPU_IO_WAIT).
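
For what it's worth, a quick way to confirm whether it really is iowait rather than actual CPU load is to sample /proc/stat twice and compare - just a generic sketch, nothing Lustre-specific:

    #!/usr/bin/env python3
    # Sample the aggregate "cpu" line of /proc/stat twice and report what
    # fraction of CPU time was spent in iowait over the interval.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    a = cpu_times()
    time.sleep(5)
    b = cpu_times()

    delta = [y - x for x, y in zip(a, b)]
    iowait = delta[4]                 # 5th field of the cpu line is iowait
    print("iowait over last 5s: %.1f%%" % (100.0 * iowait / sum(delta)))

A high percentage there with low user/system time would match the load average we are seeing.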

On the OSS server where we are seeing the high load averages, we can also see multiple LustreError messages in /var/log/messages:

Oct 28 11:22:23 pazlustreoss001 kernel: LNet: Service thread pid 2403 was inactive for 200.08s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Oct 28 11:22:23 pazlustreoss001 kernel: LNet: Skipped 4 previous similar messages
Oct 28 11:22:23 pazlustreoss001 kernel: Pid: 2403, comm: ll_ost00_068 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Sun May 26 21:48:35 UTC 2019
Oct 28 11:22:23 pazlustreoss001 kernel: Call Trace:
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc03747c5>] jbd2_log_wait_commit+0xc5/0x140 [jbd2]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0375e52>] jbd2_complete_transaction+0x52/0xa0 [jbd2]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0732da2>] ldiskfs_sync_file+0x2e2/0x320 [ldiskfs]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffa52760b0>] vfs_fsync_range+0x20/0x30
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0c8b651>] osd_object_sync+0xb1/0x160 [osd_ldiskfs]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0ab48a7>] tgt_sync+0xb7/0x270 [ptlrpc]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0dc3731>] ofd_sync_hdl+0x111/0x530 [ofd]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0aba1da>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0a5f80b>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0a6313c>] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffa50c1c71>] kthread+0xd1/0xe0
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffa5775c37>] ret_from_fork_nospec_end+0x0/0x39
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffffffffff>] 0xffffffffffffffff
Oct 28 11:22:23 pazlustreoss001 kernel: LustreError: dumping log to /tmp/lustre-log.1572261743.2403
Oct 28 11:22:23 pazlustreoss001 kernel: Pid: 2292, comm: ll_ost03_043 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Sun May 26 21:48:35 UTC 2019
Oct 28 11:22:23 pazlustreoss001 kernel: Call Trace:
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc03747c5>] jbd2_log_wait_commit+0xc5/0x140 [jbd2]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0375e52>] jbd2_complete_transaction+0x52/0xa0 [jbd2]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0732da2>] ldiskfs_sync_file+0x2e2/0x320 [ldiskfs]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffa52760b0>] vfs_fsync_range+0x20/0x30
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0c8b651>] osd_object_sync+0xb1/0x160 [osd_ldiskfs]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0ab48a7>] tgt_sync+0xb7/0x270 [ptlrpc]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0dc3731>] ofd_sync_hdl+0x111/0x530 [ofd]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0aba1da>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0a5f80b>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
Oct 28 11:22:23 pazlustreoss001 kernel: LNet: Service thread pid 2403 completed after 200.29s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
Oct 28 11:22:23 pazlustreoss001 kernel: LNet: Skipped 48 previous similar messages
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffc0a6313c>] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffa50c1c71>] kthread+0xd1/0xe0
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffa5775c37>] ret_from_fork_nospec_end+0x0/0x39
Oct 28 11:22:23 pazlustreoss001 kernel: [<ffffffffffffffff>] 0xffffffffffffffff
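
The traces all show OST service threads waiting in jbd2_log_wait_commit via the sync path (tgt_sync -> osd_object_sync -> ldiskfs_sync_file), i.e. waiting for journal commits on the backing ldiskfs target. A rough sketch of how to see how many ll_ost* threads are blocked like this at any moment (assumes root on the OSS and a kernel exposing /proc/<pid>/stack):

    #!/usr/bin/env python3
    # List ll_ost* service threads in uninterruptible sleep (D state) and
    # show the topmost kernel frame they are blocked in.
    import glob

    for stat_path in glob.glob("/proc/[0-9]*/stat"):
        try:
            fields = open(stat_path).read().split()
        except OSError:
            continue                          # thread exited mid-scan
        comm, state = fields[1].strip("()"), fields[2]
        if not comm.startswith("ll_ost") or state != "D":
            continue
        pid = stat_path.split("/")[2]
        try:
            top = open("/proc/%s/stack" % pid).readline().strip()
        except OSError:
            top = "<stack unavailable>"
        print("%s %-16s blocked at %s" % (pid, comm, top))

If most of them are sitting in jbd2_log_wait_commit, that would point at the OST's journal device (or the array behind it) rather than Lustre itself.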



