[Lustre-discuss] freezing

Oleg Drokin Oleg.Drokin at Sun.COM
Sun May 18 17:08:30 PDT 2008


Hello!

On May 18, 2008, at 6:16 AM, Papp Tamás wrote:
> Our small but hardly used cluster was running for weeks without any
> problems, but yesterday 3 of 8 nodes froze up in 1 hour.

Froze with what symptoms, exactly? From the logs below I take it that
access to the fs from the clients just stopped working?

> This is from the messages log; there were many entries like this in it
> before this last one:
> May 17 22:00:31 node8 kernel: LustreError: dumping log to
> /tmp/lustre-log.1211054431.7807
> May 17 22:00:33 node8 kernel: LustreError:
> 7807:0:(service.c:668:ptlrpc_server_handle_request()) request 3062525
> opc 4 from 12345-192.168.0.63 at tcp processed in 101s trans 2522291 rc  
> 0/0

So a certain request took longer than obd_timeout to be processed, which
likely caused the client to time out while waiting for it.
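
For what it's worth, you can get a quick overview of how often that
happened by pulling these service-time messages out of the syslog. A rough
sketch in python (assuming the message format above and the default 100s
obd_timeout - check what yours is actually set to):

    import re, sys

    OBD_TIMEOUT = 100   # assumed default; use your real obd_timeout here

    # matches e.g. "request 3062525 opc 4 from 12345-...@tcp processed in 101s"
    pat = re.compile(r'request (\d+) opc (\d+) from (.+?) processed in (\d+)s')

    for line in open(sys.argv[1]):
        m = pat.search(line)
        if m and int(m.group(4)) > OBD_TIMEOUT:
            print("request %s opc %s from %s took %ss" % m.groups())

Every line it prints when run against /var/log/messages is a request the
client most likely gave up on already.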

> 7807:0:(watchdog.c:312:lcw_update_time()) Expired watchdog for pid  
> 7807
> disabled after 101.5718s

A certain server thread also made no progress for 101.5 seconds, which is
what leads to the message quoted above a bit later on.

> 7641:0:(ldlm_lib.c:519:target_handle_reconnect()) cubefs-OST0008:
> 9737b5de-8c67-99be-94ae-12f66fe80edf reconnecting
> May 17 22:00:43 node8 kernel: Lustre:
> 7641:0:(ldlm_lib.c:519:target_handle_reconnect()) Skipped 1 previous
> similar message
> May 17 22:01:05 node8 kernel: Lustre:
> 7627:0:(ldlm_lib.c:747:target_handle_connect()) cubefs-OST0008: refuse
> reconnection from 1a535b99-47d7-7f5f-f3cb-277793
> 0c9217 at 192.168.0.65@tcp to 0xffff810010759000; still busy with 2  
> active
> RPCs

A client attempted to reconnect, but we did not let it back in, because we
believe we are still processing some requests from it. This could easily
be caused by the problems above - something on the server is simply taking
too long.
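
Conceptually - this is only an illustration, not the actual Lustre code -
the decision on the server boils down to something like:

    # Illustration only, not Lustre source.
    class ClientExport:
        def __init__(self, uuid):
            self.uuid = uuid
            self.active_rpcs = 0   # requests from this client still being handled

    def handle_reconnect(exp):
        # While old requests from the client are still stuck (for instance in
        # slow backend i/o), the reconnect is refused - that is the "still
        # busy with N active RPCs" message you saw.
        if exp.active_rpcs > 0:
            return "refusing reconnection, still busy with %d active RPCs" % exp.active_rpcs
        return "reconnected"

Once the stuck requests finally complete, a later reconnect attempt should
go through.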

> 0:0:(linux-debug.c:168:libcfs_debug_dumpstack()) showing stack for
> process 7738

Ok, a stack trace for a thread that has made no progress for a very long
time.

> May 17 22:01:05 node8 kernel: Lustre:
> 0:0:(linux-debug.c:168:libcfs_debug_dumpstack()) Skipped 1 previous
> similar message
> May 17 22:01:05 node8 kernel: ll_ost_io_13  D 000000000000712e     0
> 7738      1          7739  7737 (L-TLB)
> May 17 22:01:05 node8 kernel:  ffff8100655273f0 0000000000000046
> ffff810003d18c58 ffffffff80012c28
> May 17 22:01:05 node8 kernel:  000000000000000a ffff810065511820
> ffff810063e1f0c0 0007d730dc4a35a5
> May 17 22:01:05 node8 kernel:  0000000000000213 ffff810065511a08
> ffff810000000001 ffff810063c0a7e0
> May 17 22:01:05 node8 kernel: Call Trace:
> May 17 22:01:05 node8 kernel:  [<ffffffff80012c28>] get_request 
> +0x1a6/0x36b
> May 17 22:01:05 node8 kernel:  [<ffffffff883354ac>] :drbd:lc_find 
> +0x30/0x51
> May 17 22:01:05 node8 kernel:  [<ffffffff88334e23>]
> :drbd:drbd_al_begin_io+0x160/0x24c

Well, if you ask me, I would say drbd is taking too long to do the i/o.

> 7738:0:(filter_io_26.c:763:filter_commitrw_write()) cubefs-OST0008:  
> slow
> direct_io 102s

So it took the i/o backend (drbd) 102 seconds to process a single i/o
request. No wonder everybody got upset.
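
If you want to check whether the backend itself is the bottleneck, simply
timing a few synchronous writes on the drbd-backed storage already tells a
lot. A crude sketch (the path below is made up - point it at a scratch
file on the same drbd volume):

    import os, time

    path = "/path/on/drbd/backed/fs/latency-test"   # made-up path, adjust

    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    buf = b"\0" * (1 << 20)              # 1 MB of zeroes
    start = time.time()
    for _ in range(16):                  # 16 MB total, syncing after every MB
        os.write(fd, buf)
        os.fsync(fd)                     # force it through drbd to both nodes
    os.close(fd)
    os.unlink(path)
    print("16 MB of synchronous writes took %.1f seconds" % (time.time() - start))

On a healthy setup this should finish in a few seconds; anything
approaching the numbers in your logs points at drbd or the disks/network
underneath it rather than at Lustre.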

>
> May 17 22:01:08 node8 kernel: LustreError:
> 7738:0:(filter_io_26.c:763:filter_commitrw_write()) Skipped 24  
> previous
> similar messages
> May 17 22:01:08 node8 kernel: LustreError:
> 7738:0:(filter_io_26.c:776:filter_commitrw_write()) cubefs-OST0008:  
> slow
> commitrw commit 102s

And many more times after that...

> What could be the problem? Lustre, linux or hw?

Take a look at your drbd setup to see why it takes so long to do i/o.
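
/proc/drbd on each node shows the connection and disk state; a lost peer or
an ongoing resync would explain writes taking this long. A tiny check
(assumes drbd's usual /proc/drbd output, adjust if yours looks different):

    # Assumes the usual /proc/drbd per-resource line, e.g.
    #   0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate ...
    with open("/proc/drbd") as f:
        for line in f:
            line = line.strip()
            if "cs:" in line:
                print(line)
                if "cs:Connected" not in line or "UpToDate/UpToDate" not in line:
                    print("  -> resource not connected/up-to-date, "
                          "i/o may stall or crawl")

Also check dmesg and the drbd messages in the syslog on both peers around
the time of the freeze.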

> The other node was node1, but it has no messages like this in its logs;
> before I restarted it, though, I saw the same on its screen.
> In time the first one was node3. It has this in its message log:

Same story with slow i/o, except not as slow as in the previous logs, so no
watchdogs fired.

> May 18 11:16:37 node2 kernel: LustreError:
> 3916:0:(recov_thread.c:473:llog_start_commit_thread()) error starting
> thread #1: -513
> May 18 11:16:37 node2 kernel: LustreError:
> 3916:0:(llog_obd.c:392:llog_cat_initialize()) rc: -513
> May 18 11:16:37 node2 kernel: LustreError:
> 3916:0:(filter.c:1717:filter_common_setup()) failed to setup llogging
> subsystems

So we cannot start the llog thread for some reason (-513 is ERESTARTNOINTR
in kernel terms, i.e. the thread creation was interrupted).

> May 18 12:05:13 node2 kernel: Lustre:
> 3507:0:(filter_io_26.c:698:filter_commitrw_write()) cubefs-OST0002:  
> slow
> i_mutex 46s
>
> May 18 12:05:13 node2 kernel: Lustre:
> 3507:0:(filter_io_26.c:711:filter_commitrw_write()) cubefs-OST0002:  
> slow
> brw_start 46s

And the i/o to the backend is still slow.
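
If you want a quick picture of which stage is eating the time, the various
"slow ... Ns" messages can be summarised per phase - again a rough sketch,
assuming the message format shown above:

    import re, sys
    from collections import defaultdict

    # matches e.g. "slow direct_io 102s", "slow i_mutex 46s", "slow brw_start 46s"
    pat = re.compile(r'slow (\S+(?: \S+)?) (\d+)s')

    worst = defaultdict(int)
    for line in open(sys.argv[1]):
        m = pat.search(line)
        if m:
            phase, secs = m.group(1), int(m.group(2))
            if secs > worst[phase]:
                worst[phase] = secs

    for phase, secs in sorted(worst.items(), key=lambda kv: -kv[1]):
        print("%-20s worst %3ds" % (phase, secs))

If it is mostly the phases that touch the disk that are slow, that again
points at the drbd backend.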

Bye,
     Oleg


