[Lustre-discuss] MDS: lock timed out -- not entering recovery in server code, just going back to sleep

Thomas Roth t.roth at gsi.de
Tue Dec 2 08:46:10 PST 2008


Brian J. Murrell wrote:
> On Thu, 2008-11-27 at 19:18 +0100, Thomas Roth wrote:
>>  > Nov 27 17:57:41 lustre kernel: LustreError: 3974:0:(ldlm_request.c:64:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1227804060, 1001s ago); not entering recovery in server code, just going back to sleep ns: mds-lustre-MDT0000_UUID lock: e4f54680/0x2ccbde901a8157f2 lrc: 3/1,0 mode: --/CR res: 74908813/3524601089 bits 0x2 rrc: 173 type: IBT flags: 4004030 remote: 0x0 expref: -99 pid 3974
>>
>> My question is now whether you would interpret this as a result of 
>> ongoing trouble with the network
> 
> Yes, network problems are a common cause of this.
> 
>> There are more disturbing log messages, many of the following type:
>>
>>  > Nov 27 18:17:42 lustre kernel: LustreError: 28521:0:(mds_open.c:1474:mds_close()) @@@ no handle for file close ino 81208923: cookie 0x2ccbde8fcca85aa7  req@f3e75800 x1686877/t0 o35->e0c12120-24ea-68c2-0394-712e75354f55@NET_0x200008cb5726e_UUID:-1 lens 296/3472 ref 0 fl Interpret:/0/0 rc 0/0
> 
> There was a description of that here or in bugzilla not that long ago.
> IIRC it's the result of a recovery operation where the MDS performs a
> close on a file on behalf of a disconnected client; the client then
> comes along and tries to explicitly close the file, but the MDS has
> already done it for it.
> 
> b.
> 

Thanks Brian - for a while we could blame the network for our problems
with the MDS.
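
Just to check that I've understood that double-close scenario, here is a
toy userspace sketch of it (all names invented, not the actual
mds_open.c code):

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the scenario Brian describes: during recovery the MDS
 * closes a file on behalf of a disconnected client, so when that client
 * later sends its own close, the handle is already gone.  All names are
 * invented for illustration. */
struct mds_handle_stub {
        unsigned long long cookie;
        bool               open;
};

static int stub_close(struct mds_handle_stub *h, const char *who)
{
        if (!h->open) {
                /* corresponds to the "no handle for file close" message:
                 * the close is redundant, not an actual failure */
                fprintf(stderr, "%s: no handle for file close, cookie %#llx\n",
                        who, h->cookie);
                return -1;
        }
        h->open = false;
        printf("%s: closed cookie %#llx\n", who, h->cookie);
        return 0;
}

int main(void)
{
        struct mds_handle_stub h = { .cookie = 0x2ccbde8fcca85aa7ULL,
                                     .open = true };

        stub_close(&h, "mds-recovery");  /* MDS closes for the dead client */
        stub_close(&h, "client");        /* client's own close: handle gone */
        return 0;
}

If that is the right picture, I'll treat those messages as harmless.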

However, in the meantime things have gotten worse. We now see this kind
of hang on the MDS every day, while the rest of our servers and the
network are doing fine the whole time.
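
By the way, as I read the message in our subject line, the server-side
wait only complains and re-arms itself instead of entering recovery -
roughly this pattern (a toy model with invented names, not the real
ldlm_request.c):

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Toy model of the "not entering recovery in server code, just going
 * back to sleep" behaviour: a timed-out completion wait on the server
 * logs a complaint and simply resumes waiting. */
static volatile sig_atomic_t granted;

static void grant_lock(int sig)
{
        (void)sig;
        granted = 1;            /* the conflicting holder finally released */
}

static void wait_for_grant(time_t enqueued, unsigned int timeout_s)
{
        while (!granted) {
                sleep(timeout_s);
                if (!granted)
                        fprintf(stderr, "lock timed out (enqueued at %ld, "
                                "%lds ago); not entering recovery in server "
                                "code, just going back to sleep\n",
                                (long)enqueued,
                                (long)(time(NULL) - enqueued));
        }
        printf("granted after %lds\n", (long)(time(NULL) - enqueued));
}

int main(void)
{
        signal(SIGALRM, grant_lock);
        alarm(3);                       /* simulate a slow/blocked holder */
        wait_for_grant(time(NULL), 1);
        return 0;
}

In our case the grant apparently never arrives, so the thread keeps
logging that complaint over and over.
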
Twice we hit an LBUG:

Dec  1 22:14:17 lustre2 kernel: LustreError: 14714:0:(mds_reint.c:1512:mds_orphan_add_link()) ASSERTION(inode->i_nlink == 1) failed:dir nlink == 0
Dec  1 22:14:17 lustre2 kernel: LustreError: 14714:0:(mds_reint.c:1512:mds_orphan_add_link()) LBUG
Dec  1 22:14:17 lustre2 kernel: Lustre: 14714:0:(linux-debug.c:168:libcfs_debug_dumpstack()) showing stack for process 14714
Dec  1 22:14:17 lustre2 kernel: ll_mdt_69     R running     0 14714     1         14715 14713 (L-TLB)
Dec  1 22:14:17 lustre2 kernel: 00000019 00000018 c3635000 c0204fb4 00000001 00000001 00000000 c3635000
Dec  1 22:14:17 lustre2 kernel: c3635000 00000018 00000000 c020502e 00000001 00000000 c020872a c0364a82
Dec  1 22:14:17 lustre2 kernel: 0a01ffff 00000000 c02dac00 c020851e 00047c05 00047c62 c011d553 00047c62
Dec  1 22:14:17 lustre2 kernel: Call Trace:
Dec  1 22:14:17 lustre2 kernel: [<c0204fb4>] scrup+0x58/0xba
Dec  1 22:14:17 lustre2 kernel: [<c011de22>] printk+0x14/0x18
Dec  1 22:14:17 lustre2 kernel: [<c0136851>] __print_symbol+0x9f/0xa8
Dec  1 22:14:17 lustre2 kernel: [<c01cbbe8>] vgacon_scroll+0x17a/0x195
Dec  1 22:14:17 lustre2 kernel: [<c01ba234>] vsnprintf+0x419/0x457
Dec  1 22:14:17 lustre2 kernel: [<c01ba234>] vsnprintf+0x419/0x457
Dec  1 22:14:17 lustre2 kernel: [<c011672a>] __wake_up_locked+0x11/0x15
Dec  1 22:14:17 lustre2 kernel: [<c01b932b>] __down_trylock+0x3b/0x44
Dec  1 22:14:17 lustre2 kernel: [<c01b932b>] __down_trylock+0x3b/0x44
Dec  1 22:14:17 lustre2 kernel: [<c027f977>] __down_failed_trylock+0x7/0xc
Dec  1 22:14:17 lustre2 kernel: [<c011d822>] release_console_sem+0x182/0x1bc
Dec  1 22:14:17 lustre2 kernel: [<c011d822>] release_console_sem+0x182/0x1bc
Dec  1 22:14:17 lustre2 kernel: [<c012c6d8>] __kernel_text_address+0x18/0x23
Dec  1 22:14:17 lustre2 kernel: [<c0103b62>] show_trace_log_lvl+0x47/0x6a
Dec  1 22:14:17 lustre2 kernel: [<c0103c13>] show_stack_log_lvl+0x8e/0x96
Dec  1 22:14:17 lustre2 kernel: [<c0104107>] show_stack+0x20/0x25
Dec  1 22:14:17 lustre2 kernel: [<fa1b7f79>] lbug_with_loc+0x69/0xc0 [libcfs]
Dec  1 22:14:17 lustre2 kernel: [<fa682448>] mds_orphan_add_link+0xcb8/0xd20 [mds]
Dec  1 22:14:17 lustre2 kernel: [<fa69587a>] mds_reint_unlink+0x292a/0x3fd0 [mds]
Dec  1 22:14:17 lustre2 kernel: [<fa3a5990>] lustre_swab_ldlm_request+0x0/0x20 [ptlrpc]
Dec  1 22:14:17 lustre2 kernel: [<fa681495>] mds_reint_rec+0xf5/0x3f0 [mds]
Dec  1 22:14:17 lustre2 kernel: [<fa664feb>] mds_reint+0xcb/0x8a0 [mds]
Dec  1 22:14:17 lustre2 kernel: [<c01171b2>] find_busiest_group+0x177/0x46a
Dec  1 22:14:17 lustre2 kernel: [<fa678998>] mds_handle+0x3048/0xb9df [mds]
Dec  1 22:14:17 lustre2 kernel: [<c0124d83>] do_gettimeofday+0x31/0xce
Dec  1 22:14:17 lustre2 kernel: [<fa2d506b>] class_handle2object+0xbb/0x2a0 [obdclass]
Dec  1 22:14:17 lustre2 kernel: [<fa3a5a00>] lustre_swab_ptlrpc_body+0x0/0xc0 [ptlrpc]
Dec  1 22:14:17 lustre2 kernel: [<fa3a2b5a>] lustre_swab_buf+0xfa/0x180 [ptlrpc]
Dec  1 22:14:17 lustre2 kernel: [<c0125aac>] lock_timer_base+0x15/0x2f
Dec  1 22:14:17 lustre2 kernel: [<c0125bbd>] __mod_timer+0x99/0xa3
Dec  1 22:14:17 lustre2 kernel: [<fa39fefe>] lustre_msg_get_conn_cnt+0xce/0x220 [ptlrpc]
Dec  1 22:14:17 lustre2 kernel: [<fa3b1e56>] ptlrpc_main+0x2016/0x2f40 [ptlrpc]
Dec  1 22:14:17 lustre2 kernel: [<c0117c1f>] default_wake_function+0x0/0xc
Dec  1 22:14:17 lustre2 kernel: [<fa3afe40>] ptlrpc_main+0x0/0x2f40 [ptlrpc]
Dec  1 22:14:17 lustre2 kernel: [<c0101005>] kernel_thread_helper+0x5/0xb
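
For anyone reading along: as far as I understand the ASSERTION()/LBUG
pattern that fires here, it is roughly the following (a userspace
imitation with invented names - the real LBUG dumps the Lustre log and
halts the thread rather than aborting the process):

#include <stdio.h>
#include <stdlib.h>

/* Toy imitation of the libcfs ASSERTION() pattern: log the violated
 * invariant with its source location, then stop the thread so the bad
 * state can be inspected. */
#define MY_ASSERTION(cond, fmt, ...)                                    \
        do {                                                            \
                if (!(cond)) {                                          \
                        fprintf(stderr, "%s:%d:%s() ASSERTION(" #cond   \
                                ") failed: " fmt "\n",                  \
                                __FILE__, __LINE__, __func__,           \
                                ##__VA_ARGS__);                         \
                        abort();  /* stand-in for LBUG */               \
                }                                                       \
        } while (0)

struct inode_stub { int i_nlink; };

static void orphan_add_link_stub(struct inode_stub *inode)
{
        /* The invariant the MDS trips over: a directory being moved to
         * the orphan list is expected to still carry exactly one link. */
        MY_ASSERTION(inode->i_nlink == 1, "dir nlink == %d", inode->i_nlink);
}

int main(void)
{
        struct inode_stub dir = { .i_nlink = 0 };  /* our bad state */
        orphan_add_link_stub(&dir);
        return 0;
}

So on our MDS the directory handed to mds_orphan_add_link() already has
a link count of 0 where exactly 1 is expected.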



After rebooting the machine, Lustre complains:
Dec  2 09:19:44 lustre2 kernel: Lustre: 3880:0:(class_hash.c:159:lustre_hash_additem_unique()) Already found the key in hash [UUID_HASH]
Dec  2 09:19:44 lustre2 kernel: Lustre: 3880:0:(genops.c:686:class_new_export()) alilust-MDT0000: denying duplicate export for ae7199dc-a53f-c901-f07c-87b4285fb7af
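
My reading of those two lines: the MDS keeps its exports keyed by
client UUID, an old export for that client is still in the hash, and
the new connection is therefore refused. A toy stand-in for such a
uniqueness check (invented names and a plain linear scan instead of a
real hash - not the class_hash.c code):

#include <stdio.h>
#include <string.h>

#define MAX_EXPORTS 128

static const char *export_uuids[MAX_EXPORTS];
static int nexports;

/* Refuse to add a key that is already present, which is what the
 * "denying duplicate export" message amounts to. */
static int export_add_unique(const char *uuid)
{
        for (int i = 0; i < nexports; i++)
                if (strcmp(export_uuids[i], uuid) == 0) {
                        fprintf(stderr, "Already found the key in hash: %s\n",
                                uuid);
                        return -1;
                }
        if (nexports >= MAX_EXPORTS)
                return -1;
        export_uuids[nexports++] = uuid;
        return 0;
}

int main(void)
{
        const char *uuid = "ae7199dc-a53f-c901-f07c-87b4285fb7af";

        export_add_unique(uuid);           /* stale export for the client */
        if (export_add_unique(uuid) != 0)  /* same client connects again */
                printf("denying duplicate export for %s\n", uuid);
        return 0;
}

Why an old export for that client still exists at that point I cannot
tell - presumably something left over from recovery.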


These are about the only unusual error messages in the log, apart from
the 'operation X on unconnected MDS' messages and the high frequency of
(empty) log dumping.

Any ideas what's going on here?

Thanks,
Thomas


