[Lustre-discuss] LBUG on client: Found existing inode ... in lock

Erich Focht efocht at hpce.nec.com
Wed Aug 20 07:00:12 PDT 2008


Hello,

we're seeing an LBUG on clients running Lustre 1.6.5.1 (the servers are
still on 1.6.4.3). I tried to find this in bugzilla, with no success. There
seems to be some data inconsistency; can somebody please tell me whether this
is more likely a server-side problem (inconsistent data on disk?) or rather
a bug on the client only?

Thanks in advance for any hint...

Erich


Aug 19 16:25:31 harper3 kernel: Lustre: lustre-OST0023-osc-ffff810228a56c00: Connection restored to service lustre-OST0023 using nid 10.3.0.233 at o2ib.
Aug 19 16:25:31 harper3 kernel: LustreError: 1168:0:(osc_request.c:2866:osc_set_data_with_check()) ### inconsistent l_ast_data found ns: lustre-OST0022-osc-ffff810228a56c00 lock: ffff81016276b600/0x4509c644cc57c7f2 lrc: 4/0,2 mode: PW/PW res: 17055/0 rrc: 2 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) flags: 80120000 remote: 0x29a50977c8f92020 expref: -99 pid: 1154
Aug 19 16:25:31 harper3 kernel: LustreError: 1168:0:(osc_request.c:2872:osc_set_data_with_check()) ASSERTION(old_inode->i_state & I_FREEING) failed:Found existing inode ffff81018510dd78/10979900/3759376635 state 7 in lock: setting data to ffff81019cff0f38/10979916/3759376661
Aug 19 16:25:31 harper3 kernel: LustreError: 1168:0:(osc_request.c:2872:osc_set_data_with_check()) LBUG
Aug 19 16:25:31 harper3 kernel: 
Aug 19 16:25:31 harper3 kernel: Call Trace:
Aug 19 16:25:31 harper3 kernel:  [<ffffffff883ffc1a>] :libcfs:lbug_with_loc+0x7a/0xc0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff88615076>] :osc:osc_set_data_with_check+0x186/0x1d0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff88620f40>] :osc:osc_enqueue+0x180/0x590
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886b1455>] :lov:lov_set_add_req+0x15/0x20
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886b7761>] :lov:lov_prep_enqueue_set+0x981/0xb60
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886bd24c>] :lov:lsm_unpackmd_plain+0x1c/0x190
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886a11b2>] :lov:lov_enqueue+0x612/0x8b0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff88400168>] :libcfs:cfs_alloc+0x28/0x60
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886ed26c>] :lustre:ll_glimpse_size+0x62c/0xc20
Aug 19 16:25:31 harper3 kernel:  [<ffffffff8850edb0>] :ptlrpc:ldlm_lock_add_to_lru_nolock+0x60/0xa0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff8000948d>] __d_lookup+0xb0/0xff
Aug 19 16:25:31 harper3 kernel:  [<ffffffff8851246a>] :ptlrpc:ldlm_lock_decref+0x9a/0xc0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff88619250>] :osc:osc_extent_blocking_cb+0x0/0x2b0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff8852bf60>] :ptlrpc:ldlm_completion_ast+0x0/0x6a0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886efce0>] :lustre:ll_glimpse_callback+0x0/0x440
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886db0ae>] :lustre:ll_intent_drop_lock+0x8e/0xb0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886ededc>] :lustre:ll_inode_revalidate_it+0x67c/0x720
Aug 19 16:25:31 harper3 kernel:  [<ffffffff8871da10>] :lustre:ll_mdc_blocking_ast+0x0/0x510
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886f0367>] :lustre:ll_file_release+0x247/0x2e0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886edfa4>] :lustre:ll_getattr_it+0x24/0x110
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886ee0c4>] :lustre:ll_getattr+0x34/0x40
Aug 19 16:25:31 harper3 kernel:  [<ffffffff800283a7>] vfs_stat_fd+0x32/0x4a
Aug 19 16:25:31 harper3 kernel:  [<ffffffff886f0367>] :lustre:ll_file_release+0x247/0x2e0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff8000c199>] _atomic_dec_and_lock+0x39/0x57
Aug 19 16:25:31 harper3 kernel:  [<ffffffff800231fb>] sys_newstat+0x19/0x31
Aug 19 16:25:31 harper3 kernel:  [<ffffffff8005c229>] tracesys+0x71/0xe0
Aug 19 16:25:31 harper3 kernel:  [<ffffffff8005c28d>] tracesys+0xd5/0xe0
