[Lustre-discuss] INFO: possible recursive locking detected

Nirmal Seenu nirmal at fnal.gov
Thu Aug 13 09:24:43 PDT 2009


I am running Lustre 1.8.0.1 on a 2.6.22.19 kernel, and I get the 
following warning once on each Lustre server when I mount the 
Lustre partitions, but everything seems to be working fine.

Could someone please let me know if this is a known problem that can be 
ignored?

TIA
Nirmal

Aug 11 15:54:23 mdt1p kernel: Lustre: OBD class driver, 
http://www.lustre.org/
Aug 11 15:54:23 mdt1p kernel: Lustre:     Lustre Version: 1.8.0.1
Aug 11 15:54:23 mdt1p kernel: Lustre:     Build Version: 
1.8.0.1-19691231180000-PRISTINE-.usr.src.linux-2.6.22.19-2.6.22.19
Aug 11 15:54:23 mdt1p kernel: Lustre: Added LNI 172.19.11.210@tcp1 [8/256]
Aug 11 15:54:23 mdt1p kernel: Lustre: Accept secure, port 988
Aug 11 15:54:23 mdt1p kernel: LustreError: 
8541:0:(router_proc.c:1020:lnet_proc_init()) couldn't create proc entry 
sys/lnet/stats
Aug 11 15:54:24 mdt1p kernel: Lustre: Lustre Client File System; 
http://www.lustre.org/
Aug 11 15:54:24 mdt1p kernel: kjournald starting.  Commit interval 5 seconds
Aug 11 15:54:24 mdt1p kernel: LDISKFS FS on dm-1, internal journal
Aug 11 15:54:24 mdt1p kernel: LDISKFS-fs: mounted filesystem with 
ordered data mode.
Aug 11 15:54:24 mdt1p kernel: kjournald starting.  Commit interval 5 seconds
Aug 11 15:54:24 mdt1p kernel: LDISKFS FS on dm-1, internal journal
Aug 11 15:54:24 mdt1p kernel: LDISKFS-fs: mounted filesystem with 
ordered data mode.
Aug 11 15:54:25 mdt1p kernel: Lustre: MGS MGS started
Aug 11 15:54:25 mdt1p kernel: Lustre: Server MGS on device 
/dev/mapper/mdt1_vol-mgs has started
Aug 11 15:54:25 mdt1p kernel: Lustre: MGC172.19.11.210@tcp1: 
Reactivating import
Aug 11 15:54:28 mdt1p kernel: kjournald starting.  Commit interval 5 seconds
Aug 11 15:54:28 mdt1p kernel: LDISKFS FS on dm-2, internal journal
Aug 11 15:54:28 mdt1p kernel: LDISKFS-fs: recovery complete.
Aug 11 15:54:28 mdt1p kernel: LDISKFS-fs: mounted filesystem with 
ordered data mode.
Aug 11 15:54:28 mdt1p kernel: kjournald starting.  Commit interval 5 seconds
Aug 11 15:54:28 mdt1p kernel: LDISKFS FS on dm-2, internal journal
Aug 11 15:54:28 mdt1p kernel: LDISKFS-fs: mounted filesystem with 
ordered data mode.
Aug 11 15:54:28 mdt1p kernel:
Aug 11 15:54:28 mdt1p kernel: =============================================
Aug 11 15:54:28 mdt1p kernel: [ INFO: possible recursive locking detected ]
Aug 11 15:54:28 mdt1p kernel: 2.6.22.19 #1
Aug 11 15:54:28 mdt1p kernel: ---------------------------------------------
Aug 11 15:54:28 mdt1p kernel: mount.lustre/8761 is trying to acquire lock:
Aug 11 15:54:28 mdt1p kernel:  (&inode->i_mutex){--..}, at: 
[<ffffffff80468a4b>] mutex_lock+0x25/0x29
Aug 11 15:54:28 mdt1p kernel:
Aug 11 15:54:28 mdt1p kernel: but task is already holding lock:
Aug 11 15:54:28 mdt1p kernel:  (&inode->i_mutex){--..}, at: 
[<ffffffff80468a4b>] mutex_lock+0x25/0x29
Aug 11 15:54:28 mdt1p kernel:
Aug 11 15:54:28 mdt1p kernel: other info that might help us debug this:
Aug 11 15:54:28 mdt1p kernel: 2 locks held by mount.lustre/8761:
Aug 11 15:54:28 mdt1p kernel:  #0:  (&type->s_umount_key#22){--..}, at: 
[<ffffffff8029c490>] sget+0x240/0x3b7
Aug 11 15:54:29 mdt1p kernel:  #1:  (&inode->i_mutex){--..}, at: 
[<ffffffff80468a4b>] mutex_lock+0x25/0x29
Aug 11 15:54:29 mdt1p kernel:
Aug 11 15:54:29 mdt1p kernel: stack backtrace:
Aug 11 15:54:29 mdt1p kernel:
Aug 11 15:54:29 mdt1p kernel: Call Trace:
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff802506fa>] 
__lock_acquire+0x162/0xbd8
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80468a4b>] mutex_lock+0x25/0x29
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff802511ec>] lock_acquire+0x7c/0xa0
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80468a4b>] mutex_lock+0x25/0x29
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8046889c>] 
__mutex_lock_slowpath+0xef/0x279
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80468a4b>] mutex_lock+0x25/0x29
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff802a32bd>] vfs_unlink+0x86/0x10e
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8846d248>] 
:obdclass:llog_lvfs_destroy+0x168/0x980
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff883f137b>] 
:libcfs:cfs_alloc+0x2b/0x60
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff884687d6>] 
:obdclass:llog_init_handle+0xf6/0x880
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80294549>] __kmalloc+0x136/0x146
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8875b6a3>] 
:mgc:mgc_process_log+0x1953/0x24f0
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8875cae0>] 
:mgc:mgc_blocking_ast+0x0/0x4a0
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff88513830>] 
:ptlrpc:ldlm_completion_ast+0x0/0x830
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff887597c1>] 
:mgc:config_log_find+0xb1/0x360
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8875fafa>] 
:mgc:mgc_process_config+0x8aa/0x1080
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff884a3a0a>] 
:obdclass:lustre_process_log+0x35a/0xed0
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80294549>] __kmalloc+0x136/0x146
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff884a4641>] 
:obdclass:server_find_mount+0x51/0x1b0
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff884a9ffc>] 
:obdclass:server_start_targets+0x98c/0x19e0
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff884ae510>] 
:obdclass:server_fill_super+0x1530/0x2380
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80294136>] 
cache_alloc_refill+0x77/0x210
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff883f137b>] 
:libcfs:cfs_alloc+0x2b/0x60
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80292eb6>] poison_obj+0x27/0x32
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff883f137b>] 
:libcfs:cfs_alloc+0x2b/0x60
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80292fd4>] 
cache_alloc_debugcheck_after+0x113/0x1c2
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80294549>] __kmalloc+0x136/0x146
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff884b054d>] 
:obdclass:lustre_fill_super+0x11ed/0x1870
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff802afd71>] get_filesystem+0x1a/0x40
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8029c5f5>] sget+0x3a5/0x3b7
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8029bf25>] set_anon_super+0x0/0xb7
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff884af360>] 
:obdclass:lustre_fill_super+0x0/0x1870
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8029d0c7>] get_sb_nodev+0x57/0x97
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff884a0e96>] 
:obdclass:lustre_get_sb+0x16/0x20
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8029cb0f>] 
vfs_kern_mount+0x98/0x121
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8029cbf1>] do_kern_mount+0x47/0xe2
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff802b1d84>] do_mount+0x6a1/0x714
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff802510dc>] 
__lock_acquire+0xb44/0xbd8
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80250102>] 
trace_hardirqs_on+0x11c/0x147
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff802510dc>] 
__lock_acquire+0xb44/0xbd8
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80314437>] __up_read+0x1a/0x83
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8027545c>] 
get_page_from_freelist+0x2a2/0x5ac
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80250102>] 
trace_hardirqs_on+0x11c/0x147
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8027546a>] 
get_page_from_freelist+0x2b0/0x5ac
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80292fd4>] 
cache_alloc_debugcheck_after+0x113/0x1c2
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8028df39>] 
alloc_pages_current+0xa8/0xb0
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff802b1e80>] sys_mount+0x89/0xcb
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff8020d043>] 
syscall_trace_enter+0x95/0x99
Aug 11 15:54:29 mdt1p kernel:  [<ffffffff80209ec5>] tracesys+0xdc/0xe1
Aug 11 15:54:29 mdt1p kernel:
Aug 11 15:54:29 mdt1p kernel: Lustre: Enabling user_xattr
Aug 11 15:54:29 mdt1p kernel: Lustre: Server lqcdproj-MDT0000 on device 
/dev/mapper/mdt1_vol-mdt has started


