[Lustre-discuss] LBUG

Wojciech Turek wjt27@cam.ac.uk
Thu Nov 15 14:35:38 PST 2007


Hi folks,

We saw an LBUG today. It happened during failover from one OSS to
another.

storage07, storage08, storage09 and storage10 are OSS servers
mds01 is the MDS/MGS server
darwin is a Lustre client

Our environment: 2.6.9-55.0.9.EL_lustre.1.6.3smp (i.e. Lustre 1.6.3)

I can provide the lustre-log file that was dumped when the LBUG occurred.
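(If it helps, the binary dump can be converted to plain text with lctl;
assuming the dump path reported in the LustreError line below, something
like

    lctl debug_file /tmp/lustre-log.1195164614.27824 /tmp/lustre-log.1195164614.27824.txt

should produce a readable version.)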

Nov 15 22:10:14 darwin kernel: Lustre: Changing connection for ddn_home-MDT0000-mdc-00000100cff22800 to 10.143.245.201@tcp/10.143.245.201@tcp
Nov 15 22:10:14 darwin kernel: Lustre: Skipped 5 previous similar messages
Nov 15 22:10:14 darwin kernel: Lustre: ddn_home-MDT0000-mdc-00000100cff22800: Connection restored to service ddn_home-MDT0000 using nid 10.143.245.201@tcp.
Nov 15 22:10:14 darwin kernel: LustreError: 27824:0:(mdc_request.c:588:mdc_set_open_replay_data()) @@@ saving replay request with id = 0 gen = 0  req@00000100cff2da00 x315962/t1761851 o101->ddn_home-MDT0000_UUID@10.143.245.201@tcp:12 lens 496/816 ref 2 fl Interpret:RP/4/0 rc -11/301
Nov 15 22:10:14 darwin kernel: LustreError: 27824:0:(mdc_request.c:589:mdc_set_open_replay_data()) LBUG
Nov 15 22:10:14 darwin kernel: Lustre: 27824:0:(linux-debug.c:168:libcfs_debug_dumpstack()) showing stack for process 27824
Nov 15 22:10:14 darwin kernel:        <ffffffffa047ab70>{:ptlrpc:lustre_swab_mds_body+0}
Nov 15 22:10:14 darwin kernel:        <ffffffff80145db3>{in_group_p+68} <ffffffffa057dbef>{:lustre:ll_intent_drop_lock+143}
Nov 15 22:10:14 darwin kernel:        <ffffffffa050bb02>{:mdc:mdc_intent_lock+690} <ffffffffa05b6320>{:lustre:ll_mdc_blocking_ast+0}
Nov 15 22:10:14 darwin kernel:        <ffffffffa0450da0>{:ptlrpc:ldlm_completion_ast+0} <ffffffffa05b6320>{:lustre:ll_mdc_blocking_ast+0}
Nov 15 22:10:14 darwin kernel:        <ffffffffa0450da0>{:ptlrpc:ldlm_completion_ast+0} <ffffffffa05b6b5b>{:lustre:ll_prepare_mdc_op_data+139}
Nov 15 22:10:14 darwin kernel:        <ffffffffa05b7811>{:lustre:ll_lookup_it+1009} <ffffffffa05b6320>{:lustre:ll_mdc_blocking_ast+0}
Nov 15 22:10:14 darwin kernel:        <ffffffff8018e526>{dput+55} <ffffffffa057d2b4>{:lustre:ll_release+692}
Nov 15 22:10:14 darwin kernel:        <ffffffffa05b7af5>{:lustre:ll_lookup_nd+149} <ffffffff8018f2a4>{d_alloc+436}
Nov 15 22:10:14 darwin kernel:        <ffffffffa057dc10>{:lustre:ll_intent_release+0} <ffffffff801778b9>{sys_open+57}
Nov 15 22:10:14 darwin kernel: LustreError: dumping log to /tmp/lustre-log.1195164614.27824
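(From that stack the LBUG is hit on the open() path: sys_open ->
ll_lookup_nd/ll_lookup_it -> mdc_intent_lock ->
mdc_set_open_replay_data(), which asserts because the open reply it is
asked to save carries id = 0, gen = 0.)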
Nov 15 22:10:51 mds01.beowulf.cluster kernel: LustreError: 22402:0:(ldlm_lib.c:1437:target_send_reply_msg()) @@@ processing error (-107)  req@000001010a449050 x165295/t0 o101-><?>@<?>:-1 lens 232/0 ref 0 fl Interpret:/0/0 rc -107/0
Nov 15 22:10:51 mds01.beowulf.cluster kernel: LustreError: 22402:0:(ldlm_lib.c:1437:target_send_reply_msg()) Skipped 130 previous similar messages
Nov 15 22:12:53 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) ### lock callback timer expired: evicting client 147c5633-2068-553b-7587-23e899f8cc7e@NET_0x200000a8f0625_UUID nid 10.143.6.37@tcp  ns: mds-ddn_data-MDT0000_UUID lock: 000001002654f380/0xc5c8c9ecb9cd18df lrc: 1/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 29 type: IBT flags: 4000030 remote: 0x6d75ab2c6647486e expref: 6 pid 22413
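(All of the evictions that follow have the same shape: the lock callback
timer expires after the obd timeout. If the clients were merely slow to
cancel locks during the failover, raising the timeout on every node might
mask it; on 1.6 I believe that is

    echo 300 > /proc/sys/lustre/timeout

though I am not convinced slowness is the real problem here.)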
Nov 15 22:12:53 mds01.beowulf.cluster kernel: Lustre: 22760:0:(mds_reint.c:127:mds_finish_transno()) commit transaction for disconnected client 147c5633-2068-553b-7587-23e899f8cc7e: rc -2
Nov 15 22:12:53 mds01.beowulf.cluster kernel: LustreError: 22701:0:(handler.c:1498:mds_handle()) operation 36 on unconnected MDS from 12345-10.143.6.37@tcp
Nov 15 22:13:18 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) ### lock callback timer expired: evicting client c28438c3-2db0-2ce1-3517-87bc655aeeff@NET_0x200000a8f0436_UUID nid 10.143.4.54@tcp  ns: mds-ddn_data-MDT0000_UUID lock: 000001004aa2a180/0xc5c8c9ecb9ce4c99 lrc: 1/0,0 mode: CR/CR res: 237797580/825589604 bits 0x3 rrc: 64 type: IBT flags: 4000030 remote: 0xf69229d4ea6de65d expref: 13 pid 22559
Nov 15 22:13:18 mds01.beowulf.cluster kernel: Lustre: 22698:0:(mds_reint.c:127:mds_finish_transno()) commit transaction for disconnected client c28438c3-2db0-2ce1-3517-87bc655aeeff: rc -2
Nov 15 22:13:18 mds01.beowulf.cluster kernel: LustreError: 22605:0:(handler.c:1498:mds_handle()) operation 36 on unconnected MDS from 12345-10.143.4.54@tcp
Nov 15 22:14:34 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) ### lock callback timer expired: evicting client 01ee681c-7bcb-497a-c385-9ad5545fb21d@NET_0x200000a8f0923_UUID nid 10.143.9.35@tcp  ns: mds-ddn_data-MDT0000_UUID lock: 000001011e886cc0/0xc5c8c9ecb9d4a579 lrc: 1/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 77 type: IBT flags: 4000030 remote: 0x302b57a7545aea00 expref: 14 pid 22473
Nov 15 22:14:34 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) Skipped 1 previous similar message
Nov 15 22:15:19 mds01.beowulf.cluster kernel: LustreError: 22781:0:(handler.c:1498:mds_handle()) operation 400 on unconnected MDS from 12345-10.143.9.35@tcp
Nov 15 22:16:13 mds01.beowulf.cluster kernel: Lustre: 22831:0:(ldlm_lib.c:514:target_handle_reconnect()) ddn_data-MDT0000: 39b2f4bc-05a7-bb74-b951-57360554f907 reconnecting
Nov 15 22:16:13 mds01.beowulf.cluster kernel: Lustre: 22831:0:(ldlm_lib.c:514:target_handle_reconnect()) Skipped 49 previous similar messages
Nov 15 22:16:13 mds01.beowulf.cluster kernel: Lustre: 22831:0:(ldlm_lib.c:742:target_handle_connect()) ddn_data-MDT0000: refuse reconnection from 39b2f4bc-05a7-bb74-b951-57360554f907@10.143.6.32@tcp to 0x000001010cb9a000; still busy with 4 active RPCs
Nov 15 22:16:13 mds01.beowulf.cluster kernel: Lustre: 22831:0:(ldlm_lib.c:742:target_handle_connect()) Skipped 117 previous similar messages
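(While the MDT keeps refusing reconnections with "still busy with N
active RPCs", its state can be watched from /proc; on 1.6 I believe the
file is

    cat /proc/fs/lustre/mds/ddn_data-MDT0000/recovery_status

in case that is useful to anyone looking at this.)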
Nov 15 22:16:13 mds01.beowulf.cluster kernel: Lustre: 0:0:(watchdog.c:130:lcw_cb()) Watchdog triggered for pid 22711: it was inactive for 200s
Nov 15 22:16:13 mds01.beowulf.cluster kernel: Lustre: 0:0:(linux-debug.c:168:libcfs_debug_dumpstack()) showing stack for process 22711
Nov 15 22:16:14 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:14 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:15 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:15 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:15 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:15 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:15 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:15 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffff80131923>{recalc_task_prio+337}
Nov 15 22:16:16 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:16 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:17 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:17 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:17 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:17 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:17 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:17 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:17 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:17 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa042f9fb>{:ptlrpc:ldlm_handle_bl_callback+443}
Nov 15 22:16:18 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:18 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:18 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:18 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa0453b4c>{:ptlrpc:ptlrpc_server_handle_request+3036}
Nov 15 22:16:19 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:19 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:19 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:19 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa0453b4c>{:ptlrpc:ptlrpc_server_handle_request+3036}
Nov 15 22:16:20 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:20 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:20 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:20 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffff80131923>{recalc_task_prio+337}
Nov 15 22:16:21 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:21 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:21 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:21 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:22 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:22 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:22 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:22 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:22 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa040f1ed>{:ptlrpc:ldlm_lock_create+1581}
Nov 15 22:16:22 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:23 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:23 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:23 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:23 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffff80131923>{recalc_task_prio+337}
Nov 15 22:16:24 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:24 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:25 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:25 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:25 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:25 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:25 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:25 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffff80131923>{recalc_task_prio+337}
Nov 15 22:16:26 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:26 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:26 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:26 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffff80131923>{recalc_task_prio+337}
Nov 15 22:16:27 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:27 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:28 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:28 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:28 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:28 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:29 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:29 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:29 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:29 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:30 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:30 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:30 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:30 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa042f9fb>{:ptlrpc:ldlm_handle_bl_callback+443}
Nov 15 22:16:31 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:31 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:31 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:32 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:32 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:32 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffff80131923>{recalc_task_prio+337}
Nov 15 22:16:33 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:33 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:33 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:34 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:34 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:34 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffff80131923>{recalc_task_prio+337}
Nov 15 22:16:34 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:35 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:35 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:35 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:36 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:36 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:36 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa042f9fb>{:ptlrpc:ldlm_handle_bl_callback+443}
Nov 15 22:16:36 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:37 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:37 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:37 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:37 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffff80131923>{recalc_task_prio+337}
Nov 15 22:16:38 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:38 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:38 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:38 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:38 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:38 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:39 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:39 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:16:39 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:39 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:40 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:16:40 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:41 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:41 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:16:41 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:16:42 mds01.beowulf.cluster kernel: Lustre: 22487:0:(mds_reint.c:127:mds_finish_transno()) commit transaction for disconnected client 01ee681c-7bcb-497a-c385-9ad5545fb21d: rc -2
Nov 15 22:16:42 mds01.beowulf.cluster kernel: Lustre: 22487:0:(watchdog.c:312:lcw_update_time()) Expired watchdog for pid 22487 disabled after 202.0064s
Nov 15 22:17:17 darwin kernel: Lustre: Client ddn_home-client has started
Nov 15 22:17:17 darwin kernel: Lustre: Skipped 1 previous similar message
Nov 15 22:17:32 darwin kernel: LustreError: 20092:0:(client.c:969:ptlrpc_expire_one_request()) @@@ timeout (sent at 1195165037, 15s ago)  req@00000100cfc2fc00 x316565/t0 o8->ddn_home-OST0001_UUID@10.143.245.8@tcp:6 lens 240/272 ref 1 fl Rpc:/0/0 rc 0/-22
Nov 15 22:17:32 darwin kernel: LustreError: 20092:0:(client.c:969:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Nov 15 22:17:32 storage10.beowulf.cluster kernel: LustreError: 137-5: UUID 'ddn_home-OST0002_UUID' is not available for connect (no target)
Nov 15 22:17:32 darwin kernel: LustreError: 11-0: an error occurred while communicating with 10.143.245.9@tcp. The ost_connect operation failed with -19
Nov 15 22:17:32 storage10.beowulf.cluster kernel: LustreError: Skipped 167 previous similar messages
Nov 15 22:17:32 storage09.beowulf.cluster kernel: LustreError: 137-5: UUID 'ddn_home-OST0003_UUID' is not available for connect (no target)
Nov 15 22:17:32 storage09.beowulf.cluster kernel: LustreError: Skipped 256 previous similar messages
Nov 15 22:17:32 storage10.beowulf.cluster kernel: LustreError: 20968:0:(ldlm_lib.c:1437:target_send_reply_msg()) @@@ processing error (-19)  req@000001005654f000 x316597/t0 o8-><?>@<?>:-1 lens 240/0 ref 0 fl Interpret:/0/0 rc -19/0
Nov 15 22:17:32 storage09.beowulf.cluster kernel: LustreError: 26503:0:(ldlm_lib.c:1437:target_send_reply_msg()) @@@ processing error (-19)  req@000001011779d800 x316598/t0 o8-><?>@<?>:-1 lens 240/0 ref 0 fl Interpret:/0/0 rc -19/0
Nov 15 22:17:32 storage10.beowulf.cluster kernel: LustreError: 20968:0:(ldlm_lib.c:1437:target_send_reply_msg()) Skipped 167 previous similar messages
Nov 15 22:17:32 storage09.beowulf.cluster kernel: LustreError: 26503:0:(ldlm_lib.c:1437:target_send_reply_msg()) Skipped 255 previous similar messages
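(The "is not available for connect (no target)" errors suggest the
failover OSS had not finished mounting those OST targets at that point;
running

    lctl dl

on storage09/storage10 would show whether ddn_home-OST0002 and
ddn_home-OST0003 were set up yet.)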
Nov 15 22:17:54 mds01.beowulf.cluster kernel: Lustre: 0:0:(watchdog.c:130:lcw_cb()) Watchdog triggered for pid 22663: it was inactive for 200s
Nov 15 22:17:54 mds01.beowulf.cluster kernel: Lustre: 0:0:(watchdog.c:130:lcw_cb()) Skipped 71 previous similar messages
Nov 15 22:17:54 mds01.beowulf.cluster kernel: Lustre: 0:0:(linux-debug.c:168:libcfs_debug_dumpstack()) showing stack for process 22663
Nov 15 22:17:54 mds01.beowulf.cluster kernel: Lustre: 0:0:(linux-debug.c:168:libcfs_debug_dumpstack()) Skipped 71 previous similar messages
Nov 15 22:17:54 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:17:54 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:17:55 mds01.beowulf.cluster kernel: LustreError: 22663:0:(ldlm_request.c:64:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1195164874, 200s ago); not entering recovery in server code, just going back to sleep ns: mds-ddn_data-MDT0000_UUID lock: 00000101100cf700/0xc5c8c9ecb9de524c lrc: 3/0,1 mode: --/EX res: 244974718/2466336747 bits 0x2 rrc: 73 type: IBT flags: 4004030 remote: 0x0 expref: -99 pid 22663
Nov 15 22:17:55 mds01.beowulf.cluster kernel: LustreError: 22663:0:(ldlm_request.c:64:ldlm_expired_completion_wait()) Skipped 71 previous similar messages
Nov 15 22:17:56 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) ### lock callback timer expired: evicting client 39b2f4bc-05a7-bb74-b951-57360554f907@NET_0x200000a8f0620_UUID nid 10.143.6.32@tcp  ns: mds-ddn_data-MDT0000_UUID lock: 00000100b494b700/0xc5c8c9ecb9d4a5cd lrc: 1/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 73 type: IBT flags: 4000030 remote: 0xdeb6ef9ea3b76af3 expref: 12 pid 22711
Nov 15 22:17:56 mds01.beowulf.cluster kernel: Lustre: 22541:0:(mds_reint.c:127:mds_finish_transno()) commit transaction for disconnected client 35c562c4-4ab0-d58a-fdfd-6e84b6b92ca7: rc -2
Nov 15 22:17:56 mds01.beowulf.cluster kernel: LustreError: 22764:0:(ldlm_lockd.c:962:ldlm_handle_enqueue()) ### lock on destroyed export 00000101080a1000 ns: mds-ddn_data-MDT0000_UUID lock: 00000100c169e140/0xc5c8c9ecb9d4a5fe lrc: 2/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 71 type: IBT flags: 4000030 remote: 0x302b57a7545aea0e expref: 6 pid 22764
Nov 15 22:17:56 mds01.beowulf.cluster kernel: LustreError: 22541:0:(service.c:668:ptlrpc_server_handle_request()) request 302956 opc 36 from 12345-10.143.9.36@tcp processed in 303s trans 0 rc -2/-2
Nov 15 22:17:56 mds01.beowulf.cluster kernel: LustreError: 22541:0:(service.c:668:ptlrpc_server_handle_request()) Skipped 2 previous similar messages
Nov 15 22:17:56 mds01.beowulf.cluster kernel: Lustre: 22541:0:(watchdog.c:312:lcw_update_time()) Expired watchdog for pid 22541 disabled after 303.0134s
Nov 15 22:17:56 mds01.beowulf.cluster kernel: Lustre: 22541:0:(watchdog.c:312:lcw_update_time()) Skipped 2 previous similar messages
Nov 15 22:17:56 mds01.beowulf.cluster kernel: LustreError: 22764:0:(ldlm_lockd.c:962:ldlm_handle_enqueue()) Skipped 1 previous similar message
Nov 15 22:18:39 mds01.beowulf.cluster kernel: Lustre: 0:0:(watchdog.c:130:lcw_cb()) Watchdog triggered for pid 22532: it was inactive for 200s
Nov 15 22:18:39 mds01.beowulf.cluster kernel: Lustre: 0:0:(linux-debug.c:168:libcfs_debug_dumpstack()) showing stack for process 22532
Nov 15 22:18:39 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:18:40 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:18:40 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:18:40 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:18:40 mds01.beowulf.cluster kernel: LustreError: 22532:0:(ldlm_request.c:64:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1195164919, 200s ago); not entering recovery in server code, just going back to sleep ns: mds-ddn_data-MDT0000_UUID lock: 0000010125c50b00/0xc5c8c9ecb9e22f98 lrc: 3/1,0 mode: --/CR res: 244974718/2466336747 bits 0x3 rrc: 70 type: IBT flags: 4004000 remote: 0x0 expref: -99 pid 22532
Nov 15 22:18:40 mds01.beowulf.cluster kernel: Lustre: 22841:0:(ldlm_lib.c:742:target_handle_connect()) ddn_data-MDT0000: refuse reconnection from 01ee681c-7bcb-497a-c385-9ad5545fb21d@10.143.9.35@tcp to 0x0000010010331000; still busy with 2 active RPCs
Nov 15 22:18:40 mds01.beowulf.cluster kernel: Lustre: 22841:0:(ldlm_lib.c:742:target_handle_connect()) Skipped 23 previous similar messages
Nov 15 22:18:47 storage10.beowulf.cluster kernel: Lustre: 20900:0:(ldlm_lib.c:514:target_handle_reconnect()) ddn_home-OST0003: 6f70df6b-c936-9ec1-900b-9cf9945466a4 reconnecting
Nov 15 22:18:47 storage09.beowulf.cluster kernel: Lustre: 26593:0:(ldlm_lib.c:514:target_handle_reconnect()) ddn_home-OST0002: 6f70df6b-c936-9ec1-900b-9cf9945466a4 reconnecting
Nov 15 22:18:47 storage10.beowulf.cluster kernel: Lustre: 20900:0:(ldlm_lib.c:514:target_handle_reconnect()) Skipped 27 previous similar messages
Nov 15 22:18:47 storage09.beowulf.cluster kernel: Lustre: 26593:0:(ldlm_lib.c:514:target_handle_reconnect()) Skipped 4 previous similar messages
Nov 15 22:19:06 mds01.beowulf.cluster kernel: LustreError: 22403:0:(mgs_handler.c:467:mgs_handle()) lustre_mgs: operation 101 on unconnected MGS
Nov 15 22:19:06 mds01.beowulf.cluster kernel: LustreError: 22403:0:(mgs_handler.c:467:mgs_handle()) Skipped 75 previous similar messages
Nov 15 22:19:37 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) ### lock callback timer expired: evicting client 12e8711f-85d1-5c4e-de50-732c8d2364cf@NET_0x200000a8f0623_UUID nid 10.143.6.35@tcp  ns: mds-ddn_data-MDT0000_UUID lock: 00000100c169e740/0xc5c8c9ecb9d4a5e9 lrc: 1/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 70 type: IBT flags: 4000030 remote: 0x51ab49d5cb930477 expref: 9 pid 22689
Nov 15 22:19:37 mds01.beowulf.cluster kernel: LustreError: 22688:0:(ldlm_lockd.c:962:ldlm_handle_enqueue()) ### lock on destroyed export 000001010cb9a000 ns: mds-ddn_data-MDT0000_UUID lock: 000001003a03d9c0/0xc5c8c9ecb9d4a62f lrc: 2/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 68 type: IBT flags: 4000030 remote: 0xdeb6ef9ea3b76afa expref: 6 pid 22688
Nov 15 22:21:13 mds01.beowulf.cluster kernel: LustreError: 22402:0:(ldlm_lib.c:1437:target_send_reply_msg()) @@@ processing error (-107)  req@0000010129af3450 x173557/t0 o101-><?>@<?>:-1 lens 232/0 ref 0 fl Interpret:/0/0 rc -107/0
Nov 15 22:21:13 mds01.beowulf.cluster kernel: LustreError: 22402:0:(ldlm_lib.c:1437:target_send_reply_msg()) Skipped 127 previous similar messages
Nov 15 22:21:18 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) ### lock callback timer expired: evicting client d603b717-3f73-4246-443a-3e7064c3005f@NET_0x200000a8f061f_UUID nid 10.143.6.31@tcp  ns: mds-ddn_data-MDT0000_UUID lock: 000001003a03d3c0/0xc5c8c9ecb9d4a644 lrc: 1/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 67 type: IBT flags: 4000030 remote: 0x1be3edd886aa31e7 expref: 6 pid 22412
Nov 15 22:21:18 mds01.beowulf.cluster kernel: LustreError: 22913:0:(service.c:668:ptlrpc_server_handle_request()) request 222619 opc 36 from 12345-10.143.8.10@tcp processed in 505s trans 0 rc -2/-2
Nov 15 22:21:18 mds01.beowulf.cluster kernel: LustreError: 22585:0:(ldlm_lockd.c:962:ldlm_handle_enqueue()) ### lock on destroyed export 0000010107c58000 ns: mds-ddn_data-MDT0000_UUID lock: 00000100179d3840/0xc5c8c9ecb9d4a660 lrc: 2/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 65 type: IBT flags: 4000030 remote: 0x42272e0e580baa5f expref: 6 pid 22585
Nov 15 22:21:18 mds01.beowulf.cluster kernel: LustreError: 22585:0:(ldlm_lockd.c:962:ldlm_handle_enqueue()) Skipped 1 previous similar message
Nov 15 22:21:18 mds01.beowulf.cluster kernel: Lustre: 22585:0:(watchdog.c:312:lcw_update_time()) Expired watchdog for pid 22585 disabled after 505.0263s
Nov 15 22:21:18 mds01.beowulf.cluster kernel: Lustre: 22585:0:(watchdog.c:312:lcw_update_time()) Skipped 6 previous similar messages
Nov 15 22:21:18 mds01.beowulf.cluster kernel: LustreError: 22913:0:(service.c:668:ptlrpc_server_handle_request()) Skipped 7 previous similar messages
Nov 15 22:21:59 mds01.beowulf.cluster kernel: Lustre: 22600:0:(ldlm_lib.c:742:target_handle_connect()) ddn_data-MDT0000: refuse reconnection from 01ee681c-7bcb-497a-c385-9ad5545fb21d@10.143.9.35@tcp to 0x0000010010331000; still busy with 2 active RPCs
Nov 15 22:21:59 mds01.beowulf.cluster kernel: Lustre: 22600:0:(ldlm_lib.c:742:target_handle_connect()) Skipped 21 previous similar messages
Nov 15 22:22:59 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) ### lock callback timer expired: evicting client 3e834863-8f0a-0b4f-cd38-63b5998906ed@NET_0x200000a8f081d_UUID nid 10.143.8.29@tcp  ns: mds-ddn_data-MDT0000_UUID lock: 0000010038c9a6c0/0xc5c8c9ecb9d4a6c9 lrc: 1/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 58 type: IBT flags: 4000030 remote: 0x83eb55f7f897e238 expref: 7 pid 22596
Nov 15 22:22:59 mds01.beowulf.cluster kernel: LustreError: 22619:0:(ldlm_lockd.c:962:ldlm_handle_enqueue()) ### lock on destroyed export 000001005b0fc000 ns: mds-ddn_data-MDT0000_UUID lock: 0000010016de9700/0xc5c8c9ecb9d4a6fa lrc: 2/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 56 type: IBT flags: 4000030 remote: 0x83eb55f7f897e23f expref: 3 pid 22619
Nov 15 22:22:59 mds01.beowulf.cluster kernel: LustreError: 22619:0:(ldlm_lockd.c:962:ldlm_handle_enqueue()) Skipped 3 previous similar messages
Nov 15 22:23:43 mds01.beowulf.cluster kernel: Lustre: 0:0:(watchdog.c:130:lcw_cb()) Watchdog triggered for pid 22415: it was inactive for 200s
Nov 15 22:23:43 mds01.beowulf.cluster kernel: Lustre: 0:0:(linux-debug.c:168:libcfs_debug_dumpstack()) showing stack for process 22415
Nov 15 22:23:43 mds01.beowulf.cluster kernel:        <ffffffffa044cf57>{:ptlrpc:lustre_msg_get_flags+87}
Nov 15 22:23:43 mds01.beowulf.cluster kernel:        <ffffffffa044fd30>{:ptlrpc:lustre_swab_ldlm_request+0}
Nov 15 22:23:44 mds01.beowulf.cluster kernel:        <ffffffffa044e890>{:ptlrpc:lustre_swab_ptlrpc_body+0}
Nov 15 22:23:44 mds01.beowulf.cluster kernel:        <ffffffffa044c39d>{:ptlrpc:lustre_swab_buf+205} <ffffffffa033660c>{:lnet:LNetMDAttach+764}
Nov 15 22:23:44 mds01.beowulf.cluster kernel: LustreError: 22415:0:(ldlm_request.c:64:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1195165223, 200s ago); not entering recovery in server code, just going back to sleep ns: mds-ddn_data-MDT0000_UUID lock: 000001009f626540/0xc5c8c9ecba19d126 lrc: 3/1,0 mode: --/CR res: 244974718/2466336747 bits 0x3 rrc: 55 type: IBT flags: 4004000 remote: 0x0 expref: -99 pid 22415
Nov 15 22:24:40 mds01.beowulf.cluster kernel: LustreError: 0:0:(ldlm_lockd.c:210:waiting_locks_callback()) ### lock callback timer expired: evicting client 06066f18-5c49-e5ac-f16f-352689705c07@NET_0x200000a8f0637_UUID nid 10.143.6.55@tcp  ns: mds-ddn_data-MDT0000_UUID lock: 00000100b83c3b80/0xc5c8c9ecb9d4a74e lrc: 1/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 58 type: IBT flags: 4000030 remote: 0xa590d62eb325364 expref: 7 pid 22639
Nov 15 22:24:40 mds01.beowulf.cluster kernel: Lustre: 22754:0:(mds_reint.c:127:mds_finish_transno()) commit transaction for disconnected client b1ee96c8-ce6d-50fd-78f8-bb05fdb04428: rc -2
Nov 15 22:24:40 mds01.beowulf.cluster kernel: LustreError: 22803:0:(ldlm_lockd.c:962:ldlm_handle_enqueue()) ### lock on destroyed export 00000100cf426000 ns: mds-ddn_data-MDT0000_UUID lock: 00000100b74d47c0/0xc5c8c9ecb9d4a78d lrc: 2/0,0 mode: CR/CR res: 244974718/2466336747 bits 0x3 rrc: 51 type: IBT flags: 4000030 remote: 0x10b6987b3792e705 expref: 6 pid 22803
Nov 15 22:26:13 mds01.beowulf.cluster kernel: Lustre: 22483:0:(ldlm_lib.c:514:target_handle_reconnect()) ddn_data-MDT0000: 8ed57292-170f-3c06-bcd2-140ca904ffa4 reconnecting
Nov 15 22:26:13 mds01.beowulf.cluster kernel: Lustre: 22483:0:(ldlm_lib.c:514:target_handle_reconnect()) Skipped 67 previous similar messages


Thanks for your help!

Wojciech Turek



