[Lustre-discuss] Fwd: Reg /// OSS rebooted automatically

Jeff Johnson jeff.johnson at aeoncomputing.com
Tue Dec 21 03:51:35 PST 2010


Daniel,

In the future you might want to consider posting a few relevant log entries rather than the entire log file. =)

Was this log from the OSS that you say was rebooting, or from your MDS node? I would look at the log files of the OSS node(s) that serve OST0006 and OST0007 and see if there are any RAID errors; it could also be a network problem. The rc = -19 (ENODEV, "no target") errors below mean a client asked that node for a target it doesn't currently have set up, which would fit either a failed OST mount after a reboot or a configuration/connectivity problem.
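
A rough sketch of what I'd run on the OSS, assuming Lustre 1.8-era tooling (the NID below is a placeholder -- substitute your OSS's real NID):

    # List the Lustre devices configured on this node; the OSTs it
    # serves (e.g. dan3-OST0006) should appear here with status UP.
    lctl dl

    # Scan the kernel log for disk/RAID errors on the OSS.
    dmesg | grep -iE 'raid|i/o error|sd[a-z]'

    # Verify LNET connectivity to the OSS from a client or the MDS.
    lctl ping 192.168.1.10@tcp

If the OSTs don't show up in 'lctl dl' after the reboot, that would explain the "not available for connect" messages.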

Morning is coming and one of the developers will likely respond to this with more suggestions.

--Jeff

---mobile signature---
Jeff Johnson - Aeon Computing
jeff.johnson at aeoncomputing.com
m: 619-204-9061

On Dec 20, 2010, at 23:13, Daniel Raj <danielraj2006 at gmail.com> wrote:

> Dec 19 04:19:49 cluster kernel: Lustre: 23300:0:(ldlm_lib.c:575:target_handle_reconnect()) dan3-OST0006: d957783f-e60b-07b0-2c86-ecfbc7eb57b6 reconnecting
> Dec 19 04:19:49 cluster kernel: Lustre: 23300:0:(ldlm_lib.c:575:target_handle_reconnect()) Skipped 4 previous similar messages
> Dec 19 04:30:05 cluster kernel: Lustre: 23308:0:(ldlm_lib.c:575:target_handle_reconnect()) dan3-OST0006: d957783f-e60b-07b0-2c86-ecfbc7eb57b6 reconnecting
> Dec 19 04:30:05 cluster kernel: LustreError: 137-5: UUID 'cluster-ost7_UUID' is not available  for connect (no target)
> Dec 19 04:30:05 cluster kernel: LustreError: 23290:0:(ldlm_lib.c:1892:target_send_reply_msg()) @@@ processing error (-19)  req at ffff8103fd722c00 x1355442914715019/t0 o8-><?>@<?>:0/0 lens 368/0 e 0 to 0 dl 1292713305 ref 1 fl Interpret:/0/0 rc -19/0