[Lustre-discuss] Cannot send after transport endpoint shutdown (-108)

Charles Taylor taylor at hpc.ufl.edu
Tue Mar 4 12:41:04 PST 2008


We've seen this before as well.  Our experience is that the
obd_timeout is far too small for large clusters (ours is 400+
nodes), and the only way we avoid these errors is by setting it to
1000, which seems high to us but appears to work and puts an end to
the transport endpoint shutdowns.

On the MDS....

lctl conf_param srn.sys.timeout=1000
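
(In that command "srn" is our filesystem name, so substitute your
own.)  If you want the new value on an already-mounted node right
away, writing it straight into the same proc file should also work --
this is just a sketch, so verify it against your version:

echo 1000 > /proc/sys/lustre/timeout

That echo does not survive a remount, so the conf_param on the MDS is
still the setting you want to keep.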

You may have to do this on the OSSs as well unless you restart
them, but I could be wrong on that.  You should check it everywhere
with...

cat /proc/sys/lustre/timeout
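
With 400+ nodes it helps to run that check in parallel.  Something
along these lines should do it -- pdsh and the host list
"node[001-400]" are assumptions on my part, so adjust for your own
setup:

pdsh -w node[001-400] cat /proc/sys/lustre/timeout | dshbak -c

dshbak -c collapses identical output, so any node still showing the
old timeout stands out.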


On Mar 4, 2008, at 3:31 PM, Aaron S. Knister wrote:

> This morning I've had both my infiniband and tcp lustre clients
> hiccup.  They are evicted from the server, presumably as a result of
> their high load and consequent timeouts.  My question is: why don't
> the clients re-connect?  The infiniband and tcp clients both give
> the following message when I type "df": Cannot send after
> transport endpoint shutdown (-108).  I've been battling with this on
> and off for a few months.  I've upgraded my infiniband switch
> firmware, and all the clients and servers are running the latest
> version of lustre and the lustre-patched kernel.  Any ideas?
>
> -Aaron



