[Lustre-discuss] Adding a new client on a different network

Nathan Rutman Nathan.Rutman at Sun.COM
Wed Nov 14 19:48:53 PST 2007


Your client is trying to talk to 172.16.128.252@tcp, but the server doesn't think that
is one of its NIDs.
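
If I remember the socklnd behaviour right, tcp0(eth0,bond0) still brings up a single
tcp NI whose NID is taken from the first interface listed (eth0, on the .129 network),
so the server never identifies itself as 172.16.128.252@tcp. A sketch of one way
around that, using the interfaces from your message, is to give each segment its own
LNET network:

-- modprobe.conf --
# eth0 is on 172.16.129.0/24, bond0 is on 172.16.128.0/24
options lnet networks="tcp0(eth0),tcp1(bond0)"
-- modprobe.conf --

A client on the 128 segment would then mount against 172.16.128.252@tcp1, and you'd
need to regenerate the config logs (tunefs.lustre --writeconf) so the servers
register the new NID.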

Some things to try:
cat /proc/sys/lnet/nis on both the client and the servers
lctl ping back and forth between them
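
For example, with the addresses from your messages:

-- client --
cat /proc/sys/lnet/nis
lctl ping 172.16.128.252@tcp
-- client --

-- mds --
cat /proc/sys/lnet/nis
lctl ping 172.16.128.100@tcp
-- mds --

If the server's nis output shows no NID on 172.16.128.0/24, that's your smoking gun.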



Klaus Steden wrote:
> Hi there,
>
> I offered to help one of our network engineers test out some equipment by
> connecting a new Linux node to our existing CFS setup and blasting data back
> and forth (I figured it was about as high-performance a solution as I could
> provide :-)
>
> There are a couple of wrinkles that I can't quite figure out, and I can't find
> anything on Google that seems to fit the bill.
>
> Basically, I've had Lustre working beautifully on a closed network segment
> (172.16.129.0/24), and want to expand it to include another closed segment,
> 172.16.128.0/24. I've connected my MDS and OSS nodes into this new segment,
> connectivity is good, software/kernels/etc. are all at the same major and
> minor revision.
>
> However, when I try to connect up the new client, here's what happens:
>
> -- client --
> mount -t lustre 172.16.128.252@tcp0:/lustre /mnt/lustre
> mount.lustre: mount 172.16.128.252@tcp0:/lustre at /mnt/lustre failed:
> Cannot send after transport endpoint shutdown
> -- client --
>
> And on the MDS side, here's what I see in the output of 'dmesg':
>
> -- mds --
> LustreError: 120-3: Refusing connection from 172.16.128.100 for
> 172.16.128.252@tcp: No matching NI
> -- mds --
>
> I was initially using this in my modprobe.conf:
>
> -- modprobe.conf --
> options lnet networks=tcp0(eth0,bond0)
> -- modprobe.conf --
>
> where 'eth0' is attached to 172.16.129.0/24, and 'bond0' is attached to
> 172.16.128.0/24.
>
> What's happening here, and where do I look for information on how to fix it?
>
> When I originally assembled the file system, I had only specified nodes on
> the 172.16.129.0/24 network in the various MGS/MGC parameters.
>
> Any help would be greatly, greatly appreciated!
>
> thanks,
> Klaus



