[Lustre-discuss] Which NID to use?

Chan Ching Yu, Patrick cychan at clustertech.com
Wed Feb 26 16:14:11 PST 2014


  

Hi, 

I'm always confused about which NID is used when multiple LNET
interfaces are available on both the server and the client.
Someone told me that the connection between a Lustre client and an OSS
is determined by which NID of the MGS is specified when mounting.

To clarify this, I set up a VM environment to verify it.

In the test environment there is only one MDS, one OSS and one client.
Each of them has two Ethernet interfaces.
All nodes have the following LNET configuration in modprobe.d:

options lnet networks=tcp0(eth0),tcp1(eth1)
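
As a side note, I believe the same interface-to-network mapping could also
be written with an ip2nets rule instead of a static networks= list. This is
only a sketch and I have not tested it in this setup:

options lnet ip2nets="tcp0(eth0) 192.168.122.*; tcp1(eth1) 192.168.100.*"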

[root@mds1 ~]# lctl list_nids
192.168.122.240@tcp
192.168.100.100@tcp1

[root@oss1 ~]# lctl list_nids
192.168.122.194@tcp
192.168.100.101@tcp1

[root@client ~]# lctl list_nids
192.168.122.70@tcp
192.168.100.102@tcp1
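
To double check that both LNET networks are reachable from the client, I
think lctl ping can be used against the OSS NIDs (I did not capture the
output here):

[root@client ~]# lctl ping 192.168.122.194@tcp0
[root@client ~]# lctl ping 192.168.100.101@tcp1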

[root@mds1 ~]# mkfs.lustre --mgs --mdt --fsname=data --index=0 --reformat /dev/sda
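
For completeness, the MDT is then mounted on the MDS before the other
steps; the mount point below is just an example:

[root@mds1 ~]# mkdir -p /mnt/mdt
[root@mds1 ~]# mount -t lustre /dev/sda /mnt/mdt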

When formatting the OST, the tcp0 NID of the MGS is specified, so tcp0 is
used for the connection between the OST and the MGS.

[root@oss1 ~]# mkfs.lustre --ost --mgsnode=192.168.122.240@tcp0 --fsname=data --index=0 --reformat /dev/sda
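
If I wanted the OST to know about both NIDs of the MGS, I believe they can
be listed comma-separated in a single --mgsnode option (comma-separated
NIDs belong to the same node), e.g.:

[root@oss1 ~]# mkfs.lustre --ost --mgsnode=192.168.122.240@tcp0,192.168.100.100@tcp1 --fsname=data --index=0 --reformat /dev/sda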

On the Lustre client, I intentionally mount the filesystem using the tcp1
NID of the MGS:

[root@client ~]# mount | grep lustre
192.168.100.100@tcp1:/data on /lustre type lustre (rw)
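
The mount command itself was the usual client mount against the MGS NID
(reconstructed here from the mount output above):

[root@client ~]# mount -t lustre 192.168.100.100@tcp1:/data /lustre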

Now I dd a file on the Lustre filesystem. You can see that tcp0 (eth0) is
used when writing to the OST.
Why?

[root@client lustre]# ifconfig eth0 | grep TX
 TX packets:224400 errors:0 dropped:0 overruns:0 carrier:0
 RX bytes:4450732 (4.2 MiB)  TX bytes:1064624772 (1015.3 MiB)

[root@client lustre]# dd if=/dev/zero of=testfile bs=1M count=500

[root@client lustre]# ifconfig eth0 | grep TX
 TX packets:337851 errors:0 dropped:0 overruns:0 carrier:0
 RX bytes:5578294 (5.3 MiB)  TX bytes:1596746050 (1.4 GiB)
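
One way I can think of to see which NID the client actually uses for the
OST is to look at the OSC import state on the client; the exact parameter
and field names may differ between Lustre versions, so this is only a
sketch:

[root@client lustre]# lctl get_param osc.data-OST0000*.import | grep -E 'current_connection|failover_nids'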

Regards, 

Patrick 

