[Lustre-discuss] Lustre read performance decay when OSSes are assigned to two different subnets
Hammitt, Charles Allen
chammitt at email.unc.edu
Thu Mar 15 05:50:46 PDT 2012
Networking overhead; VLAN routing, perhaps: either 1) the extra hop through a routing device adds latency, or 2) the switch handling the routing itself is overburdened and still introduces latency. Latency is the storage and network I/O bandwidth killer.
I’m willing to bet two things:
1) Changing your stripe count from 2 to 1 will yield bandwidth similar to diagram 2 [54.3 MB/s], even with the layout of diagram 1 [separate nets].
2) If all your OSS/MDS and client nodes were on the same single VLAN/network, you'd see better performance than diagram 2's 54.3 MB/s throughput.
So, drop classful subnets; go with CIDR/supernetted networks to get the IP space you need and eliminate the extra routing latency.
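A sketch of the two suggestions above, assuming a standard Lustre client setup (the mount point, interface name, and addresses are illustrative, not from the original post):

```shell
# Suggestion 1: drop the stripe count from 2 to 1 on a test directory;
# files created under it inherit the single-OST layout.
lfs setstripe -c 1 /mnt/lustre/testdir

# Confirm the layout of a newly created file.
lfs getstripe /mnt/lustre/testdir/newfile

# Suggestion 2: supernet so clients and OSSes share one LNET network
# with no router in between, e.g. a /23 covering 10.0.2.0-10.0.3.255.
# /etc/modprobe.d/lustre.conf on each node:
#   options lnet networks=tcp0(eth0)
ip addr add 10.0.2.2/23 dev eth0
```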
Storage Systems Specialist
ITS Research Computing @
The University of North Carolina-CH
From: lustre-discuss-bounces at lists.lustre.org [mailto:lustre-discuss-bounces at lists.lustre.org] On Behalf Of zhengfeng
Sent: Thursday, March 15, 2012 12:11 AM
To: lustre-discuss at lists.lustre.org
Subject: [Lustre-discuss] Lustre read performance decay when OSSes are assigned to two different subnets
We have run into a problem with Lustre read performance decaying when the OSSes are assigned to two different subnets.
The setup is described in the following diagrams:
diagram 1, OSS in different subnets:
Client (subnet 10.0.1.2)
For diagram 1, we placed the client, OSS1, and OSS2 in three different subnets; the switch in use is able to forward all packets.
We used dd to test read/write performance, writing/reading data to/from OSS1 and OSS2 at the same time:
[root at client client]# time dd if=test2 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 53.5922 seconds, 39.1 MB/s
diagram 2, OSS in same subnet:
Client (subnet 10.0.1.2)
OSS1, OSS2 (10.0.2.2, 10.0.2.3, in the same subnet)
For diagram 2, we assigned OSS1 and OSS2 to the same subnet, then ran the test:
[root at client219 client]# time dd of=/dev/null if=test1 bs=1M
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 193.07 seconds, 54.3 MB/s
With the OSSes in different subnets, read performance is 39.1 MB/s, while with the OSSes in the same subnet it is 54.3 MB/s; the performance decays significantly.
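As a quick sanity check on the reported figures, dd computes its rate as total bytes copied divided by elapsed seconds, counting 1 MB as 10^6 bytes:

```python
# Reproduce dd's reported throughput: bytes / seconds, with MB = 10**6 bytes.
for nbytes, secs in [(2097152000, 53.5922),    # diagram 1: different subnets
                     (10485760000, 193.07)]:   # diagram 2: same subnet
    print(f"{nbytes / secs / 1e6:.1f} MB/s")
# → 39.1 MB/s, then 54.3 MB/s
```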
Why does performance decay when Lustre uses different subnets?
Has anyone met this problem before? Many thanks for your answers and advice.