[Lustre-discuss] Lustre behaviour when multiple network paths are available?

Klaus Steden klaus.steden at thomson.net
Tue Feb 12 10:32:56 PST 2008


Hello Andreas,

I figured out what the issue was ...

My lnet configuration looked like this:

-- cut --
options lnet networks="tcp0(eth0),tcp1(bond0)"
-- cut --

So all of the traffic in the cluster was being routed over tcp0, i.e. the
single eth0 link. I reconfigured my MGS parameters and changed my lnet config
to use 'tcp0(bond0)', and now I'm seeing the results I expected.
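
In other words, the modprobe line now reads:

-- cut --
options lnet networks="tcp0(bond0)"
-- cut --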

I get about 350 MB/s from two OSSes -- each with a two-link LACP bond (4 GigE
links total, so about 75% utilization).
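
For reference, that 75% figure is just rough math, assuming roughly 117 MB/s
of usable TCP payload per GigE link:

-- cut --
4 links x ~117 MB/s  = ~470 MB/s aggregate
350 MB/s / 470 MB/s  = ~75% utilization
-- cut --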

cheers,
Klaus

On 2/7/08 11:16 PM, "Andreas Dilger" <adilger at sun.com> did etch on stone
tablets:

> On Feb 07, 2008  15:05 -0800, Klaus Steden wrote:
>> When Lustre is configured in an environment where there are multiple paths
>> to the same destination of the same length (i.e. two paths, each one hop
>> away), which path(s) will be used for sending and receiving data?
> 
> That depends on how you configure it in /etc/modprobe.conf.
> 
>> I have my cluster configured with two OSTs with two GigE NICs in each. I am
>> seeing identical performance metrics when I use LACP to aggregate, and when
>> I use two separate network addresses to connect them (ditto on the client
>> side).
>> 
>> So what I'm wondering is if I've hit the peak performance of my disk array,
>> or if Lustre is just using only one path. The numbers I'm seeing in both
>> scenarios indicate 95% utilization of GigE, times two.
> 
> I'm not sure I understand - if you are getting aggregate performance that
> is 190% of a single GigE from the client then you _have_ to be using both
> paths (assuming there are two GigE NICs in the client, and not four).
> 
>> How can I get Lustre to use both paths simultaneously?
> 
> ifconfig should show you clearly via TX/RX byte counts which NICs are
> being used in each configuration.
> 
> Cheers, Andreas
> --
> Andreas Dilger
> Sr. Staff Engineer, Lustre Group
> Sun Microsystems of Canada, Inc.
> 
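
For anyone else chasing this: as Andreas suggests, the per-interface byte
counters make it obvious which links are actually carrying traffic. Something
along these lines (eth0/bond0 being the interfaces from my lnet config above):

-- cut --
ifconfig eth0  | grep bytes    # RX/TX byte counts on the single link
ifconfig bond0 | grep bytes    # RX/TX byte counts on the LACP bond
-- cut --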



