[Lustre-discuss] Modifying Lustre network (good practices)

Olivier Hargoaa olivier.hargoaa at bull.fr
Thu May 20 08:39:20 PDT 2010


Nate Pearlstein wrote:
> Which bonding method are you using?  Has the performance always been
> this way?  Depending on which bonding type you are using and the network
> hardware involved you might see the behavior you are describing.
> 

Hi,

Here is our bonding configuration:

On the Linux side:

mode=4                     - to use 802.3ad (LACP)
miimon=100                 - to set the link check interval (ms)
xmit_hash_policy=layer2+3  - to set the XOR hashing method
lacp_rate=fast             - to set the LACPDU tx rate to request (slow=30s, fast=1s)

On the Ethernet switch side, load balancing is configured as:
# port-channel load-balance src-dst-mac
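
For reference, on the Linux side these options are applied through the
bonding driver; a minimal sketch in RHEL-style ifcfg syntax (the file
path and layout are assumptions, adapt to your distribution):

  # /etc/sysconfig/network-scripts/ifcfg-bond0  (path assumed)
  DEVICE=bond0
  BOOTPROTO=none
  ONBOOT=yes
  BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer2+3 lacp_rate=fast"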

Thanks.

> 
> On Thu, 2010-05-20 at 16:27 +0200, Olivier Hargoaa wrote:
>> Dear All,
>>
>> We have a cluster holding critical data on Lustre. On this cluster there
>> are three networks on each Lustre server and client: one Ethernet network
>> for administration (eth0), and two other Ethernet networks configured in
>> bonding (bond0: eth1 & eth2). On Lustre we get poor read performance and
>> good write performance, so we decided to modify the Lustre network in
>> order to see whether the problem comes from the network layer.
>>
>> Currently the Lustre network is bond0. We want to set it to eth0, then
>> eth1, then eth2, and finally back to bond0 in order to compare
>> performance.
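>>
>> Concretely, the Lustre network is selected by the lnet "networks" option
>> in the modprobe configuration, so each test only changes that one line; a
>> sketch, using the interface names above:
>>
>>   # current setting
>>   options lnet networks="tcp0(bond0)"
>>   # first test (then eth1, eth2, and finally back to bond0)
>>   options lnet networks="tcp0(eth0)"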
>>
>> Therefore, we'll perform the following steps: unmount the filesystem,
>> reformat the MGS, change the lnet options in the modprobe file, start
>> the new MGS server, and finally modify our OSTs and MDT with
>> tunefs.lustre, setting the failover and new MGS NIDs using the
>> "--erase-params" and "--writeconf" options.
>>
>> We tested this successfully on a test filesystem, but we read in the
>> manual that it can be really dangerous. Do you agree with this
>> procedure? Do you have any advice or experience with this kind of
>> operation? What is the danger?
>>
>> Regards.
>>
> 



