[Lustre-discuss] Modifying Lustre network (good practices)

Nate Pearlstein npearl at sgi.com
Thu May 20 08:03:48 PDT 2010


Which bonding method are you using?  Has the performance always been
this way?  Depending on which bonding type you are using and the network
hardware involved, you might see the behavior you are describing.


On Thu, 2010-05-20 at 16:27 +0200, Olivier Hargoaa wrote:
> Dear All,
> 
> We have a cluster whose Lustre filesystem holds critical data. On this 
> cluster there are three networks on each Lustre server and client: one 
> Ethernet network for administration (eth0), and two others configured 
> in bonding (bond0: eth1 & eth2). On Lustre we get poor read performance 
> but good write performance, so we decided to modify the Lustre network 
> in order to see whether the problem comes from the network layer.
> 
> Currently the Lustre network is bond0. We want to set it to eth0, then 
> eth1, then eth2, and finally back to bond0, in order to compare 
> performance.
> 
> Therefore, we'll perform the following steps: we will unmount the 
> filesystem, reformat the MGS, change the lnet options in the modprobe 
> configuration file, start the new MGS, and finally update our OSTs and 
> MDT with tunefs.lustre, setting the failover and new MGS NIDs using the 
> "--erase-params" and "--writeconf" options (both changes are sketched 
> below).
> 
> We tested this successfully on a test filesystem, but we read in the 
> manual that it can be really dangerous. Do you agree with this 
> procedure? Do you have any advice or best practices for this kind of 
> request? What is the danger?
> 
> Regards.
> 
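For the modprobe step above, the change is a single lnet options line.
A minimal sketch, assuming the usual tcp0 network name and a modprobe
configuration file such as /etc/modprobe.conf (the exact file varies by
distribution):

    # current configuration: LNET runs over the bond
    options lnet networks=tcp0(bond0)

    # test configuration: LNET pinned to one interface
    options lnet networks=tcp0(eth0)

Keep in mind that the server NIDs follow the IP address of the chosen
interface, which is why the writeconf is needed at all, and the lustre
modules must be fully unloaded and reloaded on every node before the
new option takes effect.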
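For the tunefs.lustre step, a minimal sketch on one target; the NIDs
(10.0.0.1@tcp0 for the new MGS, 10.0.0.2@tcp0 for its failover partner)
and the device path are placeholders:

    # run on the MDT and each OST while the filesystem is stopped;
    # --erase-params drops the old parameter list (including the old
    # MGS NID) and --writeconf regenerates the configuration logs on
    # the next mount
    tunefs.lustre --erase-params \
        --mgsnode=10.0.0.1@tcp0 \
        --failnode=10.0.0.2@tcp0 \
        --writeconf /dev/sdX

The danger the manual warns about is mostly --writeconf itself: the
regenerated logs lose anything set with lctl conf_param (OST pool
definitions included), and a mistyped NID leaves the targets unable to
reach the MGS until tunefs.lustre is rerun with the correct value.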

-- 
Sent from my wired giant hulking workstation

Nate Pearlstein - npearl at sgi.com - Product Support Engineer




