[lustre-discuss] Speeding up recovery

Indivar Nair indivar.nair at techterra.in
Tue Jul 21 08:19:10 PDT 2015


> 1) You mention they are on the same host.  Are they on separate partitions
> already?
>  As you have failover configured I'm assuming that both servers can see the
> storage. In which case this will not be too difficult (depending on your
> failover software of course) if they have separate partitions.

Yes, they are separate DRBD devices, so mounting either one on the other
server is easy.
But how do I tell the OSSs that the MGS or MDT has moved to a new IP/host?
And how do I reconfigure failover on the device I move?
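
From reading the manual, I believe the move itself would be the standard
writeconf procedure. A rough sketch of what I have in mind (the NIDs,
mount points and DRBD device names below are made up for illustration;
please correct me if this is wrong):

    # Unmount all clients and stop all targets first.
    # On the server that will now host the MGS:
    tunefs.lustre --writeconf /dev/drbd0
    mount -t lustre /dev/drbd0 /mnt/mgs

    # On the MDT server, point the target at the MGS's new NID and
    # re-declare its failover partner. --erase-params drops all stored
    # settings, so every parameter we still need must be given again:
    tunefs.lustre --writeconf --erase-params \
        --mgsnode=192.168.1.2@o2ib \
        --failnode=192.168.1.1@o2ib /dev/drbd1
    mount -t lustre /dev/drbd1 /mnt/mdt

    # Repeat for each OST on the OSSs (with each OSS pair's own
    # --failnode), then remount the clients.

Or would lctl replace_nids (if our Lustre version is new enough to have
it) be the safer route?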

> 2) So today the Linux clients use the native client? And you are planning on
> shifting this to use the NFS service from a gateway node, is that correct?
>    How do they connect to the Lustre servers today? QDR IB?
>  How will they reach the gateway nodes after this change? NFS over IB? NFS
> over RDMA?

Yes, the Linux hosts use the Lustre native client; the Windows hosts
connect via the gateway.
The gateway nodes use InfiniBand + RDMA to connect to Lustre.
I am thinking of moving the Linux native clients to NFS as well,
connecting them through this gateway.
All client nodes are on a 1GbE network.
InfiniBand is used only to connect the gateway to Lustre.
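
On the gateway side, the plan is a plain kernel NFS re-export of the
Lustre mount. A sketch only (the export path, fsid and client subnet
are made up, and I still need to verify the right export options for
re-exporting Lustre):

    # /etc/exports on each gateway node, with Lustre already
    # mounted at /mnt/lustre:
    /mnt/lustre  10.0.0.0/24(rw,async,no_subtree_check,fsid=101)

    # On a Linux client, over the 1GbE network:
    mount -t nfs gateway1:/mnt/lustre /mnt/lustre

That would leave the two gateways as the only real Lustre clients.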

Regards,


Indivar Nair


On Tue, Jul 21, 2015 at 8:29 PM, Wahl, Edward <ewahl at osc.edu> wrote:

>  1) You mention they are on the same host.  Are they on separate
> partitions already?
>  As you have failover configured I'm assuming that both servers can see
> the storage. In which case this will not be too difficult (depending on
> your failover software of course) if they have separate partitions.
>
>
> 2) So today the Linux clients use the native client? And you are planning on
> shifting this to use the NFS service from a gateway node, is that correct?
>    How do they connect to the Lustre servers today? QDR IB?
>  How will they reach the gateway nodes after this change? NFS over IB? NFS
> over RDMA?
>
>
> Ed
>
>  ------------------------------
> *From:* lustre-discuss [lustre-discuss-bounces at lists.lustre.org] on
> behalf of Indivar Nair [indivar.nair at techterra.in]
> *Sent:* Tuesday, July 21, 2015 4:27 AM
> *To:* lustre-discuss; hpdd-discuss
> *Subject:* [lustre-discuss] Speeding up recovery
>
>    Hi ...,
>
>  Currently, failover and recovery take a very long time in our
> setup, almost 20 minutes. We would like to make it as fast as possible.
>
>  I have two queries regarding this -
>
> 1.
> ===================================================
>  The MGS and MDT are on the same host.
>
>  We do however have a passive stand-by server for the MGS/MDT server,
> which only mounts these partitions in case of a failure.
>
>  *Current Setup*
>  Server A: MGS+MDT
>  Server B: Failover MGS+MDT
>
>  I was wondering whether I can now move the MGS or MDT Partition to the
> standby server (so that imperative recovery works properly) -
>
>  *New Setup*
>  Server A: MDT & *Failover MGS*
>  Server B: *MGS* & Failover MDT
>
> *OR *
> Server A: *MGS* & Failover MDT
>  Server B: MDT & *Failover MGS*
>
>  i.e.
>
> *Can I separate the MDT and MGS partitions onto different machines
> without formatting or reinstalling Lustre?*
> ===================================================
>
> 2.
> ===================================================
>  This storage is used by around 150 Workstations and 150 Compute (Render)
> Nodes.
>
>  Out of these 150 workstations, around 30 - 40 are MS Windows. The MS
> Windows clients access the storage through a 2-node Samba Gateway Cluster.
>
>  The Gateway Nodes are connected to the storage through a QDR Infiniband
> Network.
>
>  We were thinking of adding NFS Service to the Samba Gateway nodes, and
> reconfiguring the Linux clients to connect via this gateway.
>
>  This will bring the number of direct Lustre clients down to just 2 nodes.
>  *So, will having only 2 clients improve the failover-recovery time?*
>  ===================================================
>
>  Is there anything else we can do to speed up recovery?
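>
>  One candidate I found in the manual is the per-filesystem recovery
> timers. A sketch of what I understand, assuming our filesystem is
> named "lustre" (please correct me if these are the wrong knobs):
>
>     # Run on the MGS; shortens the window the servers wait for
>     # missing clients during recovery (values in seconds, arbitrary):
>     lctl conf_param lustre.sys.recovery_time_soft=120
>     lctl conf_param lustre.sys.recovery_time_hard=300
>
>  My understanding is that recovery also ends early once every known
> client has reconnected, which is why I am asking about reducing the
> client count above.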
>
>  Regards,
>
>
>  Indivar Nair
>