[lustre-discuss] Adding a new NID
Ben Evans
bevans at cray.com
Sat Jan 6 14:03:08 PST 2018
This seems to be the problem lnet gateways were meant to solve.
-Ben Evans
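A rough sketch of that approach (interface names and NIDs below are placeholders, not taken from Darby's configuration): put a router node with an interface on both IB fabrics between the new cluster and the servers, and point the new clients at it through the lnet module options, e.g.

# router node: interfaces on both fabrics, LNet forwarding enabled
options lnet networks="o2ib0(ib0),o2ib1(ib1)" forwarding="enabled"

# new-cluster clients: only on o2ib1, reach the o2ib0 servers through the router
options lnet networks="o2ib1(ib0)" routes="o2ib0 10.150.100.250@o2ib1"

The servers would get the mirror-image routes entry back to o2ib1 via the router's o2ib0 NID, so no new NID has to be added to the targets at all.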
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of "Vicker, Darby (JSC-EG311)" <darby.vicker-1 at nasa.gov>
Date: Friday, January 5, 2018 at 7:11 PM
To: Lustre discussion <lustre-discuss at lists.lustre.org>
Cc: "Kirk, Benjamin (JSC-EG311)" <benjamin.kirk at nasa.gov>
Subject: Re: [lustre-discuss] Adding a new NID
Sorry, one other question. We are configured for failover too. Will "lctl replace_nids" do the right thing, or should I use tunefs to make sure all the failover pairs get updated properly? This is what our tunefs command would look like for an OST:
tunefs.lustre \
--dry-run \
--verbose \
--writeconf \
--erase-param \
--mgsnode=192.52.98.30 at tcp0,10.148.0.30 at o2ib0,10.150.100.30 at o2ib1 \
--mgsnode=192.52.98.31 at tcp0,10.148.0.31 at o2ib0,10.150.100.31 at o2ib1 \
--servicenode=${LUSTRE_LOCAL_TCP_IP}@tcp0,${LUSTRE_LOCAL_IB_L1_IP}@o2ib0,${LUSTRE_LOCAL_IB_EUROPA_IP}@o2ib1 \
--servicenode=${LUSTRE_PEER_TCP_IP}@tcp0,${LUSTRE_PEER_IB_L1_IP}@o2ib0,${LUSTRE_PEER_IB_EUROPA_IP}@o2ib1 \
$pool/ost-fsl
Our original mkfs.lustre options looked about like that, sans the o2ib1 NIDs. I'm worried that the "lctl replace_nids" command won't know how to update the mgsnode and servicenode properly. Is replace_nids smart enough for this?
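Whichever way we end up going, one way to sanity-check the result (a sketch only; "lustre" below stands in for the actual fsname) would be to dump the client configuration log from the MGS afterwards and look at the NIDs it recorded:

# with the MGS mounted, print the client config log and pick out the NID records
lctl --device MGS llog_print lustre-client | grep nid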
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Darby Vicker <darby.vicker-1 at nasa.gov>
Date: Friday, January 5, 2018 at 5:16 PM
To: Lustre discussion <lustre-discuss at lists.lustre.org>
Subject: [non-nasa source] [lustre-discuss] Adding a new NID
Hello everyone,
We have an existing LFS that is dual-homed on ethernet (mainly for our workstations) and IB (for the computational cluster), with a ZFS backend for the MDT and OSTs. We just got a new computational cluster and need to add another IB NID. The procedure for doing this is straightforward (section 14.5 in the admin manual) and amounts to:
Unmount the clients
Unmount the MDT
Unmount all OSTs
mount -t lustre <MDT partition> -o nosvc <mount point>
lctl replace_nids <devicename> <nid1>[,nid2,nid3 ...]
We haven't had to update a NID in a while, so I was happy to see you can do this with "lctl replace_nids" instead of "tunefs.lustre --writeconf".
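For what it's worth, the sequence I have in mind for that step looks roughly like this (pool/dataset names, mount point, target names and NIDs here are placeholders, not our real ones):

# clients, OSTs and the MDT are already unmounted at this point
mount -t lustre metadata/meta-fsl -o nosvc /mnt/mdt     # starts only the MGS
lctl replace_nids lustre-MDT0000 192.52.98.30@tcp0,10.148.0.30@o2ib0,10.150.100.30@o2ib1
lctl replace_nids lustre-OST0000 192.52.98.40@tcp0,10.148.0.40@o2ib0,10.150.100.40@o2ib1
# ...repeat for each remaining OST...
umount /mnt/mdt
# then remount the MDT and OSTs normally and remount the clients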
I know this is dangerous, but we will sometimes make minor changes to the servers by unmounting Lustre on the servers (but leaving the clients up), making the changes, then remounting the servers. If we are confident we can do this quickly, the clients recover just fine.
While this isn't such a minor change, I'm a little tempted to do that in this case since nothing will really change for the existing clients – they don't need the new NID. Am I asking for trouble here or do you think I can get away with this? I'm not too concerned about the possibility of it taking too long and getting the existing clients evicted. I'm (obviously) more concerned about doing something that would lead to corrupting the FS. I should probably schedule an outage and do this right but... :)
Darby