[lustre-discuss] Adding a new NID
Cowe, Malcolm J
malcolm.j.cowe at intel.com
Sun Jan 7 17:22:42 PST 2018
There are, to my knowledge, a couple of open bugs related to the “lctl replace_nids” command that you should review prior to committing to a change:
https://jira.hpdd.intel.com/browse/LU-8948
https://jira.hpdd.intel.com/browse/LU-10384
Some time ago, I wrote a draft guide on how to manage relatively complex LNet server configs, including the long-hand method for changing server NIDs. I thought this had made it onto the community wiki, but I appear to be mistaken. I don’t have time to make a MediaWiki version, but I’ve uploaded a PDF version here:
http://wiki.lustre.org/File:Defining_Multiple_LNet_Interfaces_for_Multi-homed_Servers,_v1.pdf
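For reference, the sort of multi-homed LNet setup the guide covers would typically be declared on each server along these lines. This is only a sketch, and the interface names (eth0, ib0, ib1) are assumptions that will differ on your hardware:

# /etc/modprobe.d/lustre.conf -- static LNet configuration (sketch only)
# Each LNet network is bound to a local interface; adjust interface names to suit.
options lnet networks="tcp0(eth0),o2ib0(ib0),o2ib1(ib1)"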
YMMV, there’s no warranty, whether express or implied, and I assume no liability, etc. ☺
Nevertheless, I hope this helps, at least as a cross-reference.
Malcolm.
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of "Vicker, Darby (JSC-EG311)" <darby.vicker-1 at nasa.gov>
Date: Saturday, 6 January 2018 at 11:11 am
To: Lustre discussion <lustre-discuss at lists.lustre.org>
Cc: "Kirk, Benjamin (JSC-EG311)" <benjamin.kirk at nasa.gov>
Subject: Re: [lustre-discuss] Adding a new NID
Sorry – one other question. We are configured for failover too. Will "lctl replace_nids" do the right thing, or should I use tunefs.lustre to make sure all the failover pairs get updated properly? This is what our tunefs command would look like for an OST:
tunefs.lustre \
    --dry-run \
    --verbose \
    --writeconf \
    --erase-param \
    --mgsnode=192.52.98.30@tcp0,10.148.0.30@o2ib0,10.150.100.30@o2ib1 \
    --mgsnode=192.52.98.31@tcp0,10.148.0.31@o2ib0,10.150.100.31@o2ib1 \
    --servicenode=${LUSTRE_LOCAL_TCP_IP}@tcp0,${LUSTRE_LOCAL_IB_L1_IP}@o2ib0,${LUSTRE_LOCAL_IB_EUROPA_IP}@o2ib1 \
    --servicenode=${LUSTRE_PEER_TCP_IP}@tcp0,${LUSTRE_PEER_IB_L1_IP}@o2ib0,${LUSTRE_PEER_IB_EUROPA_IP}@o2ib1 \
    $pool/ost-fsl
Our original mkfs.lustre options looked much like that, minus the o2ib1 NIDs. I'm worried that the "lctl replace_nids" command won't know how to update the mgsnode and servicenode parameters properly. Is replace_nids smart enough for this?
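For comparison, the plain replace_nids form from the manual applied to one of our OSTs would be something like the following; the target name fsname-OST0000 is a placeholder, and whether the second (failover) servicenode NID set can be expressed this way is exactly what I'm unsure about:

# Placeholder target name; the NID variables are the same ones used in the
# tunefs.lustre example above. It is not obvious how the failover
# servicenode NIDs would be passed here.
lctl replace_nids fsname-OST0000 \
    ${LUSTRE_LOCAL_TCP_IP}@tcp0,${LUSTRE_LOCAL_IB_L1_IP}@o2ib0,${LUSTRE_LOCAL_IB_EUROPA_IP}@o2ib1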
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Darby Vicker <darby.vicker-1 at nasa.gov>
Date: Friday, January 5, 2018 at 5:16 PM
To: Lustre discussion <lustre-discuss at lists.lustre.org>
Subject: [non-nasa source] [lustre-discuss] Adding a new NID
Hello everyone,
We have an existing LFS that is dual-homed on Ethernet (mainly for our workstations) and IB (for the computational cluster), with a ZFS backend for the MDT and OSTs. We just got a new computational cluster and need to add another IB NID. The procedure for doing this is straightforward (section 14.5 in the admin manual) and amounts to the following (a rough sketch of the commands follows the list):
Unmount the clients
Unmount the MDT
Unmount all OSTs
mount -t lustre <MDT partition> -o nosvc <mount point>
lctl replace_nids <devicename> nid1[,nid2,nid3 ...]
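With our ZFS backend, a rough sketch of those steps would look something like this; the dataset names, mount points, target names, and the non-MGS NIDs are placeholders, not our real values:

# Sketch only -- dataset names, mount points, target names and the
# non-MGS NIDs below are placeholders.

# 1. Unmount every client, then every OST, then the MDT.
umount /mnt/lustre              # on each client
umount /mnt/lustre/ost0         # on each OSS, for every OST
umount /mnt/lustre/mdt          # on the MDS

# 2. Mount the MDT with only the MGS running so the config logs can be rewritten.
mount -t lustre -o nosvc $pool/mdt-fsl /mnt/lustre/mdt

# 3. Replace the NID list for each target, adding the new o2ib1 NID.
lctl replace_nids fsname-MDT0000 192.52.98.30@tcp0,10.148.0.30@o2ib0,10.150.100.30@o2ib1
lctl replace_nids fsname-OST0000 192.52.98.32@tcp0,10.148.0.32@o2ib0,10.150.100.32@o2ib1

# 4. Unmount the nosvc MDT, then remount the MDT, OSTs, and clients as usual.
umount /mnt/lustre/mdt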
We haven't had to update a NID in a while, so I was happy to see you can do this with "lctl replace_nids" instead of "tunefs.lustre --writeconf".
I know this is dangerous, but we will sometimes make minor changes to the servers by unmounting Lustre on the servers (leaving the clients mounted), making the changes, then remounting the servers. If we are confident we can do this quickly, the clients recover just fine.
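To be concrete, that server-side bounce is nothing fancier than something like this, with placeholder mount points and dataset names:

# Clients stay mounted and reconnect once the targets come back.
umount /mnt/lustre/ost0          # on each OSS
umount /mnt/lustre/mdt           # on the MDS
# ... make the minor change ...
mount -t lustre $pool/mdt-fsl /mnt/lustre/mdt
mount -t lustre $pool/ost-fsl /mnt/lustre/ost0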
While this isn't such a minor change, I'm a little tempted to do that in this case since nothing will really change for the existing clients – they don't need the new NID. Am I asking for trouble here or do you think I can get away with this? I'm not too concerned about the possibility of it taking too long and getting the existing clients evicted. I'm (obviously) more concerned about doing something that would lead to corrupting the FS. I should probably schedule an outage and do this right but... :)
Darby