[Lustre-discuss] MDT raid parameters, multiple MGSes

Jason Rappleye jason.rappleye at nasa.gov
Thu Jan 27 07:26:09 PST 2011


On Jan 27, 2011, at 3:15 AM, Andreas Dilger wrote:

> On 2011-01-25, at 17:05, Jeremy Filizetti wrote:
>> On Fri, Jan 21, 2011 at 1:02 PM, Andreas Dilger <adilger at whamcloud.com> wrote:
>>> While this "runs", it is definitely not correct.  The problem is that the client will only connect to a single MGS for configuration updates (in particular, the MGS for the last filesystem that was mounted).  If there is a configuration change (e.g. lctl conf_param, or adding a new OST) on one of the other filesystems, then the client will not be notified of this change because it is no longer connected to the MGS for that filesystem.
>>> 
>> 
>> We use Lustre in a WAN environment and each geographic location has its own Lustre file system with its own MGS.  While I don't add storage frequently, I've never seen an issue with this.
>> 
>> Just to be sure, I just mounted a test file system, followed by another file system, and then added an OST to the test file system; the client was notified by the MGS.  Looking at "lctl dl" the client shows a device for the MGC and I see connections in the peers list.  I didn't test any conf_param, but at least the connections look fine, including the output from "lctl dk".
>> 
>> Is there something I'm missing here?  I know each OSS shares a single MGC between all of its OBDs, so you can really only mount one file system at a time in Lustre.  Is that what you are referring to?
> 
> Depending on how you ran the test, it is entirely possible that the client
> hadn't been evicted from the first MGS yet, and it accepted the message from
> that MGS even though it should no longer have been connected to it.
> However, if you check the connection state on the client
> (e.g. "lctl get_param mgc.*.import"), it is only possible for the client to
> have a single MGC today, and that MGC can only have a connection to a single
> MGS at a time.
> 
> Granted, it is possible that someone fixed this when I wasn't paying attention.
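
As a concrete version of the check described above, on a client that amounts to roughly the following (device names and NIDs will of course differ per site):

  # list the local devices; any MGC instance(s) show up here
  lctl dl | grep -i mgc

  # dump the MGC import(s): the target MGS NID and the connection state
  lctl get_param mgc.*.import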

I thought this sounded familiar - have a look at bz 20299. Multiple MGCs on a client are ok; multiple MGSes on a single server are not.
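
To make the multiple-MGC case concrete, the WAN setup described above amounts to something like this on a client (the MGS NIDs and file system names are made up for illustration):

  # each file system is served by its own MGS
  mount -t lustre mgs-siteA@tcp:/fsA /mnt/fsA
  mount -t lustre mgs-siteB@tcp:/fsB /mnt/fsB

  # with the bz 20299 change the client keeps one MGC per MGS,
  # so both imports should show up and stay connected
  lctl get_param mgc.*.import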

Jason

--
Jason Rappleye
System Administrator
NASA Advanced Supercomputing Division
NASA Ames Research Center
Moffett Field, CA 94035
