[Lustre-discuss] MDT raid parameters, multiple MGSes

Andreas Dilger adilger at whamcloud.com
Thu Jan 27 03:15:22 PST 2011


On 2011-01-25, at 17:05, Jeremy Filizetti wrote:
> On Fri, Jan 21, 2011 at 1:02 PM, Andreas Dilger <adilger at whamcloud.com> wrote:
>> While this "runs", it is definitely not correct.  The problem is that the client will only connect to a single MGS for configuration updates (in particular, the MGS for the last filesystem that was mounted).  If there is a configuration change (e.g. lctl conf_param, or adding a new OST) on one of the other filesystems, then the client will not be notified of this change because it is no longer connected to the MGS for that filesystem.
>> 
>  
> We use Lustre in a WAN environment and each geographic location has its own Lustre file system with its own MGS.  While I don't add storage frequently, I've never seen an issue with this.
>  
> Just to be sure, I mounted a test file system, followed by another file system, then added an OST to the test file system, and the client was notified by the MGS.  Looking at "lctl dl", the client shows a device for the MGC and I see connections in the peers list.  I didn't test any conf_param, but at least the connections look fine, including the output from "lctl dk".
>  
> Is there something I'm missing here?  I know each OSS shares a single MGC among all of its OBDs, so you can really only mount one file system at a time in Lustre.  Is that what you are referring to?
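
For reference, a test along those lines boils down to roughly the following
sequence; the filesystem names, NIDs, and devices below are placeholders
rather than details taken from the report above:

    # mount two filesystems served by different MGSes on the same client
    client# mount -t lustre mgs1@tcp:/testfs /mnt/testfs
    client# mount -t lustre mgs2@tcp:/otherfs /mnt/otherfs

    # add an OST to the first filesystem on one of its servers ...
    oss1# mkfs.lustre --fsname=testfs --ost --index=2 --mgsnode=mgs1@tcp /dev/sdX
    oss1# mount -t lustre /dev/sdX /mnt/testfs-ost0002

    # ... or make a configuration change on that filesystem's MGS
    mgs1# lctl conf_param testfs.sys.timeout=60

    # then check on the client whether it was notified of the change
    client# lctl dl
    client# lctl dk /tmp/lustre-debug.log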

Depending on how you ran the test, it is entirely possible that the client
hadn't been evicted from the first MGS yet, and it still accepted the message
from that MGS even though it would eventually be evicted.  However, if you
check the connection state on the client (e.g. "lctl get_param mgc.*.import")
you will see that the client can only have a single MGC today, and that MGC
can only be connected to a single MGS at a time.
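
As a concrete check, a client with two filesystems mounted still shows only
one MGC device; the device number, NID, and UUID below are made up for
illustration:

    client# lctl dl | grep -i mgc
      0 UP mgc MGC192.168.1.10@tcp 6d3a1f02-52c8-4e0b-9c71-f00ba5e0c0de 5

    client# lctl get_param mgc.*.import

The import output shows which MGS that single MGC is currently connected to,
i.e. the MGS of whichever filesystem was mounted last.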

Granted, it is possible that someone fixed this when I wasn't paying attention.


Cheers, Andreas
--
Andreas Dilger 
Principal Engineer
Whamcloud, Inc.





