[Lustre-discuss] MDT raid parameters, multiple MGSes

Jeremy Filizetti jeremy.filizetti at gmail.com
Tue Jan 25 16:05:01 PST 2011


On Fri, Jan 21, 2011 at 1:02 PM, Andreas Dilger <adilger at whamcloud.com> wrote:

> On 2011-01-21, at 06:55, Ben Evans wrote:
> > In our lab, we've never had a problem with simply having 1 MGS per
> > filesystem.  Mountpoints will be unique for all of them, but
> > functionally it works just fine.
>
> While this "runs", it is definitely not correct.  The problem is that the
> client will only connect to a single MGS for configuration updates (in
> particular, the MGS for the last filesystem that was mounted).  If there is
> a configuration change (e.g. lctl conf_param, or adding a new OST) on one of
> the other filesystems, then the client will not be notified of this change
> because it is no longer connected to the MGS for that filesystem.
>
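
For anyone skimming along, the kind of configuration change Andreas is
describing looks roughly like this (the fsnames, NIDs, and device paths
below are made up for illustration):

    # On the MGS for "fsone": change a parameter that the MGS must push
    # out to every client that has fsone mounted
    lctl conf_param fsone.sys.timeout=60

    # Or add a new OST, which clients also learn about via the MGS
    mkfs.lustre --ost --fsname=fsone --index=2 --mgsnode=10.0.0.1@tcp /dev/sdc
    mount -t lustre /dev/sdc /mnt/fsone-ost2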

We use Lustre in a WAN environment, and each geographic location has its
own Lustre file system with its own MGS.  While I don't add storage
frequently, I've never seen an issue with this.

Just to be sure, I mounted a test file system, followed by another file
system, and then added an OST to the test file system; the client was
notified by the MGS.  Looking at "lctl dl", the client shows a device for
the MGC, and I see connections in the peers list.  I didn't test any
conf_param changes, but at least the connections look fine, including the
output from "lctl dk".
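
Something along these lines (fsnames, NIDs, and device paths are
placeholders, not the real ones from my test):

    # client: mount the test file system first, then the second one
    mount -t lustre 10.0.0.1@tcp:/testfs /mnt/testfs
    mount -t lustre 10.1.0.1@tcp:/wanfs /mnt/wanfs

    # server side: format and start a new OST for the test file system
    mkfs.lustre --ost --fsname=testfs --mgsnode=10.0.0.1@tcp /dev/sdb
    mount -t lustre /dev/sdb /mnt/testfs-ost1

    # back on the client: check MGC devices, LNet peers, and the debug log
    lctl dl | grep -i mgc
    cat /proc/sys/lnet/peers
    lctl dk /tmp/debug.log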

Is there something I'm missing here?  I know each OSS shares a single MGC
among all of its OBDs, so you can really only mount one file system at a
time in Lustre.  Is that what you are referring to?

Thanks,
Jeremy