[Lustre-discuss] MDT raid parameters, multiple MGSes

Andreas Dilger adilger at whamcloud.com
Fri Jan 21 10:02:41 PST 2011


On 2011-01-21, at 06:55, Ben Evans wrote:
> In our lab, we've never had a problem with simply having 1 MGS per filesystem.  Mountpoints will be unique for all of them, but functionally it works just fine.

While this "runs", it is definitely not correct.  The problem is that the client will only connect to a single MGS for configuration updates (in particular, the MGS for the last filesystem that was mounted).  If there is a configuration change (e.g. lctl conf_param, or adding a new OST) on one of the other filesystems, then the client will not be notified of this change because it is no longer connected to the MGS for that filesystem.
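The failure mode can be illustrated with a sketch (the NIDs, filesystem names, and the parameter shown are hypothetical, purely for illustration):

```shell
# Two filesystems, each served by its own MGS (mgs1/mgs2 are hypothetical NIDs)
mount -t lustre mgs1@tcp:/fs1 /mnt/fs1
mount -t lustre mgs2@tcp:/fs2 /mnt/fs2   # client's config connection now points at mgs2 only

# A later configuration change on fs1, e.g.:
#   lctl conf_param fs1.sys.timeout=40
# is recorded by mgs1, but the client is no longer connected to mgs1,
# so it does not see the update until /mnt/fs1 is remounted.
```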

I agree that it would be desirable to allow the client to connect to multiple MGSes, but it doesn't work today.  I'd be thrilled if some interested party were to fix that.


> -----Original Message-----
> From: lustre-discuss-bounces at lists.lustre.org on behalf of Thomas Roth
> Sent: Fri 1/21/2011 6:43 AM
> To: lustre-discuss at lists.lustre.org
> Subject: [Lustre-discuss] MDT raid parameters, multiple MGSes
> 
> Hi all,
> 
> we have gotten new MDS hardware, and I've got two questions:
> 
> What are the recommendations for the RAID configuration and formatting
> options?
> I was following the recent discussion about these aspects on an OST:
> chunk size, stripe size, stride-size, stripe-width, etc. in the light of
> the 1MB chunks of Lustre ... So what about the MDT? I will have a RAID
> 10 that consists of 11 RAID-1 pairs striped over, giving me roughly 3TB
> of space. What would be the correct value for <insert your favorite
> term>, the amount of data written to one disk before proceeding to the
> next disk?
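
[Editor's note: the usual ldiskfs/ext4 arithmetic relating the RAID chunk to the mke2fs `stride`/`stripe_width` options can be sketched as follows; the 64KiB chunk size is an assumed example, not a recommendation, and the 11 data units correspond to the 11 RAID-1 pairs above.]

```shell
# Hypothetical numbers: 64 KiB RAID chunk, 11 data units (RAID-1 pairs),
# 4 KiB filesystem block size (the ldiskfs default).
chunk_kb=64
data_disks=11
block_kb=4

stride=$((chunk_kb / block_kb))          # filesystem blocks per RAID chunk
stripe_width=$((stride * data_disks))    # blocks in one full RAID stripe

echo "stride=$stride stripe_width=$stripe_width"
# prints: stride=16 stripe_width=176
# These values would then be passed through, e.g.:
#   mkfs.lustre --mkfsoptions="-E stride=$stride,stripe_width=$stripe_width" ...
```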
> 
> Secondly, it is not yet decided whether we will use this hardware to
> set up a second Lustre cluster. The manual recommends having only one
> MGS per site, but doesn't elaborate: what would be the drawback of
> having two MGSes, i.e. two different network addresses the clients have
> to connect to in order to mount the filesystems?
> I know that it didn't work in Lustre 1.6.3 ;-) and there are no apparent
> issues when connecting a Lustre client to a test cluster now (version
> 1.8.4), but what about production?
> 
> 
> Cheers,
> Thomas
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss


Cheers, Andreas
--
Andreas Dilger 
Principal Engineer
Whamcloud, Inc.