[Lustre-discuss] MDT raid parameters, multiple MGSes

Thomas Roth t.roth at gsi.de
Sat Jan 22 02:02:11 PST 2011

On 01/21/2011 07:02 PM, Andreas Dilger wrote:
> On 2011-01-21, at 06:55, Ben Evans wrote:
>> In our lab, we've never had a problem with simply having 1 MGS per filesystem.  Mountpoints will be unique for all of them, but functionally it works just fine.
> While this "runs", it is definitely not correct.  The problem is that the client will only connect to a single MGS for configuration updates (in
> particular, the MGS for the last filesystem that was mounted).  If there is a configuration change (e.g. lctl conf_param, or adding a new OST) on one
> of the other filesystems, then the client will not be notified of this change because it is no longer connected to the MGS for that filesystem.
> I agree that it would be desirable to allow the client to connect to multiple MGSes, but it doesn't work today.  I'd be thrilled if some interested
> party were to fix that.

Ah, thanks Andreas, that's a point we would have missed with our test clusters. I had only seen this effect when deactivating an OST by its device
number instead of its name, since a client connected to multiple MGSes could end up with a different numbering. Very well, we'll stick to one MGS, then.
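For reference, the device-number pitfall mentioned above can be sketched like this (an illustrative fragment, not from the thread; the filesystem name "lustre" and OST index 0003 are hypothetical, and the commands require a live Lustre client):

```shell
# Illustrative only -- requires a mounted Lustre client; names are hypothetical.
lctl dl                               # list local devices with their numbers
lctl --device 7 deactivate            # fragile: device 7 may be a different
                                      # target on a client that mounted its
                                      # filesystems in another order
lctl --device lustre-OST0003-osc deactivate   # stable: address the OSC by name
```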

>> -----Original Message-----
>> From: lustre-discuss-bounces at lists.lustre.org on behalf of Thomas Roth
>> Sent: Fri 1/21/2011 6:43 AM
>> To: lustre-discuss at lists.lustre.org
>> Subject: [Lustre-discuss] MDT raid parameters, multiple MGSes
>> Hi all,
>> we have gotten new MDS hardware, and I've got two questions:
>> What are the recommendations for the RAID configuration and formatting
>> options?
>> I was following the recent discussion about these aspects on an OST:
>> chunk size, stripe size, stride-size, stripe-width etc., in the light of
>> the 1MB chunks of Lustre ... So what about the MDT? I will have a RAID
>> 10 consisting of 11 RAID-1 pairs striped together, giving me roughly 3TB
>> of space. What would be the correct value for <insert your favorite
>> term>, i.e. the amount of data written to one disk before proceeding to
>> the next disk?
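For reference, the usual ldiskfs/ext4 alignment arithmetic for such an array can be sketched as follows (assumed values: the 64 KiB per-disk chunk size is an assumption, not from the thread; the 11 data spindles come from the RAID-10 of 11 RAID-1 pairs described above):

```shell
# Sketch with assumed values: stride and stripe-width for ldiskfs on a
# RAID-10 of 11 RAID-1 pairs (11 data spindles as seen by the filesystem).
chunk_kb=64          # assumed per-disk RAID chunk size (KiB) -- site-specific
block_kb=4           # ldiskfs/ext4 block size (4 KiB)
ndata=11             # data disks the stripe spans (the 11 mirrored pairs)
stride=$((chunk_kb / block_kb))       # fs blocks written per disk per chunk
stripe_width=$((stride * ndata))      # fs blocks per full stripe
echo "stride=$stride stripe_width=$stripe_width"
# These would typically be passed at format time, e.g.:
#   mkfs.lustre --mdt ... --mkfsoptions="-E stride=$stride,stripe-width=$stripe_width"
# Note that the MDT workload is mostly small random metadata IO, so stripe
# alignment matters less here than on an OST.
```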
>> Secondly, it is not yet decided whether we will use this hardware to
>> set up a second Lustre cluster. The manual recommends having only one
>> MGS per site, but doesn't elaborate: what would be the drawback of
>> having two MGSes, i.e. two different network addresses the clients have
>> to connect to in order to mount the Lustre filesystems?
>> I know that it didn't work in Lustre 1.6.3 ;-) and there are no apparent
>> issues when connecting a Lustre client to a test cluster now (version
>> 1.8.4), but what about production?
>> Cheers,
>> Thomas
>> _______________________________________________
>> Lustre-discuss mailing list
>> Lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/mailman/listinfo/lustre-discuss
> Cheers, Andreas
> --
> Andreas Dilger
> Principal Engineer
> Whamcloud, Inc.

Thomas Roth
Department: Informationstechnologie
Location: SB3 1.262
Phone: +49-6159-71 1453  Fax: +49-6159-71 2986

GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1
64291 Darmstadt

Gesellschaft mit beschränkter Haftung
Sitz der Gesellschaft: Darmstadt
Handelsregister: Amtsgericht Darmstadt, HRB 1528

Geschäftsführung: Professor Dr. Dr. h.c. Horst Stöcker,
Dr. Hartmut Eickhoff

Vorsitzende des Aufsichtsrates: Dr. Beatrix Vierkorn-Rudolph
Stellvertreter: Ministerialdirigent Dr. Rolf Bernhardt
