[Lustre-discuss] MGS - one per site

Wojciech Turek wjt27 at cam.ac.uk
Sat Jun 27 08:54:39 PDT 2009


We have three Lustre file systems, one of which is "stand-alone" and uses
its own MGS. We mount all three file systems on every client. The
only difficulty I came across when mounting all three file systems on a
client was that there can be only one MGS per NID. So to make the
client communicate with both MGSes, I created an alias network interface
and then configured two NIDs in modprobe.conf:

options lnet networks=tcp1(eth1),tcp2(eth1:0)
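For reference, the alias interface itself can be brought up like this before loading the lnet module. This is a sketch: the 10.42.10.x address and netmask are assumptions inferred from the tcp2 NIDs in the mount commands below, so adjust them for your own network.

```shell
# Create the eth1:0 alias so the client has an address on the second
# LNet network (tcp2). Address and netmask below are hypothetical.
ifconfig eth1:0 10.42.10.50 netmask 255.255.255.0 up
```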

mount -t lustre 10.142.10.201@tcp1:10.142.10.202@tcp1:/scratch  /scratch
mount -t lustre 10.142.10.201@tcp1:10.142.10.202@tcp1:/data  /data
mount -t lustre 10.42.10.203@tcp2:10.42.10.204@tcp2:/work  /work
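To mount these automatically at boot, the equivalent /etc/fstab entries might look like the sketch below (same NIDs as above; the _netdev option defers mounting until the network is up):

```
10.142.10.201@tcp1:10.142.10.202@tcp1:/scratch  /scratch  lustre  _netdev  0 0
10.142.10.201@tcp1:10.142.10.202@tcp1:/data     /data     lustre  _netdev  0 0
10.42.10.203@tcp2:10.42.10.204@tcp2:/work       /work     lustre  _netdev  0 0
```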

Regards,

Wojciech

2009/6/26 Andreas Dilger <adilger at sun.com>

> On Jun 26, 2009  10:52 -0500, Carlos Santana wrote:
> > Can a lustre file system have more than one MGS? Isn't it only one per
> > site? I saw some examples where target type mgs was mentioned during
> > mkfs.lustre for MDS and OSS nodes. Is it correct and when is it used?
>
> It makes sense generally to have a single MGS per site for multiple
> filesystems, because if clients are mounting more than one filesystem
> they can only communicate with a single MGS at a time.
>
> In some cases there will of course be a need for multiple MGSes in
> a single site (e.g. secure and open networks), which is fine as long
> as clients don't try to mount from multiple MGSes at once.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Sr. Staff Engineer, Lustre Group
> Sun Microsystems of Canada, Inc.
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>



-- 
Wojciech Turek

Assistant System Manager

High Performance Computing Service
University of Cambridge
Email: wjt27 at cam.ac.uk
Tel: (+)44 1223 763517
