[Lustre-discuss] how to define 60 failnodes

Michael Schwartzkopff misch at multinet.de
Mon Nov 9 07:44:33 PST 2009


Am Montag, 9. November 2009 16:36:15 schrieb Bernd Schubert:
> On Monday 09 November 2009, Brian J. Murrell wrote:
> > Theoretically.  I had discussed this briefly with another engineer a
> > while ago and IIRC, the result of the discussion was that there was
> > nothing inherent in the configuration logic that would prevent one from
> > having more than two ("primary" and "failover") OSSes providing service
> > to an OST.  Two nodes per OST is how just about everyone that wants
> > failover configures Lustre.
>
> Not everyone ;) And especially it doesn't make sense to have a 2 node
> failover scheme with pacemaker:
>
> https://bugzilla.lustre.org/show_bug.cgi?id=20964

the problem is that pacemaker does not know anything about the applications it 
clusters. Pacemaker is designed to provide high availability for ANY service, 
not only for a cluster file system.

So if you want to pin a resource (e.g. FS1) to a particular node, you have to 
add a location constraint. But this contradicts the logic of pacemaker a 
little bit: why should a resource prefer this node if all nodes are equal?
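For example, such a location constraint in the crm shell could look like the 
following sketch (the resource and node names FS1 and oss1 are placeholders, 
not taken from a real configuration):

  # prefer (but do not force) running the FS1 filesystem resource on node oss1
  crm configure location prefer-oss1 FS1 100: oss1
  # using "inf:" instead of "100:" would pin FS1 to oss1 unconditionally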

Basically I had the same problem with my Lustre cluster, and I ended up with 
the following solution:

- add colocation constraints so that the filesystems prefer not to run on the 
same node (see the sketch below).
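
A minimal sketch of such a constraint in the crm shell, again with placeholder 
resource names FS1 and FS2:

  # negative score: pacemaker tries to keep the two filesystem resources apart
  crm configure colocation fs1-apart-from-fs2 -100: FS1 FS2
  # "-inf:" instead of "-100:" would forbid them from ever sharing a node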

And with openais as the cluster stack, the number of nodes is in theory no 
longer limited to 16 as it was with heartbeat, so you can build larger clusters.

Greetings,

-- 
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Address: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75

mail: misch at multinet.de
web: www.multinet.de

Registered office: 85630 Grasbrunn
Commercial register: Amtsgericht München HRB 114375
Managing directors: Günter Jurgeneit, Hubert Martens

---

PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42
