<p>Hi Atul,</p>
<p>Thanks a lot -- this is very helpful!</p>
<p>So assuming the application is performing the following fcntl() call to set a file segment lock:</p>
<p> struct flock fl;<br> int err;</p>
<p> fl.l_type = F_WRLCK;<br> fl.l_whence = 0;<br> fl.l_start = 0;<br> fl.l_len = 0; /* len = 0 means until end of file */</p>
<p> err = fcntl(file, F_SETLK, &fl);</p>
<p>I should be able to achieve the desired behavior if I enable cluster-wide locking with the /-o flock/ mount option. Is this correct?</p>
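<p>For reference, the snippet above can be wrapped up as a self-contained helper; the function name and return convention here are mine, not from the application:</p>

```c
#include <fcntl.h>
#include <unistd.h>

/* Sketch, not the application's actual code: request an exclusive
 * whole-file write lock on fd, mirroring the fcntl() call above.
 * Returns 0 on success, -1 if the lock is held by another process. */
int acquire_write_lock(int fd)
{
    struct flock fl;
    fl.l_type = F_WRLCK;     /* exclusive write lock */
    fl.l_whence = SEEK_SET;  /* l_start is measured from start of file */
    fl.l_start = 0;
    fl.l_len = 0;            /* len = 0 means "through end of file" */
    return fcntl(fd, F_SETLK, &fl);  /* F_SETLK: fail rather than block */
}
```

<p>Note that POSIX record locks are released when the process closes <em>any</em> descriptor for the file, which is worth keeping in mind for the failover design.</p>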
<p>Thanks again!</p>
<p>Nochum</p>
<p>---------- Original message ----------<br>From: Atul Vidwansa <<a href="mailto:Atul.Vidwa...@Sun.COM">Atul.Vidwa...@Sun.COM</a>><br>Date: Jan 8, 1:30 am<br>Subject: Newbie question: File locking, synchronicity, order, and ownership<br>
To: lustre-discuss-list</p>
<p><br>Some comments inline..</p>

<p>Nochum Klein wrote:<br>> Hi Everyone,</p>
<p>> Apologies for what is likely a simple question for anyone who has been<br>> working with Lustre for a while. I am evaluating Lustre as part of a<br>> fault-tolerant failover solution for an application component. Based<br>
> on our design using heartbeats between the hot primary and warm<br>> secondary components, we have four basic requirements of the clustered<br>> file system:</p>
<p>> 1. *Write Order *- The storage solution must write data blocks to<br>> shared storage in the same order as they occur in the data<br>> buffer. Solutions that write data blocks in any other order<br>
> (for example, to enhance disk efficiency) do not satisfy this<br>> requirement.<br>> 2. *Synchronous Write Persistence* - Upon return from a<br>> synchronous write call, the storage solution guarantees that all<br>
> the data have been written to durable, persistent storage.<br>> 3. *Distributed File Locking* - Application components must be<br>> able to request and obtain an exclusive lock on the shared<br>
> storage. The storage solution must not assign the locks to two<br>> servers simultaneously.</p>
<p>AFAIK Lustre does support distributed locking. From <a href="http://wiki.lustre.org">wiki.lustre.org</a>:</p>
<p> * /flock/lockf/</p>
<p> POSIX and BSD /flock/lockf/ system calls will be completely coherent<br> across the cluster, using the Lustre lock manager, but are not<br> enabled by default today. It is possible to enable client-local<br>
/flock/ locking with the /-o localflock/ mount option, or<br> cluster-wide locking with the /-o flock/ mount option. If/when this<br> becomes the default, it is also possible to disable /flock/ for a<br> client with the /-o noflock/ mount option.</p>
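<p>Concretely, the cluster-wide option is passed on the client at mount time; the MGS node and filesystem name below are placeholders for your setup, not values from this thread:</p>

```shell
# Hypothetical client mount enabling coherent cluster-wide
# flock/fcntl locking; replace mgsnode@tcp0 and fsname as appropriate.
mount -t lustre mgsnode@tcp0:/fsname /mnt/lustre -o flock
```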
<p>> 1. *Unique Write Ownership* - The application component that has<br>> the file lock must be the only server process that can write to<br>> the file. Once the system transfers the lock to another server,<br>
> pending writes queued by the previous owner must fail.</p>
<p>It depends on what level of locking you do. Lustre supports byte-range<br>locking, so unless their writes overlap, multiple writers can write to the same file.</p>
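<p>The byte-range behavior can be demonstrated on any local POSIX filesystem (Lustre mounted with /-o flock/ extends the same semantics across the cluster); the file path and helper names below are illustrative only. A parent process locks bytes 0-99, then a forked child, which has its own lock table, succeeds on a disjoint range but fails on an overlapping one:</p>

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>

/* Non-blocking attempt to write-lock [start, start+len) on fd. */
static int range_lock(int fd, off_t start, off_t len)
{
    struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                        .l_start = start, .l_len = len };
    return fcntl(fd, F_SETLK, &fl);
}

/* Returns 1 if the child could take a disjoint range but was refused
 * an overlapping one, 0 otherwise. */
int demo_byte_range(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0 || range_lock(fd, 0, 100) != 0)   /* parent holds 0-99 */
        return 0;

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: POSIX locks are per-process, so the inherited fd
         * carries none of the parent's locks. */
        int disjoint = range_lock(fd, 100, 100); /* 100-199: granted */
        int overlap  = range_lock(fd, 50, 100);  /* 50-149: conflicts */
        _exit(disjoint == 0 && overlap == -1 ? 0 : 1);
    }

    int status = -1;
    waitpid(pid, &status, 0);
    close(fd);
    unlink(path);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```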
<p>Cheers,<br>_Atul</p>
<p>> Can anyone confirm that these requirements would be met by Lustre 1.8?</p>
<p>> Thanks a lot!</p>
<p>> Nochum<br>> ------------------------------------------------------------------------</p>
<p>> _______________________________________________<br>> Lustre-discuss mailing list<br>> <a href="mailto:Lustre-disc...@lists.lustre.org">Lustre-disc...@lists.lustre.org</a><br>><a href="http://lists.lustre.org/mailman/listinfo/lustre-discuss">http://lists.lustre.org/mailman/listinfo/lustre-discuss</a></p>
<p>_______________________________________________<br>Lustre-discuss mailing list<br><a href="mailto:Lustre-disc...@lists.lustre.org">Lustre-disc...@lists.lustre.org</a><br><a href="http://lists.lustre.org/mailman/listinfo/lustre-discuss">http://lists.lustre.org/mailman/listinfo/lustre-discuss</a></p>