[Lustre-discuss] Newbie question: File locking, synchronicity, order, and ownership

Atul Vidwansa Atul.Vidwansa at Sun.COM
Sun Jan 10 20:30:11 PST 2010


Comments inline...

Nochum Klein wrote:
>
> Hi Atul,
>
> Thanks a lot -- this is very helpful!
>
> So assuming the application is performing the following fcntl() call 
> to set a file segment lock:
>
>     struct flock fl;
>     int err;
>
>     fl.l_type = F_WRLCK;
>     fl.l_whence = SEEK_SET;   /* 0 == SEEK_SET: offsets from start of file */
>     fl.l_start = 0;
>     fl.l_len = 0;    /* len = 0 means until end of file */
>
>     err = fcntl(file, F_SETLK, &fl);
>
> I should be able to achieve the desired behavior
>
What is your desired behavior?
>
> if I enable cluster-wide locking with the /-o flock/ mount option.  Is 
> this correct?
>
Is your application writing to the same file from multiple nodes? If yes, 
do writes from different nodes overlap? The piece of code above will work 
fine if each node writes to its own file, or if multiple nodes write to 
disjoint sections of the same file. Note, though, that with l_len = 0 the 
example locks the whole file, so byte-range writers should set l_len to 
the size of their own region. Otherwise, it will result in lock 
ping-pong, with the lock bouncing between clients on every conflicting 
access.

Cheers,
_Atul
>
> Thanks again!
>
> Nochum
>
> ---------- Original message ----------
> From: Atul Vidwansa <Atul.Vidwa... at Sun.COM>
> Date: Jan 8, 1:30 am
> Subject: Newbie question: File locking, synchronicity, order, and 
> ownership
> To: lustre-discuss-list
>
>
> Some comments inline..
>
> Nochum Klein wrote:
> > Hi Everyone,
>
> > Apologies for what is likely a simple question for anyone who has been
> > working with Lustre for a while.  I am evaluating Lustre as part of a
> > fault-tolerant failover solution for an application component.  Based
> > on our design using heartbeats between the hot primary and warm
> > secondary components, we have four basic requirements of the clustered
> > file system:
>
> >    1. *Write Order *- The storage solution must write data blocks to
> >       shared storage in the same order as they occur in the data
> >       buffer.  Solutions that write data blocks in any other order
> >       (for example, to enhance disk efficiency) do not satisfy this
> >       requirement.
> >    2.  *Synchronous Write Persistence* - Upon return from a
> >       synchronous write call, the storage solution guarantees that all
> >       the data have been written to durable, persistent storage.
> >    3.  *Distributed File Locking* - Application components must be
> >       able to request and obtain an exclusive lock on the shared
> >       storage. The storage solution must not assign the locks to two
> >       servers simultaneously.
>
> AFAIK Lustre does support distributed locking. From wiki.lustre.org:
>
>     * flock/lockf
>
>     POSIX and BSD flock/lockf system calls will be completely coherent
>     across the cluster, using the Lustre lock manager, but are not
>     enabled by default today. It is possible to enable client-local
>     flock locking with the -o localflock mount option, or cluster-wide
>     locking with the -o flock mount option. If/when this becomes the
>     default, it will still be possible to disable flock for a client
>     with the -o noflock mount option.
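For reference, enabling the option happens at client mount time. The sketch below is hypothetical: "mgs@tcp" and "lustrefs" are placeholders for your own MGS nickname and filesystem name.

```shell
# Cluster-wide coherent flock/lockf (what the requirement above needs):
mount -t lustre -o flock mgs@tcp:/lustrefs /mnt/lustre

# Cheaper client-local flock semantics, coherent only within one node:
mount -t lustre -o localflock mgs@tcp:/lustrefs /mnt/lustre
```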
>
> >    4.  *Unique Write Ownership* - The application component that has
> >       the file lock must be the only server process that can write to
> >       the file. Once the system transfers the lock to another server,
> >       pending writes queued by the previous owner must fail.
>
> It depends on what level of locking you use. Lustre supports byte-range
> locking, so as long as writes do not overlap, multiple writers can
> write to the same file.
>
> Cheers,
> _Atul
>
> > Can anyone confirm that these requirements would be met by Lustre 1.8?
>
> > Thanks a lot!
>
> > Nochum
> > ------------------------------------------------------------------------
>
> > _______________________________________________
> > Lustre-discuss mailing list
> > Lustre-disc... at lists.lustre.org
> > http://lists.lustre.org/mailman/listinfo/lustre-discuss
>



