[Lustre-discuss] MPI-IO / ROMIO support for Lustre

Larry tsrjzq at gmail.com
Mon Nov 1 20:10:08 PDT 2010


We mount Lustre with "localflock" to make MPI-IO work. "flock" may consume
more resources than "localflock", since it has to keep locks consistent
across the whole cluster rather than just within a single node; a sample
mount line for each is below.
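
As a sketch, the option is passed when mounting the client; the MGS nid,
filesystem name and mount point here are made-up examples:

    # "localflock": flock(2) is consistent only within one client node
    mount -t lustre -o localflock 192.168.0.1@tcp0:/lustre1 /mnt/lustre1

    # "flock": consistent across the whole cluster, at extra locking cost
    mount -t lustre -o flock 192.168.0.1@tcp0:/lustre1 /mnt/lustre1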

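If you would rather disable data sieving (option 1 in Mark's list below)
without changing the library defaults, ROMIO also accepts per-file hints.
A minimal sketch, using the documented "romio_ds_read"/"romio_ds_write"
hint names; the file name is a placeholder:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info;

        MPI_Init(&argc, &argv);

        /* Ask ROMIO not to use data sieving for this file. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_ds_read", "disable");
        MPI_Info_set(info, "romio_ds_write", "disable");

        /* The hints take effect when passed at open time. */
        MPI_File_open(MPI_COMM_WORLD, "/mnt/lustre1/testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }
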
On Mon, Nov 1, 2010 at 10:35 PM, Mark Dixon <m.c.dixon at leeds.ac.uk> wrote:
> Hi,
>
> I'm trying to get the MPI-IO/ROMIO shipped with OpenMPI and MVAPICH2
> working with our Lustre 1.8 filesystem. Looking back at the list archives,
> 3 different solutions have been offered:
>
> 1) Disable "data sieving"         (change default library behaviour)
> 2) Mount Lustre with "localflock" (flock consistent only within a node)
> 3) Mount Lustre with "flock"      (flock consistent across cluster)
>
> However, it is not entirely clear which of these was considered the
> "best". Could anyone who is using MPI-IO on Lustre comment which they
> picked, please?
>
> I *think* the May 2008 list archive indicates I should be using (3), but
> I'd feel a whole lot better about it if I knew I wasn't alone :)
>
> Cheers,
>
> Mark
> --
> -----------------------------------------------------------------
> Mark Dixon                       Email    : m.c.dixon at leeds.ac.uk
> HPC/Grid Systems Support         Tel (int): 35429
> Information Systems Services     Tel (ext): +44(0)113 343 5429
> University of Leeds, LS2 9JT, UK
> -----------------------------------------------------------------
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>


