[Lustre-discuss] [mpich-devel] ROMIO for Lustre
valiantljk at gmail.com
Sun Sep 22 18:41:21 PDT 2013
Interesting, thanks Rob.
So can I assume that Hopper (a Cray XE6 with MPT 3.2) contains these
Lustre-specific optimizations? Do they work for both reads and writes?
On Sun, Sep 22, 2013 at 2:00 PM, Rob Latham <robl at mcs.anl.gov> wrote:
> On Sat, Sep 21, 2013 at 11:21:19PM -0500, Jaln wrote:
> > Hi everyone,
> > I'm not sure whether the Lustre or the MPI forum is the right place
> > for this question.
> both, i guess :>
> > The question is about ROMIO's optimization for Lustre.
> > An SC'08 paper,
> > http://users.eecs.northwestern.edu/~wkliao/PAPERS/fd_sc08_revised.pdf ,
> > says that the way ROMIO assigns file domains to I/O aggregators
> > ensures that no two aggregators access the same OST.
> > In my understanding, this means that data locality at the Lustre layer
> > has been taken care of in ROMIO, so that aggregators do not
> > compete for the same OST.
> > My question is: is this optimization used on all current Lustre
> > systems, e.g., Hopper at NERSC?
> Wei-keng never contributed the specific ROMIO optimizations he discussed in
> the SC 08 paper, but his work did spur a lot of community discussion
> and contributions.
> Emoly Lu contributed a bunch of Lustre ADIO driver work, which Pascal
> Deveze and Martin Pokorny improved upon. MPICH-1.3 and newer contain
> these improvements.
> David Knaak from Cray implemented his own improvements. Cray's MPI-IO
> is based on ROMIO, but the Cray modifications are proprietary. MPT-3.2
> and newer contain Lustre-specific optimizations.
> The community has been quiet with respect to Lustre MPI-IO work since
> then. I hope that's because everything "just works".
> Rob Latham
> Mathematics and Computer Science Division
> Argonne National Lab, IL USA
Genius only means hard-working all one's life