[Lustre-discuss] Parallel HDF5
tistesoa at gmail.com
Wed Jul 18 15:24:11 PDT 2012
I would like to know if there is some trick to using parallel MPI-IO with HDF5
files on Lustre.
I have been trying for a while to fix an issue that happens on Lustre.
Basically, I have 8 processes writing small amounts of data to the same
dataset on each attempt. Each attempt writes different data than the one
before, and when the attempt count gets close to 2500, the parallel I/O
processes break. There is also another serial process writing other data
to the same file.
I found that this is a common Lustre issue related to file locking, so I
tried the MPI-IO hints below, but my application still breaks. I also read
somewhere that the stripe count should be set to 8 (the number of OSTs)
for parallel I/O, but that doesn't work either.
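For reference, the Lustre striping layout is set per directory (or per new file) with `lfs setstripe`; files created under that directory inherit it. A minimal sketch, where the path is a placeholder:

```shell
# Stripe new files in this directory across 8 OSTs (stripe count -c),
# with a 1 MiB stripe size (-S). /lustre/scratch/myrun is a placeholder path.
lfs setstripe -c 8 -S 1m /lustre/scratch/myrun

# Verify the layout that new files will inherit
lfs getstripe /lustre/scratch/myrun
```

Note this must be done before the HDF5 file is created; striping cannot be changed on an existing file.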
/* Create an MPI Info object to carry the ROMIO hints */
MPI_Info info;
MPI_Info_create(&info);

/* Disable ROMIO's data sieving */
MPI_Info_set(info, "romio_ds_read", "disable");
MPI_Info_set(info, "romio_ds_write", "disable");

/* Enable ROMIO's collective buffering */
MPI_Info_set(info, "romio_cb_read", "enable");
MPI_Info_set(info, "romio_cb_write", "enable");
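In case it helps, these hints only take effect if the MPI_Info object is actually handed to HDF5 when the file is opened. A minimal sketch, assuming an MPI-enabled HDF5 build; the file name is a placeholder:

```c
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* ROMIO hints: disable data sieving, enable collective buffering */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_ds_read",  "disable");
    MPI_Info_set(info, "romio_ds_write", "disable");
    MPI_Info_set(info, "romio_cb_read",  "enable");
    MPI_Info_set(info, "romio_cb_write", "enable");

    /* Attach the hints to the file access property list,
       then create the file through the MPI-IO driver */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);

    /* "test.h5" is a placeholder file name */
    hid_t file = H5Fcreate("test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* ... collective dataset writes go here ... */

    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```

For many small writes from 8 processes it may also be worth requesting collective transfers on the dataset side, via H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE) on the transfer property list, so ROMIO can aggregate the small writes.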
I ran my application on PVFS and it works fine. I know that, unlike
Lustre, PVFS uses non-aligned data striping. Could that be the reason it
works on PVFS?
Tiago Steinmetz Soares
MSc Student of Computer Science - UFSC