[Lustre-discuss] Parallel HDF5

Tiago Soares tistesoa at gmail.com
Wed Jul 18 15:24:11 PDT 2012


Dear all,
I would like to know if there is some trick to using parallel MPI-IO with an HDF5
file on Lustre. I have been trying for a while to fix an issue that happens on Lustre.

Basically, I have 8 processes writing small amounts of data to the same dataset in
each attempt. Each attempt writes different data than the previous one, and when the
number of attempts gets close to 2500, the parallel I/O breaks. There is also another
serial process writing other data to the same file.
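
For reference, this is roughly how the file is opened and the dataset is written (a
simplified sketch with placeholder names and sizes, not my exact code; it assumes
<hdf5.h> and <mpi.h> are included and MPI_Init has already been called):

 int rank;
 MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 hsize_t chunk  = 4;          /* placeholder: elements written per rank */
 double  buf[4] = { 0.0 };    /* placeholder data */

 /* Open the shared file with the MPI-IO driver */
 hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
 H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
 hid_t file = H5Fopen("results.h5", H5F_ACC_RDWR, fapl);

 /* Each of the 8 ranks selects its own hyperslab of the same dataset */
 hid_t dset      = H5Dopen(file, "/attempt_data", H5P_DEFAULT);
 hid_t filespace = H5Dget_space(dset);
 hsize_t offset[1] = { rank * chunk }, count[1] = { chunk };
 H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, count, NULL);
 hid_t memspace = H5Screate_simple(1, count, NULL);

 /* Collective transfer so ROMIO can aggregate the small writes */
 hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
 H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
 H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);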


I found that this is a common issue with Lustre and file locking. So I tried the
parameters below, but my application still breaks. Also, I read somewhere to set the
stripe count to 8 (the number of OSTs) for parallel I/O, but it still doesn't work.

 MPI_Info info;
 MPI_Info_create(&info);

 /* Disables ROMIO's data-sieving */
 MPI_Info_set(info, "romio_ds_read", "disable");
 MPI_Info_set(info, "romio_ds_write", "disable");


 /* Enable ROMIO's collective buffering */
 MPI_Info_set(info, "romio_cb_read", "enable");
 MPI_Info_set(info, "romio_cb_write", "enable");

  https://wickie.hlrs.de/platforms/index.php/MPI-IO
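
For completeness, the info object goes to HDF5 through the file access property list,
roughly like this (a sketch; the striping hints are the reserved MPI-IO keys which,
as far as I understand, ROMIO only applies when the file is first created):

 /* Striping hints (reserved MPI-IO keys); only honored at file creation */
 MPI_Info_set(info, "striping_factor", "8");        /* stripe count = number of OSTs */
 MPI_Info_set(info, "striping_unit",   "1048576");  /* 1 MiB stripe size, example value */

 /* Hand the hints to HDF5 through the file access property list */
 hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
 H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);
 hid_t file = H5Fcreate("results.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);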

I ran my application on PVFS and it works fine. I know that, unlike Lustre, PVFS
uses non-aligned data striping. Could that be the reason it works on PVFS?
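
If the alignment is the problem, I suppose the HDF5 side could be forced to allocate
objects on stripe boundaries with something like this (a sketch; 1 MiB is just an
example stripe size):

 /* Align every HDF5 allocation of 64 KiB or more to a 1 MiB boundary,
    so dataset data starts on a Lustre stripe boundary */
 hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
 H5Pset_alignment(fapl, 65536, 1048576);
 H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);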

 Regards

-- 
Tiago Steinmetz Soares
MSc Student of Computer Science - UFSC