[lustre-discuss] statfs Error while running IOR benchmark on lustre setup.

Gadre Nayan gadrenayan at gmail.com
Mon Mar 16 03:37:32 PDT 2015


Dear all,

I have installed Lustre on 4 nodes, all on the same network.
I want to run IOR benchmark on the setup.

I have tested the MPI ring by using the following commands:

mpd &
mpiexec -n 1 /bin/hostname
mpdallexit

output: hp01 (local machine name)
***********************************************
sudo nano mpd.hosts  (added the two client hostnames here: hp01, hp02)
mpdboot -n 2 -f mpd.hosts
mpdtrace

output:
hp01
hp02

As expected: 2 clients (hp01, hp02).

*****************************************************

mpdringtest
mpiexec -n 30 /bin/hostname

output: prints the hostname 30 times; hp01 15 times and hp02 15 times.

***************************************************************

When I run IOR on this setup using:

mpirun -np 2 Gadre/IOR/src/C/IOR -f Gadre/IOR/scripts/testScript

I get the following error:

IOR-2.10.3: MPI Coordinated Test of Parallel I/O
Run began: Mon Mar 16 15:16:00 2015
Command line used: Gadre/IOR/src/C/IOR -f Gadre/IOR/scripts/testScript
Machine: Linux hp01
Start time skew across all tasks: 6103.23 sec
** error **
ERROR in utilities.c (line 349): unable to statfs() file system.
ERROR: No such file or directory
** exiting **
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
rank 0 in job 2  hp01_34081  caused collective abort of all ranks
exit status of rank 0: killed by signal 9

Why do I get a statfs() error, and how do I resolve it? Any pointers or direct data :)
Please help.

Thanks,
Gadre Nayan

