[Lustre-discuss] IOR performance - Need help
satish patil
satishvpatil at yahoo.com
Tue Sep 14 04:20:01 PDT 2010
Thanks for your feedback. The back-end storage is a P2000 G3, which is SAS-based, on an 8 Gbps SAN using 450 GB 15K drives. It is the client's requirement to achieve this performance with a single file using all OSTs.
Regards
SP
--- On Tue, 9/14/10, Fan Yong <yong.fan at whamcloud.com> wrote:
> From: Fan Yong <yong.fan at whamcloud.com>
> Subject: Re: [Lustre-discuss] IOR performance - Need help
> To: lustre-discuss at lists.lustre.org
> Date: Tuesday, September 14, 2010, 4:24 PM
> On 9/14/10 5:57 PM, satish patil wrote:
> > Hello,
> >
> > Recently we installed 6 OSS pairs with 8 OSTs per pair, 48 OSTs
> > in total. Each OST is 3.7 TB, giving a 177 TB file system. The
> > Lustre version installed is 1.8.1.1, and the clients currently
> > run RHEL 5U2 with Lustre 1.6.x. When testing the individual OSTs
> > we get around 17.5 GB/s performance. Our target is to exceed
> > 10 GB/s write performance using a single file (without the -F
> > option), avoiding the client-side cache. I have reached at most
> > 7.5 GB/s write performance, but cannot go beyond that. I tried a
> > stripe count of 48 for the single file with the default 1 MB
> > stripe size, but still cannot cross 10 GB/s.
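A quick back-of-the-envelope check of what the 10 GB/s target demands from each OST (a sketch, not from the original thread):

```python
# Back-of-the-envelope check: the per-OST write rate implied by a
# 10 GB/s aggregate target spread evenly across 48 OSTs.
num_osts = 48
target_gb_per_s = 10.0                      # aggregate single-file target
per_ost_mb_per_s = target_gb_per_s * 1024 / num_osts
print(f"each OST must sustain ~{per_ost_mb_per_s:.0f} MB/s")
# → each OST must sustain ~213 MB/s
```

If any single OST or its backing RAID set cannot sustain that rate under concurrent load, a fully striped single file will stall on the slowest stripe.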
> Can you give a detailed description of your system topology? We
> have seen customers with a larger theoretical bandwidth but worse
> measured performance, because the back-end storage behaved
> unexpectedly under parallel load.
>
> For I/O performance testing, full striping may not be the best
> choice. Using single-stripe files, and spreading these relatively
> small files evenly across all OSTs, may give a better result.
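Concretely, this suggestion maps onto layout commands like the following (a sketch assuming Lustre's standard `lfs` tool; the directory paths are illustrative, and flags should be verified against the 1.8.x lfs(1) man page, where `-s` sets the stripe size and `-c` the count):

```shell
# Sketch only: directory names are illustrative.

# New files created under this directory get a single stripe each, so
# many such files spread evenly across all 48 OSTs:
lfs setstripe -c 1 /newScratch/ior_single_stripe

# The existing single-shared-file layout, striped over every OST
# (-c -1 means "all available OSTs"):
lfs setstripe -c -1 /newScratch/ior_full_stripe

# Confirm the layout actually applied:
lfs getstripe /newScratch/ior_single_stripe
```

With IOR, the file-per-process mode (`-F`) creates one file per task and pairs naturally with the single-stripe directory, whereas the shared-file run in this thread deliberately omits `-F`.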
> > The command line used to run IOR is as follows:
> >
> > /opt/intel/mpi/bin64/mpirun --totalnum=96 --file=$PBS_NODEFILE \
> >   --rsh=/usr/bin/ssh -1 --ordered --verbose -l \
> >   -machinefile $PBS_NODEFILE -np 96 \
> >   /newScratch/IOR/src/C/IOR.mpiio -a MPIIO -b 22G -C -i 3 -k \
> >   -t 1m -w -r -R -W -x -N 96 -o /newScratch/hp.stripeC48/IOR.dat
> >
> > We have used lustre_config to create the file system.
> On the other hand, Lustre provides basic I/O performance utilities
> (under lustre-iokit). You can use them step by step to measure each
> basic element (back-end storage, obdfilter, and network), which can
> help you locate where the performance issue lies.
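The step-by-step measurement Nasf describes might look like the following (a sketch: the survey scripts and their parameters are taken from lustre-iokit's documentation and should be checked against the version shipped with 1.8.1.1):

```shell
# 1. Raw disk bandwidth beneath Lustre (WARNING: sgpdd-survey is
#    destructive to the listed devices; run only on scratch disks).
# size=8192 scsidevs="/dev/sdX /dev/sdY" sh sgpdd-survey

# 2. OST backend (obdfilter) bandwidth on one OSS, network excluded:
# nobjhi=2 thrhi=16 size=1024 case=disk sh obdfilter-survey

# 3. LNET network bandwidth between clients and servers, disks
#    excluded, using lnet_selftest (lst).
```

If each layer individually meets the per-OST share of the 10 GB/s target (about 213 MB/s across 48 OSTs) but the end-to-end IOR run does not, the bottleneck lies in the client I/O path or the striping configuration.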
>
>
> Cheers,
> Nasf
> > Appreciate your help.
> >
> > Regards
> > SP
> >
> >
> >
> > _______________________________________________
> > Lustre-discuss mailing list
> > Lustre-discuss at lists.lustre.org
> > http://lists.lustre.org/mailman/listinfo/lustre-discuss