[Lustre-discuss] IOR performance - Need help

Fan Yong yong.fan at whamcloud.com
Tue Sep 14 07:08:47 PDT 2010


  On 9/14/10 7:20 PM, satish patil wrote:
> Thanks for your feedback. The back-end storage is a P2000 G3 (SAS-based, 8 Gbps SAN) using 450GB 15K drives. It is the client's requirement to get this performance with a single file using all OSTs.
>
It is quite necessary to verify that the raw system (without Lustre) can 
actually achieve parallel I/O of more than 10 GB/s, as you expect. A real 
measurement is more convincing than any nominal parallel I/O figure, 
especially for SAN-based storage.
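As a concrete starting point, that baseline can be sketched with plain dd against the raw LUNs. This is only a hedged sketch: the /dev/mapper/ost* device names are hypothetical placeholders, and since a write test destroys a LUN's contents, the sketch only reads.

```shell
# Hedged sketch: per-LUN read baseline with the page cache bypassed.
# /dev/mapper/ost0..7 are placeholder names - substitute your real LUNs.
# NEVER run a write (of=<device>) test against a device holding live data.

for lun in /dev/mapper/ost{0..7}; do
    # one dd per LUN, in parallel, to load the SAN fabric as Lustre would
    dd if="$lun" of=/dev/null bs=1M count=16384 iflag=direct &
done
wait    # sum the per-LUN rates dd reports and compare against the target
```

If the summed per-LUN rates already fall short of the target, no Lustre tuning will recover the gap.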

Cheers,
Nasf
> Regards
> SP
>
> --- On Tue, 9/14/10, Fan Yong<yong.fan at whamcloud.com>  wrote:
>
>> From: Fan Yong<yong.fan at whamcloud.com>
>> Subject: Re: [Lustre-discuss] IOR performance - Need help
>> To: lustre-discuss at lists.lustre.org
>> Date: Tuesday, September 14, 2010, 4:24 PM
>>    On 9/14/10 5:57 PM, satish
>> patil wrote:
>>> Hello,
>>>
>>> Recently we installed 6 OSS pairs with 8 OSTs per pair,
>>> 48 OSTs in total. Each OST is 3.7 TB, giving a 177 TB
>>> file system. The Lustre version installed is 1.8.1.1, and
>>> the clients are currently RHEL 5U2 based, running 1.6.x.
>>> When running the individual OST tests, we are able to get
>>> around 17.5 GB/s. Our target is to cross 10 GB/s write
>>> performance to a single file (without the -F option),
>>> avoiding the client-side cache. I have reached at most
>>> 7.5 GB/s write performance, but cannot go beyond. I tried
>>> a stripe count of 48 for a single file with the default
>>> 1 MB stripe size, but am still not able to cross 10 GB/s.
>> Can you give a detailed description of your system
>> topology? We have seen customers with larger theoretical
>> bandwidth but worse actual performance, because the
>> back-end storage behaved unexpectedly under parallel
>> load.
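For scale, the target can be translated into per-component rates. A minimal back-of-envelope sketch, assuming the 48 OSTs and 6 OSS pairs (12 OSS nodes) described above:

```shell
# Back-of-envelope: what each component must sustain to reach the target.
TARGET_MB=10240          # 10 GB/s target, expressed in MB/s
OSTS=48
OSS_NODES=12             # 6 OSS pairs

echo "per-OST:  $((TARGET_MB / OSTS)) MB/s"        # ~213 MB/s per OST
echo "per-OSS:  $((TARGET_MB / OSS_NODES)) MB/s"   # ~853 MB/s per OSS node
```

Every layer (LUNs, OSS nodes, SAN links, client network) must sustain its share of these rates simultaneously; the slowest one caps the aggregate.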
>>
>> For I/O performance testing, a full-stripe layout may not
>> be the best choice. Using single-stripe files, and
>> spreading these relatively small files evenly across all
>> OSTs, may give better results.
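The two layouts being compared can be set up with lfs. A hedged sketch, with example paths only; note that Lustre 1.8 used lowercase -s for the stripe size (newer releases use -S):

```shell
# One wide-striped file across all 48 OSTs, 1 MiB stripes (the current setup):
lfs setstripe -c 48 -s 1m /newScratch/wide/IOR.dat

# Alternative: a directory whose newly created files each get a single
# stripe; Lustre's object allocator then spreads the files across the OSTs:
lfs setstripe -c 1 /newScratch/perproc
lfs getstripe /newScratch/perproc    # verify the layout before the run
```

With the single-stripe layout, each IOR task writes its own file (the -F mode), so no OST serves more than one writer at a time.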
>>> Command line used for running IOR as follows:
>>> /opt/intel/mpi/bin64/mpirun --totalnum=96
>>> --file=$PBS_NODEFILE --rsh=/usr/bin/ssh -1 --ordered
>>> --verbose -l -machinefile $PBS_NODEFILE -np 96
>>> /newScratch/IOR/src/C/IOR.mpiio -a MPIIO -b 22G -C -i 3 -k
>>> -t 1m -w -r -R -W -x -N 96 -o
>>> /newScratch/hp.stripeC48/IOR.dat
>>> We have used lustre_config to create the file system.
>> On the other hand, Lustre provides basic I/O performance
>> utilities (under lustre-iokit). You can use them step by
>> step to measure the performance of each basic element
>> (back-end storage, obdfilter, and network), which can
>> help you locate where the performance bottleneck is.
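A bottom-up pass with lustre-iokit might look like the following. This is a sketch only: the exact environment-variable names and targets vary between releases, so check each script's header before running, and note that sgpdd-survey is destructive and must only ever be pointed at raw, unformatted LUNs.

```shell
# 1) Raw LUN bandwidth (DESTRUCTIVE - unformatted devices only;
#    /dev/sg0 and /dev/sg1 are placeholder names):
scsidevs="/dev/sg0 /dev/sg1" sh sgpdd-survey

# 2) The obdfilter/ldiskfs layer on each OSS:
size=8192 thrhi=16 case=disk sh obdfilter-survey

# 3) Client-to-OSS network (LNET) in isolation
#    (oss1/oss2 are placeholder hostnames):
case=network targets="oss1 oss2" sh obdfilter-survey
```

Each step should come close to the one below it; a large drop between two steps localizes the bottleneck to that layer.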
>>
>>
>> Cheers,
>> Nasf
>>> Appreciate your help.
>>>
>>> Regards
>>> SP
>>>
>>>
>>>
>>> _______________________________________________
>>> Lustre-discuss mailing list
>>> Lustre-discuss at lists.lustre.org
>>> http://lists.lustre.org/mailman/listinfo/lustre-discuss
