[Lustre-discuss] Questions about benchmarking Lustre's local filesystem

teng wang tzw0019 at gmail.com
Thu Feb 19 14:26:10 PST 2015


Thanks for your advice, Ron. The point is that I want to monitor how much
time each OSS spends on data transfer over the network and how much time
it spends accessing the disk for a specific application. Is there any tool
that can achieve such a dissection without modifying the Lustre code?
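[Editor's note: one option that needs no code changes is to read the per-OST statistics each OSS already exports through `lctl`. The `brw_stats` file includes a disk "I/O time" histogram, while the `stats` counters give RPC-level totals; the exact parameter paths below are a sketch and can vary between Lustre versions.]

```shell
# Sketch: sample server-side I/O stats on an OSS (assumes a Lustre 2.x
# OSS with obdfilter/ldiskfs targets; run as root on the server).

# Per-OST histograms, including the disk "I/O time" distribution:
lctl get_param obdfilter.*.brw_stats

# Aggregate read/write counts and byte totals per OST:
lctl get_param obdfilter.*.stats

# To attribute stats to one application: clear the counters, run the
# workload, then re-read and take the delta.
lctl set_param obdfilter.*.stats=clear
# ... run the application ...
lctl get_param obdfilter.*.stats
```

This does not split a single RPC into "network time" vs "disk time" directly, but the `brw_stats` I/O-time histogram approximates the disk-side component, and the difference from client-observed latency approximates the network component.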

On Thu, Feb 19, 2015 at 4:11 PM, Ron Croonenberg <ronc at lanl.gov> wrote:

> That depends on the number of IO nodes, compute nodes, lanes, switches,
> LNETs, OSSs, etc.; it is hard to say.
>
> On 02/19/2015 03:00 PM, teng wang wrote:
>
>> Thanks for all your answers. I just took some time to understand
>> how Obdfilter-survey works. It works fine for the Lustre local filesystem.
>> But is there any Lustre tool that can directly profile the time spent
>> on the network and on the disk for an application running at the
>> user level?
>>
>> T
>>
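[Editor's note: for reference, obdfilter-survey (from lustre-iokit) can be run in a disk-only mode that bypasses the network, or a network-only mode that exercises LNET alone, which is the closest stock tool to the dissection asked about above. The sizes, thread counts, and install path below are illustrative assumptions:]

```shell
# Sketch: obdfilter-survey runs, executed as root on the OSS.
# Variable names follow the lustre-iokit convention; defaults and the
# script path depend on the installed version.

# Disk-only survey: drives obdfilter/ldiskfs directly, no network.
size=8192 nobjlo=1 nobjhi=4 thrlo=1 thrhi=16 case=disk \
    sh /usr/bin/obdfilter-survey

# Network-only survey: measures LNET throughput to the named server
# (run from a client; "oss1" is a hypothetical OSS hostname).
size=8192 case=network targets="oss1" sh /usr/bin/obdfilter-survey
```

Comparing the two surveys gives a rough disk-vs-network breakdown for the hardware, though not per-application.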
>> On Thu, Feb 19, 2015 at 10:47 AM, teng wang <tzw0019 at gmail.com
>> <mailto:tzw0019 at gmail.com>> wrote:
>>
>>     Is there any way to benchmark the local filesystem performance of
>>     Lustre on the OSS side? For example, I want to benchmark the random
>>     and sequential I/O bandwidth of the local filesystem of Lustre 2.5
>>     using IOR. Is there any way to run IOR directly on its local
>>     filesystem without having to go through the network?
>>
>>
>>     Thanks!
>>
>>     Teng
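[Editor's note: a common workaround, not an official Lustre tool, is to mount the OST's backing ldiskfs device locally on the OSS and point IOR's POSIX backend at it. Device paths, mount points, and sizes below are hypothetical; never do this on an OST that is in service.]

```shell
# Sketch: benchmark the OSS's local disk with IOR, bypassing LNET.
# WARNING: use a spare device or a *stopped* OST; /dev/sdb and the
# mount points here are placeholders.

umount /mnt/ost0                     # stop the OST first, if running
mount -t ldiskfs /dev/sdb /mnt/test  # ldiskfs is Lustre's patched ext4

# Sequential write+read: 4 tasks, 1 GiB block per task, 1 MiB transfers
mpirun -np 4 ior -a POSIX -w -r -b 1g -t 1m -o /mnt/test/iorfile

# Same, but with randomized offsets (-z) for a random-I/O measurement
mpirun -np 4 ior -a POSIX -w -r -z -b 1g -t 1m -o /mnt/test/iorfile
```

This measures the ldiskfs layer alone; it will read somewhat faster than the same disk accessed through the full obdfilter stack.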
>>
>>
>>
>>
>> _______________________________________________
>> Lustre-discuss mailing list
>> Lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>>

