<div dir="ltr"><div><b>New performance numbers (1.6.5.1 vs 1.6.4.3):</b><br></div>
<div><br> ---------------------------------------------------------------------------------------<br> </div>
<div> Client: Intel X5450 @ 3.00GHz, 2x quad-core, 16GB RAM,<br> InfiniBand, RHEL4 x86_64<br></div>
<div> Servers: official Lustre 1.6.4.1</div>
<div> Single-stream writing (lmdd of=/lustre/tstfileXX bs=1M time=200 fsync=1)</div>
<div> ---------------------------------------------------------------------------------------<br></div>
<div> <u>2.6.9-67.0.20.ELsmp unmodified, OFED 1.2, <b>319 MB/sec</b><br></u> <u>Lustre 1.6.5.1 (with checksumming):</u></div>
<div> Client loads: lmdd - 100% (1 CPU), ptlrpcd - 5%, pdflush - 15%</div>
<div> On 2 OSS servers in use: circa 50% total sys (2 CPUs), circa 10% I/O wait.</div>
<div> </div>
<div> <u>2.6.9-67.0.7.EL_lustre.1.6.5.1smp, OFED 1.3, <b>340 MB/sec</b><br></u> <u>Lustre 1.6.5.1 (with checksumming):</u>
<div> Client loads: lmdd - 100% (1 CPU), ptlrpcd - 5%, pdflush - 15%</div></div>
<div> On 2 OSS servers in use: circa 50% total sys (2 CPUs), circa 12% I/O wait.</div>
<div><br> <u>2.6.9-67.0.20.ELsmp unmodified, OFED 1.2, <b>671 MB/sec</b><br></u> <u>Lustre 1.6.5.1 (no checksumming):</u><br></div>
<div> Client loads: lmdd - 100% (1 CPU), ptlrpcd - 15%, pdflush - 2-3%<br> </div>
<div> On 2 OSS servers in use: circa 35% total sys (2 CPUs), circa 35% I/O wait. </div>
<div> </div>
<div> <u>2.6.9-67.0.7.EL_lustre.1.6.5.1smp, OFED 1.3, <b>670 MB/sec</b><br></u> <u>Lustre 1.6.5.1 (no checksumming):</u>
<div> Client loads: lmdd - 100% (1 CPU), ptlrpcd - 12%, pdflush - 2-3%<br> </div>
<div> On 2 OSS servers in use: circa 32% total sys (2 CPUs), circa 32% I/O wait.<br><br> <u>2.6.9-67.0.4.EL_lustre.1.6.4.3smp, OFED 1.2, <b>843 MB/sec</b></u><br> <u>Lustre 1.6.4.3</u><br>
Client loads: lmdd - 100% (1 CPU), ptlrpcd - 20%, pdflush - 1%<br> </div></div>
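The "with/no checksumming" runs above differ only in the client-side wire-checksum setting. The exact method used isn't shown, but on a Lustre 1.6.x client this is typically toggled through the per-OSC checksums tunables; a minimal sketch, assuming the standard /proc layout (paths may differ on your installation):

```shell
# Sketch: turn client wire checksums off (0) or on (1) for every OSC.
# Assumption: standard Lustre 1.6.x /proc layout; not necessarily the
# exact method used for the runs reported above.
for f in /proc/fs/lustre/osc/*/checksums; do
    echo 0 > "$f"
done
```

The same effect can usually be achieved with lctl set_param on 1.6.5-era clients.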
<div> On 2 OSS servers in use: circa 33% total sys (2 CPUs), circa 30% I/O wait.<br><br> --------------------------------------------------------------------------------------<br> Running several (2 or 4) simultaneous jobs on the same 1.6.4.3 client<br>
does not improve the aggregate performance. I have seen 750 MB/sec<br> aggregate with 4 streams, and 806 MB/sec aggregate with 2 streams.<br><br> With a 1.6.5.1 client with no checksumming I can get up to 800 MB/sec<br>
aggregate with 4 streams, and some 730 MB/sec with 2 streams.<br><br> But Lustre 1.6.5.1 is noticeably (about 20%) slower on a single stream when<br> compared with 1.6.4.3.<br>
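For reference, the multi-stream aggregate tests amount to launching several lmdd writers to distinct files at once. A sketch, assuming lmdd is in the PATH and /lustre is the client mount (printed as a dry run; drop the echo quoting to actually launch the streams):

```shell
# Dry-run sketch: N simultaneous lmdd streams to distinct files.
# Assumptions: lmdd in PATH, /lustre is the Lustre client mount
# (not verified here); this only prints the commands it would run.
N=4
for i in $(seq 1 "$N"); do
    echo "lmdd of=/lustre/tstfile$i bs=1M time=200 fsync=1 &"
done
echo "wait"
```

Summing the per-stream rates that lmdd reports gives the aggregate figures quoted above.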
<br> Andrei.<br></div></div>