[Lustre-discuss] OSSes on dual IOH motherboards

Kevin Van Maren kevin.van.maren at oracle.com
Thu Jun 2 18:49:09 PDT 2011


Mark,

In addition to thread pinning, see also Bug 22078, which allows a
different network interface to be used for different OSTs on the same
server: a single IB interface is not enough to saturate one IOH, let
alone multiple IOHs.
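
As a rough sketch (not the syntax the bug itself adds), an OSS with one
HCA per IOH would first bring up two o2ib networks through the usual
LNET "networks" module option; Bug 22078 is then what lets each OST be
directed at one of them.  Untested, and the interface names, network
numbering, and file path below are just placeholders:

    #!/usr/bin/env python
    # Illustrative only: append an LNET module option that brings up two
    # o2ib networks, one per IB HCA, so traffic for different OSTs can
    # leave the OSS through different IOHs.  Interface names and the
    # config file path are assumptions for this sketch.
    lnet_opts = 'options lnet networks="o2ib0(ib0),o2ib1(ib1)"\n'

    with open("/etc/modprobe.d/lustre.conf", "a") as conf:
        conf.write(lnet_opts)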

Normally all the OST service threads are in a shared pool, where any
thread can service any incoming request for any OST.
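
Because the pool is shared, any pinning has to be applied to the pool
(or a slice of it) rather than per OST, e.g. binding the I/O threads to
the CPUs local to the IOH behind their HCA and RAID controller.  A
minimal sketch, assuming the threads show up as kernel threads named
ll_ost_io* and that node 0's cores are 0-3:

    #!/usr/bin/env python
    # Sketch: bind all Lustre OST I/O service threads to the CPUs of one
    # NUMA node using taskset.  The thread-name prefix (ll_ost_io) and
    # the node-0 CPU list are assumptions; adjust for the real layout.
    import os
    import subprocess

    NODE0_CPUS = "0-3"        # assumed: cores local to socket/IOH 0

    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/comm" % pid) as f:
                comm = f.read().strip()
        except IOError:
            continue          # thread exited while we were scanning
        if comm.startswith("ll_ost_io"):
            # taskset -p -c <cpulist> <pid> sets that thread's affinity
            subprocess.call(["taskset", "-p", "-c", NODE0_CPUS, pid])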

The most common server configuration is probably still dual-socket 
single IOH.

Kevin


Andreas Dilger wrote:
> Look for the Bull NUMIOA presentation from the recent LUG. The short story is that OST thread pinning is critical to getting good performance.  The numbers are something like 3.6 GB/s without, and 6.0 GB/s with thread affinity.
>
> Cheers, Andreas
>
> On 2011-06-02, at 7:23 PM, Mark Nelson <mark at msi.umn.edu> wrote:
>
>   
>> Hi List,
>>
>> I was wondering if anyone here has looked at the performance 
>> characteristics of Lustre OSSes on dual-Tylersburg motherboards with 
>> RAID controllers split up onto separate IO hubs.  I imagine that without 
>> proper pinning of service threads to the right CPUs/IOHs and memory pools 
>> this could cause some nasty QPI contention.  Is this actually a problem 
>> in practice?  Is it possible to pin service threads in a reasonable way 
>> based on which OST is involved?  Is anyone doing this on purpose to try 
>> to gain more overall PCIe bandwidth?
>>
>> I imagine that in general it's probably best to stick with a 
>> single-socket, single-IOH OSS: no pinning to worry about, a very direct 
>> QPI setup, consistent performance characteristics, etc.
>>
>> Thanks,
>> Mark