[Lustre-discuss] One or two OSS, no difference?

Oleg Drokin Oleg.Drokin at Sun.COM
Thu Mar 4 12:48:58 PST 2010


Hello!

   This is pretty strange. Are there any differences in network topology that can explain this?
   If you remove the first client, does the second one show performance at the level
   of the first, but as soon as you start the load on the first again, does the second
   client's performance drop?

Bye,
    Oleg
On Mar 4, 2010, at 1:45 PM, Jeffrey Bennett wrote:

> Hi Oleg, thanks for your reply
> 
> I was actually testing with only one client. When I add a second client using a different file, one client gets all the performance and the other one gets very low performance. Any recommendations?
> 
> Thanks in advance
> 
> jab
> 
> 
> -----Original Message-----
> From: Oleg.Drokin at Sun.COM [mailto:Oleg.Drokin at Sun.COM] 
> Sent: Wednesday, March 03, 2010 5:20 PM
> To: Jeffrey Bennett
> Cc: lustre-discuss at lists.lustre.org
> Subject: Re: [Lustre-discuss] One or two OSS, no difference?
> 
> Hello!
> 
> On Mar 3, 2010, at 6:35 PM, Jeffrey Bennett wrote:
>> We are building a very small Lustre cluster with 32 patchless clients and two OSS servers. Each OSS has one OST backed by 1 TB of solid-state drives. Everything is connected with dual-port DDR InfiniBand.
>> 
>> For testing purposes, I am enabling/disabling one of the OSS/OST by using the "lfs setstripe" command. I am running XDD and vdbench benchmarks.
>> 
>> Does anybody have an idea why there is no difference in MB/s or random IOPS when using one OSS versus two? A quick test with "dd" also shows the same MB/s whether one or two OSTs are used.
> 
> I wonder if you just don't saturate even one OST (both the backend SSD and the IB interconnect) with this number of clients. Does the total throughput decrease as you reduce the
> number of active clients, and increase as you add even more?
> Increasing the maximum number of in-flight RPCs might help in that case.
> Also, are all of your clients writing to the same file, or does each client do I/O to a separate file (I hope)?
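> As a rough sketch (the wildcarded OSC names and the value 32 below are only examples, not a tested recommendation), the per-OSC RPC limit can be inspected and raised with lctl on a client:
> 
>     # show the current RPC limit for every OSC on this client
>     lctl get_param osc.*.max_rpcs_in_flight
> 
>     # raise the limit, e.g. to 32; pick a value suited to your setup
>     lctl set_param osc.*.max_rpcs_in_flight=32
> 
> Note that lctl set_param changes only the running value; it does not persist across a remount.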
> 
> Bye,
>    Oleg
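As a minimal sketch of the striping setup described above (the mount point and file names are only examples), the stripe count of a test directory controls whether new files land on one OST or both, and lfs getstripe confirms where a file was actually placed:

    # new files in this directory will be striped across both OSTs
    lfs setstripe -c 2 /mnt/lustre/testdir

    # or restrict new files to a single OST
    lfs setstripe -c 1 /mnt/lustre/testdir

    # verify which OST objects back a given test file
    lfs getstripe /mnt/lustre/testdir/testfile

With a stripe count of 1, a single dd or XDD stream only ever touches one OST, so adding a second OSS cannot change that file's throughput; spreading the load needs either a stripe count of 2 or multiple files placed on different OSTs.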