[lustre-discuss] poor performance on reading small files

Riccardo Veraldi Riccardo.Veraldi at cnaf.infn.it
Wed Aug 3 18:28:38 PDT 2016


On 03/08/16 10:57, Dilger, Andreas wrote:
> On Jul 29, 2016, at 03:33, Oliver Mangold <Oliver.Mangold at EMEA.NEC.COM> wrote:
>> On 29.07.2016 04:19, Riccardo Veraldi wrote:
>>> I am using lustre on ZFS.
>>>
>>> While write performance is excellent even on smaller files, I find
>>> there is a drop in performance
>>> when reading 20KB files. Performance can go as low as 200MB/sec or even
>>> less.
>> Getting 200 MB/s with 20kB files means you have to do 10000 metadata
>> ops/s. Don't want to say it is impossible to get more than that, but at
>> least with MDT on ZFS this doesn't sound bad either. Did you run an
>> mdtest on your system? Maybe some serious tuning of MD performance is in
>> order.
> I'd agree with Oliver that getting 200MB/s with 20KB files is not too bad.
> Are you using HDDs or SSDs for the MDT and OST devices?  If using HDDs,
> are you using SSD L2ARC to allow the metadata and file data be cached in
> L2ARC, and allowing enough time for L2ARC to be warmed up?
>
> Are you using TCP or IB networking?  If using TCP then there is a lower
> limit on the number of RPCs that can be handled compared to IB.
>
> Cheers, Andreas
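[Editor's note: Oliver's 10,000 ops/s figure above follows directly from the numbers in the thread; a quick back-of-the-envelope sketch (assuming one metadata lookup per small file read, decimal units as used in the thread):]

```python
# Back-of-the-envelope: metadata ops/s needed to sustain a given
# aggregate throughput when every read is a small file
# (assumes one metadata lookup per file read).
def metadata_ops_per_sec(throughput_bytes_per_s, file_size_bytes):
    return throughput_bytes_per_s / file_size_bytes

# 200 MB/s of 20 kB files, as discussed in the thread
ops = metadata_ops_per_sec(200e6, 20e3)
print(ops)  # 10000.0
```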
Yes Andreas, perhaps it is not too bad, and in my particular situation I am 
reading bunches of 20KB chunks inside a bigger 200GB file.
I found benefits from reducing the ZFS recordsize, which had initially 
been set to a quite large value.
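[Editor's note: the recordsize effect described here can be quantified: ZFS reads and checksums whole records, so each small random read inside a large file costs at least one full record. A rough sketch; the specific recordsize values are assumptions for illustration, since the thread does not state what the original value was:]

```python
# Read amplification for small random reads under a given ZFS recordsize.
# The recordsize values below are illustrative assumptions; the thread
# does not say what value was originally configured.
def read_amplification(recordsize_bytes, request_bytes):
    # ZFS reads whole records, so each small random read costs
    # at least one full record of I/O.
    return recordsize_bytes / request_bytes

large = read_amplification(1024 * 1024, 20 * 1024)  # 1 MiB records
small = read_amplification(128 * 1024, 20 * 1024)   # 128 KiB (ZFS default)
print(large, small)  # 51.2 6.4
```

With a large recordsize, each 20KB read drags in far more data than requested, which is consistent with the benefit seen from shrinking it.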
I am using SSD disks and I did not set up an L2ARC because I do not think 
I'd have much benefit in my situation.
So it is not a Lustre problem at all.
thank you!

Riccardo

