[lustre-discuss] Lustre on ZFS poor direct I/O performance

Jones, Peter A peter.a.jones at intel.com
Fri Oct 14 13:10:43 PDT 2016


Riccardo

I would imagine that knowing the Lustre and ZFS versions you are using
would be useful information for anyone trying to advise you.
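
For example, the versions can be gathered on a running node with something
like the following (a rough sketch assuming ZFS on Linux; exact commands
vary by release):

  # Lustre version
  lctl get_param version

  # ZFS and SPL kernel module versions
  modinfo zfs | grep -iw version
  modinfo spl | grep -iw version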

Peter

On 10/14/16, 12:22 PM, "lustre-discuss on behalf of Riccardo Veraldi"
<lustre-discuss-bounces at lists.lustre.org on behalf of
Riccardo.Veraldi at cnaf.infn.it> wrote:

>Hello,
>
>I would like to know how I can improve the performance of my Lustre cluster.
>
>I have 1 MDS and 1 OSS with 20 OSTs defined.
>
>Each OST is an 8-disk RAIDZ2.
>
>Single-process write performance is around 800MB/sec.
>
>However, if I force direct I/O, for example by using oflag=direct with dd
>and a 1MB block size, write performance drops as low as 8MB/sec, and each
>write takes about 120ms.
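>
>The kind of test I am running is roughly the following (the mount point
>/mnt/lustre and the file size are just examples):
>
>  # buffered write, around 800MB/sec
>  dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=1024
>
>  # direct I/O write, drops to around 8MB/sec
>  dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=1024 oflag=direct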
>
>I used these ZFS settings
>
>options zfs zfs_prefetch_disable=1
>options zfs zfs_txg_history=120
>options zfs metaslab_debug_unload=1
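>
>These are modprobe options (e.g. set in /etc/modprobe.d/zfs.conf); the
>values actually in effect can be checked at runtime, for example:
>
>  cat /sys/module/zfs/parameters/zfs_prefetch_disable
>  cat /sys/module/zfs/parameters/zfs_txg_history
>  cat /sys/module/zfs/parameters/metaslab_debug_unload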
>
>I am quite worried about the low performance.
>
>Any hints or suggestions that might help me improve the situation?
>
>
>thank you
>
>
>Rick
>
>
>_______________________________________________
>lustre-discuss mailing list
>lustre-discuss at lists.lustre.org
>http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


