[lustre-discuss] Lustre on ZFS poor direct I/O performance

Ben Evans bevans at cray.com
Mon Oct 17 07:28:38 PDT 2016


I'm guessing that you have more disk bandwidth than network bandwidth.
Adding more OSSes and distributing the OSTs among them would probably help
in the general case, but not necessarily the single-dd case.
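
To confirm where the bottleneck is, an LNet self-test between one client and
the OSS will show what the network can actually deliver, roughly along these
lines (the NIDs below are placeholders for your client and OSS):

  modprobe lnet_selftest
  export LST_SESSION=$$
  lst new_session rw_test
  # placeholder NIDs: one client and the OSS
  lst add_group clients 10.0.0.2@o2ib
  lst add_group servers 10.0.0.1@o2ib
  lst add_batch bulk_rw
  # 1MB bulk writes from the client to the OSS, comparable to the dd block size
  lst add_test --batch bulk_rw --from clients --to servers brw write size=1M
  lst run bulk_rw
  # sample throughput counters for ~30 seconds, then tear down
  lst stat servers & sleep 30; kill $!
  lst end_session

Comparing the rate lst reports against the raw pool bandwidth should tell you
which side is the limit.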

On 10/14/16, 3:22 PM, "lustre-discuss on behalf of Riccardo Veraldi"
<lustre-discuss-bounces at lists.lustre.org on behalf of
Riccardo.Veraldi at cnaf.infn.it> wrote:

>Hello,
>
>I would like to know how I can improve the performance of my Lustre cluster.
>
>I have 1 MDS and 1 OSS with 20 OSTs defined.
>
>Each OST is an 8-disk RAIDZ2.
>
>Single-process write performance is around 800 MB/s.
>
>However, if I force direct I/O, for example with oflag=direct in dd and a
>1MB block size, the write performance drops to as low as 8 MB/s, and each
>write has a latency of about 120 ms.
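>
>For reference, the direct I/O test is essentially a command along these
>lines (the output file path is a placeholder):
>
>  dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=1000 oflag=direct
>
>Without oflag=direct the same command gives the ~800 MB/s figure above.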
>
>I am using these ZFS module settings:
>
>options zfs zfs_prefetch_disable=1
>options zfs zfs_txg_history=120
>options zfs metaslab_debug_unload=1
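>
>The live values can be double-checked under /sys/module/zfs/parameters,
>for example:
>
>  # prints 1 if the prefetch-disable setting is active
>  cat /sys/module/zfs/parameters/zfs_prefetch_disable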
>
>I am quite worried about the low performance.
>
>Any hints or suggestions that might help me improve the situation?
>
>
>thank you
>
>
>Rick
>
>
>_______________________________________________
>lustre-discuss mailing list
>lustre-discuss at lists.lustre.org
>http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


