[lustre-discuss] Lustre 2.10.3 on ZFS - slow read performance

Alex Vodeyko alex.vodeyko at gmail.com
Fri Mar 30 06:32:28 PDT 2018


Hi,

I'm still fighting with this setup:
a zpool with three or six 8+2 raidz2 vdevs shows very slow reads (0.5
GB/s or even less, compared with 2.5 GB/s writes)...
I've tried recordsizes up to 16M and also zfs module parameters, e.g. from
http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2017-March/014307.html
- unfortunately it did not help.
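
For reference, the kind of tuning I've been experimenting with looks
roughly like the following (these are the usual ZFS 0.7 prefetch/read
knobs; the values and the <pool>/<ost> names below are only placeholders,
not necessarily what the thread above suggests):

  # allow a recordsize above 1M (needs the large_blocks pool feature)
  echo 16777216 > /sys/module/zfs/parameters/zfs_max_recordsize
  zfs set recordsize=16M <pool>/<ost>

  # prefetch / async-read settings I tried varying
  echo 0  > /sys/module/zfs/parameters/zfs_prefetch_disable
  echo 67108864 > /sys/module/zfs/parameters/zfetch_max_distance
  echo 32 > /sys/module/zfs/parameters/zfs_vdev_async_read_max_active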

Everything is quite good with six individual 8+2 raidz2 pools (0.6+ GB/s
read/write from each, totaling 3.6+ GB/s), so I would probably have to go
with six OSTs (one 8+2 raidz2 pool each) - roughly as sketched below.
But I still hope such setups (a single zpool with three or six 8+2 raidz2
vdevs) are running in production somewhere, so I kindly ask you to share
your setups or any ideas that would help diagnose this problem.
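
In case it helps to compare, the per-OST layout I'm falling back to is
roughly the following, repeated six times (ost0..ost5); the device paths,
fsname and MGS NID are placeholders, and ashift=12 matches the 4k
physical sector drives:

  zpool create -o ashift=12 -O canmount=off ost0 raidz2 \
      /dev/disk/by-id/<disk1> ... /dev/disk/by-id/<disk10>
  mkfs.lustre --ost --backfstype=zfs --fsname=<fsname> --index=0 \
      --mgsnode=<mgs-nid> ost0/ost0
  mount -t lustre ost0/ost0 /mnt/ost0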

Thank you in advance,
Alex



2018-03-27 22:55 GMT+03:00 Alex Vodeyko <alex.vodeyko at gmail.com>:
> Hi,
>
> I'm setting up the new lustre test setup with the following hw config:
> - 2x servers (dual E5-2650v3, 128GB RAM), one MGS/MDS, one OSS
> - 1x HGST 4U60G2 JBOD with 60x 10TB HUH721010AL5204 drives (4k
> physical, 512 logical sector size), connected to OSS using lsi 9300-8e
>
> Lustre 2.10.3 servers/clients (CentOS 7.4), ZFS 0.7.5 and also 0.7.7
>
> Initially I planned to use 2 zpools with three 8+2 vdevs or 1 zpool
> with six 8+2 vdevs.
>
..
