[lustre-discuss] Can't reach full throughput bandwidth on Mellanox

Stepan Beskrovnyy bsm099 at gmail.com
Sun Mar 29 10:33:57 PDT 2026


Hello everyone!

I have run a number of self-tests on EC branch performance so far.

My network config:

7 servers, each with a Mellanox ConnectX-7 NIC with 2x100G ports.

Lustre topology:
1 MGS/MDT server
6 servers, each running 2 OSS

In total: 12 OSSs and 1 MDT. Each OSS sits on an SPDK RAID0 of 8 NVMe
drives with high throughput.

The cluster uses EC 10+2 with a 1M stripe pattern.

ib_write_bw/ib_read_bw work well between nodes and show 100G per
interface. All packets are sent to priority 3.


But with the IOR benchmark I get only 65 Gib/s on both read and write
(--posix.odirect, no caching).
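For context, the IOR invocation looked roughly like this (the process count, block size, transfer size, and mount path here are illustrative placeholders, not the exact values from my runs):

```shell
# Illustrative IOR run: POSIX backend with O_DIRECT to bypass the client
# page cache, file-per-process layout, 1 MiB transfers to match the 1M
# stripe size. -w/-r do a write then a read phase; -e fsyncs after writes.
mpirun -np 96 ior -a POSIX --posix.odirect -w -r -e -F \
    -t 1m -b 16g -o /mnt/lustre/ior_testfile
```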


Network utilization during the tests is only about 20%.

How can I tune my configuration further? Any ideas?

And another question: is there any way in the Erasure Coding branch to
choose the parity-block OSSs manually?


Thanks,
Stepan
