[lustre-discuss] Lustre traffic slow on OPA fabric network

Robin Humble rjh+lustre at cita.utoronto.ca
Tue Jul 10 02:03:30 PDT 2018


Hi Kurt,

On Tue, Jul 03, 2018 at 02:59:22PM -0400, Kurt Strosahl wrote:
>   I've been seeing a great deal of slowness from clients on an OPA network accessing lustre through lnet routers.  The nodes take a very long time to complete things like lfs df, and show lots of dropped / reestablished connections.  The OSS systems show this as well, and occasionally report that all routes are down to a host on the omnipath fabric.  They also show large numbers of bulk callback errors.  The lnet routers show large numbers of PUT_NACK messages, as well as Abort reconnection messages for nodes on the OPA fabric.

I don't suppose you're talking to a super-old Lustre version via the
lnet routers?

We see excellent performance OPA to IB via lnet routers with 2.10.x
clients and 2.9 servers, but when we try to talk to IEEL 2.5.41
servers we see pretty much exactly the symptoms you describe.

Strangely, direct mounts of the old Lustre on new clients over IB work
ok, but not via lnet routers to OPA. Old Lustre to new clients on tcp
networks is also ok. lnet self-tests OPA to IB work fine too (a sketch
of the kind of run we do is below); it's just when we do the actual
mounts...
Anyway, we are going to try to resolve the problem by updating the
IEEL servers to 2.9 or 2.10.
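
In case it's useful, this is roughly the lnet_selftest run we use to
check the OPA-to-IB path through the routers (a bulk read test, along
the lines of the standard selftest recipe). The NIDs here are made-up
placeholders, so substitute your own:

  #!/bin/bash
  # bulk read test from the OPA clients to the IB servers
  modprobe lnet_selftest
  export LST_SESSION=$$
  lst new_session rw_opa_ib
  lst add_group clients 192.168.1.[10-11]@o2ib   # OPA side (placeholder NIDs)
  lst add_group servers 10.0.0.[1-2]@o2ib1       # IB side, reached via routers
  lst add_batch bulk_rw
  lst add_test --batch bulk_rw --from clients --to servers brw read size=1M
  lst run bulk_rw
  lst stat clients servers & sleep 30; kill $!
  lst end_session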

Hmm, now that I think of it, we did have to tweak the ko2iblnd options
a lot on the lnet routers to get them this stable. I forget the exact
symptoms we were seeing though, sorry.
We found the lowest common denominator settings between the IB network
and the OPA one, and tuned ko2iblnd on the lnet routers down to that.
If it finds even one OPA card then Lustre imposes an aggressive OPA
config on all IB networks, which made our mlx4 cards on an ipath/qib
fabric unhappy.
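
If you want to see what a router has actually ended up with after the
module loading is done, the live values are visible under /sys/module,
e.g.:

  # print the ko2iblnd settings the router is actually running with
  for p in /sys/module/ko2iblnd/parameters/*; do
      echo "$(basename $p) = $(cat $p)"
  done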

FWIW, for our hardware combo, ko2iblnd options are

  options ko2iblnd-opa peer_credits=8 peer_credits_hiw=0 credits=256 concurrent_sends=0 ntx=512 map_on_demand=0 fmr_pool_size=512 fmr_flush_trigger=384 fmr_cache=1 conns_per_peer=1

I don't know what most of these do, so please take them with a grain
of salt.
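
Note the ko2iblnd-opa name: that options line lives in
/etc/modprobe.d/ko2iblnd.conf, where (if I remember right) stock
Lustre sets up an alias and an install hook so a probe script loads
the -opa option set whenever it detects an OPA card - which is how the
aggressive defaults get imposed everywhere. We just swapped our
tuned-down options line into that layout, roughly:

  # /etc/modprobe.d/ko2iblnd.conf (stock layout, our options swapped in)
  alias ko2iblnd-opa ko2iblnd
  options ko2iblnd-opa peer_credits=8 peer_credits_hiw=0 credits=256 concurrent_sends=0 ntx=512 map_on_demand=0 fmr_pool_size=512 fmr_flush_trigger=384 fmr_cache=1 conns_per_peer=1
  install ko2iblnd /usr/sbin/ko2iblnd-probe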

cheers,
robin

