<!DOCTYPE html><html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p>Hi,</p>
<p>The CEA hit this kind of issue, but with a Lustre router (server
2.12 -> router 2.15). An "lctl ping" from the router to the server
makes the route go "up".</p>
<p>We solved this by setting map_on_demand=1 on the router.</p>
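<p>For reference, the parameter can be set persistently as a ko2iblnd
module option. A minimal sketch (the file name is just a convention;
adjust to your setup):</p>
<pre># /etc/modprobe.d/ko2iblnd.conf (illustrative file name)
# map_on_demand=1 lets the o2ib LND negotiate max_frags with peers
options ko2iblnd map_on_demand=1</pre>
<p>The option is only read when the module loads, so the modules have
to be reloaded for it to take effect.</p>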
<p>Note that map_on_demand=1 should be the default on a 2.15 node to
avoid this kind of issue: <a href="https://jira.whamcloud.com/browse/LU-15186">LU-15186</a> (Default ko2iblnd map_on_demand to 1).</p>
<p>But, "lnetctl import" stills set map_on_demand=0 by default. This
should be solve by <a class="issue-link" data-issue-key="LU-15538" href="https://jira.whamcloud.com/browse/LU-15538" id="key-val" rel="68590">LU-15538</a>/<a class="issue-link" data-issue-key="LU-12452" href="https://jira.whamcloud.com/browse/LU-12452" id="key-val" rel="55988">LU-12452</a>.</p>
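<p>Until then, if you configure LNet with "lnetctl import", you can set
the tunable explicitly in the YAML so the import does not fall back
to 0. A sketch, assuming a single o2ib net on ib0 (trim/adjust to your
real configuration):</p>
<pre>net:
    - net type: o2ib
      local NI(s):
        - interfaces:
              0: ib0
          lnd tunables:
              map_on_demand: 1</pre>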
<p>You can verify this with "lnetctl net show -v" on the server
side.<br>
</p>
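<p>The output looks roughly like this (abridged and illustrative; the
NID and the other tunables will differ on your system):</p>
<pre># lnetctl net show -v
net:
    - net type: o2ib
      local NI(s):
        - nid: 172.16.19.1@o2ib    # illustrative NID
          status: up
          lnd tunables:
              map_on_demand: 1     # the value to check</pre>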
<p>In your case, map_on_demand=1 on the client side (2.12) does not
work because the 2.15 server asks for 257 frags and the 2.12 node is
not able to negotiate this value (256 is the maximum for a
2.12 node).</p>
<p>Pinging from the server helps because a 2.15 node is able to
negotiate frags (even with map_on_demand=0, <a href="https://jira.whamcloud.com/browse/LU-15094">LU-15094</a>):</p>
<ul>
<li>The 2.15 node initiates the connection with max_frags=257</li>
<li>The 2.12 node rejects the connection and requests max_frags=256</li>
<li>The 2.15 node retries with max_frags=256 and saves this value in
memory for the remote peer</li>
<li>The 2.12 node accepts the connection</li>
<li>The 2.12 node then initiates its own connection with max_frags=256</li>
<li>The 2.15 node accepts the connection because it uses the saved
value of max_frags=256.</li>
</ul>
<p>So, setting map_on_demand=1 on the server side (on the 2.15
nodes) should solve your issue.</p>
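<p>To apply it, something like the following on each 2.15 server should
work (a sketch; the exact steps depend on how your targets and LNet
are managed, and it requires stopping Lustre on the node):</p>
<pre># on each 2.15 server, with the Lustre targets stopped
echo "options ko2iblnd map_on_demand=1" > /etc/modprobe.d/ko2iblnd.conf
umount -a -t lustre    # stop the Lustre targets
lustre_rmmod           # unload the modules so the new option is read
# remount the targets, then verify with "lnetctl net show -v"</pre>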
<p>Regards,</p>
<p>Etienne<br>
</p>
<div class="moz-cite-prefix">On 7/24/25 22:23, Makia Minich via
lustre-discuss wrote:<br>
</div>
<blockquote type="cite" cite="mid:5C095D0B-188D-42DF-8512-A28C819BAB10@systemfabricworks.com">
Recently we upgraded our lustre servers to RHEL 8 with lustre
2.15.5 but due to scheduling the clients are still currently at
RHEL 7 with lustre version 2.12.6. Infiniband is the interconnect.
<div><br>
</div>
<div>We've found that the client will fail to mount unless we run
a "lnetctl ping" from the server side to the client. Once that
happens, the client will ultimately mount. We've seen the
following error in the logs on the server side:
<div><br>
</div>
<div><font face="Courier New">lNet:
1920125:0:(o2iblnd_<a class="moz-txt-link-freetext" href="cd.c:2587:kiblnd_passive_connect())">cd.c:2587:kiblnd_passive_connect())</a> Can’t
accept conn from 172.16.19.6@o2ib (version 12): max_frags
256 incompatible without FMR_pool (257 wanted)</font></div>
<div><br>
</div>
<div>Attempting to set map_on_demand on the client side didn't
help, resulting in the same error. Are there any parameters or
configuration changes that may help the situation? At this
time we aren't able to upgrade the client side to RHEL 8, so
we're ultimately limited in available versions and looking for
ideas on what to try next.</div>
</div>
<div><br>
</div>
<div>Thanks.</div>
</blockquote>
</body>
</html>