Hi,

Are you mixing OS versions? You may want to align the ko2iblnd kernel module parameters (especially "map_on_demand") on both the servers and the clients. The default seems to differ between major kernel versions, and a mismatch can cause problems; a quick sketch of the commands to check and pin the value is at the bottom of this message. Also double-check your firewall, if one is present.

Best regards,
Angelos
(Sent from mobile, please pardon any typos and cursoriness.)

On 21/6/2023 0:11, Youssef Eldakar via lustre-discuss <lustre-discuss@lists.lustre.org> wrote:

> In a cluster of ~100 Lustre clients (compute nodes) connected to the MDS and OSS over Intel True Scale InfiniBand (a discontinued product), we started seeing certain nodes fail to mount the Lustre file system and return an I/O error on an LNet (lctl) ping, even though an ibping test to the MDS shows no errors. We tried rebooting the problematic nodes and even fresh-installing the OS and Lustre client, which did not help. Rebooting the MDS sometimes seems to help momentarily once it comes back up, but the same set of problematic nodes eventually reverts to the state where they fail to ping the MDS over LNet.
>
> Thank you for any pointers we may pursue.
>
> Youssef Eldakar
> Bibliotheca Alexandrina
> www.bibalex.org
> hpc.bibalex.org
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
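
P.S. A minimal sketch of what I mean, in case it helps. The value 256 and the MDS NID below are only placeholders; use whatever your servers actually report.

    # Check the currently loaded value on each server and client
    cat /sys/module/ko2iblnd/parameters/map_on_demand

    # /etc/modprobe.d/ko2iblnd.conf -- keep this line identical on all nodes
    # (256 is a placeholder; copy the value your servers already use)
    options ko2iblnd map_on_demand=256

    # Module options only take effect after the LNet/Lustre modules are
    # reloaded (unmount Lustre first), then re-test from a problematic client
    lustre_rmmod && modprobe lustre
    lctl ping <MDS-NID>    # e.g. 192.0.2.10@o2ib -- substitute your MDS NID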