[lustre-discuss] Lustre Module ko2iblnd does not load for Lustre 2.16.1

Shaun Tancheff shaun.tancheff at hpe.com
Tue Sep 16 18:07:29 PDT 2025


On 9/16/25 19:00, Pati, Abhilasha via lustre-discuss wrote:
> Greetings To All,
> 
> 
> I am trying to install the Lustre server packages for Lustre 2.16.1 on 
> RHEL 9.4, using the command: dnf install lustre-all-dkms
> 
> Unfortunately, I get the following messages in the kernel log buffer:
> 
> 
> [167702.252133] in_kernel_ko2iblnd: Unknown symbol rdma_set_reuseaddr (err -22)
> [167702.252135] in_kernel_ko2iblnd: disagrees about version of symbol ib_destroy_cq_user
> [167702.252135] in_kernel_ko2iblnd: Unknown symbol ib_destroy_cq_user (err -22)
> [167702.252138] in_kernel_ko2iblnd: disagrees about version of symbol ib_modify_qp
> [167702.252138] in_kernel_ko2iblnd: Unknown symbol ib_modify_qp (err -22)
> [167702.252144] in_kernel_ko2iblnd: disagrees about version of symbol ib_dma_virt_map_sg
> [167702.252145] in_kernel_ko2iblnd: Unknown symbol ib_dma_virt_map_sg (err -22)
> [167702.252146] in_kernel_ko2iblnd: disagrees about version of symbol rdma_destroy_id
> [167702.252147] in_kernel_ko2iblnd: Unknown symbol rdma_destroy_id (err -22)
> [167702.252152] in_kernel_ko2iblnd: disagrees about version of symbol rdma_accept
> [167702.252152] in_kernel_ko2iblnd: Unknown symbol rdma_accept (err -22)
> [167702.252160] in_kernel_ko2iblnd: disagrees about version of symbol ib_dealloc_pd_user
> [167702.252160] in_kernel_ko2iblnd: Unknown symbol ib_dealloc_pd_user (err -22)
> 
> followed by this LNetError and LustreError:
> 
> [167702.282546] LNetError: 8805:0:(api-ni.c:2616:lnet_load_lnd()) Can't load LND o2ib, module ko2iblnd, rc=256
> [167702.282713] LustreError: 8805:0:(events.c:640:ptlrpc_init_portals()) network initialisation failed: rc = -22
> 
> I should add that we have also installed the Mellanox driver on this 
> machine; its vermagic is 5.14.0-427.31.1_lustre.el9.x86_64, which 
> matches our kernel version. In fact the ib_core module that ko2iblnd 
> depends on also has vermagic 5.14.0-427.31.1_lustre.el9.x86_64.
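
Matching vermagic alone does not make a module loadable: with 
CONFIG_MODVERSIONS enabled, the per-symbol CRCs must also agree, and 
"disagrees about version of symbol" means ko2iblnd was built against a 
different ib_core/rdma_cm ABI than the one installed (typically 
in-kernel RDMA vs. MOFED). One rough way to see which stack ko2iblnd 
was built against, as a sketch (paths are assumptions based on typical 
RHEL and MOFED layouts, adjust to your setup):

   # CRCs of the RDMA symbols ko2iblnd was built against
   modprobe --dump-modversions "$(modinfo -n ko2iblnd)" | grep ib_destroy_cq_user

   # CRC the MOFED stack exports for the same symbol
   grep -w ib_destroy_cq_user /usr/src/ofa_kernel/default/Module.symvers

   # CRC the distro kernel exports for the same symbol
   grep -w ib_destroy_cq_user /usr/src/kernels/$(uname -r)/Module.symvers

Whichever Module.symvers CRC matches the one embedded in ko2iblnd tells 
you which RDMA stack it was built for; if neither matches, the Lustre 
modules need to be rebuilt against the RDMA stack actually installed.
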
> Not sure whether this is related to the above issue, but during the 
> installation of Lustre we also get an error message saying that the 
> build of the lquota.ko module fails:
> 
> Error! Build of lustre/lquota/lquota.ko failed for: 
> 5.14.0-427.31.1_lustre.el9.x86_64 (x86_64)
> Make sure the name and location of the generated module are correct, 
> or consult /var/lib/dkms/lustre-all/2.16.1/build/make.log for more 
> information.
> warning: %post(lustre-all-dkms-2.16.1-1.el9.noarch) scriptlet failed, 
> exit status 7
> When the log file is checked, the following is returned (not sure 
> whether this is an error or a warning):
> 
> # cat /var/lib/dkms/lustre-all/2.16.1/build/make.log | grep quota
> checking if 'quotactl_ops.set_dqblk' takes struct qc_dqblk... yes
> checking whether to enable quota support global control... yes
> config.status: creating lustre/quota/Makefile
> config.status: creating lustre/quota/autoMakefile
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lproc_quota.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lquota_lib.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lquota_disk.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lquota_entry.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qsd_request.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qsd_lib.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qsd_entry.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qsd_lock.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qsd_reint.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/osd-zfs/osd_quota.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qsd_writeback.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qsd_config.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qsd_handler.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qmt_dev.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/osc/osc_quota.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qmt_handler.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qmt_lock.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qmt_entry.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/qmt_pool.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/osd-ldiskfs/osd_quota.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/osd-ldiskfs/osd_quota_fmt.o
>    LD [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lquota.o
>    CC [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lquota.mod.o
>    LD [M]  /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lquota.ko
>    BTF [M] /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lquota.ko
> Skipping BTF generation for /var/lib/dkms/lustre-all/2.16.1/build/lustre/quota/lquota.ko due to unavailability of vmlinux
> Making all in quota
> make[3]: Entering directory '/var/lib/dkms/lustre-all/2.16.1/build/lustre/quota'
> make[3]: Leaving directory '/var/lib/dkms/lustre-all/2.16.1/build/lustre/quota'


lquota fix:
https://review.whamcloud.com/c/fs/lustre-release/+/59409

Landed for master:
4193fd7e44 LU-19047 dkms: module install fix lquota, add ec, kuinit

This fix is not in 2.16.1.
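
Until it ships in a release, one workaround is to apply that patch to 
the DKMS source tree and rebuild; a rough sketch (the source path under 
/usr/src and the patch filename are assumptions, based on the dkms 
module name and version shown in your output above):

   # remove the half-installed build
   dkms remove lustre-all/2.16.1 --all

   # fetch the change from
   # https://review.whamcloud.com/c/fs/lustre-release/+/59409
   # save it as lquota-fix.patch, then apply it to the DKMS sources
   cd /usr/src/lustre-all-2.16.1
   patch -p1 < /path/to/lquota-fix.patch

   # rebuild and install against the running kernel
   dkms build lustre-all/2.16.1
   dkms install lustre-all/2.16.1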

If Mellanox is still giving you issues, ensure /usr/src/ofa_kernel/* 
matches your kernel and OFED version.
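
A quick way to sanity-check that, as a sketch (the directory layout 
varies by MOFED version):

   # kernel the modules must be built for
   uname -r

   # installed MOFED release
   ofed_info -s

   # kernel trees MOFED installed its build sources for
   ls /usr/src/ofa_kernel/ /usr/src/ofa_kernel/*/

If nothing under /usr/src/ofa_kernel corresponds to the output of 
uname -r, reinstall MOFED for the running kernel (e.g. with 
mlnxofedinstall --add-kernel-support) before rebuilding the Lustre 
modules.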

> 
> We would really appreciate any insight into why this error occurs and 
> how it could be resolved.
> 
> With sincerest regards,
> Abhilasha Pati
> 
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


