[Lustre-discuss] Lustre FS in Infiniband - client mount problem

ren yufei renyufei83 at yahoo.com.cn
Thu Nov 4 09:22:32 PDT 2010


Thank you. After installing the self-compiled
'lustre-modules-1.8.3-2.6.18_164.11.1.el5_lustre.1.8.3_201011012110' package
on the client side, the problem was resolved.
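
For anyone who lands here with the same error: the lustre-modules package must
be built against the exact kernel the client is running. A minimal sanity
check, assuming an RPM-based client as the package name above suggests:

Client # uname -r                        (kernel the client is running)
Client # rpm -qa | grep lustre-modules   (must embed the same kernel version)
Client # modprobe lustre                 (should now load without error)
Client # lsmod | grep lustre             (lustre/lnet modules should be listed)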

Yufei




________________________________
From: Wang Yibin <wang.yibin at oracle.com>
To: ren yufei <renyufei83 at yahoo.com.cn>
Cc: lustre-discuss at lists.lustre.org
Sent: Wed, November 3, 2010 10:03:31 PM
Subject: Re: [Lustre-discuss] Lustre FS in Infiniband - client mount problem

As the error message says, the Lustre modules were probably not loaded when you
tried to mount the Lustre client.

Please provide more information - specifically, the list of loaded modules and
more of the dmesg log.
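
A minimal set of commands that collects exactly that, assuming a 1.8.x client:

Client # lsmod | egrep 'lustre|lnet|ko2iblnd'   (which Lustre/LNET modules are loaded)
Client # grep lustre /proc/filesystems          (is the 'lustre' fs type registered?)
Client # dmesg | tail -50                       (recent kernel messages)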


On 2010-11-4, at 2:30 AM, ren yufei wrote:

>Dear all,
>
>I have set up several nodes (MDT/MGS, OSS, and client) with Mellanox 40G RNICs,
>connected via a Mellanox MTS3600 switch, and deployed a Lustre FS on this
>cluster. The MDT/MGS node and the OSS nodes work, but the client cannot mount
>the FS. The error information is as follows.
>
>Client: 192.168.1.23
>MDS: 192.168.1.11
>
>-- error information
>Client # mount -t lustre 192.168.1.11@o2ib0:/lustre /mnt/lustre
>mount.lustre: mount 192.168.1.11@o2ib0:/lustre at /mnt/lustre failed: No such
>device
>Are the lustre modules loaded?
>Check /etc/modprobe.conf and /proc/filesystems
>Note 'alias lustre llite' should be removed from modprobe.conf
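
(For reference, a client mounting over o2ib typically needs an LNET option line
like the one below in /etc/modprobe.conf before the modules are loaded; 'ib0'
is an assumed IPoIB interface name, adjust to the local setup:)

options lnet networks=o2ib0(ib0)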
>
>Client # ls -l /mnt/lustre
>total 0
>
>Client # dmesg | tail
>LustreError: 165-2: Nothing registered for client mount! Is the 'lustre'
>module loaded?
>LustreError: 5116:0:(obd_mount.c:2045:lustre_fill_super()) Unable to mount (-19)
>
>-- environment information.
>
>Client:
># lctl list_nids
>192.168.1.23@o2ib
># lctl ping 192.168.1.11@o2ib0
>12345-0@lo
>12345-192.168.1.11@o2ib
>
>MDS:
># lctl list_nids
>192.168.1.11@o2ib
>
>lctl > device_list
>  0 UP mgs MGS MGS 13
>  1 UP mgc MGC192.168.1.11@o2ib bb9cf87d-fd14-b679-85ce-f0fa1a866aff 5
>  2 UP mdt MDS MDS_uuid 3
>  3 UP lov lustre-mdtlov lustre-mdtlov_UUID 4
>  4 UP mds lustre-MDT0000 lustre-MDT0000_UUID 3
>  5 UP osc lustre-OST0000-osc lustre-mdtlov_UUID 5
>  6 UP osc lustre-OST0001-osc lustre-mdtlov_UUID 5
>...
>
>By the way, all these nodes can reach each other via
>ping/iperf(TCP)/ibv_rc_pingpong/ibv_ud_pingpong/ib_write_lat. However, the
>'rping' client, which is based on librdmacm, cannot connect to the server
>side. The error is:
>
># rping -c 192.168.1.11
>cq completion failed status 5
>wait for CONNECTED state 10
>connect error -1
>cma event RDMA_CM_EVENT_REJECTED, error 8
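
(Status 5 in the completion is IBV_WC_WR_FLUSH_ERR, and RDMA_CM_EVENT_REJECTED
means the passive side refused the connection, so the RDMA CM layer is failing
even though the plain verbs tests pass. A first thing to check, assuming stock
OFED module names, is that the CM modules are loaded on both ends:)

# lsmod | egrep 'rdma_cm|rdma_ucm|ib_cm'
# modprobe rdma_ucm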
>
>
>Thank you very much.
>
>Yufei
>