[Lustre-discuss] lustre showing inactive devices

Colin Faber colin_faber at xyratex.com
Mon Mar 18 09:07:17 PDT 2013


Hi,

Each client must maintain its own connection to every OST and MDT individually. Most 
likely client 2 is having connectivity issues with the OSSs hosting 
those OSTs, while client 1 is not.

Without detailed logging it's impossible to determine why.
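
As a starting point, you could run something like the following on client 2 (a sketch only; the OSS NID shown is a placeholder, substitute the real NIDs from `lctl list_nids` on your OSS nodes and your actual filesystem name):

```shell
# Check LNET reachability from client 2 to an OSS
# (NID below is a placeholder -- use the real OSS NID)
lctl ping 10.94.214.186@tcp

# List local Lustre devices; inactive OSCs show 'IN' rather than 'UP'
lctl dl

# Ask the servers whether this client's connections are healthy
lfs check servers

# Dump per-OSC import info; a healthy import reports state FULL
lctl get_param osc.*.import | grep -E 'target:|state:'
```

If the pings fail or the imports for OST0006 through OST000b are not in state FULL, that would point at a network path problem between client 2 and the OSSs serving those targets.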

-cf

On 03/18/2013 01:41 AM, linux freaker wrote:
>
>
>
> I installed 1 MDS, 2 OSS/OST nodes, and 2 Lustre clients. My MDS shows:
>
>
>
> [code]
> [root@MDS ~]# lctl list_nids
> 10.94.214.185@tcp
> [root@MDS ~]#
> [/code]
>
>
>
> On Lustre Client1:
>
> [code]
>
> [root@lustreclient1 lustre]# lfs df -h
> UUID                       bytes        Used   Available Use% Mounted on
> lustre-MDT0000_UUID         4.5G      274.3M        3.9G   6% /mnt/lustre[MDT:0]
> lustre-OST0000_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:0]
> lustre-OST0001_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:1]
> lustre-OST0002_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:2]
> lustre-OST0003_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:3]
> lustre-OST0004_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:4]
> lustre-OST0005_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:5]
> lustre-OST0006_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:6]
> lustre-OST0007_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:7]
> lustre-OST0008_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:8]
> lustre-OST0009_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:9]
> lustre-OST000a_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:10]
> lustre-OST000b_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:11]
>
> filesystem summary:        70.9G        3.2G       64.0G   5% /mnt/lustre
>
>
>
>
>
> [/code]
>
>
>
> But Lustre Client2 is displaying it as:
>
>
>
> [code]
>
>
>
> [root@alpha ~]# lfs df -h
> UUID                       bytes        Used   Available Use% Mounted on
> lustre-MDT0000_UUID         4.5G      274.3M        3.9G   6% /mnt/lustre[MDT:0]
> lustre-OST0000_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:0]
> lustre-OST0001_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:1]
> lustre-OST0002_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:2]
> lustre-OST0003_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:3]
> lustre-OST0004_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:4]
> lustre-OST0005_UUID         5.9G      276.1M        5.3G   5% /mnt/lustre[OST:5]
> OST0006             : inactive device
> OST0007             : inactive device
> OST0008             : inactive device
> OST0009             : inactive device
> OST000a             : inactive device
> OST000b             : inactive device
>
> filesystem summary:        35.4G        1.6G       32.0G   5% /mnt/lustre
>
>
>
> [/code]
>
>
>
> Why is it showing inactive devices on one machine but not on the other?
>
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
