[Lustre-discuss] [wc-discuss] Remove an inactive OST

Alfonso Pardo alfonso.pardo at ciemat.es
Sun Nov 4 23:24:46 PST 2012


I have rebooted the clients, but the erased OST still shows up in the clients'
device lists.



On 02/11/12 16:53, Nathan Rutman wrote:
> I suspect if you restart the clients the problem will go away.
>
> On Oct 30, 2012, at 1:49 AM, Alfonso Pardo <alfonso.pardo at ciemat.es 
> <mailto:alfonso.pardo at ciemat.es>> wrote:
>
>> More about my problem:
>>
>> If I run the command *lfs df -i* on the client, I can still see the 
>> inactive/removed OST:
>>
>> UUID                      Inodes       IUsed       IFree IUse% Mounted on
>> cetafs-MDT0000_UUID    975470592    23375132   952095460   2% /mnt/data[MDT:0]
>> cetafs-OST0000_UUID     19073280    18889414      183866  99% /mnt/data[OST:0]
>> cetafs-OST0001_UUID     19073280    18889304      183976  99% /mnt/data[OST:1]
>> cetafs-OST0002_UUID     19073280    18889353      183927  99% /mnt/data[OST:2]
>> cetafs-OST0003_UUID     19073280    18889397      183883  99% /mnt/data[OST:3]
>> cetafs-OST0004_UUID     19073280    18889372      183908  99% /mnt/data[OST:4]
>> cetafs-OST0005_UUID     19073280    18889440      183840  99% /mnt/data[OST:5]
>> cetafs-OST0006_UUID     19073280    18889184      184096  99% /mnt/data[OST:6]
>> cetafs-OST0007_UUID     19073280    18889292      183988  99% /mnt/data[OST:7]
>> cetafs-OST0008_UUID     19073280    18889134      184146  99% /mnt/data[OST:8]
>> cetafs-OST0009_UUID     19073280    18889413      183867  99% /mnt/data[OST:9]
>> cetafs-OST000a_UUID     19073280    18888999      184281  99% /mnt/data[OST:10]
>> cetafs-OST000b_UUID     19073280    18889393      183887  99% /mnt/data[OST:11]
>> cetafs-OST000c_UUID     19073280    18889290      183990  99% /mnt/data[OST:12]
>> cetafs-OST000d_UUID     19073280    18889353      183927  99% /mnt/data[OST:13]
>> cetafs-OST000e_UUID     19073280    18889349      183931  99% /mnt/data[OST:14]
>> cetafs-OST000f_UUID     19073280    18889357      183923  99% /mnt/data[OST:15]
>> cetafs-OST0010_UUID     19073280    18889378      183902  99% /mnt/data[OST:16]
>> cetafs-OST0011_UUID     19073280    18889385      183895  99% /mnt/data[OST:17]
>> cetafs-OST0012_UUID     19073280     2629014    16444266  13% /mnt/data[OST:18]
>> cetafs-OST0013_UUID     19073280     2629045    16444235  13% /mnt/data[OST:19]
>> OST0014             : Resource temporarily unavailable
>> cetafs-OST0015_UUID      7621120     1494736     6126384  19% /mnt/data[OST:21]
>> cetafs-OST0016_UUID      7621120     1495107     6126013  19% /mnt/data[OST:22]
>> cetafs-OST0017_UUID      7621120     1494952     6126168  19% /mnt/data[OST:23]
>> cetafs-OST0018_UUID      7621120     1494865     6126255  19% /mnt/data[OST:24]
>>
>> filesystem summary:    975470592    23375132   952095460   2% /mnt/data
>>
>> But if I list the Lustre devices on the client with *lctl dl*, the 
>> inactive/removed OST does not appear:
>>
>>   0 UP mgc MGC192.168.11.9 at tcp 7ac5cb56-40f1-7183-672b-8d77a7f42d5d 5
>>   1 UP lov cetafs-clilov-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 4
>>   2 UP mdc cetafs-MDT0000-mdc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>   3 UP osc cetafs-OST0000-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>   4 UP osc cetafs-OST0001-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>   5 UP osc cetafs-OST0002-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>   6 UP osc cetafs-OST0003-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>   7 UP osc cetafs-OST0004-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>   8 UP osc cetafs-OST0005-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>   9 UP osc cetafs-OST0006-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  10 UP osc cetafs-OST0007-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  11 UP osc cetafs-OST0012-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  12 UP osc cetafs-OST0013-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  13 UP osc cetafs-OST0008-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  14 UP osc cetafs-OST000a-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  15 UP osc cetafs-OST0009-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  16 UP osc cetafs-OST000b-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  17 UP osc cetafs-OST000c-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  18 UP osc cetafs-OST000d-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  19 UP osc cetafs-OST000e-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  20 UP osc cetafs-OST000f-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  21 UP osc cetafs-OST0010-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  22 UP osc cetafs-OST0011-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  23 UP osc cetafs-OST0018-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  24 UP osc cetafs-OST0015-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  25 UP osc cetafs-OST0016-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>  26 UP osc cetafs-OST0017-osc-ffff81012394e400 
>> 132759a4-add7-eaed-3b81-d27b42f97aef 5
>>
>>
>> This is a problem, because when I try to run *lfs quotacheck* I get an 
>> error, since it tries to access the inactive/removed OST.
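>>
>> For context, this is roughly the check and the command that fails (a 
>> sketch; the parameter pattern is an assumption based on the target names 
>> above):
>>
>>   # per-target active flag as seen from this client
>>   lctl get_param osc.cetafs-OST*.active
>>   # the quota check that errors out on the removed OST
>>   lfs quotacheck -ug /mnt/data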
>>
>>
>> Any suggestions?
>>
>>
>>
>> On 29/10/12 09:40, Alfonso Pardo wrote:
>>> Hi, I have removed the inactive OST by running "tunefs.lustre --writeconf" 
>>> on all the OSTs and the MDT.
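>>>
>>> For reference, the sequence was roughly the standard writeconf procedure 
>>> (a sketch; the device paths below are placeholders, not the real ones):
>>>
>>>   # with all clients and servers unmounted
>>>   tunefs.lustre --writeconf /dev/mdt_device     # on the MDS
>>>   tunefs.lustre --writeconf /dev/ost_device     # on each OSS, for every OST
>>>   # then remount in order: MGS/MDT first, then the OSTs, then the clients
>>>   mount -t lustre /dev/mdt_device /mnt/mdt
>>>   mount -t lustre /dev/ost_device /mnt/ost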
>>>
>>> But my clients can still see the removed OST:
>>>
>>> *OST0014             : Resource temporarily unavailable*
>>>
>>> But on the MDS the inactive OST is gone:
>>>
>>> 0 UP mgc MGC192.168.11.9 at tcp 705b5ce9-4857-4c87-6663-6a2824537f83 5
>>>   1 UP lov cetafs-MDT0000-mdtlov cetafs-MDT0000-mdtlov_UUID 4
>>>   2 UP mdt cetafs-MDT0000 cetafs-MDT0000_UUID 123
>>>   3 UP mds mdd_obd-cetafs-MDT0000 mdd_obd_uuid-cetafs-MDT0000 3
>>>   4 UP osc cetafs-OST0000-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.11 at tcp
>>>   5 UP osc cetafs-OST0001-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.11 at tcp
>>>   6 UP osc cetafs-OST0002-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.13 at tcp
>>>   7 UP osc cetafs-OST0003-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.13 at tcp
>>>   8 UP osc cetafs-OST0004-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.14 at tcp
>>>   9 UP osc cetafs-OST0005-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.14 at tcp
>>>  10 UP osc cetafs-OST0006-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.15 at tcp
>>>  11 UP osc cetafs-OST0007-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.15 at tcp
>>>  12 UP osc cetafs-OST0008-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.17 at tcp
>>>  13 UP osc cetafs-OST0009-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.18 at tcp
>>>  14 UP osc cetafs-OST000a-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.17 at tcp
>>>  15 UP osc cetafs-OST000b-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.18 at tcp
>>>  16 UP osc cetafs-OST000c-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.19 at tcp
>>>  17 UP osc cetafs-OST000d-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.19 at tcp
>>>  18 UP osc cetafs-OST000e-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.20 at tcp
>>>  19 UP osc cetafs-OST000f-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.20 at tcp
>>>  20 UP osc cetafs-OST0010-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.21 at tcp
>>>  21 UP osc cetafs-OST0011-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.21 at tcp
>>>  22 UP osc cetafs-OST0012-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.16 at tcp
>>>  23 UP osc cetafs-OST0013-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.16 at tcp
>>>  24 UP osc cetafs-OST0015-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.4 at tcp
>>>  25 UP osc cetafs-OST0016-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.4 at tcp
>>>  26 UP osc cetafs-OST0017-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.4 at tcp
>>>  27 UP osc cetafs-OST0018-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>> 192.168.11.4 at tcp
>>>
>>> What can I do so that the clients no longer see the erased OST?
>>>
>>>
>>> On 23/10/12 08:54, Alfonso Pardo wrote:
>>>> Hello,
>>>>
>>>> I have an inactive OST that I wish to remove from my Lustre system. 
>>>> I have deactivated the OST with:
>>>>
>>>> lctl conf_param cetafs-OST0014.osc.active=0
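>>>>
>>>> (My understanding is that conf_param records this permanently on the MGS; 
>>>> to stop a client using the OST right away there is apparently also a local 
>>>> setting, e.g. the following - an assumption on my part, the exact OSC name 
>>>> differs per node:
>>>>
>>>>   lctl set_param osc.cetafs-OST0014-osc-*.active=0
>>>> )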
>>>>
>>>>
>>>> How do I remove the inactive OST from my device list?
>>>>
>>>>
>>>> Thanks!!!!
>>>>
>>>> lctl dl -t
>>>>   0 UP mgc MGC192.168.11.9 at tcp 705b5ce9-4857-4c87-6663-6a2824537f83 5
>>>>   1 UP lov cetafs-MDT0000-mdtlov cetafs-MDT0000-mdtlov_UUID 4
>>>>   2 UP mdt cetafs-MDT0000 cetafs-MDT0000_UUID 123
>>>>   3 UP mds mdd_obd-cetafs-MDT0000 mdd_obd_uuid-cetafs-MDT0000 3
>>>>   4 UP osc cetafs-OST0000-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.11 at tcp
>>>>   5 UP osc cetafs-OST0001-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.11 at tcp
>>>>   6 UP osc cetafs-OST0002-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.13 at tcp
>>>>   7 UP osc cetafs-OST0003-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.13 at tcp
>>>>   8 UP osc cetafs-OST0004-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.14 at tcp
>>>>   9 UP osc cetafs-OST0005-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.14 at tcp
>>>>  10 UP osc cetafs-OST0006-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.15 at tcp
>>>>  11 UP osc cetafs-OST0007-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.15 at tcp
>>>>  12 UP osc cetafs-OST0008-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.17 at tcp
>>>>  13 UP osc cetafs-OST0009-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.18 at tcp
>>>>  14 UP osc cetafs-OST000a-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.17 at tcp
>>>>  15 UP osc cetafs-OST000b-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.18 at tcp
>>>>  16 UP osc cetafs-OST000c-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.19 at tcp
>>>>  17 UP osc cetafs-OST000d-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.19 at tcp
>>>>  18 UP osc cetafs-OST000e-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.20 at tcp
>>>>  19 UP osc cetafs-OST000f-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.20 at tcp
>>>>  20 UP osc cetafs-OST0010-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.21 at tcp
>>>>  21 UP osc cetafs-OST0011-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.21 at tcp
>>>>  22 UP osc cetafs-OST0012-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.16 at tcp
>>>>  23 UP osc cetafs-OST0013-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.16 at tcp
>>>> * 24 IN osc cetafs-OST0014-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.4 at tcp*
>>>>  25 UP osc cetafs-OST0015-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.4 at tcp
>>>>  26 UP osc cetafs-OST0016-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.4 at tcp
>>>>  27 UP osc cetafs-OST0017-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.4 at tcp
>>>>  28 UP osc cetafs-OST0018-osc-MDT0000 cetafs-MDT0000-mdtlov_UUID 5 
>>>> 192.168.11.4 at tcp
>>>
>>>
>>
>>
>>
>


-- 

Alfonso Pardo Díaz
*Researcher / System Administrator at CETA-Ciemat*
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17  Fax: +34 927 32 32 37
CETA-Ciemat <http://www.ceta-ciemat.es/>
