[Lustre-discuss] how to replace a bad OST.

Lundgren, Andrew Andrew.Lundgren at Level3.com
Tue Mar 18 09:04:31 PDT 2008


Well,

That did work; however, I also had to unmount all of the OSTs and clients to get it to function.

Does anyone know if there is a way to do this without resetting the entire file system?
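
For the archives, the sequence that ended up working for me was roughly the
following. The MDT device (/dev/md0) and every mount point except
/lustre_raw_ost_one are just placeholders, so substitute your own, and I am
not certain every step is strictly required:

  # stop everything: clients first, then OSTs, then the MDT/MGS
  umount /mnt/content                   # on each client
  umount /lustre_raw_ost_one            # on each OSS, for each OST
  umount /lustre_mdt                    # on the MDT/MGS node

  # regenerate the configuration logs
  tunefs.lustre --writeconf /dev/md0    # MDT/MGS device
  tunefs.lustre --writeconf /dev/md6    # each OST device, including the reformatted one

  # bring it back up: MDT/MGS first, then OSTs, then clients
  mount -t lustre /dev/md0 /lustre_mdt
  mount -t lustre /dev/md6 /lustre_raw_ost_one
  mount -t lustre 4.248.52.81@tcp0:/content /mnt/content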

--
Andrew

> -----Original Message-----
> From: lustre-discuss-bounces at lists.lustre.org [mailto:lustre-discuss-
> bounces at lists.lustre.org] On Behalf Of Mailer PH
> Sent: Monday, March 17, 2008 12:07 PM
> To: Lustre-discuss at lists.lustre.org
> Subject: Re: [Lustre-discuss] how to replace a bad OST.
>
> I ran into a similar problem a few weeks ago.
>
> You need to run:
> tunefs.lustre --writeconf /dev/.............
>
> on the MDT/MGS after unmounting it. Maybe there is another way to do that
> without unmounting the MDT/MGS, but I'm not sure.
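>
> Something like this on the MDT/MGS node (the device and mount point below
> are just examples, use your real ones):
>
> umount /mnt/mdt
> tunefs.lustre --writeconf /dev/md0
> mount -t lustre /dev/md0 /mnt/mdt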
>
> Cheers.
>
>
>
>       ----- Original Message -----
>       From: Lundgren, Andrew
>       To: 'Lustre-discuss at clusterfs.com'
>       Sent: Monday, March 17, 2008 7:29 PM
>       Subject: [Lustre-discuss] how to replace a bad OST.
>
>
>       I am trying to learn how to replace a defective OST with a new one,
> assuming the old OST cannot be salvaged.
>
>
>
>       I have a test cluster that I am working on.
>
>
>
>       I deactivated the volume on the MGS using:
>
>
>
>       lctl conf_param content-OST0002-osc.osc.active=0
>
>
>
>       I unlinked all of the bad files by finding the ones on the bad
> volume.
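>
>       Roughly, the command was something like the following (the client
> mount point /mnt/content here is just an example, not my real one):
>
>       lfs find --obd content-OST0002_UUID /mnt/content | xargs -n 1 unlink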
>
>
>
>       I formatted a fresh OST using the index number of the bad device:
>
>
>
>       mkfs.lustre --reformat --fsname content --ost \
>         --mgsnode=4.248.52.81@tcp0 --param="failover.mode=failout" \
>         --index=02 /dev/md6
>
>
>
>       Then I tried to mount the freshly formatted OST into the cluster.
>
>
>
>       Unfortunately, I ended up with an error:
>
>
>
>       mount.lustre: mount /dev/md6 at /lustre_raw_ost_one failed: Address already in use
>       The target service's index is already in use. (/dev/md6)
>
>
>
>       How can I re-use the index number to prevent always having a "dead"
> point in my cluster?
>
>
>
>       Thanks!
>
>
>
>       --
>
>       Andrew



