[Lustre-discuss] sanity check

Mervini, Joseph A jamervi at sandia.gov
Wed May 26 12:47:34 PDT 2010


Andreas,

I migrated all the files off the target with lfs_migrate. I didn't realize that I would need to retain any of the ldiskfs data once everything had been moved. (I must have misinterpreted your earlier comment; my reading of it is sketched at the bottom of this message.)

So this is my current scenario:

1. All data from a failing OST has been migrated to other targets.
2. The original target was recreated via mdadm.
3. mkfs.lustre was run on the recreated target.
4. tunefs.lustre was run on the recreated target to set the index to what it was before it was reformatted.
5. No other data from the original target has been retained.
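
For reference, the sequence for steps 2-4 was along these lines (the mdadm geometry is only illustrative rather than the exact command history; the fsname, MGS, and failover NIDs are taken from the tunefs.lustre output quoted below):

  # recreate the software RAID array (level and member count are placeholders)
  mdadm --create /dev/md4 --level=6 --raid-devices=10 /dev/sd[b-k]
  # reformat the device as an OST for the scratch1 file system
  mkfs.lustre --ost --fsname=scratch1 --mgsnode=10.10.10.2@o2ib \
      --mgsnode=10.10.10.5@o2ib --failnode=10.10.10.10@o2ib /dev/md4
  # set the index back to what the old OST had (27 == 0x1b, i.e. OST001b)
  tunefs.lustre --index=27 /dev/md4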

Question:

Based on the above conditions, what do I need to do to get this OST back into the file system?
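
If nothing further is needed, my assumption is that it comes down to mounting the target again, along the lines of:

  # start the rebuilt OST (the mount point here is just an example)
  mount -t lustre /dev/md4 /mnt/lustre/scratch1-OST001b

but given the "first_time update" flag on the reformatted target I would like confirmation before doing that.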

Thanks in advance.

Joe
 
On May 26, 2010, at 1:29 PM, Andreas Dilger wrote:

> On 2010-05-26, at 13:18, Mervini, Joseph A wrote:
>> I have migrated all the files that were on a damaged OST, recreated the software RAID array, and put a Lustre file system on it.
>> 
>> I am now at the point where I want to re-introduce it to the scratch file system as if it had never been gone. I used:
>> 
>> tunefs.lustre --index=27 /dev/md4 to set the right index for the file system (the output is below). I just want to make sure there is nothing else I need to do before I pull the trigger and mount it. (The things that have me concerned are the differences in the flags and, to a lesser degree, the "OST first_time update" flag.)
> 
> The use of tunefs.lustre is not sufficient to make the new OST identical to the previous one.  You should also copy over the O/0/LAST_ID, last_rcvd, and CONFIGS/mountdata files, at which point you don't need tunefs.lustre at all.
> 
>> <pre rebuild>
>> 
>> [root@oss-scratch obdfilter]# tunefs.lustre /dev/md4
>> checking for existing Lustre data: found CONFIGS/mountdata
>> Reading CONFIGS/mountdata
>> 
>> Read previous values:
>> Target:     scratch1-OST001b
>> Index:      27
>> Lustre FS:  scratch1
>> Mount type: ldiskfs
>> Flags:      0x2
>>            (OST )
>> Persistent mount opts: errors=remount-ro,extents,mballoc
>> Parameters: mgsnode=10.10.10.2@o2ib mgsnode=10.10.10.5@o2ib failover.node=10.10.10.10@o2ib
>> 
>> 
>> Permanent disk data:
>> Target:     scratch1-OST001b
>> Index:      27
>> Lustre FS:  scratch1
>> Mount type: ldiskfs
>> Flags:      0x2
>>            (OST )
>> Persistent mount opts: errors=remount-ro,extents,mballoc
>> Parameters: mgsnode=10.10.10.2@o2ib mgsnode=10.10.10.5@o2ib failover.node=10.10.10.10@o2ib
>> 
>> exiting before disk write.
>> 
>> 
>> <after reformat and tunefs>
>> 
>> [root@oss-scratch obdfilter]# tunefs.lustre --dryrun /dev/md4
>> checking for existing Lustre data: found CONFIGS/mountdata
>> Reading CONFIGS/mountdata
>> 
>> Read previous values:
>> Target:     scratch1-OST001b
>> Index:      27
>> Lustre FS:  scratch1
>> Mount type: ldiskfs
>> Flags:      0x62
>>            (OST first_time update )
>> Persistent mount opts: errors=remount-ro,extents,mballoc
>> Parameters: mgsnode=10.10.10.2@o2ib mgsnode=10.10.10.5@o2ib failover.node=10.10.10.10@o2ib
>> 
>> 
>> Permanent disk data:
>> Target:     scratch1-OST001b
>> Index:      27
>> Lustre FS:  scratch1
>> Mount type: ldiskfs
>> Flags:      0x62
>>            (OST first_time update )
>> Persistent mount opts: errors=remount-ro,extents,mballoc
>> Parameters: mgsnode=10.10.10.2@o2ib mgsnode=10.10.10.5@o2ib failover.node=10.10.10.10@o2ib
>> 
>> exiting before disk write.
>> 
>> 
> 
> 
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Technical Lead
> Oracle Corporation Canada Inc.
> 
> 
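
For the record, my reading of the file-copy approach you describe above, i.e. mounting the old and new devices as type ldiskfs and carrying the per-OST state across, would have been roughly the following; the mount points are purely illustrative, and it no longer applies here since the original device has been reformatted:

  # assumes the original device were still intact, which it no longer is
  mount -t ldiskfs /dev/OLD_OST /mnt/old_ost
  mount -t ldiskfs /dev/md4 /mnt/new_ost
  # carry over the object allocation state, recovery data, and mount config
  cp -a /mnt/old_ost/O/0/LAST_ID       /mnt/new_ost/O/0/LAST_ID
  cp -a /mnt/old_ost/last_rcvd         /mnt/new_ost/last_rcvd
  cp -a /mnt/old_ost/CONFIGS/mountdata /mnt/new_ost/CONFIGS/mountdata
  umount /mnt/old_ost /mnt/new_ost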




