[Lustre-discuss] Lustre on mpath devices

Klaus Steden klaus.steden at thomson.net
Thu Oct 25 20:07:23 PDT 2007


Is kpartx destructive?

I've already got a live file system constructed, and I need to know ahead of
time whether I'll need to restore it after making configuration changes, or
whether I'm risking corruption during setup.
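
From the man page it looks like 'kpartx -l' only lists the partition mappings
it would create, without actually adding anything, so my first sanity check
would be something like:

  kpartx -l /dev/mapper/<alias>

but I'd like to be sure of that before I touch the live devices.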

Klaus

On 10/25/07 1:59 PM, "Robert LeBlanc" <robert at leblancnet.us> did etch on
stone tablets:

> What I've done on our multipath setup is to specify an alias in
> /etc/multipath.conf that has the wwid of the LUN, and then give it a nice
> name like ldiska, ldiskb, etc. Since I was having some trouble with
> _netdev, and multipath didn't settle before kpartx ran with udev, I
> created an init script that makes sure kpartx creates the device entries
> for the LUNs at /dev/mapper/ldisk[ab]. Then heartbeat mounts the Lustre
> volumes and we are good to go. It may be clunky, but it works. The key is
> to use multipath to create a nice alias, then use that and not the volume
> label; multipath will make sure there is a good path to the volume.
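> 
> To give a concrete idea, the relevant bit of /etc/multipath.conf looks
> roughly like this (the wwid below is just a placeholder; take the real one
> from 'multipath -ll'):
> 
>   multipaths {
>       multipath {
>           wwid    3600a0b800012345600000000deadbeef
>           alias   ldiska
>       }
>   }
> 
> and the init script essentially just waits for multipathd to settle and
> then runs 'kpartx -a /dev/mapper/ldiska' (and likewise for ldiskb), so the
> device entries exist before heartbeat tries to mount anything.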
> 
> Robert
> 
> 
> On 10/25/07 2:37 PM, "Klaus Steden" <klaus.steden at thomson.net> wrote:
> 
>> 
>>> That depends...  It depends on what "blkid -t LABEL={fsname}-OST0001"
>>> returns.  It _should_ be smart enough to return the DM device, but
>>> it is prudent to make sure of this.  There shouldn't be any problem
>>> with mounting the Lustre filesystems by LABEL= (which is one reason we
>>> moved to a mount-based setup).
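>>> 
>>> For instance (the fsname and alias here are purely illustrative), on a
>>> multipath setup you would want to see something like
>>> 
>>>   # blkid -t LABEL=testfs-OST0001
>>>   /dev/mapper/ldiska: LABEL="testfs-OST0001" TYPE="ext3"
>>> 
>>> rather than one of the underlying /dev/sd* paths.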
>>> 
>> Hi Andreas,
>> 
>> I just checked my local system, and it's returning a regular device name
>> (/dev/sdh or /dev/sdi, depending on the label) and a unique UUID; but then,
>> I didn't enable multipath when I built the FS.
>> 
>> If I avoid using the /dev name, is it still possible to build failover
>> properly if I'm not using the multi-path framework? If it's not, is a
>> rebuild of the filesystem required in order to enable multi-path support?
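>> 
>> (From what I can tell from the manual, failover NIDs can also be added
>> after the fact with tunefs.lustre rather than a reformat, along the lines
>> of the made-up example below, but I'd like to confirm that's the right
>> approach.)
>> 
>>   tunefs.lustre --failnode=192.168.0.2@tcp0 /dev/mapper/ldiska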
>> 
>> Sorry to keep firing questions at you; I'm trying to make sure I've got all
>> the bases covered for failover.
>> 
>> thanks again,
>> Klaus
>>  
>> 
> 
>  
> Robert LeBlanc
> College of Life Sciences Computer Support
> Brigham Young University
> leblanc at byu.edu
> (801)422-1882
> 
> 



