[Lustre-discuss] What says an OST is deactivated?

Chris Worley worleys at gmail.com
Tue Mar 25 11:22:22 PDT 2008


On Tue, Mar 25, 2008 at 10:39 AM, Andreas Dilger <adilger at sun.com> wrote:
> On Mar 25, 2008  08:53 -0600, Chris Worley wrote:
>  > On Tue, Mar 25, 2008 at 2:13 AM, Andreas Dilger <adilger at sun.com> wrote:
>  > > On Mar 25, 2008  01:28 -0600, Chris Worley wrote:
>  > >  > I do an "lctl dl" and it shows "UP" in the first column for all
>  > >  > OST's... even though I've deactivated many disks.  "iostat" shows the
>  > >  > disks are still in use too.
>  > >
>  > >  What does it mean when you say "deactivated many disks"?
>  >
>  > To deactivate the disk, I use an incantation like:
>  >
>  > lctl --device ddnlfs-OST001f-osc deactivate
>
>  Note that "deactivate" only affects the node on which it is run.
>  The normal place to do this is on the MDS.

That's what I do.

>  Note that if you also
>  mount the client filesystem on the MDS node, you need to deactivate
>  the MDS OSC connection, and not the client filesystem one:

I'm not sure I understand the above.

I think you're saying that when deactivating, I should use the OST device
label with "-osc" appended.  That's what I do too.

>
>
>  > ...but new files are still going there, and, if I'm reading it right,
>  > the disk is still "up" in Lustre:
>  >
>  > # lctl dl | grep 1f
>  >  36 UP osc ddnlfs-OST001f-osc ddnlfs-mdtlov_UUID 5
>
>  This does look like you have the right device.  Using "device_list"
>  only shows which devices are configured.  A deactivated device is
>  still configured...   The "UP" status is related to the configuration
>  status and not the current connection state.  Have a look at the file
>  /proc/fs/lustre/lov/ddnlfs-mdtlov/target_obd to see the device status.
>

Ahh, that verifies what's active/inactive:

# cat /proc/fs/lustre/lov/ddnlfs-mdtlov/target_obd | grep " ACTIVE" | wc -l
48
# cat /proc/fs/lustre/lov/ddnlfs-mdtlov/target_obd | grep " INACTIVE" | wc -l
22
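The same file also names the inactive targets directly, which is handier
than counting them:

# grep INACTIVE /proc/fs/lustre/lov/ddnlfs-mdtlov/target_obd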

>  # lfs df

This command returns nothing?
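If I understand correctly, "lfs df" only reports Lustre filesystems that
are mounted as a client on the node where it runs, so on an MDS without a
client mount it prints nothing.  A quick check, with a hypothetical MGS
nid and mount point:

# mount -t lustre                               # list Lustre client mounts on this node
# mount -t lustre mgs@tcp0:/ddnlfs /mnt/ddnlfs  # mount a client if none is present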

>  UUID                 1K-blocks      Used Available  Use% Mounted on
>  mds-myth-0_UUID        9174328    678000   8496328    7% /myth[MDT:0]
>  ost-myth-0_UUID      292223856 286837752   5386104   98% /myth[OST:0]
>  ost-myth-1_UUID       94442984  92833972   1609012   98% /myth[OST:1]
>  ost-myth-2_UUID      487388376 474792788  12595588   97% /myth[OST:2]
>  ost-myth-3_UUID      487865304 472221312  15643992   96% /myth[OST:3]
>
>  filesystem summary:  1361920520 1326685824  35234696   97% /myth
>
>  # lctl --device %myth-OST0001-osc deactivate
>  # cat /proc/fs/lustre/lov/myth-mdtlov/target_obd
>  0: ost-myth-0_UUID ACTIVE
>  1: ost-myth-1_UUID INACTIVE
>  2: ost-myth-2_UUID ACTIVE
>  3: ost-myth-3_UUID ACTIVE
>
>  # lctl --device %myth-OST0001-osc recover
>
>
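As I read it, "recover" re-establishes the connection after a deactivate.
If that alone doesn't mark the OST active for allocations again, lctl also
has an "activate" command as the direct counterpart of "deactivate" -- a
hedged sketch, reusing the device name from the example above:

# lctl --device %myth-OST0001-osc activate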
>  > >  > I'm trying to get rid of slow disks... what's the right way to tell
>  > >  > Lustre to quit using a disk?
>  > >
>  > >  If you deactivate an OST on the MDS node it will stop allocating new
>  > >  files there
>  >
>  > For now, that's all I want to do... but new files are still going there.
>  >
>  > ... both a way to deactivate the disk and a way to know which are
>  > deactivated would be nice.
>
>  It was confusing when you said "deactivate the disk", because that could
>  mean any of several things, such as removing a disk from a RAID set.
>  An OST may reside on many disks (via hardware/software RAID, LVM, etc.).
>
>  What you are trying to do is the right process.
>

Thanks for all the help!  I think I've got it now.
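One last sanity check I'll use: create a file and confirm the deactivated
OSTs no longer show up in its layout (test path hypothetical; OST001f is
index 31):

# touch /mnt/ddnlfs/stripe_test
# lfs getstripe /mnt/ddnlfs/stripe_test   # obdidx column should not include 31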

Chris
>
>
>  Cheers, Andreas
>  --
>  Andreas Dilger
>  Sr. Staff Engineer, Lustre Group
>  Sun Microsystems of Canada, Inc.
>
>


