[Lustre-discuss] Adding OSTs not increasing storage?

Josephine Palencia josephin at psc.edu
Sun Jul 27 14:02:26 PDT 2008



Test Setup (MDT and MGS are combined for now, while the ENOMEM error on the MGS hardware gets fixed):
------
mds00w.psc.teragrid.org: combined mdt/mgs and client
oss00w.psc.teragrid.org: ost1
oss01w.psc.teragrid.org: ost2
operon22.psc.edu       : ost3 and client
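
(For reference, the targets were formatted and mounted in the usual way, roughly along these lines; the device paths are taken from the df output below and mds00w@tcp0 is assumed as the MGS NID, so the exact options may have differed:)

# combined MGS/MDT on mds00w
mkfs.lustre --fsname=testfs --mgs --mdt /dev/sda8
mount -t lustre /dev/sda8 /mnt/test/mdt

# each OST formatted against the MGS and mounted on its OSS
mkfs.lustre --fsname=testfs --ost --mgsnode=mds00w@tcp0 /dev/sda7
mount -t lustre /dev/sda7 /mnt/test/ost0

# client mount
mount -t lustre mds00w@tcp0:/testfs /mnt/testfs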


Combined MGS/MDT on mds00w:
-----------------------------
[root@mds00w ~]# df -h
/dev/sda8             9.9G  489M  8.9G   6% /mnt/test/mdt


Added 1st OST from oss00w with 1.4TB:
------------------
[root@oss00w ~]# df -h
/dev/sda7             1.3T  1.1G  1.3T   1% /mnt/test/ost0

The client mount on mds00w correctly shows the 1.3T of storage:
[root@mds00w ~]# df -h
mds00w@tcp0:/testfs   1.3T  1.1G  1.3T   1% /mnt/testfs

Added 2nd OST from oss01w with 1.4TB:
------------------
[root@oss01w ~]# df -h
/dev/sda7             1.3T  1.1G  1.3T   1% /mnt/test/ost1

mds00w shows both OSTs as active:
-------------------
[root@mds00w ~]# cat /proc/fs/lustre/lov/testfs-clilov-ffff81007c659000/target_obd
0: testfs-OST0000_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE
[root@mds00w ~]# cat /proc/fs/lustre/lov/testfs-MDT0000-mdtlov/target_obd
0: testfs-OST0000_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE

but df from mds00w still shows only 1.3T of storage available (is the other 1.3T being counted as used?). The total should be around 2.6T.
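
(lfs df on the client breaks the space down per OST, which should show whether OST0001's space is being counted at all:)

[root@mds00w ~]# lfs df -h /mnt/testfs    # per-OST size/used/avail plus the filesystem total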


Adding a third OST with only 150GB of storage results in just 142GB of available space showing on both clients, even though the total should be around 2.7TB:
-------------

[root@operon22 ~]# df -h
/dev/hdb1             151G  1.1G  142G   1% /mnt/test/ost3
mds00w@tcp0:/testfs   2.8T  2.6T  142G  95% /mnt/testfs
[root@mds00w ~]# df -h
/dev/sda8             9.9G  489M  8.9G   6% /mnt/test/mdt
mds00w@tcp0:/testfs   2.8T  2.6T  142G  95% /mnt/testfs
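
(Each OSS also exposes raw space counters for its OST; I'm assuming the standard obdfilter parameter names here, which should show which target the "used" space is being charged to:)

[root@oss00w ~]# lctl get_param obdfilter.*.kbytestotal obdfilter.*.kbytesfree obdfilter.*.kbytesavail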


[root@mds00w ~]# cat /proc/fs/lustre/devices
   0 UP mgs MGS MGS 11
   1 UP mgc MGC128.182.112.60@tcp e6af805d-1e32-b002-d315-54fb78e7e558 5
   2 UP lov testfs-MDT0000-mdtlov testfs-MDT0000-mdtlov_UUID 4
   3 UP mdt testfs-MDT0000 testfs-MDT0000_UUID 7
   4 UP mds mdd_obd-testfs-MDT0000-0 mdd_obd_uuid-testfs-MDT0000-0 3
   5 UP osc testfs-OST0000-osc-MDT0000 testfs-MDT0000-mdtlov_UUID 5
   6 UP lov testfs-clilov-ffff8101278aec00 f327440b-f5e9-c9cc-66fc-7a6001402368 4
   7 UP lmv testfs-clilmv-ffff8101278aec00 f327440b-f5e9-c9cc-66fc-7a6001402368 4
   8 UP mdc testfs-MDT0000-mdc-ffff8101278aec00 f327440b-f5e9-c9cc-66fc-7a6001402368 5
   9 UP osc testfs-OST0000-osc-ffff8101278aec00 f327440b-f5e9-c9cc-66fc-7a6001402368 5
  10 UP osc testfs-OST0001-osc-ffff8101278aec00 f327440b-f5e9-c9cc-66fc-7a6001402368 5
  11 UP osc testfs-OST0001-osc-MDT0000 testfs-MDT0000-mdtlov_UUID 5
  12 UP osc testfs-OST0002-osc-ffff8101278aec00 f327440b-f5e9-c9cc-66fc-7a6001402368 5
  13 UP osc testfs-OST0002-osc-MDT0000 testfs-MDT0000-mdtlov_UUID 5
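
(The same device table can also be printed with lctl, in case that is handier than reading /proc directly:)

[root@mds00w ~]# lctl dl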

All of the OSTs show up, and both LOVs list them as active:

[root@mds00w ~]# cat /proc/fs/lustre/lov/testfs-clilov-ffff8101278aec00/target_obd
0: testfs-OST0000_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE
2: testfs-OST0002_UUID ACTIVE
[root@mds00w ~]# cat /proc/fs/lustre/lov/testfs-MDT0000-mdtlov/target_obd
0: testfs-OST0000_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE
2: testfs-OST0002_UUID ACTIVE

I'm also encountering issues when deactivating/activating OSTs, but that's for another email.
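
(For context, by deactivating/activating I mean the usual lctl sequence run on the MDS against the corresponding osc device, e.g. device 11, testfs-OST0001-osc-MDT0000, in the listing above; this is the generic form, not necessarily exactly what I ran:)

[root@mds00w ~]# lctl --device 11 deactivate   # stop new object allocation on OST0001
[root@mds00w ~]# lctl --device 11 activate     # re-enable it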

Thanks,
josephin


PS.
Version: Lustre 1.9.50
Kernel: 2.6.18-92.1.6-lustre-1.9.50 #2 SMP x86_64




