[Lustre-discuss] OST node filling up and aborting write

Nick Jennings nick at creativemotiondesign.com
Fri Feb 27 17:34:27 PST 2009


Hi Everyone,

  I have a small lustre test machine setup to bring myself back up to 
speed as it's been a few years. This is probably a very basic issue but 
I'm not able to find documentation on it (maybe I'm looking for the 
wrong thing).

  I've got 4 OSTs (each 2 GB in size) in one Lustre file system. I dd 
a 4 GB file to the file system, and after the first OST fills up the 
write fails with "No space left on device":


# dd of=/mnt/testfs/datafile3 if=/dev/zero bs=1048576 count=4024
dd: writing `/mnt/testfs/testfile3': No space left on device
1710+0 records in
1709+0 records out
1792020480 bytes (1.8 GB) copied, 55.1519 seconds, 32.5 MB/s

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1              15G  7.7G  5.9G  57% /
tmpfs                 252M     0  252M   0% /dev/shm
/dev/hda5             4.1G  198M  3.7G   6% /mnt/test/mdt
/dev/hda6             1.9G  1.1G  686M  62% /mnt/test/ost0
192.168.0.149 at tcp:/testfs
                       7.4G  4.7G  2.4G  67% /mnt/testfs
/dev/hda7             1.9G  1.8G   68K 100% /mnt/test/ost1
/dev/hda8             1.9G   80M  1.7G   5% /mnt/test/ost2
/dev/hda9             1.9G  1.8G   68K 100% /mnt/test/ost3


I did this twice, which is why both ost1 and ost3 are full. As you can 
see, ost0 and ost2 still have space.

I initially thought this could be solved by enabling striping, but from 
the HowTo (which admittedly doesn't say much on the subject) I gathered 
that striping was already enabled, with 4 MB chunks. So shouldn't these 
OSTs be filling up at a relatively uniform rate?
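For what it's worth, here is how I understand the striping could be 
inspected and changed from a client with the lfs utility. This is just 
a sketch, not something I've verified on this setup, and the option 
names may differ between Lustre versions (e.g. -s for stripe size in 
older releases):

```shell
# Show the current stripe layout of the file system root
lfs getstripe /mnt/testfs

# Set the default for new files under /mnt/testfs to stripe across
# all available OSTs (-c -1) in 4 MB chunks (-s 4194304)
lfs setstripe -c -1 -s 4194304 /mnt/testfs
```

Existing files keep their old layout; only files created after the 
setstripe would pick up the new default, if I read the docs right.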

# cat /proc/fs/lustre/lov/testfs-clilov-ca5e0000/stripe*
1
0
4194304
1
[root at andy ~]# cat /proc/fs/lustre/lov/testfs-mdtlov/stripe*
1
0
4194304
1
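(As an aside, cat expands stripe* in shell glob order, so the values 
above print without their filenames. Assuming these are the standard 
stripecount/stripeoffset/stripesize/stripetype entries, grep shows 
which value belongs to which tunable:)

```shell
# Print each stripe tunable prefixed with its filename,
# e.g. ".../stripecount:1"
grep . /proc/fs/lustre/lov/testfs-clilov-*/stripe*
```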


Thanks for any help,
-Nick
