[Lustre-discuss] howto make a lvm, or virtual lvm?

Wang Yibin wang.yibin at oracle.com
Thu Dec 16 07:34:55 PST 2010


Lustre has its own load-balancing algorithm for allocating objects to OSTs: it uses either round-robin or weighted allocation, depending on how evenly space is used across the OSTs.
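
A quick way to see what the allocator is working with is to check per-OST space usage from a client (a minimal sketch, assuming the /home mount point used in the test below):

    lfs df -h /home     # used/free space per OST, which drives the weighted allocation
    lfs df -i /home     # inode usage per OST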

On 2010-12-16, at 11:25 PM, Eudes PHILIPPE wrote:

> I was wrong about what I said earlier.
> If OSS 1 and 2 are full and OSS 3 still has space, when I send a new file the upload appears to succeed (sftp reports nothing), but the resulting file is different (the md5sum does not match!).
> That is very dangerous!!
> 
> Is there a solution so that, when I see all the OSSs are almost full and I add some new OSSs, the data is redistributed onto these new OSSs so that all OSSs always stay at the same percentage of usage?
> 
> 
> 
> -----Original Message-----
> From: Wang Yibin [mailto:wang.yibin at oracle.com] 
> Sent: Thursday, December 16, 2010 16:09
> To: Eudes PHILIPPE
> Cc: lustre-discuss
> Subject: Re: [Lustre-discuss] howto make a lvm, or virtual lvm?
> 
> 
> On 2010-12-16, at 10:49 PM, Eudes PHILIPPE wrote:
> 
>> OK, so I'll try this:
>> - One MDS
>> - 2 physical OSSs, each with one 1 GB drive (one OST per OSS)
>> 
>> On the client, mount the Lustre filesystem on /home.
>> lfs setstripe -c2 /home
>> 
>> I upload (via sftp) one 300 MB file:
>> - On OSS 1, 150 MB of 1000 is used
>> - On OSS 2, 150 MB of 1000 is used
>> 
>> All right!
>> 
>> I continue... I copy my first file 4 more times (so there is 5 * 300 MB = 1500 MB):
>> - On OSS 1, 750 MB of 1000 is used
>> - On OSS 2, 750 MB of 1000 is used
>> 
>> *************************
>> Now, I add a new OSS server with one OST (1 GB):
>> - On OSS 1, 750 MB of 1000 is used
>> - On OSS 2, 750 MB of 1000 is used
>> - On OSS 3, 0 MB of 1000 is used
>> 
>> lfs setstripe -c3 /home on the client
>> 
>> I upload a big file, 1.3 GB.
>> It writes to OSS 1, 2 and 3, but when OSS 1 and OSS 2 are full, it stops
>> (Couldn't write to remote file "/home/big_log.log": Failure)
>> ******************************
> 
> All files in a directory inherit the parent directory's stripe attributes.
> Since you set the mount-point directory to stripe over 3 OSTs, all files created in it will be written to 3 objects located on different OSTs.
> Since OSTs 1 and 2 are full, you will certainly get a write failure with ENOSPC.
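> 
> For example (a sketch, using the paths from this test), the inherited layout can be checked with lfs getstripe:
> 
>     lfs getstripe -d /home            # default striping of the directory (stripe count 3 here)
>     lfs getstripe /home/big_log.log   # layout of the file, listing the OST objects it uses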
> 
>> 
>> So now,
>> - On OSS 1, 1000 MB of 1000 is used
>> - On OSS 2, 1000 MB of 1000 is used
>> - On OSS 3, 250 MB of 1000 is used
>> I upload my first file (300 MB) again, just to see; it copies the file
>> only onto OSS 3 (OSS 1 and 2 are full, of course), so that's OK :)
>> 
>> Is there a solution for this problem?
> 
> If you want to write on a system that has full OSTs, you need to either 1) deactivate the full OSTs, or 2) set the stripe count and offset properly.
> In your specific case, set the stripe count of your file to 1 and the stripe offset to 2 (assuming the non-full OST index is 2).
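> 
> For example (a sketch; the device number is an assumption you would look up on your own MDS):
> 
>     # option 2: create the file with 1 stripe on OST index 2, then copy the data into it
>     lfs setstripe -c 1 -i 2 /home/newfile
> 
>     # option 1: on the MDS, deactivate the full OST so no new objects are allocated on it
>     lctl dl                            # list devices, note the OSC device number for the full OST
>     lctl --device <devno> deactivate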
> 
>> 
>> Regards
>> 
>> 
>> 
>> -----Original Message-----
>> From: Andreas Dilger [mailto:andreas.dilger at oracle.com]
>> Sent: Wednesday, December 15, 2010 22:39
>> To: Eudes PHILIPPE
>> Cc: lustre-discuss at lists.lustre.org
>> Subject: Re: [Lustre-discuss] howto make a lvm, or virtual lvm?
>> 
>> On 2010-12-15, at 10:06, Eudes PHILIPPE wrote:
>>> In the end, I want (if it's possible) a RAID 5 over Ethernet, or one
>>> physical RAID 5 on each OST and a big LVM that I can extend as I want....
>> 
>> Lustre itself cannot do RAID over the network, if that is what you are
>> looking for...
>> 
>>> For my first test, I uploaded a 1.8 GB file from the client (each OST
>>> has 1 GB). The problem is that when sdb is full, the copy stops and
>>> does not continue on OST 2.
>> 
>> If you create your file to be striped over both OSTs, then it should work.
>> 
>> Use "lfs setstripe -c2 /home/newfile" to specify a stripe count of 2.
>> 
>> Cheers, Andreas
>> --
>> Andreas Dilger
>> Lustre Technical Lead
>> Oracle Corporation Canada Inc.
>> 
>> 
> 
> 
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss



