[Lustre-discuss] lustre quota problems

McHale, Therese therese.mchale at hp.com
Mon Jan 7 05:05:52 PST 2008


>What about the 1.6.x tree? Is this fix included in a 1.6.x version, too?
Yes. It went into 1.5 and is therefore included in 1.6.x.
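
For anyone wanting to confirm which version a node is actually running, the usual check in the 1.4/1.6 era (assuming the standard /proc layout) is:

  # print the running Lustre version on a node with the modules loaded
  cat /proc/fs/lustre/version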
-therese




-----Original Message-----
From: lustre-discuss-bounces at clusterfs.com [mailto:lustre-discuss-bounces at clusterfs.com] On Behalf Of lustre-discuss-request at clusterfs.com
Sent: 05 January 2008 19:00
To: lustre-discuss at clusterfs.com
Subject: Lustre-discuss Digest, Vol 24, Issue 11


Send Lustre-discuss mailing list submissions to
        lustre-discuss at clusterfs.com

To subscribe or unsubscribe via the World Wide Web, visit
        https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
or, via email, send a message with subject or body 'help' to
        lustre-discuss-request at clusterfs.com

You can reach the person managing the list at
        lustre-discuss-owner at clusterfs.com

When replying, please edit your Subject line so it is more specific than "Re: Contents of Lustre-discuss digest..."


Today's Topics:

   1. Re: Problems with failover (Aaron Knister)
   2. Re: lustre quota problems (Patrick Winnertz)
   3. Re: small file performance (Robin Humble)
   4. Re: small file performance (Aaron Knister)


----------------------------------------------------------------------

Message: 1
Date: Fri, 4 Jan 2008 15:35:35 -0500
From: Aaron Knister <aaron at iges.org>
Subject: Re: [Lustre-discuss] Problems with failover
To: Jeremy Mann <jeremy at biochem.uthscsa.edu>
Cc: Andreas Dilger <adilger at sun.com>, lustre-discuss at clusterfs.com
Message-ID: <C2F3FB6B-08C3-4D6E-86FE-9687A471AFC5 at iges.org>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

Personally, I strongly advise against using compute nodes to host any type of storage service. If a user job crashes a compute node (it will usually take out several), you're once again up a creek, and I don't know of any filesystem that could handle the failure of more than two or three underlying storage components. Separating storage from computation was the best decision I've ever made because it allows both to be scaled independently. Am I totally missing the mark here? If you still want to do this, try the Gfarm filesystem; there's another one too, but I can't think of the name. If I find it I'll let you know.

On Jan 4, 2008, at 11:10 AM, Jeremy Mann wrote:

>
> On Thu, 2008-01-03 at 17:34 -0700, Andreas Dilger wrote:
>
>> To be clear - Lustre failover has nothing to do with data
>> replication. It is meant only as a mechanism to allow
>> high-availability of shared disk.  This means - more than one node
>> can serve shared disk from a SAN or multi-port FC/SCSI disks.
>
> How would one build a reliable system with 20 OSTs? Our system
> contains 20 compute nodes, each with 2 200GB drives in a RAID0
> configuration. Each node acts as an OST and a failover of each other,
> i.e. 0-1, 1-2, 3-4, etc..
>
> I can start from scratch, so I'm thinking of rebuilding the RAID
> arrays with RAID1 to compensate for disk failures. But that still
> leaves me questioning if a node goes down, or we lose another drive,
> if we'll be back to the same problems we've been having.
>
> --
> Jeremy Mann
> jeremy at biochem.uthscsa.edu
>
> University of Texas Health Science Center
> Bioinformatics Core Facility http://www.bioinformatics.uthscsa.edu
> Phone: 210-567-2672
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
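
If you do go the RAID1 route, a minimal mdadm sketch for rebuilding one node's pair of drives would be something like the following (device names are placeholders; adjust for your hardware):

  # mirror the two 200GB drives instead of striping them;
  # /dev/sda1 and /dev/sdb1 are placeholder partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

That protects against a single disk failure per node, though as noted above it does nothing for whole-node failures.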

Aaron Knister
Associate Systems Analyst
Center for Ocean-Land-Atmosphere Studies

(301) 595-7000
aaron at iges.org






------------------------------

Message: 2
Date: Fri, 4 Jan 2008 23:57:12 +0100
From: Patrick Winnertz <patrick.winnertz at credativ.de>
Subject: Re: [Lustre-discuss] lustre quota problems
To: lustre-discuss at clusterfs.com
Message-ID: <200801042357.14163.patrick.winnertz at credativ.de>
Content-Type: text/plain;  charset="iso-8859-1"

On Wednesday, 2 January 2008 at 15:45:48, Johann Lombardi wrote:
> On Wed, Jan 02, 2008 at 01:39:06PM +0000, McHale, Therese wrote:
> > The fix Roland mentions is included in Lustre 1.4.10 or you can also
> > find it here https://bugzilla.lustre.org/attachment.cgi?id=8709
>
> For the record, the original bugzilla ticket is in fact 11073 and as
> Therese pointed out, the patch is included in lustre 1.4.10.
What about the 1.6.x tree? Is this fix included in a 1.6.x version, too?

Greetings
Winnie
>
> Johann
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss



--
Patrick Winnertz
Tel.: +49 (0) 2161 / 4643 - 0

credativ GmbH, HRB Mönchengladbach 12080
Hohenzollernstr. 133, 41061 Mönchengladbach
Managing Directors: Dr. Michael Meskes, Jörg Folz



------------------------------

Message: 3
Date: Sat, 5 Jan 2008 03:50:17 -0500
From: Robin Humble <rjh+lustre at cita.utoronto.ca>
Subject: Re: [Lustre-discuss] small file performance
To: Aaron Knister <aaron at iges.org>
Cc: Lustre-discuss <lustre-discuss at clusterfs.com>
Message-ID: <20080105085016.GA20815 at lemming.cita.utoronto.ca>
Content-Type: text/plain; charset=us-ascii

On Fri, Jan 04, 2008 at 09:44:54AM -0500, Aaron Knister wrote:
>For whatever reason, searching my lustre mount (ls -R or find),
>compiling code and other operations involving lots of small files are
>painfully slow. There is no load on the filesystem other than my
>various tests. I've disabled lnet debugging. Just to give you an idea
>of how slow it is-- a ./configure of this particular code on a local
>filesystem takes less than a minute. On lustre it's been running for
>five minutes and is hardly half way through. An untar on the local
filesystem takes 0.9 seconds while that same untar takes 12 seconds to
>our lustre mount. Any ideas for improving this?

do you have striping turned off?
that makes a massive difference for metadata operations...
  lfs setstripe -d /some/lustre/dir/
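
to double-check what a directory is currently set to (output format varies a bit by release):
  # show the stripe settings in effect for the directory
  lfs getstripe /some/lustre/dir/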

cheers,
robin



------------------------------

Message: 4
Date: Sat, 5 Jan 2008 11:08:10 -0500
From: Aaron Knister <aaron at iges.org>
Subject: Re: [Lustre-discuss] small file performance
To: Robin Humble <rjh+lustre at cita.utoronto.ca>
Cc: Lustre-discuss <lustre-discuss at clusterfs.com>
Message-ID: <76D0DB87-00E2-4A7F-BDA2-EB1C34D33A00 at iges.org>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

Striping is turned off. Are there any other optimizations you know of to increase the speed of metadata operations?
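
For reference, the lnet debugging mentioned below was disabled via the conventional 1.6-era knob (the exact path can vary by release):

  # turn off LNET/Lustre debug logging on the client
  echo 0 > /proc/sys/lnet/debug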

On Jan 5, 2008, at 3:50 AM, Robin Humble wrote:

> On Fri, Jan 04, 2008 at 09:44:54AM -0500, Aaron Knister wrote:
>> For whatever reason, searching my lustre mount (ls -R or find),
>> compiling code and other operations involving lots of small files are
>> painfully slow. There is no load on the filesystem other than my
>> various tests. I've disabled lnet debugging. Just to give you an idea
>> of how slow it is-- a ./configure of this particular code on a local
>> filesystem takes less than a minute. On lustre it's been running for
>> five minutes and is hardly half way through. An untar on the local
>> filesystem takes 0.9 seconds while that same untar takes 12 seconds to
>> our lustre mount. Any ideas for improving this?
>
> do you have striping turned off?
> that makes a massive difference for metadata operations...
>   lfs setstripe -d /some/lustre/dir/
>
> cheers,
> robin

Aaron Knister
Associate Systems Analyst
Center for Ocean-Land-Atmosphere Studies

(301) 595-7000
aaron at iges.org






------------------------------

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss at clusterfs.com
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss


End of Lustre-discuss Digest, Vol 24, Issue 11
**********************************************