[Lustre-discuss] Reads Starved

Jeremy Filizetti jeremy.filizetti at gmail.com
Thu Jun 2 18:45:28 PDT 2011

You didn't mention which version of Lustre you were using for the tests.
If you are using 1.8.5 or earlier, it's possible you are seeing problems
caused by bugzilla 23081.  I imagine you were filling the page cache
quickly with writes.  Pages that were read could end up being read and
purged repeatedly, because they are added to the wrong end of the LRU
list and so get evicted by memory pressure or by hitting max_cached_mb.
If you have a sequential workload, take a look at your readahead stats
with "lctl get_param llite.*.read_ahead_stats".  If you see an excessive
number of "miss inside window" and "read but discarded" entries among
your misses, that might be what you're seeing.
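A rough way to eyeball those counters is to pull the relevant lines out
of the stats output with awk.  This is only a sketch: the sample input
below is hypothetical, and the 10% threshold is an illustrative
assumption, not a Lustre-documented value.  On a live client you would
pipe the real "lctl get_param llite.*.read_ahead_stats" output into the
function instead of the heredoc.

```shell
# Summarize readahead hit/miss counters; input format mirrors
# read_ahead_stats lines like: "miss inside window  4096 samples [pages]"
check_readahead() {
  awk '
    /^hits/               { hits = $2 }
    /^miss inside window/ { miss_win = $4 }
    /^read but discarded/ { discard = $4 }
    END {
      printf "hits=%d miss_inside_window=%d read_but_discarded=%d\n",
             hits, miss_win, discard
      # Flag when the two "bad" counters dwarf the hit count
      # (threshold chosen for illustration only).
      if (miss_win + discard > hits / 10)
        print "WARNING: readahead thrashing"
    }'
}

# Hypothetical sample of read_ahead_stats output:
check_readahead <<'EOF'
hits                    102400 samples [pages]
misses                  512 samples [pages]
miss inside window      4096 samples [pages]
read but discarded      8192 samples [pages]
EOF
```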


On Thu, Jun 2, 2011 at 5:02 PM, Roger Spellman <Roger.Spellman at terascala.com
> wrote:

> > -----Original Message-----
> > From: Peter Grandi [mailto:pg_mh at mh.to.sabi.co.UK]
> > Sent: Thursday, June 02, 2011 4:10 PM
> > To: Roger Spellman
> > Cc: lustre-discuss at lists.lustre.org
> > Subject: Re: [Lustre-discuss] Reads Starved
> >
> > Unfortunately these numbers are meaningless without an idea of
> > the storage system and the access patterns.
> The storage system is a Dell MD3200.
> >
> > > But, when I have 10 clients doing reads, and a different 10
> > > clients doing writes, the write performance barely drops, but
> > > the read performance drops to about 150 MB/s.
> >
> > This may be entirely the right thing for it to happen.
> Each test is doing large-block, streaming I/O.  The reads and writes
> are accessing different files.  I am running 1 thread per client.  If I
> increase the thread count on the reads, the performance does increase,
> but not nearly to the level of the writes.
> I had increased max_rpcs_in_flight quite high, as this gives excellent
> write performance.  However, this turns out to be the cause of the
> read/write disparity.  The system can queue up many writes, but reads
> are issued one at a time.  By decreasing max_rpcs_in_flight, the read
> performance increased considerably.
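For reference, the tunable being discussed can be inspected and lowered
on each client roughly as follows.  This is a config sketch, not a
recommendation: the parameter name assumes the osc.*.max_rpcs_in_flight
path used on 1.8.x clients, and the value 8 (the shipped default) is
only an example; the right setting depends on the workload.

```shell
# Show the current per-OSC RPC concurrency on this client.
lctl get_param osc.*.max_rpcs_in_flight

# Lower it so a deep queue of writes cannot starve reads,
# which are issued with far less concurrency per thread.
lctl set_param osc.*.max_rpcs_in_flight=8
```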
> Roger Spellman
> Staff Engineer
> Terascala, Inc.
> 508-588-1501
> www.terascala.com <http://www.terascala.com/>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss