[Lustre-devel] Lustre RPC visualization

Eric Barton eric.barton at oracle.com
Tue Jun 1 08:58:48 PDT 2010


I'd really like to see how Vampir handles _all_ the trace data
we can throw at it.  If 600 clients is a pain, how bad will it
be at 60,000?

What in particular makes collecting traces from all the clients
+ servers hard?  How can/should we automate it or otherwise make
it easier?
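
(To make the question concrete, here is a rough sketch of what such
automation could look like -- purely an illustration, not a proposal.
It assumes passwordless ssh to every node and only the stock lctl
commands ("lctl set_param debug=+rpctrace", "lctl clear", "lctl dk");
the host names, node counts and paths below are made up.)

#!/usr/bin/env python
# Sketch: fan rpctrace collection out over ssh around a test run.
import subprocess

CLIENTS = ["client%03d" % i for i in range(1, 601)]  # hypothetical hosts
SERVERS = ["oss01", "mds01"]                         # hypothetical hosts

def run(node, cmd):
    # run one shell command on a remote node
    return subprocess.call(["ssh", node, cmd])

def start_tracing(nodes):
    for n in nodes:
        # enable the rpctrace debug flag and empty the debug buffer
        run(n, "lctl set_param debug=+rpctrace && lctl clear")

def stop_and_collect(nodes, destdir):
    for n in nodes:
        # dump the kernel debug buffer to a file and copy it back
        run(n, "lctl dk /tmp/rpctrace.$(hostname).log")
        subprocess.call(["scp", "%s:/tmp/rpctrace.*.log" % n, destdir])

if __name__ == "__main__":
    nodes = CLIENTS + SERVERS
    start_tracing(nodes)
    # ... run the IOR job here ...
    stop_and_collect(nodes, "./traces/")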

    Cheers,
              Eric

> -----Original Message-----
> From: di.wang [mailto:di.wang at oracle.com]
> Sent: 01 June 2010 12:50 PM
> To: Robert Read
> Cc: Michael Kluge; Eric Barton; Galen M. Shipman
> Subject: Re: [Lustre-devel] Lustre RPC visualization
> 
> Hello,
> 
> IMHO, just run IOR with whatever parameters and collect the rpctrace
> log (probably with only rpctrace enabled) from 1 OST and a few of the
> clients (probably 2 is enough).
> Note: please make sure those 2 clients actually communicated with the
> OST during the IOR run.
> 
> Michael, I do not think you need all the trace logs from the clients, right?
> 
> Robert,
> 
> If there are time slots available for this test on Hyperion, who can
> help get these logs?
> 
> Thanks
> Wangdi
> 
> Robert Read wrote:
> > What should I run, then? Do you have scripts to capture this?
> >
> > robert
> >
> > On May 31, 2010, at 2:39, Michael Kluge wrote:
> >
> >> Hi Robert,
> >>
> >> 600 is a nice number. Plus the traces from the server, and I am happy.
> >>
> >>
> >> Michael
> >>
> >> On 28.05.2010 at 17:53, Robert Read wrote:
> >>
> >>>
> >>> On May 28, 2010, at 4:09, di.wang wrote:
> >>>
> >>>> Hello, Michael
> >>>>
> >>>>> One piece of good news: the feature that lets Vampir show something
> >>>>> like a heat map (Eric asked about this) is coming back with the
> >>>>> release at ISC. It is now called "performance radar". It can produce
> >>>>> a heat map for a counter and does some other things as well. I could
> >>>>> send a picture around, but I first need a bigger trace (more hosts
> >>>>> generating traces in parallel).
> >>>>>
> >>>> Right now I do not have big clusters available to generate the trace.
> >>>> I will see what I can do here.
> >>>
> >>> If ~600 clients is big enough, we could generate that on Hyperion.
> >>>
> >>> robert
> >>>
> >>>>
> >>>> Thanks
> >>>> WangDi
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>
> >>
> >> --
> >>
> >> Michael Kluge, M.Sc.
> >>
> >> Technische Universität Dresden
> >> Center for Information Services and
> >> High Performance Computing (ZIH)
> >> D-01062 Dresden
> >> Germany
> >>
> >> Contact:
> >> Willersbau, Room WIL A 208
> >> Phone:  (+49) 351 463-34217
> >> Fax:    (+49) 351 463-37773
> >> e-mail: michael.kluge at tu-dresden.de
> >> WWW:    http://www.tu-dresden.de/zih
> >>
> >
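
(For reference, the rpctrace entries such a visualization would consume
are the paired "Sending RPC pname:cluuid:pid:xid:nid:opc ..." and
"Completed RPC ..." lines in the debug log. The toy sketch below pairs
them up by XID to print per-RPC latencies from a single log file; the
debug-log prefix layout and the field positions inside the message vary
between Lustre versions, so the indices here are assumptions to adjust,
not a definitive parser.)

#!/usr/bin/env python
# Toy sketch: match "Sending RPC" / "Completed RPC" rpctrace lines by XID.
import re
import sys

def timestamp(line):
    # assumed: 4th colon-separated field of the debug-log prefix is a
    # "seconds.microseconds" timestamp
    return float(line.split(":")[3])

def xid_of(msg):
    # assumed: the payload lists the "pname:cluuid:pid:xid:nid:opc" values,
    # so the xid sits at index 3
    return msg.split(":")[3]

sending = {}
for line in open(sys.argv[1]):
    m = re.search(r"Sending RPC pname:cluuid:pid:xid:nid:opc (\S+)", line)
    if m:
        sending[xid_of(m.group(1))] = timestamp(line)
        continue
    m = re.search(r"Completed RPC pname:cluuid:pid:xid:nid:opc (\S+)", line)
    if m:
        xid = xid_of(m.group(1))
        if xid in sending:
            print("xid %s took %.6f s" % (xid, timestamp(line) - sending.pop(xid)))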




