[Lustre-discuss] Service thread count parameter

Jean-Francois Le Fillatre jean-francois.lefillatre at clumeq.ca
Mon Oct 15 12:01:05 PDT 2012


Hi David,

Yes, this is one strange formula... There are two ways of reading it:

- "one thread per 128MB of RAM, times the number of CPUs in the system"
On one of our typical OSSes (24 GB, 8 cores), that would give: ((24*1024) /
128) * 8 = 1536
And that's waaaay out there...

- "as many threads as you can fit (128MB * numbers of CPUs) in the RAM of
your system"
Which would then give: (24*1024) / (128*8) = 24
For a whole system, that's really low. But for one single OST, it almost
makes sense, in which case you'd want to multiply that by the number of
OSTs connected to your OSS.
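
To make the two readings concrete, here's a quick Python sketch for the
24 GB / 8-core example (just an illustration; the variable names are mine,
not Lustre tunables):

ram_mb = 24 * 1024   # total RAM of the OSS in MB
cpus = 8             # CPU cores in the OSS

# Reading 1: one thread per 128 MB of RAM, times the number of CPUs
reading_1 = (ram_mb // 128) * cpus
print(reading_1)     # 1536 -- way too many

# Reading 2: as many (128 MB * number of CPUs) chunks as fit in RAM
reading_2 = ram_mb // (128 * cpus)
print(reading_2)     # 24 -- low for a whole OSS, plausible for one OST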

The way we did it here: we identified that the major limiting factor is the
software RAID, both in terms of bandwidth and CPU use. So I ran some tests on
a spare machine with sgpdd-survey to get load and performance figures for one
array. Then, taking into account the number of OSTs per OSS (4) and the
overhead of Lustre, I figured that the best thread count would be around 96
(which is 24*4, spot on).
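
For reference, that scaling as a tiny sketch (the figures are from our own
benchmarking, not defaults; treat it as an estimate, not a rule):

threads_per_ost = 24   # sweet spot measured on one array with sgpdd-survey
osts_per_oss = 4       # OSTs attached to each of our OSSes

oss_threads = threads_per_ost * osts_per_oss
print(oss_threads)     # 96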

One major limitation in Lustre 1.8.x (I don't know if it has changed in
2.x) is that only the global thread count for the OSS can be specified. We
have cases where all OSS threads are used on a single OST, and that completely
trashes the bandwidth and latency. We would really need a max thread count
per OST too, so that no single OST would get hit that way. On our systems,
I'd put the max OST thread count at 32 (to stay in the software RAID
performance sweet spot) and the max OSS thread count at 96 (to limit CPU
load).
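
Just to illustrate what such a knob would do (purely hypothetical, nothing
like this exists in 1.8.x), a toy dispatcher check might look like:

from collections import Counter

MAX_OSS_THREADS = 96   # global pool, to limit CPU load
MAX_OST_THREADS = 32   # per-OST cap, to stay in the RAID sweet spot

busy = Counter()       # hypothetical: threads currently serving each OST

def can_dispatch(ost_id):
    # Refuse a new service thread if the whole OSS is saturated, or if
    # this particular OST already holds its share of the pool.
    if sum(busy.values()) >= MAX_OSS_THREADS:
        return False
    return busy[ost_id] < MAX_OST_THREADS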

Thanks!
JF



On Mon, Oct 15, 2012 at 2:20 PM, David Noriega <tsk133 at my.utsa.edu> wrote:

> How does one estimate a good number of service threads? I'm not sure I
> understand the following: 1 thread / 128MB * number of cpus
>
> On Wed, Oct 10, 2012 at 9:17 AM, Jean-Francois Le Fillatre
> <jean-francois.lefillatre at clumeq.ca> wrote:
> >
> > Hi David,
> >
> > It needs to be specified as a module parameter at boot time, in
> > /etc/modprobe.conf. Check the Lustre tuning page:
> > http://wiki.lustre.org/manual/LustreManual18_HTML/LustreTuning.html
> > http://wiki.lustre.org/manual/LustreManual20_HTML/LustreTuning.html
> >
> > Note that once created, the threads won't be destroyed, so if you want to
> > lower your thread count you'll need to reboot your system.
> >
> > Thanks,
> > JF
> >
> >
> > On Tue, Oct 9, 2012 at 6:00 PM, David Noriega <tsk133 at my.utsa.edu>
> wrote:
> >>
> >> Is the parameter ost.OSS.ost_io.threads_max, when set via lctl
> >> conf_param, persistent across reboots/remounts?
> >
> >
> >
> >
> > --
> > Jean-François Le Fillâtre
> > Calcul Québec / Université Laval, Québec, Canada
> > jean-francois.lefillatre at clumeq.ca
> >
>
>
>
> --
> David Noriega
> CSBC/CBI System Administrator
> University of Texas at San Antonio
> One UTSA Circle
> San Antonio, TX 78249
> Office: BSE 3.114
> Phone: 210-458-7100
> http://www.cbi.utsa.edu
>
> Please remember to acknowledge the RCMI grant , wording should be as
> stated below:This project was supported by a grant from the National
> Institute on Minority Health and Health Disparities (G12MD007591) from
> the National Institutes of Health. Also, remember to register all
> publications with PubMed Central.
>



-- 
Jean-François Le Fillâtre
Calcul Québec / Université Laval, Québec, Canada
jean-francois.lefillatre at clumeq.ca

