[External] : Re: jstack, profilers and other tools

Alex Otenko oleksandr.otenko at gmail.com
Mon Jul 25 08:16:45 UTC 2022

Well, there are a few things I said several times too, so we are in the
same boat. :)

Ok, just open your favourite modelling software and see:

Given a request rate and a request processing time, there is a minimal
number of threads that can process them. That's the capacity needed to do
work (i.e. for the system to remain stable).

Thread-per-request is simply the maximum number of threads you can have to
process that work.
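
That minimal-capacity point can be sketched in a few lines (class and method names are mine, purely illustrative): each thread completes 1/S requests per second, so stability needs threads / S > request rate.

```java
// Illustrative sketch only; names are not from this thread.
public class Capacity {
    // Minimal thread count for a stable system: each thread completes
    // 1/serviceTimeSec requests per second, so we need
    // threads * (1/serviceTimeSec) > lambda. Using floor(..) + 1 rather
    // than ceil(..) keeps the inequality strict even when
    // lambda * serviceTimeSec is a whole number.
    static int minThreadsForStability(double lambda, double serviceTimeSec) {
        return (int) Math.floor(lambda * serviceTimeSec) + 1;
    }

    public static void main(String[] args) {
        // 99 requests/s at 10 ms each: offered load 0.99, so a single
        // thread is (barely) enough for the system to remain stable.
        System.out.println(minThreadsForStability(99.0, 0.010)); // prints 1
    }
}
```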

Then you can see what your favourite modelling software says about
concurrency. It says that as you add threads, concurrency in the sense used
in Little's law decreases.

Since this is also a mathematical fact, something in the claim that adding
threads increases concurrency needs reconciling.
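
You don't even need modelling software for this. Here is a sketch, assuming an M/M/c queue (Poisson arrivals, exponential 10 ms service times, c threads); the thread never names a specific model, so the formula is my assumption. It computes the mean response time W from the Erlang C delay probability and then L = lambda * W by Little's law:

```java
// Illustrative M/M/c sketch; the queueing model is an assumption,
// not something specified anywhere in this thread.
public class MMc {
    // Erlang C: probability that an arriving request has to wait,
    // with c servers and offered load a = lambda / mu.
    static double erlangC(int c, double a) {
        double term = 1.0;          // a^k / k!, starting at k = 0
        double sum = 1.0;           // sum of a^k / k! for k = 0 .. c-1
        for (int k = 1; k < c; k++) {
            term *= a / k;
            sum += term;
        }
        term *= a / c;              // a^c / c!
        double last = term / (1.0 - a / c);
        return last / (sum + last);
    }

    // Mean response time W = 1/mu + Pwait / (c*mu - lambda).
    static double responseTime(int c, double lambda, double mu) {
        return 1.0 / mu + erlangC(c, lambda / mu) / (c * mu - lambda);
    }

    public static void main(String[] args) {
        double lambda = 99.0;       // requests per second
        double mu = 100.0;          // 10 ms per request => 100 req/s per thread
        for (int c = 1; c <= 6; c++) {
            double w = responseTime(c, lambda, mu);
            // Little's law: mean concurrency L = lambda * W.
            System.out.printf("threads=%d  W=%.6fs  L=%.3f%n", c, w, lambda * w);
        }
    }
}
```

With these inputs, one thread gives W = 1 s and L = 99; by six threads W is within about a microsecond of the 10 ms service time and L has fallen toward the offered load of 0.99. That is, as you add threads, the concurrency L decreases.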


On Sun, 24 Jul 2022, 20:45 Ron Pressler, <ron.pressler at oracle.com> wrote:

> > On 24 Jul 2022, at 19:26, Alex Otenko <oleksandr.otenko at gmail.com>
> wrote:
> >
> > The "other laws" don't contradict Little's law, they only explain that
> you can't have an equals sign between thread count and throughput.
> That there is an equals sign when the system is stable is a mathematical
> theorem, so there cannot exist a correct explanation for its falsehood.
> Your feelings about it are irrelevant to its correctness. It is a theorem.
> >
> > Let me remind you what I mean.
> >
> > 1 thread, 10 ms per request. At a request rate of 66.667 requests/s the
> > concurrency is 2, and at a request rate of 99 requests/s the concurrency
> > is 99. Etc. All of this is because response time gets worse, as the
> > "other laws" predict. But we already see thread count is not a cap on
> > concurrency, as was one of the claims earlier in this thread.
> >
> > If we increase thread count we can improve response times. But at a
> > thread count of 5 or 6 you are only 1 microsecond away from the "optimal"
> > 10 ms response time. Whereas arithmetically the situation keeps improving
> > (by an ever smaller fraction of a microsecond), the mathematics cannot
> > capture the notion of diminishing returns.
> First, as I must have repeated three times in this discussion, we’re
> talking about thread-per-request (please read JEP 425, as all this is
> explained there), so by definition, we’re talking about cases where the
> number of threads is equal to or greater than the concurrency, i.e. the
> number of requests in flight.
> Second, as I must have repeated at least three times in this discussion,
> increasing the number of threads does nothing. A mathematical theorem tells
> us what the concurrency *is equal to* in a stable system. It cannot
> possibly be any lower or any higher. So your number of requests that are
> being processed is equal to some number if your system is stable, and in
> the case of thread-per-request programs, which are our topic, the number
> of threads processing requests is exactly equal to the number of concurrent
> requests times the number of threads per request, which is at least one by
> definition. If you add any more threads they cannot be processing requests,
> and if you have fewer threads then your system isn’t stable.
> Finally, if your latency starts going up, then so does your concurrency,
> up to the point where one of your software or hardware components reaches
> its peak concurrency and your server destabilises. While Little’s law tells
> you what the concurrency is equal to (and so, in a thread-per-request
> program what the number of request-processing threads is equal to), the
> number of threads is not the only limit on the maximum capacity. We know
> that in a thread-per-request server, every request consumes at least one
> thread, but it consumes other resources as well, and they, too, place
> limitations on concurrency. All this is factored into the bounds on the
> concurrency level. It’s just that we empirically know that the limitation
> on threads is hit *first* by many servers, which is why async APIs and
> lightweight user-mode threads were invented.
> Note that Little’s law, being a mathematical theorem, applies to every
> component separately, too. I.e., you can treat your CPU as the server, the
> requests would be the processing bursts, and the maximum concurrency would
> be the number of cores.
> >
> > So that answers why we are typically fine with a small thread count.
> >
> That we are not typically fine writing scalable thread-per-request
> programs with few threads is the reason why async I/O and user-mode threads
> were created. It is possible some people are fine, but clearly many are
> not. If your thread-per-request program needs to handle only a small number
> of requests concurrently, and so needs only a few threads, then there’s no
> need for you to use virtual threads. That is exactly why, when this
> discussion started what feels like a year ago, I said that when there are
> virtual threads, there must be many of them (or else they’re not needed).
> — Ron

More information about the loom-dev mailing list