[External] : Re: jstack, profilers and other tools

Ron Pressler ron.pressler at oracle.com
Fri Jul 29 00:01:05 UTC 2022



On 29 Jul 2022, at 00:11, Alex Otenko <oleksandr.otenko at gmail.com> wrote:

Thanks.

That works under the _assumption_ that the time stays constant, which is also a statement made in the JEP.

But we know that as the thread count grows, the time goes down. So one needs more reasoning to explain why T goes up while at the same time keeping the assumption that the time doesn't go down.

A new request arrives — a thread is created. What time goes down?
If you’re talking about fanout, i.e. adding threads to perform operations in parallel, the maths actually stays exactly the same. First, remember that we’re not talking about threads *we’re adding* but threads that are created by virtue of the fact that every request gets a thread. Second, if you want to do the calculation with threads directly, rather than requests, then the concurrency goes up by the same factor as the latency is reduced (https://inside.java/2020/08/07/loom-performance/).


In addition, it makes sense to talk of thread counts when they present a limit of some sort - e.g. if N threads are busy and one more request arrives, the request waits for a thread to become available - we have a system with N threads. Thread-per-request does not have that limit: for any number of threads already busy, if one more request arrives, it still gets a thread.

That could be true, but that’s not the point of thread-per-request at all. The most important point is that a thread-per-request system is one that consumes a thread for the entire duration of processing a request.


My study concludes that thread-per-request is the case of an infinite number of threads (from a mathematical point of view). In this case, talking about T and its dependency on the request rate is meaningless.

That is incorrect.


A Poisson distribution of requests means that for any request rate there is a non-zero probability of any number of requests in the system - and hence of any number of threads. Connecting this number to a rate is meaningless.

Little’s proof is independent of distribution.
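To make that concrete, here is a small check (my own sketch, not from the thread) that L = lambda * W holds on an arbitrary deterministic trace, with no distributional assumption at all, provided the system is empty at the start and end of the observation window:

```java
// Hypothetical trace: three requests with arbitrary (non-Poisson)
// arrival and departure times; the system is empty at t=0 and t=5.
public class LittleCheck {
    // Sum of per-request sojourn times. This equals the integral of
    // the number-in-system N(t) over the observation window, since
    // each request contributes exactly its sojourn time to that area.
    static double sojournSum(double[] arrive, double[] depart) {
        double s = 0;
        for (int i = 0; i < arrive.length; i++) {
            s += depart[i] - arrive[i];
        }
        return s;
    }

    public static void main(String[] args) {
        double[] arrive = {0, 1, 2};
        double[] depart = {4, 3, 5};
        double horizon = 5.0; // observation window [0, 5]

        double L = sojournSum(arrive, depart) / horizon;       // time-average in system
        double lambda = arrive.length / horizon;               // arrival rate
        double W = sojournSum(arrive, depart) / arrive.length; // mean sojourn time

        System.out.println(L + " vs " + lambda * W); // equal up to rounding
    }
}
```

The two sides agree for any such trace, which is the sense in which Little's law does not care about the arrival distribution.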


Basically, you need to support "a high number of threads" for any request rate, not just a very high request rate.

I sort of understand what you’re trying to say, but it’s more misleading than helpful. If your server gets 10 requests per second on average, and their average latency never exceeds 0.1 seconds, then even if you can only have one thread your system would still be stable (i.e. there would be no queues growing without bound). If the requests momentarily arrive close together, or if some latencies are momentarily high, then the queue will momentarily grow. So no: since the goal is to keep the server stable, if you have a low throughput you do not need many threads.
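As an illustration of that stability claim, here is a sketch with made-up arrival times (mine, not from the post): a single worker serving 0.1-second requests absorbs a momentary burst; the backlog grows briefly and then drains back to zero.

```java
// Sketch: one worker, fixed service time, arrivals momentarily bunched.
public class SingleThreadStability {
    // Largest time any request spends waiting for the single worker,
    // given sorted arrival times and a fixed per-request service time.
    static double maxWait(double[] arrivals, double service) {
        double free = 0;    // time at which the worker next becomes free
        double maxWait = 0;
        for (double t : arrivals) {
            double start = Math.max(t, free); // wait if the worker is busy
            maxWait = Math.max(maxWait, start - t);
            free = start + service;
        }
        return maxWait;
    }

    public static void main(String[] args) {
        // A burst of five simultaneous requests at t=0, then one every
        // 0.2 s: the long-run rate is modest, but arrivals are bunched.
        double[] arrivals = {0, 0, 0, 0, 0, 1.0, 1.2, 1.4, 1.6, 1.8};
        System.out.println("max queueing delay: " + maxWait(arrivals, 0.1));
    }
}
```

The queueing delay peaks during the burst (the last burst request waits 0.4 s) and is back to zero once the spaced-out arrivals resume, which is exactly the "momentarily grows, stays bounded" behaviour described above.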

— Ron