[External] : Re: jstack, profilers and other tools

Ron Pressler ron.pressler at oracle.com
Sun Jul 24 14:18:30 UTC 2022


Little’s law dictates that concurrency must rise with throughput (= request rate) assuming latency doesn’t drop, i.e. the number of requests being served rises. If your program is thread-per-request then, by definition, if the number of requests rises so must the number of threads. Of course, we’re only talking about thread-per-request here; if you choose to do something else you can disentangle requests from threads, and then concurrency is not tied to the number of threads (but then you give up on synchronous code and on full platform support).
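To make that concrete with assumed, illustrative numbers: Little’s law says L = λW, where L is the average number of requests in progress (concurrency), λ is the request rate (throughput) and W is the average latency. At λ = 1,000 requests/s and W = 100 ms, L = 1,000 × 0.1 = 100 requests in flight on average, so a thread-per-request server needs on the order of 100 busy threads at any moment; doubling λ at the same W doubles that to 200.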

All this is covered in JEP 425, which explains that to reach higher throughputs you must either abandon the thread as the unit of concurrency and write asynchronous code, or use threads that can be plentiful. The reasons we invested so much in making threads that can be plentiful are: 1. There are many people who prefer the synchronous style, and 2. The asynchronous style is fundamentally at odds with the design of the language and the platform, which cannot support it as well as they can the synchronous style (at least not without an overhaul of very basic concepts).
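As a minimal, illustrative sketch of that thread-per-request style (assuming a JDK with virtual threads as specified by JEP 425; the names and numbers here are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestSketch {
    public static void main(String[] args) {
        // One virtual thread per task: threads are cheap enough that the number
        // of concurrent requests is no longer capped by the number of OS threads.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int request = i;
                executor.submit(() -> handle(request)); // plain, blocking, synchronous code
            }
        } // close() waits for the submitted tasks to complete
    }

    // Hypothetical request handler; the sleep stands in for a blocking call (DB, HTTP, ...).
    static void handle(int request) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The same code submitted to a fixed-size pool of platform threads would still work, but its concurrency, and therefore its throughput, would be bounded by the pool size.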

BTW, I don’t understand your point about there being “other laws at play.” Little’s law is not a physical law subject to refutation by observation, but a mathematical theorem. As such, short of an inconsistency in the foundation of mathematics, all other mathematical theorems must be consistent with it. To accommodate higher request rates, latency must drop or concurrency must rise — that must always be true. Other laws may state other things that must also be true, but they cannot contradict this.

— Ron

On 24 Jul 2022, at 14:07, Alex Otenko <oleksandr.otenko at gmail.com> wrote:

I think none of this statement has anything to do with Little's law.

On Sat, 23 Jul 2022, 02:04 Ron Pressler, <ron.pressler at oracle.com> wrote:
We’re talking about thread-per-request programs. In such programs, one thread has a concurrency of one (i.e. it handles one request, hence “thread-per-request”). As I explained, to get higher concurrency than what’s allowed by the number of OS threads you can *either* use user-mode threads *or* not represent a unit of concurrency as a thread, but here we’re talking about the former. All that is covered in JEP 425.

— Ron

On 23 Jul 2022, at 00:25, Alex Otenko <oleksandr.otenko at gmail.com> wrote:

I think the single threaded example I gave speaks for itself. 1 thread can sustain various throughputs with various concurrency. I've shown a case with 99 concurrent requests, as per Little's law (and I agree with it),  and it's easy to see how to get any higher concurrency.

There are other laws at play, too, so my example latency wasn't random. But this has been long enough.


On Thu, 21 Jul 2022, 12:30 Ron Pressler, <ron.pressler at oracle.com> wrote:
Little’s law has no notion of threads, only of “requests.” But if you’re talking about a *thread-per-request* program, as I made explicitly clear, then the number of threads is equal to or greater than the number of requests.

And yes, if the *maximum* thread count is low, a thread-per-request program will have a low bound on the number of concurrent requests, and hence, by Little’s law, on throughput.
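To put assumed, illustrative numbers on that: if the maximum is, say, 500 platform threads and each request spends W = 50 ms in the server (mostly blocked), then at most L = 500 requests can be in flight, and by Little’s law throughput is capped at λ = L / W = 500 / 0.05 = 10,000 requests/s, no matter how little CPU each request actually consumes.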

— Ron

On 20 Jul 2022, at 19:24, Alex Otenko <oleksandr.otenko at gmail.com> wrote:

To me that statement implies a few things:

- that Little's law talks of thread count

- that if thread count is low, you can't have a throughput advantage


Well, I don't feel like discussing my imperfect grasp of English.

On Tue, 19 Jul 2022, 23:52 Ron Pressler, <ron.pressler at oracle.com> wrote:


On 19 Jul 2022, at 18:38, Alex Otenko <oleksandr.otenko at gmail.com> wrote:

Agreed about the architectural advantages.

The email that triggered my rant did contain the claim that using virtual threads has the advantage of higher concurrency.

> The throughput advantage to virtual threads comes from one aspect — their *number* — as explained by Little’s law.



Yes, and that is correct. As I explained, a higher maximum number of threads does indeed mean it is possible to reach the higher concurrency needed for higher throughput, so virtual threads, by virtue of their number, do allow for higher throughput. That statement is completely accurate, and yet it means something very different from (the incorrect) “increasing the number of threads increases throughput”, which is how you misinterpreted the statement.

This is similar to saying that AC allows people to live in areas with higher temperatures, and that is a very different statement from saying that AC increases the temperature (although I guess it happens to also do that).

— Ron


