[External] : Re: jstack, profilers and other tools

eric at kolotyluk.net eric at kolotyluk.net
Wed Jul 13 18:26:29 UTC 2022


Just testing my intuition here… because reading what Ron says is often eye-opening… and changes my intuition

 

1.	Loom improves concurrency via Virtual Threads
	a.	and consequently, potentially improves throughput.

2.	A key aspect of concurrency is blocking, where blocked tasks enable resources to be applied to unblocked tasks (where Fork-Join is highly effective).
	a.	Pre-Loom, resources such as Threads could be applied to unblocked tasks, but
		i.	Platform Threads are heavy, expensive, etc., such that the number of Platform Threads puts a bound on concurrency.
	b.	Post-Loom, resources such as Virtual Threads can now be applied to unblocked tasks, such that
		i.	light, cheap, etc. Virtual Threads enable a much higher bound on concurrency;
		ii.	according to Little’s Law, throughput can rise because the number of threads can rise.

3.	Little’s Law also says “The only requirements are that the system be stable and non-preemptive.”
	a.	While the underlying O/S may be preemptive, the JVM is not, so this requirement is met.
	b.	But, Ron says, “While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound).”
	c.	Which I take to imply that increasing the number of Virtual Threads increases the stability… ?
		i.	Even in Loom there is an upper bound on the number of Virtual Threads that can be created, albeit a much higher upper bound.

4.	Where I am still confused is:
	a.	In Loom, I would expect that even when all our CPU Cores are at 100%, 100% throughput, the system is still stable?
		i.	Or maybe I am misinterpreting what Ron said?
	b.	However, latency will suffer, unless
		i.	more CPU Cores are added to handle the overall load, via some load balancer, or
		ii.	flow control, such as backpressure, is added such that queues do not grow without bound (a topic I would love to explore more).
		iii.	Or, does an increase in latency mean a loss of stability?
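(Point 2.b.ii above can be sanity-checked with simple arithmetic. The sketch below is mine, not from the thread, and the numbers in it are hypothetical:)

```java
// Little's Law: L = lambda * W, where
//   L      = mean number of requests in flight (concurrency),
//   lambda = throughput (requests/second),
//   W      = mean time each request spends in the system (seconds).
public class LittlesLawSketch {
    static double requiredConcurrency(double throughputPerSec, double meanLatencySec) {
        return throughputPerSec * meanLatencySec;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 10,000 req/s at 100 ms mean latency needs
        // ~1,000 requests in flight; if each request holds a thread while
        // blocked, that means ~1,000 threads.
        System.out.println(requiredConcurrency(10_000, 0.1)); // prints 1000.0
    }
}
```

(The point of virtual threads, then, is that raising the thread count by another factor of 100 is cheap, so the number of threads stops being the term that caps L.)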

 

Cheers, Eric

 

From: loom-dev <loom-dev-retn at openjdk.org> On Behalf Of Ron Pressler
Sent: July 13, 2022 6:30 AM
To: Alex Otenko <oleksandr.otenko at gmail.com>
Cc: Rob Bygrave <robin.bygrave at gmail.com>; Egor Ushakov <egor.ushakov at jetbrains.com>; loom-dev at openjdk.org
Subject: Re: [External] : Re: jstack, profilers and other tools

 

The application of Little’s law is 100% correct. Little’s law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system’s level of concurrency is bounded at a very low level — say, 10 — then having more than 10 threads is unhelpful, but as we’re talking about a program that uses virtual threads, we know that is not the case.

 

Also, Little’s law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound).
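(The stability caveat can be illustrated with back-of-the-envelope arithmetic; this sketch and its numbers are illustrative, not from the thread:)

```java
// Sketch: if requests arrive faster than the system can serve them,
// the backlog (queue) grows without bound; that is what "not stable" means.
public class StabilitySketch {
    // Net queue growth after `seconds` at the given arrival and service rates.
    static double backlogAfter(double arrivalPerSec, double servicePerSec, double seconds) {
        return Math.max(0, (arrivalPerSec - servicePerSec) * seconds);
    }

    public static void main(String[] args) {
        // Hypothetical: 1,000 req/s arriving but only 800 req/s served,
        // e.g. because too few threads are available to absorb blocking.
        System.out.println(backlogAfter(1_000, 800, 60));   // 12000.0 queued after one minute
        System.out.println(backlogAfter(1_000, 1_000, 60)); // 0.0 -- stable
    }
}
```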

 

— Ron





On 13 Jul 2022, at 14:00, Alex Otenko <oleksandr.otenko at gmail.com <mailto:oleksandr.otenko at gmail.com> > wrote:

 

This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. 

 

Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have a throughput of 100k ops/sec.

 

Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not enough threads.

 

On Tue, 12 Jul 2022, 15:47 Ron Pressler, <ron.pressler at oracle.com <mailto:ron.pressler at oracle.com> > wrote:

 





On 11 Jul 2022, at 22:13, Rob Bygrave <robin.bygrave at gmail.com <mailto:robin.bygrave at gmail.com> > wrote:

 

> An existing application that migrates to using virtual threads doesn’t replace its platform threads with virtual threads

 

What I have been confident about to date based on the testing I've done is that we can use Jetty with a Loom based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub 1000 virtual threads.  Ron, are you suggesting this isn't a valid use of virtual threads or am I reading too much into what you've said here?

 

 

The throughput advantage of virtual threads comes from one aspect — their *number* — as explained by Little’s law. A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads, which is what is required to increase throughput. Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput.
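(A minimal sketch of that unpooled, thread-per-task style, using the standard Executors.newVirtualThreadPerTaskExecutor() from JDK 21; the task body here is a hypothetical stand-in for request handling:)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: an unpooled executor that spawns one new virtual thread per task,
// rather than queueing tasks behind a fixed pool of N platform threads.
public class PerTaskExecutorSketch {
    static int serveAll(int requests) {
        AtomicInteger served = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                executor.submit(() -> {
                    // Stand-in for blocking work (e.g. a JDBC or HTTP call);
                    // blocking a virtual thread is cheap.
                    try { Thread.sleep(5); } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    served.incrementAndGet();
                });
            }
        } // close() implicitly waits for all submitted tasks to finish
        return served.get();
    }

    public static void main(String[] args) {
        System.out.println(serveAll(10_000));
    }
}
```

(All 10,000 tasks get their own virtual thread and block concurrently, which a pool of N platform threads would serialize into batches of N.)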





 

 

> unusual for an application that has any virtual threads to have fewer than, say, 10,000

 

In the case of http server use of virtual threads, I feel the use of “unusual” is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with fewer than 1000 concurrent requests per server instance.

 

1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for.
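(The read/write fanout described above might look like this sketch; Thread.ofVirtual() is the standard JDK 21 builder, while the request structure and byte counts are hypothetical, with the actual connection I/O elided:)

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: one request fans out into two virtual threads, one dedicated
// to reading and one to writing, then joins both.
public class FanoutSketch {
    static int handleRequest() throws InterruptedException {
        AtomicInteger bytesIn = new AtomicInteger();
        AtomicInteger bytesOut = new AtomicInteger();

        Thread reader = Thread.ofVirtual().name("reader").start(
                () -> bytesIn.set(64));   // stand-in for reading from the connection
        Thread writer = Thread.ofVirtual().name("writer").start(
                () -> bytesOut.set(128)); // stand-in for writing the response

        reader.join(); // the handling thread blocks cheaply; it is virtual too
        writer.join();
        return bytesIn.get() + bytesOut.get();
    }
}
```

(With fanout like this, 1000 concurrent requests already mean 3000 live threads — handler, reader, and writer each — which is how a five-figure thread count becomes unremarkable.)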

 

The important point is that tooling needs to adapt to a high number of threads, which is why we’ve added a tool that’s designed to make sense of many threads, where jstack might not be very useful.

 

— Ron

 

 
