Jetty and Loom
Alan Bateman
Alan.Bateman at oracle.com
Mon Jan 4 19:44:04 UTC 2021
On 04/01/2021 17:55, Greg Wilkins wrote:
>
>
> Having provided an application container for over 25 years, I
> can definitely tell you that bloated libraries and stupidly deep
> call stacks are a thing, regardless of the importance or need for
> them. There is just a certain class of development process for which
> no problem cannot be solved by adding a few more dependencies.
> Needless bloat is a thing which is no more going to be solved by Loom
> than it was by async or reactive or dependency injection or buzzword
> coding style. The bloat examples just show that Loom is not a
> silver bullet and cannot magic away deep stacks... but then nothing
> can. It's not a criticism of Loom, just an observation of reality and
> that it's not a silver bullet... i.e. a bit of temperance on the "just
> spawn another thread" meme.
>
> :
>
>
> Again, I'm not saying that Thread pools are silver bullets that solve
> all problems. I'm just saying that there are other reasons for using
> them other than just because kernel threads are slow to start. In the
> right circumstance, a thread pool (or an executor that limits
> concurrent executions) is a good abstraction. My stress on them
> is in reaction to the "forget about thread pools" meme, which I think
> would land better if it sounded a little less like a shiny projectile.
I haven't seen any memes go by on these topics.
There have been a few examples with people trying out the builds that
created thread pools of virtual threads, usually by swapping in a
virtual-thread ThreadFactory on a fixed thread pool when they meant to
replace the thread pool itself. There are also examples, like the one we
are discussing here, where the number of request handlers that can run
concurrently is limited by the number of network connections rather than
the number of threads.
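The ThreadFactory mix-up is easy to sketch. The method names below are from the eventual JDK 21 release, not the 2021 Loom builds under discussion, so treat this as an illustration rather than period-accurate code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolMixup {
    public static void main(String[] args) throws Exception {
        // The mistake: a fixed pool of 16 worker slots whose factory happens
        // to create virtual threads. Concurrency is still capped at 16 and
        // extra tasks queue, so nothing is gained from the threads being virtual.
        ExecutorService mistaken =
                Executors.newFixedThreadPool(16, Thread.ofVirtual().factory());

        // What was meant: no pool at all, one fresh virtual thread per task.
        ExecutorService intended = Executors.newVirtualThreadPerTaskExecutor();

        boolean virtual = intended.submit(
                () -> Thread.currentThread().isVirtual()).get();
        System.out.println(virtual);  // true

        mistaken.shutdown();
        intended.shutdown();
    }
}
```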
Virtual threads can run existing code but I would be concerned with the
memory footprint if they are just used to run a lot of existing bloat.
I'm looking at stack traces every day too and they are deeper than in many
other languages/runtimes, but stack traces of 1000+ frames seem a bit
excessive when the work to be done is relatively simple. Compiled frames are very
efficient and virtual threads would be much happier with thread stacks
that are a few KB.
>
> As a tangent, I am intrigued by some of the claims from Loom regarding
> cancellation and interrupt. The ability to cancel tasks or impose
> deadlines is definitely a very nice feature to have, but I can't see
> how cancelling/interrupting a virtual thread is any easier than
> cancelling/interrupting a kernel thread. You still have all the
> finally blocks to unwind and applications that catch and ignore
> ThreadDeath exceptions. It didn't work for kernel threads, so I
> can't see what is different for virtual threads.... but I REALLY hope
> I'm wrong and that Loom has discovered a safe way to cancel a task.
With virtual threads there is a 1:1 relationship between the task and
the Thread that executes it. This is good for Thread interrupt as it
avoids the races that arise when pooled threads are borrowed by
successive tasks and cancel(true) maps to Thread::interrupt.
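A minimal sketch of that 1:1 relationship, again using the JDK 21 API names rather than those of the 2021 builds: interrupting the virtual thread cancels exactly the task it runs, with no pooled thread that might meanwhile have been handed to another task.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class InterruptOneTask {
    public static void main(String[] args) throws Exception {
        AtomicBoolean sawInterrupt = new AtomicBoolean();

        // One task, one virtual thread for its whole lifetime.
        Thread t = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(60_000);            // simulates blocking work
            } catch (InterruptedException e) {
                sawInterrupt.set(true);          // cancellation observed here
            }
        });

        // Interrupt status is sticky, so this is safe even if the thread
        // has not reached sleep() yet.
        t.interrupt();
        t.join();
        System.out.println(sawInterrupt.get());  // true
    }
}
```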
There has been some exploration and prototypes of cancellation,
including exploring how both mechanisms can co-exist but we haven't been
happy with it. It's a topic that the project will come back to.
>
> If an application has lots of requests waiting for some special
> message to arrive, but they use synchronized/wait/notify for that
> waiting, then that's not supported by Loom, so you'd only need 16
> requests to the application waiting like that before all 16 of your
> kernel threads are blocked in something Loom doesn't support.
Just on this passing comment: The intention is to replace the Java
monitor implementation so the current limitation with parking while
holding a monitor goes away. It's a big project with lots of unknowns,
all early stages at this point in time. For now, Object.wait works with
the FJP ManagedBlocker mechanism to increase parallelism during the
wait, so the pool of 16 carrier threads will grow while code that uses
Object.wait is executing. For those who are desperate, the option to
mechanically replace the monitors with j.u.c locks and Condition
objects is there.
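That mechanical replacement might look like the following sketch (the class and field names are invented for illustration, not taken from any real codebase):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class Waiter {
    // Monitor version, which pinned the carrier thread in early Loom builds:
    //   synchronized (this) { while (!signalled) wait(); }
    //
    // j.u.c replacement: ReentrantLock + Condition park only the virtual
    // thread, releasing its carrier.
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private boolean signalled;

    public void await() throws InterruptedException {
        lock.lock();
        try {
            while (!signalled) {
                ready.await();      // replaces Object.wait
            }
        } finally {
            lock.unlock();
        }
    }

    public void signal() {
        lock.lock();
        try {
            signalled = true;
            ready.signalAll();      // replaces Object.notifyAll
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        Waiter w = new Waiter();
        Thread t = Thread.ofVirtual().start(() -> {
            try { w.await(); } catch (InterruptedException e) { }
        });
        Thread.sleep(50);           // let the waiter park
        w.signal();
        t.join();
    }
}
```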
> :
>
> It may well be that eventually virtual threads improve, and that we
> think of new techniques, so that a server like Jetty could be
> implemented on virtual threads. However I think we are a long way
> from that today.
>
> In short, I think that adding yet another level of "virtual" between
> tasks and CPU cores just makes it even harder to do the mechanical
> sympathetic optimizations that are needed for high performance
> servers. But then I'm not sure the innards of a server like Jetty are
> the primary target for Loom anyway; it is more targeted at
> allowing business logic to be written simply and to take advantage of
> all the smarts in the server without the complexity of async APIs.
The goal is to make it possible to write code that scales as well as async
code does but without the complexity of async. The intention is that
debuggers, profilers, and everything else just works as people expect.
-Alan.