Jetty and Loom

Greg Wilkins gregw at
Wed Dec 30 08:14:55 UTC 2020


The Jetty team have blogged about our initial experiments with Loom virtual
threads.


It hasn't all been plain sailing, and I'm sure (and hope) that we'll be
corrected on some aspects.

However, out of that work we have produced a branch of Jetty that we think
has a good way to integrate Loom.  We have not simply replaced the thread
pool with some kind of virtual-thread factory, as that would still leave
Jetty doing lots of internal work on the assumption that some threads cannot
block, avoiding head-of-line blocking for flow control, and so on.
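As a minimal sketch of that rejected approach, assuming a current JDK with
virtual threads (the API below is today's standard one, not the 2020 Loom
preview, and the class name is illustrative rather than Jetty's code), "just
swapping the pool" amounts to little more than:

```java
import java.util.concurrent.Executors;

// Sketch of the naive integration: every server task, internal or
// application-facing, runs on its own virtual thread.
public class NaiveVirtualThreadPool {
    public static void main(String[] args) {
        // try-with-resources: close() waits for submitted tasks to finish
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() ->
                    System.out.println("virtual=" + Thread.currentThread().isVirtual()));
        }
    }
}
```

The swap itself is trivial, which is exactly why it is tempting; the problem
is that the server's internals still behave as if blocking were expensive.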

Instead, we have kept the core of Jetty running asynchronously on kernel
threads, but marked any task that calls into the application as
non-blocking, since such tasks now just spawn a virtual thread and return.
The result is that a pool of kernel threads does the selecting and other
high-priority tasks, but when an HTTP connection is selected, the selector
thread directly spawns a virtual thread that does the reading, parsing,
handling and writing of the response, free to block as it likes.  The
selector thread then returns to selecting without dispatching a replacement
selector thread (as it would if the task blocked directly).
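The hybrid scheme described above can be sketched roughly as follows,
assuming a current JDK with virtual threads; the class and the simulated
"connection ready" event are illustrative, not Jetty's actual selector code:

```java
import java.util.concurrent.CountDownLatch;

// Sketch of the hybrid scheme: the selector loop stays on a kernel
// (platform) thread; a selected connection is handed to a freshly spawned
// virtual thread that may block freely, and the selector keeps selecting.
public class HybridSelectorSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);

        // Platform thread playing the role of the selector loop.
        Thread selectorLoop = Thread.ofPlatform().start(() -> {
            // ... selector.select() would happen here ...
            // A connection is ready: spawn a virtual thread for it instead
            // of handling (and possibly blocking) inline.
            Thread.ofVirtual().start(() -> {
                // read, parse, handle, write -- blocking is cheap here
                System.out.println("handled on virtual thread: "
                        + Thread.currentThread().isVirtual());
                done.countDown();
            });
            // The selector loop returns to selecting immediately; no
            // replacement selector thread needs to be dispatched.
        });

        done.await();
        selectorLoop.join();
    }
}
```

The key property is that blocking in the handler no longer steals a kernel
thread from the selector pool, so the async core and the blocking
application style coexist.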

We think this is a reasonable approach.  We have tested it with 1,000
clients, and it shows better latency than the equivalent async application,
at the cost of slightly more CPU.  We now need to crank this up to 10,000 or
100,000 clients for part 3 of our blog... but that will take us a few weeks,
and it would be good to get feedback on our current experiments before we do
that.

We will also continue to work on the jetty-loom branch, and since the
changes are rather minimal, if they prove to be a good way to go, I don't
see why they wouldn't soon migrate to a main branch of Jetty.


Greg Wilkins <gregw at> CTO
