loom ea based jdk18 feedback

Jon Ross jonross at gmail.com
Fri Nov 19 19:40:09 UTC 2021


On Fri, Nov 19, 2021 at 3:04 AM Alan Bateman <Alan.Bateman at oracle.com> wrote:
>
> On 18/11/2021 21:13, Jon Ross wrote:
> If you are doing your own implementation of blocking I/O outside of the
> JDK then park/unpark should be sufficient. Unpark will queue the virtual
> thread to the scheduler so it continues on one of its threads. If you
> really want to restrict the scheduler to 1 thread then run with
> -Djdk.defaultScheduler.maxPoolSize=1. You are in very advanced territory
> if you want fine control over the OS threads; this isn't a priority for
> this project right now but we will come back to it.

Yes, having control over OS threads is a requirement for me. So is
controlling how/when virtual threads run. I will come back to loom
another time.

The rest of this email is entirely for your benefit because you are
actively soliciting feedback. You are free to do with this what you
will, including completely ignoring it. I am not (as some others in
this thread have claimed) arguing, or advocating that you do anything
different. This is entirely an offer to help you understand a
customer's POV of loom for their specific use case. I am not asking
for anything at all, not even for you to care about my use case.

I guess the TL;DR is that I don't think this is very "advanced territory".

I come from a FinTech background. Most of the frameworks in the Java
low-latency FinTech space (LMAX, the OpenHFT stuff, Real Logic / SBE,
Nasdaq, etc.) use some custom alternative to NIO, ranging from a simple
JNI epoll implementation to more exotic APIs that are not BSD-socket
based. They can fall back to NIO for local testing, but are rarely
deployed using NIO. The event-loop I/O API is the only I/O used. The
"tasks" are short-lived, and always return to the same event loop to
wait on the next thing. Everything is callback-based. These frameworks
are fairly obsessed with avoiding context switches, as a context switch
is a material percentage of a task's budgeted work time. I don't think
loom will get much adoption in this space as it exists today, and I do
not think this claim is an "overstatement" or hyperbolic.

Stackless coroutines work great for these types of frameworks. In
C++/Rust/Zig/Scala(macro)/Kotlin the compiler is hiding the callback,
so the required control still exists, and it moves the API from
callbacks to something that looks synchronous but isn't. I think this
is the main benefit of any type of fiber/coroutine for these types of
things: clean up the API. The cost over callbacks is small (0 to 10s
of nanos), and the event loop is a bit more complex to write, but the
resulting API is so much nicer to use. Again, I'm not advocating that
you move to a stackless model; just pointing out that it works well
for this use case.
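As a toy illustration of that API difference (this is not code from any of the frameworks named above; both methods and the class name are hypothetical), compare the callback shape to the synchronous-looking shape that a fiber/coroutine runtime can provide:

```java
import java.util.concurrent.CompletableFuture;

public class StyleDemo {
    // Callback style: control flow is inverted; the continuation lives in the lambda.
    static CompletableFuture<String> callbackStyle(CompletableFuture<Integer> io) {
        return io.thenApply(n -> "got " + n);
    }

    // Synchronous-looking style: the code reads top to bottom. On a plain
    // thread, join() blocks an OS thread; on a fiber/virtual thread, the
    // same call suspends only the lightweight task.
    static String directStyle(CompletableFuture<Integer> io) {
        int n = io.join();
        return "got " + n;
    }

    public static void main(String[] args) {
        System.out.println(callbackStyle(CompletableFuture.completedFuture(1)).join());
        System.out.println(directStyle(CompletableFuture.completedFuture(2)));
    }
}
```

Either shape can drive the same event loop underneath; the difference is purely where the continuation lives.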

The low-latency FinTech market is pretty niche, and I think you can
safely ignore it. I don't think JVM languages are used for the
majority of this market, but they are surprisingly well represented.

What I would be a little more concerned about, if I were you, is that
the FinTech model is not so dissimilar to netty's (except for the
compulsive obsession with avoiding context switches). Netty-based
systems are also typically deployed on custom JNI networking layers,
and the newer ones aren't BSD-socket based. Netty's API also offers
fairly fine-grained control over OS-thread usage (I don't know how
many users take advantage of it). Netty is the de facto scheduler for
the frameworks that use it. I'd think netty would have similar issues
with loom as FinTech does. Have you worked with them yet? Maybe you've
already arrived at a good solution for netty? I'm sure you're aware,
but netty underpins almost every back-end Java framework. I would
think that early netty adoption would be important to you? Unless you
consider loom an alternative to netty? These are all rhetorical
questions I would be asking myself if I were you. I'm not asking for,
or expecting, an answer to any of these.

OK, done.
Thanks for reading. Good luck.

-Jon



More information about the loom-dev mailing list