loom ea based jdk18 feedback

Ron Pressler ron.pressler at oracle.com
Fri Nov 19 20:14:15 UTC 2021


Hi.

A Jewish mother gives her son two ties for his birthday. When she sees him wearing one of them to dinner that night, she says, “Hmm, I see you don’t like the other one."

When we need to do multiple things, we usually do one first. Whenever that happens, some people invariably ask why we didn’t do the other one first. Of course, we could hold off on releasing until both are done, but that would mean no one gets what they want any sooner.

In this case the choice is simple. The feature that addresses finer-grained control over virtual-thread scheduling is very important, which is exactly why we’re working on it. There’s no need to doubt whether we’ve considered the points you raise; the code clearly shows that we have. But however popular this advanced feature is, it still helps a *smaller* number of people than the number who can benefit from virtual threads without it. Therefore, we can either do the thing that helps more people first, or help no one until we can also help the smaller, though still very important, group.

The former option dominates the latter so clearly that I have no doubt no one would have chosen differently, which makes *me* wonder: do the people who respond with “but do my thing first” actually want us, the stewards of Java, to do what’s best for the Java ecosystem at large, or do they want us to do what helps *them*? But that’s just a rhetorical question I’d ask myself if I were you.

— Ron

> On 19 Nov 2021, at 19:40, Jon Ross <jonross at gmail.com> wrote:
> 
> On Fri, Nov 19, 2021 at 3:04 AM Alan Bateman <Alan.Bateman at oracle.com> wrote:
>> 
>> On 18/11/2021 21:13, Jon Ross wrote:
>> If you are doing your own implementation of blocking I/O outside of the
>> JDK, then park/unpark should be sufficient. Unpark will queue the virtual
>> thread to the scheduler so that it continues on one of the scheduler's
>> threads. If you really want to restrict the scheduler to one thread, then run
>> with -Djdk.defaultScheduler.maxPoolSize=1. You are in very advanced
>> territory if you want fine-grained control over the OS threads; this isn't a
>> priority for this project right now, but we will come back to it.
> 
> Yes, having control over OS threads is a requirement for me. So is
> controlling how/when virtual threads run. I will come back to loom
> another time.
> 
> The rest of this email is entirely for your benefit because you are
> actively soliciting feedback. You are free to do with this what you
> will, including completely ignoring it. I am not (as some others in
> this thread have claimed) arguing, or advocating that you do anything
> different. This is entirely an offer to help you understand a
> customer's POV of loom for their specific use case. I am not asking
> for anything at all, not even for you to care about my use case.
> 
> I guess the TL;DR is that I don't think this is very "advanced territory".
> 
> I come from a FinTech background. Most of the frameworks in the Java
> low-latency FinTech space (LMAX, the OpenHFT stuff, Real Logic/SBE,
> Nasdaq, etc.) use some custom alternative to NIO, from a simple JNI
> epoll implementation to more exotic APIs that are not BSD-socket
> based. They can fall back to NIO for local testing, but are rarely
> deployed using NIO. The event-loop I/O API is the only I/O used.
> The "tasks" are short-lived, and always return to the same event loop
> to wait on the next thing. Everything is callback-based. These
> frameworks are fairly obsessed with avoiding context switches, as a
> context switch is a material percentage of a task's budgeted work
> time. I don't think loom will get much adoption in this space as it
> exists today. I do not think this claim is an "overstatement" or hyperbolic.
> Stackless coroutines work great for these types of frameworks. In
> C++/Rust/Zig/Scala (macros)/Kotlin, the compiler hides the callback,
> so the required control still exists, and it moves the API from
> callbacks to something that looks synchronous but isn't. I think this
> is the main benefit of any type of fiber/coroutine for these types of
> things: it cleans up the API. The cost over callbacks is small (zero
> to tens of nanos), and the event loop is a bit more complex to write,
> but the resulting API is so much nicer to use. Again, I'm not
> advocating that you move to a stackless model; just pointing out that
> it works well for this use case.
> The low-latency FinTech market is pretty niche, and I think you can
> safely ignore it. I don't think JVM languages are used for the
> majority of this market, but they are surprisingly well represented.
> 
> What I would be a little more concerned about, if I were you, is that
> the FinTech model is not so dissimilar to netty's (except for the
> compulsive obsession with avoiding context switches). Netty
> deployments are also typically on custom JNI networking layers, and
> the newer ones aren't BSD-socket based. Netty's API also offers
> fairly fine-grained control over OS-thread usage (I don't know how
> many users use it). Netty is the de facto scheduler for the
> frameworks that use it. I'd think netty would have issues with loom
> similar to FinTech's. Have you worked with them yet?
> Maybe you've already arrived at a good solution for netty? I'm sure
> you're aware, but netty underpins almost every back-end Java
> framework. I would think that early netty adoption would be important
> to you? Unless you consider loom an alternative to netty? These are
> all rhetorical questions I would be asking myself if I were you. I'm
> not asking for, or expecting, an answer to any of these.
> 
> ok. done.
> Thanks for reading. good luck.
> 
> -Jon
> 
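
The park/unpark handoff Alan describes above can be sketched roughly as follows. This is a minimal illustration, not JDK code: the `ParkDemo` class and `runOnce` helper are invented names, and it assumes a JDK with virtual threads (an EA/preview build, or JDK 21+). The parked virtual thread stands in for one blocked in a custom I/O layer; the unpark stands in for the I/O layer signaling readiness, which re-queues the virtual thread onto its scheduler.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    // Parks a virtual thread as a stand-in for a custom blocking-I/O wait,
    // then unparks it from outside the JDK's I/O stack to resume it.
    static String runOnce() throws InterruptedException {
        var result = new ArrayBlockingQueue<String>(1);
        Thread vt = Thread.ofVirtual().start(() -> {
            LockSupport.park();        // block this virtual thread
            result.add("resumed");     // runs after unpark re-schedules it
        });
        // unpark() is safe even if vt has not parked yet: it grants a
        // permit, so the park() above returns immediately in that case.
        LockSupport.unpark(vt);
        vt.join();
        return result.take();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce());
    }
}
```

Note that unlike a condition variable, an unpark delivered before the park is not lost, so the sketch has no wakeup race; a real I/O layer would also have to decide which carrier/OS thread the scheduler resumes the virtual thread on, which is exactly the control being discussed in this thread.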



More information about the loom-dev mailing list