: Re: effectiveness of jdk.virtualThreadScheduler.maxPoolSize
Ron Pressler
ron.pressler at oracle.com
Mon Jan 9 20:14:14 UTC 2023
On 9 Jan 2023, at 19:39, Arnaud Masson <arnaud.masson at fr.ibm.com> wrote:
The average latency of different request types doesn’t matter much, I think.
What matters (for “interactive” web apps at least) is rather the request latency relative to its own intrinsic duration.
Example:
Adding 1s to a 10s request is not much worse than adding 10ms to a 100ms request.
Adding 1s to a 100ms request is a more serious problem.
Maybe, but since it’s unclear that time-sharing can actually provide any significant help, we can't take any action until we know more about what happens in real servers, why it occurs, and at what frequency. Remember that time sharing will only have any significant impact in specific situations, whose importance cannot be determined without data. So either you give virtual threads a try in your server workload, and if you encounter a problem your information would help us and be much appreciated, or you don’t, and wait until someone else can give us the insight we need to tackle any problems that arise.
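For context, the scheduler knob named in this thread’s subject line is an ordinary system property passed to the java launcher. A minimal sketch (MyServer is a placeholder for your own main class; the values shown are illustrative, not recommendations):

```shell
# Cap the virtual-thread scheduler's ForkJoinPool.
# Defaults: parallelism = number of available processors, maxPoolSize = 256.
java -Djdk.virtualThreadScheduler.parallelism=8 \
     -Djdk.virtualThreadScheduler.maxPoolSize=16 \
     MyServer
```

Note that maxPoolSize bounds the number of carrier (platform) threads the scheduler may create, e.g. to compensate for pinned threads; it does not time-slice virtual threads, which is the point under discussion here.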
Of course, it is perfectly reasonable to decide that you’d rather have others try things first and have them help us if they run into problems, but you need to accept that in order to justify prioritising some work, we must refer to actual problems that people encounter. You’ll notice that all JEPs and even smaller work items are always justified by some problem that someone has actually encountered. Even if you think that you would prioritise things differently, you can appreciate that this way adds, at worst, only a little latency to the process: the more likely a hypothetical problem is (and so the more important), the sooner someone will run into it and we’ll be able to start working on fixing it once they report it.
I’m not sure I understand the last part, “some fixed set of background processing operations”. Is it about using a non-Loom/classic executor when time sharing is more needed? (That was considered useless when it was suggested earlier as a workaround, so I’m a bit confused.)
It’s about a case where it’s not that incoming requests can vary widely in the resources they consume, but where there’s some known set of CPU-heavy tasks that need to run in the background. If that work is directly related to the workload on the server, it’s unlikely that scheduling can help (as scheduling does not help scalability, which is only a function of the resources an average request consumes and the average rate of requests), but if it’s bounded in some way, then that set of tasks can be run on specialised thread-pools as they are today. It’s neither a “workaround” nor a solution to the same problem.
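A minimal sketch of that pattern, under the assumptions above: ordinary, mostly-blocking request handlers run on virtual threads, while the known, bounded set of CPU-heavy background tasks goes to a small fixed pool of platform threads (class and task names here are hypothetical, for illustration only):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MixedWorkloads {
    public static void main(String[] args) {
        // Virtual threads for the many blocking request handlers;
        // a small fixed platform-thread pool for the known CPU-heavy tasks,
        // so they cannot crowd out the scheduler's carrier threads.
        try (ExecutorService requests = Executors.newVirtualThreadPerTaskExecutor();
             ExecutorService cpuBound = Executors.newFixedThreadPool(2)) {

            requests.submit(() ->
                System.out.println("handling request on " + Thread.currentThread()));

            cpuBound.submit(() -> {
                long sum = 0;
                for (long i = 0; i < 1_000_000; i++) sum += i; // stand-in for CPU-heavy work
                System.out.println("background sum = " + sum);
            });
        } // ExecutorService.close() (Java 19+) waits for submitted tasks to finish
    }
}
```

This mirrors what the paragraph describes: the split is not a scheduling workaround but a structural separation of a bounded set of CPU-bound tasks from the scalable blocking workload.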
Sounds like a good workaround, but it adds some complexity for the developer, who must distinguish in advance which requests go to the Loom executor and which go to a classic executor. (Something Go developers don’t have to worry about now.)
Such tasks are run on specialised thread-pools today, too (as fairness with thread pools is worse than with virtual threads anyway), so there’s no added complexity whatsoever. And until you identify some actual problematic workloads in a reasonable program, you can’t possibly know what Go developers need not worry about compared to Java developers (as they might well need to have only a fixed set of such tasks, too). It is certainly possible that someone else knows something we don’t, but knowing that they do is of little help; we need to know what they know. Once you do find a relevant workload, please report it.
— Ron
More information about the loom-dev mailing list