remove corePoolSize in ForkJoinPool<init> and hope to improve FJP's algorithm

Alan Bateman alan.bateman at oracle.com
Tue Feb 18 16:23:22 UTC 2025


On 18/02/2025 09:18, 唐佳未(佳未) wrote:
> :
>
> From the data, we can see that the tasks which yielded earliest are 
> only awakened and resumed at the very end, resulting in long latencies.
> Besides, I’m wondering if we could reduce latency issues under high 
> pressure by increasing the number of threads available for executing 
> tasks. It's a commonly used method when combining 
> *ThreadPoolExecutor* with a *SynchronousQueue*.
>

In FJP, each worker thread owns a local queue. A worker thread executes 
the tasks in its local queue before scanning other queues for work.

There are unowned submission queues that are used for tasks submitted 
by (mostly) platform threads. If a platform thread unparks a virtual 
thread (for example, when a platform thread in the TPE uses a SQ to 
rendezvous with a virtual thread, as in your scenario), then the task 
to continue the virtual thread will be pushed to one of these unowned 
submission queues. The same happens on a timeout: the task to continue 
the virtual thread is pushed to an unowned submission queue.
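The rendezvous described above can be sketched as follows (a minimal 
reproduction of the scenario, not code from the original report; the 
class name and the message value are mine; requires JDK 21+):

```java
import java.util.concurrent.SynchronousQueue;

public class Rendezvous {
    public static void main(String[] args) throws Exception {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        // The virtual thread parks in take(); on unpark, the task to
        // continue it is pushed to one of the scheduler's queues.
        Thread vthread = Thread.ofVirtual().start(() -> {
            try {
                String msg = queue.take(); // parks the virtual thread
                System.out.println("virtual thread resumed with: " + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // The unpark is performed by a platform thread (as with a TPE
        // worker handing off via a SynchronousQueue), so the continuation
        // task lands in an unowned submission queue rather than a worker's
        // local queue.
        Thread platform = Thread.ofPlatform().start(() -> {
            try {
                queue.put("hello");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        vthread.join();
        platform.join();
    }
}
```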

My reading of your mail is that a virtual thread is calling Thread.sleep 
and you are measuring the time until it continues. In the "high CPU" 
case it may be that FJP workers only execute tasks in their local queues, 
so they don't scan the unowned submission queues very often; is this 
what you are seeing? With the JDK 24 EA builds it would be useful to 
execute `jcmd <pid> Thread.vthread_scheduler` a few times to get some 
stats, as I think this would help the discussion.
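A minimal way to measure that wakeup path (my sketch of the measurement 
as I understand it, not the original benchmark; the class name and the 
10 ms sleep are illustrative; requires JDK 21+):

```java
public class SleepLatency {
    public static void main(String[] args) throws Exception {
        long[] elapsedNanos = new long[1];

        Thread vt = Thread.ofVirtual().start(() -> {
            try {
                long start = System.nanoTime();
                // On timeout, the task to continue this virtual thread is
                // pushed to an unowned submission queue; the measured time
                // includes how long it sits there before a worker picks it up.
                Thread.sleep(10);
                elapsedNanos[0] = System.nanoTime() - start;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join();

        // sleep(10) never returns early, so elapsed time is at least 10 ms;
        // anything well beyond that is scheduling latency.
        System.out.println("slept at least 10ms: "
                + (elapsedNanos[0] >= 10_000_000L));
    }
}
```

Running many such virtual threads concurrently under CPU pressure, and 
recording the per-thread elapsed times, would show the long-tail 
latencies the original mail describes.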

-Alan

More information about the loom-dev mailing list