Re: re:remove corePoolSize in ForkJoinPool<init> and hope to improve FJP's algorithm
Alan Bateman
alan.bateman at oracle.com
Thu Feb 20 10:26:50 UTC 2025
On 20/02/2025 01:40, 唐佳未(佳未) wrote:
> :
>
> java.util.concurrent.ForkJoinPool@678ad349[Running, parallelism = 8, size = 8, active = 7, running = 0, steals = 3931, tasks = 0, submissions = 1426]
> java.util.concurrent.ForkJoinPool@678ad349[Running, parallelism = 8, size = 8, active = 8, running = 0, steals = 5908, tasks = 0, submissions = 48]
> java.util.concurrent.ForkJoinPool@678ad349[Running, parallelism = 8, size = 8, active = 8, running = 0, steals = 7731, tasks = 0, submissions = 236]
> java.util.concurrent.ForkJoinPool@678ad349[Running, parallelism = 8, size = 8, active = 2, running = 0, steals = 10370, tasks = 0, submissions = 0]
Thanks for the jcmd output. It shows that there are no queued tasks in
the worker queues (tasks = 0) but many tasks are in the external
submission queues. Tasks for virtual threads are pushed to an external
submission queue when a virtual thread is initially started, unparked by
a platform thread, unblocked by another thread exiting a monitor that
the virtual thread was blocked on, or awoken after sleep/timed-park.
Your first mail mentions using a ThreadPoolExecutor with a
SynchronousQueue, so I will guess there is some hand-off from a platform
thread to a virtual thread that results in the task for the virtual
thread being pushed to an external queue.
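For illustration, a minimal sketch of the kind of hand-off I mean (the
class and names below are invented, not taken from your code). The put()
from the platform thread unparks the waiting virtual thread, so the
virtual thread's task is submitted to the scheduler through an external
submission queue:

import java.util.concurrent.SynchronousQueue;

// Hypothetical sketch of a platform-to-virtual-thread hand-off.
public class HandOffSketch {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<Runnable> handOff = new SynchronousQueue<>();

        // The virtual thread parks in take() until a producer arrives.
        Thread vthread = Thread.ofVirtual().start(() -> {
            try {
                Runnable work = handOff.take();
                work.run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // A platform thread hands off a task; put() unparks the virtual
        // thread, so its task is pushed from outside the pool.
        Thread.ofPlatform().start(() -> {
            try {
                handOff.put(() -> System.out.println("ran on " + Thread.currentThread()));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        vthread.join();
    }
}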
Can you tell us a bit about the "run function"? I can't tell from the
mails so far whether this function is mostly compute-bound or whether
these virtual threads block regularly, allowing their carriers to be
released to do other work. One of the mails mentions "tasks switched
out", but I wasn't sure how to read that. Even without knowing this, you
are correct that the scheduling is not fair.
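To make the distinction I'm asking about concrete, something along these
lines (hypothetical code, not taken from your application):

import java.time.Duration;

// Hypothetical examples of the two kinds of "run function".
public class RunFunctionKinds {

    // Compute-bound: never blocks, so the carrier thread is not released
    // until the loop completes.
    static void computeBound() {
        long acc = 0;
        for (long i = 0; i < 1_000_000_000L; i++) {
            acc += i;
        }
        System.out.println(acc);
    }

    // Blocks regularly: the carrier is released at each blocking point so
    // other virtual threads can run on it.
    static void blocksRegularly() {
        try {
            for (int i = 0; i < 100; i++) {
                Thread.sleep(Duration.ofMillis(10));
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = Thread.ofVirtual().start(RunFunctionKinds::computeBound);
        Thread t2 = Thread.ofVirtual().start(RunFunctionKinds::blocksRegularly);
        t1.join();
        t2.join();
    }
}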
-Alan