Busy Polling

Viktor Klang viktor.klang at oracle.com
Mon Mar 6 13:27:50 UTC 2023


Thanks for the background, Francesco.

In that case I think Alan's suggestion is probably the right way to go—providing the number of threads to the default scheduler at startup to reduce the risk of starvation.
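For reference, a sketch of what that could look like at launch (the property names are the ones Alan mentions below; the value 9 is illustrative, i.e. Carl's 8 workers plus the control thread, and the jar name is just the usual JMH uber-jar):

    java -Djdk.virtualThreadScheduler.parallelism=9 \
         -Djdk.virtualThreadScheduler.maxPoolSize=9 \
         -jar benchmarks.jar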
________________________________
From: Francesco Nigro <nigro.fra at gmail.com>
Sent: Monday, 6 March 2023 11:23
To: Viktor Klang <viktor.klang at oracle.com>; Viktor Klang <viktor.klang at gmail.com>
Cc: Alan Bateman <alan.bateman at oracle.com>; Carl M <java at rkive.org>; loom-dev at openjdk.org <loom-dev at openjdk.org>
Subject: Re: Busy Polling

Hi @Viktor Klang<mailto:viktor.klang at gmail.com>

The purpose probably is to get the benchmark method to run in a virtual thread and be able to measure "parking"/wake-up costs, thread-local lookup and a few other behaviours where the running thread context makes a difference.
Conversely, a thread dispatch to the specialized FJ pool handling virtual threads is required, which can create noise that affects the results.
My 2c (given that I've used it recently for a similar purpose).
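For concreteness, a minimal sketch of such an executor (Executors.newThreadPerTaskExecutor and Thread.ofVirtual() are available since JDK 19 as a preview; the class name and thread-name prefix are just illustrative, and the wiring into JMH's custom-executor hook is left out):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Runs each submitted JMH worker task on its own virtual thread,
    // so the benchmark method body executes in a virtual-thread context.
    public final class VirtualThreadBenchmarkExecutor {
        public static ExecutorService create() {
            return Executors.newThreadPerTaskExecutor(
                    Thread.ofVirtual().name("jmh-vt-worker-", 0).factory());
        }
    }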

On Mon, 6 Mar 2023 at 11:11, Viktor Klang <viktor.klang at oracle.com<mailto:viktor.klang at oracle.com>> wrote:
Just for my understanding—what is the purpose of running JMH on top of VirtualThread?
________________________________
From: loom-dev <loom-dev-retn at openjdk.org<mailto:loom-dev-retn at openjdk.org>> on behalf of Alan Bateman <Alan.Bateman at oracle.com<mailto:Alan.Bateman at oracle.com>>
Sent: Monday, 6 March 2023 10:08
To: Carl M <java at rkive.org<mailto:java at rkive.org>>; loom-dev at openjdk.org<mailto:loom-dev at openjdk.org> <loom-dev at openjdk.org<mailto:loom-dev at openjdk.org>>
Subject: Re: Busy Polling

On 06/03/2023 07:12, Carl M wrote:
> Hi Again,
>
> I've been experimenting with Virtual threads some more and encountered a sharp corner when running with JMH.  I am currently using JMH with a custom executor to run on Virtual threads, but ran into an issue when using a larger number of threads.
>
> JMH works by spawning a small fixed number of threads which loop until a volatile boolean has been set.  Once set, the threads continue on to the next part of the benchmark.  However, when the number of Group threads exceeds the number of Carrier Threads (which is typically the number of processors), the program will hang.  For example, my machine has 8 processors, so I run 8 Group threads for my benchmark.  JMH spawns 9 threads, 8 to do the benchmark, and 1 as a control.  Since the first 8 threads get stuck doing work, the control thread never gets scheduled to set the volatile boolean, and the benchmark never proceeds.
>
> I tried calling Thread.yield(), which does work, but spoils the benchmark numbers.  As far as I can tell there isn't a way to control the parallelism of the Carrier thread pool (always FJP?).  If it were possible to expand this pool by one thread, the benchmark could proceed (at the cost of the number of worker buckets not being a nice power of 2).  Lastly, it superficially seems like ManagedBlocker might fit in here somehow, but I can't see how it would adapt exactly.
>
> I'm wondering if y'all can provide some guidance on what to do here?

The system properties for overriding the parallelism and the max pool
size of the carrier threads are jdk.virtualThreadScheduler.parallelism
and jdk.virtualThreadScheduler.maxPoolSize. They are documented in the
"Implementation Note" section of the Thread class description and in the JEP.

Off-hand, I don't know if there is a good recipe for configuring JMH +
JDK to run a benchmark with a virtual thread. Such a setup might be
useful for benchmarking APIs or code that blocks a lot.  I haven't seen
JMH configured with a custom executor, so I assume it might be replacing
platform threads with virtual threads, and some/all of the first 8 are
pinned, so there is starvation.  Maybe others who have explored this
issue can reply; it might be worth checking the jmh-dev archives too in
case this topic has been discussed already.

-Alan