Thread scheduling imbalance / starvation

Ron Pressler ron.pressler at oracle.com
Sun Apr 16 14:40:00 UTC 2023


Hi.

What you’re seeing is the result of the virtual thread scheduler not employing time sharing. We have yet to identify workloads, in particular the server workloads that virtual threads are best suited for, that would benefit from it. Once we find such workloads, we’ll be able to utilise time sharing.

In your example, the scheduler can keep all of its workers busy, without ever blocking on the semaphore, by running just some of the threads; the remaining threads never get a turn.
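
To make that concrete, here is a minimal sketch that shrinks the scheduler to a single carrier thread via the documented jdk.virtualThreadScheduler.parallelism property (the class name and the two-second wait are arbitrary): the spinning virtual thread never parks, so it never gives up its carrier, and the second virtual thread only runs once the first one stops.

    // Run with:
    //   java --enable-preview --source 20 -Djdk.virtualThreadScheduler.parallelism=1 Starvation.java
    // (on JDK 20 virtual threads are a preview feature; the property sizes the
    // default scheduler and is the assumed knob here)
    public class Starvation
    {
        public static void main(String[] args)
                throws InterruptedException
        {
            Thread spinner = Thread.ofVirtual().start(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    // busy work: never parks, so the only carrier thread stays occupied
                }
            });

            Thread other = Thread.ofVirtual().start(() -> System.out.println("other thread ran"));

            other.join(2_000);
            System.out.println("other thread still waiting after 2s: " + other.isAlive());

            spinner.interrupt(); // stop the spinner; the carrier frees up and "other" finally runs
            other.join();
        }
    }

With one carrier, the timed join should expire and isAlive() should report true; only after the interrupt does the second thread get scheduled.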

— Ron

On 16 Apr 2023, at 06:30, Martin Traverso <mtraverso at gmail.com> wrote:

Hi,

First of all, I'd like to thank you for this feature! We've been eagerly awaiting it in the Trino project and we believe it will help us dramatically simplify many parts of the codebase.

I've been playing around with virtual threads and I've noticed some odd behaviors. Given the following code:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Semaphore;
    import java.util.concurrent.atomic.AtomicLong;

    public class Test
    {
        public static void main(String[] args)
                throws InterruptedException
        {
            int processors = Runtime.getRuntime().availableProcessors();

            Semaphore semaphore = new Semaphore(processors, true);
            List<AtomicLong> counters = new ArrayList<>();
            for (int i = 0; i < 2 * processors; i++) {
                AtomicLong counter = new AtomicLong();
                counters.add(counter);
                Thread.ofVirtual().start(() -> {
                    while (true) {
                        semaphore.acquireUninterruptibly();
                        counter.incrementAndGet();
                        semaphore.release();
                    }
                });
            }

            Thread.sleep(10_000);

            counters.stream()
                    .map(AtomicLong::get)
                    .sorted()
                    .forEach(System.out::println);
        }
    }

I would expect the counts to be approximately equal, but I'm getting the following result:

    0
    0
    0
    0
    0
    0
    0
    0
    0
    0
    2435341
    2448274
    2466202
    2497258
    2539030
    2572744
    2592871
    2611658
    2651392
    2657913

If I change the number of permits for the semaphore to a value smaller than the number of processors, then the results come out as expected (that one-line change is shown after the next snippet). It also works as expected if I change the core loop to make a call to Thread.yield() on the first iteration:

    while (true) {
        semaphore.acquireUninterruptibly();
        if (counter.incrementAndGet() == 1) {
            Thread.yield();
        }
        semaphore.release();
    }
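
For reference, the permit change mentioned above is a one-line difference against the original program (a sketch; any permit count below the number of processors behaves the same way):

    // fewer permits than carrier threads: some acquiring threads must park,
    // which unmounts them and lets the waiting virtual threads run
    Semaphore semaphore = new Semaphore(processors - 1, true);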


If I place a call to Thread.yield() after the semaphore.release() call, then all the threads make some progress, but the values are still unbalanced:

    while (true) {
        semaphore.acquireUninterruptibly();
        counter.incrementAndGet();
        semaphore.release();
        Thread.yield();
    }

    196257
    196257
    196258
    196260
    196260
    196260
    196261
    196261
    401737
    401740
    401744
    401757
    1644985
    1651301
    1677466
    1683009
    1694577
    1702710
    1710970
    1843037

I'm running the following version of the JDK on a MacBook Pro with an M1 Max CPU:

openjdk version "20" 2023-03-21
OpenJDK Runtime Environment Zulu20.28+85-CA (build 20+36)
OpenJDK 64-Bit Server VM Zulu20.28+85-CA (build 20+36, mixed mode, sharing)

I'm not sure if this is a bug or if I'm misunderstanding how virtual threads are supposed to work. Any help or clarification would be greatly appreciated!

Thanks!
- Martin

----
Martin Traverso
Co-founder @ Trino Software Foundation, Co-creator of Presto and Trino (https://trino.io)



