Project Loom VirtualThreads hang

Alex Otenko oleksandr.otenko at gmail.com
Wed Dec 28 05:14:53 UTC 2022


I am not a Loom developer, so I can't speak to what should be supported.

The RingBuffer as written there is not synchronized correctly and can lead
to hangs even if you assume the behaviour of Thread.yield is fair. I won't
go into detail, but the producers failing to synchronize between themselves
is enough on its own to cause hangs.
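
A minimal sketch of the kind of race I mean - a put() written as if for a
single producer but invoked by several producers at once (hypothetical
code, not the code from the repo):

    // Hypothetical single-producer-style put() called by many producers.
    class UnsafeRing<T> {
        final Object[] slots = new Object[1024];
        volatile long head, tail; // the consumer advances head

        void put(T item) {
            while (tail - head == slots.length)
                Thread.yield(); // spin until a slot frees up
            // Two producers can both pass the check above, read the same
            // tail, and store into the same slot: one item is silently
            // lost and the counters drift, so a consumer waiting on the
            // count can park forever no matter how fair Thread.yield is.
            slots[(int) (tail % slots.length)] = item;
            tail = tail + 1; // non-atomic read-modify-write on a volatile
        }
    }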


Alex

On Wed, 28 Dec 2022, 04:31 robert engels, <rengels at ix.netcom.com> wrote:

> OK. I created a stand-alone test case that demonstrates the problem. It is
> available at https://github.com/robaho/vthread_test
>
> If you run VThreadTest 8 8 1000000 (at least on my 4-core iMac),
> eventually the VThreadTest native thread will ‘spin’ attempting to unpark()
> a virtual thread (and have it read from the queue). If you use the debugger
> you see that the vthread it is attempting to unpark is already unparked - it
> has a run state=2 and parkPermit=true but it never runs - causing the
> VThreadTest thread to spin indefinitely.
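>
> In code form, the main thread’s handoff loop looks roughly like this (a
> sketch of the pattern, not the exact test code):
>
>     // Sketch: main thread trying to hand an item to the consumer vthread.
>     while (!queue.offer(item)) {
>         LockSupport.unpark(consumer); // consumer shows parkPermit=true...
>         // ...but it never gets mounted on a carrier, so this spins forever
>     }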
>
> The reason is that the 8 producers are “spinning” as well, trying to put
> into the main queue. This means that the ‘consumer’ vthread can never run
> to pull an item from the queue and allow the main thread to make progress.
>
> Adding a Thread.yield() to the spinning put() does not help.
>
> Adding a LockSupport.parkNanos(1) to the spinning put() allows it to work.
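>
> For concreteness, the producers’ put() looks roughly like this (a sketch
> of the pattern, not the exact test code):
>
>     // Sketch: each producer spins handing off to the main queue.
>     void put(T item) {
>         while (!queue.offer(item)) {
>             // Thread.yield() does not help here: the spinning producers
>             // are simply rescheduled and the consumer still starves.
>             LockSupport.parkNanos(1); // unmounts the carrier; this works
>         }
>     }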
>
> I think Thread.yield() is broken in this regard. I know that
> Thread.yield() is documented to be advisory only, but if there is no
> virtual thread preemption a lot of code is going to require extra effort
> and testing to port to virtual threads. Developers expect “fair” thread
> scheduling in the absence of features like real-time priorities, etc.
>
> If Thread.yield() is fixed, I also suggest that a thread calling
> LockSupport.unpark() perform an implicit Thread.yield() - which should be
> very efficient if the number of runnable threads is less than the carrier
> thread count. Almost all lock-free structures use LockSupport park/unpark
> as their underpinnings, and this would allow things to work without
> preemption.
> This program forces the issue, but a lot of lock-free structures rely on
> the program making “some progress”. In this case, the designer could
> “know” that it takes at most N nanos of spinning before a consumer will
> take the object - but if there are enough other spinning vthreads “in that
> moment” you will have a hang. This will lead to hard-to-diagnose timing
> bugs.
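>
> The pattern I mean is roughly this (a sketch of a typical park/unpark
> handoff, with a hypothetical queue - not code from the test):
>
>     // Consumer side: poll, then park until a producer signals.
>     T take() {
>         T item;
>         while ((item = queue.poll()) == null)
>             LockSupport.park(); // woken by the unpark() below
>         return item;
>     }
>
>     // Producer side:
>     void put(T item) {
>         queue.offer(item);
>         LockSupport.unpark(consumer); // an implicit Thread.yield() here
>         // would let the unparked vthread mount a carrier right away
>     }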
>
> For reference, using parkNanos(1) with native threads and 8 8 500000, it
> completes in 21 secs. Using virtual threads it completes in 7 secs!
>
> A proper Thread.yield() should provide even better performance.
>
> Note as well that if you remove the parkNanos(1) and use native threads,
> the program completes successfully - albeit the CPUs spin, spin, spin - so
> be careful. It takes 70 secs to run the above test.
>
> *** All of the above said, in the original testing that led me to open
> this issue, the carrier threads did not appear to be occupied - and still
> the unparked vthread never ran. I will continue to test using the original
> setup.
>
>
>
> On Dec 27, 2022, at 4:27 PM, robert engels <rengels at ix.netcom.com> wrote:
>
> Hi devs,
>
> First,
>
> Thanks for this amazing work!!! It literally solves the only remaining
> problem Java had.
>
> Sorry for the long email.
>
> I have been very excited to test-drive Project Loom in JDK19. I have
> extensive experience in highly concurrent systems/HFT/HPC, so I usually :)
> know what I am doing.
>
> For the easiest test, I took a highly threaded (connection-based) server
> system (a Java port of Go’s nats.io message broker) and converted the
> threads to virtual threads. The project (jnatsd) is available here
> <https://github.com/robaho/jnatsd>. The ‘master’ branch runs very well
> with excellent performance, but I thought switching to virtual threads
> might be able to improve things over using async IO, channels, etc. (I have
> a branch for this that works as well, but it is much more complex, and
> didn’t provide a huge performance benefit).
>
> There are two branches: ’simple_virtual_threads’ and ‘virtual_threads’.
>
> In the former, it is literally a 2-line change to enable the virtual
> threads, but it doesn’t work. I narrowed it down to the issue that
> LockSupport.unpark(thread) does not work consistently. At some point, the
> virtual thread is never scheduled again. I enabled the debug options and I
> see that the virtual thread is in:
>
> yield0:365, Continuation (jdk.internal.vm)
> yield:357, Continuation (jdk.internal.vm)
> yieldContinuation:370, VirtualThread (java.lang)
> park:499, VirtualThread (java.lang)
> parkVirtualThread:2606, System$2 (java.lang)
> park:54, VirtualThreads (jdk.internal.misc)
> park:369, LockSupport (java.util.concurrent.locks)
> run:88, Connection$ConnectionWriter (com.robaho.jnatsd)
> run:287, VirtualThread (java.lang)
> lambda$new$0:174, VirtualThread$VThreadContinuation (java.lang)
> run:-1, VirtualThread$VThreadContinuation$$Lambda$50/0x0000000801065670 (java.lang)
> enter0:327, Continuation (jdk.internal.vm)
> enter:320, Continuation (jdk.internal.vm)
>
> The instance state is:
>
> this = {VirtualThread$VThreadContinuation at 1775}
>  target = {VirtualThread$VThreadContinuation$lambda at 1777}
>   arg$1 = {VirtualThread at 1699}
>    scheduler = {ForkJoinPool at 1781}
>    cont = {VirtualThread$VThreadContinuation at 1775}
>    runContinuation = {VirtualThread$lambda at 1782}
>    state = 2
>    parkPermit = true
>    carrierThread = null
>    termination = null
>    eetop = 0
>    tid = 76
>    name = ""
>    interrupted = false
>    contextClassLoader = {ClassLoaders$AppClassLoader at 1784}
>    inheritedAccessControlContext = {AccessControlContext at 1785}
>    holder = null
>    threadLocals = null
>    inheritableThreadLocals = null
>    extentLocalBindings = null
>    interruptLock = {Object at 1786}
>    parkBlocker = null
>    nioBlocker = null
>    Thread.cont = null
>    uncaughtExceptionHandler = null
>    threadLocalRandomSeed = 0
>    threadLocalRandomProbe = 0
>    threadLocalRandomSecondarySeed = 0
>    container = {ThreadContainers$RootContainer$CountingRootContainer at 1787}
>    headStackableScopes = null
>   arg$2 = {Connection$ConnectionWriter at 1780}
>  scope = {ContinuationScope at 1776}
>  parent = null
>  child = null
>  tail = {StackChunk at 1778}
>  done = false
>  mounted = false
>  yieldInfo = null
>  preempted = false
>  extentLocalCache = null
> scope = {ContinuationScope at 1776}
> child = null
>
> As you see in the above, the parkPermit is true, but it never runs again.
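>
> The shape of the code that hangs is roughly this (a sketch of the
> ConnectionWriter pattern, not the actual jnatsd code):
>
>     import java.util.Queue;
>     import java.util.concurrent.ConcurrentLinkedQueue;
>     import java.util.concurrent.locks.LockSupport;
>
>     Queue<String> outbound = new ConcurrentLinkedQueue<>();
>     Thread writer = Thread.ofVirtual().start(() -> {
>         while (true) {
>             String msg;
>             while ((msg = outbound.poll()) == null)
>                 LockSupport.park(); // the stack trace above parks here
>             System.out.println(msg); // stand-in for the socket write
>         }
>     });
>
>     outbound.offer("msg");
>     LockSupport.unpark(writer); // sets parkPermit=true, yet the vthread
>                                 // is sometimes never scheduled again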
>
> In the latter branch, ‘virtual_threads’, I changed the lock-free
> RingBuffer class to use simple synchronized primitives - under the
> assumption that with virtual threads lock/wait/notify should be highly
> efficient. It worked, but it was nearly 2x slower than the original
> thread-based lock-free implementation. So, I added a ’spin loop’ in the
> RingBuffer methods. This code is completely optional and can be no-op’d,
> and I was able to increase performance to above that of the thread-based
> version.
>
> I dug a little deeper, and decided that using Thread.yield() should be
> even more efficient than LockSupport.parkNanos(1) - the problem is that
> changing that simple line brings back the hangs. I think there is very
> little semantic difference between LockSupport.parkNanos(1) and
> Thread.yield(), but the latter should avoid any timer scheduling. The
> RingBuffer code there is fairly trivial.
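>
> The structure is roughly this (a sketch with a hypothetical tryPut() and
> SPINS constant, not the exact class from the branch):
>
>     // Optional spin phase in front of the blocking slow path.
>     public void put(T item) throws InterruptedException {
>         for (int i = 0; i < SPINS; i++) {   // SPINS=0 no-ops the phase
>             if (tryPut(item)) return;       // lock-free fast path
>             LockSupport.parkNanos(1);       // completes fine
>             // Thread.yield();              // semantically close, hangs
>         }
>         synchronized (this) {               // fallback: wait/notify
>             while (!tryPut(item))
>                 wait();
>             notifyAll();                    // wake a waiting consumer
>         }
>     }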
>
> So, before I dig deeper, is this a known issue that Thread.yield() does
> not work as expected? Is it a known issue that LockSupport.unpark() fails
> to reschedule threads?
>
> Is it possible that the virtual thread implementation does not honor the
> Java memory model properly?
>
> Any ideas how to further diagnose?
>