Another virtual threads migration story: ReentrantReadWriteLock
David Lloyd
david.lloyd at redhat.com
Wed Jan 29 19:59:45 UTC 2025
I'm not sure this is a true statement. The synchronizer base class being
used (`ReentrantReadWriteLock.Sync`) appears to be package-private; it
should be possible to change it to extend
`java.util.concurrent.locks.AbstractQueuedLongSynchronizer` instead of AQS,
and use a larger (perhaps up to 31-bit) counter without impacting anything
that I can easily find. I don't see any place where widening these counters
would necessarily impact subclasses.
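
To make the idea concrete, here is a rough, non-reentrant sketch (not the
actual Sync, which also handles reentrancy, per-thread hold counts and
fairness) of how the wider 64-bit state could be split so that readers get
32 bits instead of 16:

    import java.util.concurrent.locks.AbstractQueuedLongSynchronizer;

    // Rough sketch only: non-reentrant, non-fair, no per-thread hold counts.
    // The point is the state layout: the low 32 bits hold the reader count,
    // so the ceiling becomes ~4 billion shared holds instead of 65,535.
    final class WideSync extends AbstractQueuedLongSynchronizer {
        private static final int  SHARED_SHIFT   = 32;
        private static final long SHARED_UNIT    = 1L;
        private static final long MAX_READERS    = (1L << SHARED_SHIFT) - 1;
        private static final long EXCLUSIVE_UNIT = 1L << SHARED_SHIFT;

        private static long readers(long state) { return state & MAX_READERS; }
        private static long writers(long state) { return state >>> SHARED_SHIFT; }

        @Override
        protected boolean tryAcquire(long unused) {
            // A writer may only enter when no readers and no writer are present.
            return getState() == 0L && compareAndSetState(0L, EXCLUSIVE_UNIT);
        }

        @Override
        protected boolean tryRelease(long unused) {
            setState(0L);
            return true;
        }

        @Override
        protected long tryAcquireShared(long unused) {
            for (;;) {
                long s = getState();
                if (writers(s) != 0L)
                    return -1;                                      // a writer holds the lock
                if (readers(s) == MAX_READERS)
                    throw new Error("Maximum lock count exceeded"); // now effectively unreachable
                if (compareAndSetState(s, s + SHARED_UNIT))
                    return 1;
            }
        }

        @Override
        protected boolean tryReleaseShared(long unused) {
            for (;;) {
                long s = getState();
                long next = s - SHARED_UNIT;
                if (compareAndSetState(s, next))
                    return next == 0L;                              // last reader out wakes waiters
            }
        }
    }

Acquisition would still go through acquireShared(1)/releaseShared(1) and
acquire(1)/release(1) behind the usual read/write Lock views, much as the
existing Sync is driven today.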
On Wed, Jan 29, 2025 at 11:25 AM robert engels <rengels at ix.netcom.com>
wrote:
> Nice catch! I am not sure you are going to get a resolution on this other
> than using your own implementation.
>
> The AbstractQueuedSynchronizer needs to be changed to use a long to hold
> the state - which will break subclasses - so it probably won’t happen.
>
> > On Jan 29, 2025, at 10:58 AM, Matthew Swift <matthew.swift at gmail.com>
> wrote:
> >
> > Hi folks,
> >
> > As you may remember from a few months ago, we converted our LDAP
> Directory server/proxy[1] over to using virtual threads. It's been going
> pretty well and we're lucky enough to be able to leverage JDK21 as we have
> full control over most (not all) of the code-base, which puts us in the
> enviable position where we can convert code to avoid thread pinning issues.
> That being said, we regularly test using the latest JDK24 EA builds as well.
> >
> > We recently hit what I feel is quite a major limitation in
> ReentrantReadWriteLock, which was somewhat hidden before in the old world
> of large-but-not-super-large platform thread pools:
> >
> > Error: Maximum lock count exceeded at
>     ReentrantReadWriteLock.java:535,494
>     AbstractQueuedSynchronizer.java:1078
>     ReentrantReadWriteLock.java:738
>     ...
> >
> > I'm sure that we're not alone in making extensive use of RW locks for
> synchronizing configuration changes to runtime components: the write lock
> ensures that regular processing is paused while the configuration change is
> applied. The component in this case could be something that talks to a
> remote microservice over HTTP, a logging backend, etc. Here there was no
> configuration change at all - just a few hundred milliseconds of latency in
> the remote service for some reason (e.g. a GC pause?), which caused many
> virtual threads to block inside the component while holding the read lock.
> The RW lock then fails with the above error once there are 64K concurrent
> threads holding the read lock.
> >
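
The ceiling described above is easy to reproduce in isolation; a minimal
standalone sketch (hypothetical, not the server code) looks roughly like
this:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Park ~70,000 virtual threads while each holds the read lock; acquisitions
    // past the 65,535th throw Error("Maximum lock count exceeded").
    public class ReadLockCeiling {
        public static void main(String[] args) throws Exception {
            ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
            CountDownLatch slowRemoteCall = new CountDownLatch(1);

            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 70_000; i++) {
                    executor.submit(() -> {
                        try {
                            rwLock.readLock().lock();
                        } catch (Error e) {
                            System.err.println(e);      // "Maximum lock count exceeded"
                            return;
                        }
                        try {
                            slowRemoteCall.await();     // simulate the stalled remote service
                        } catch (InterruptedException ignored) {
                        } finally {
                            rwLock.readLock().unlock();
                        }
                    });
                }
                Thread.sleep(5_000);                    // crude: let the readers pile up
                slowRemoteCall.countDown();             // then release them all
            }
        }
    }
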
> > Given that scaling IO to millions of concurrent IO-bound tasks was one
> of the key motivations for vthreads, it seems a bit surprising to me that a
> basic concurrency building block of many applications is constrained to 64K
> concurrent accesses. Are you aware of this limitation and its implications?
> A workaround for now is to go hunting for RW locks in our application and
> use alternative approaches OR, where the lock is in a third-party library
> (e.g. logging / telemetry), wrap the library calls in a Semaphore limited
> to <64K permits. It seems a bit unsatisfactory to me. What do you think?
> Are there plans to implement a RW lock based on AbstractQueuedLongSynchronizer?
> >
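
The Semaphore-based wrapping mentioned above could look roughly like this
(names hypothetical):

    import java.util.concurrent.Semaphore;

    // Hypothetical wrapper around a third-party call that takes a read lock
    // internally: cap concurrent entries well below the 65,535 read-hold ceiling.
    final class BoundedTelemetryCalls {
        private static final Semaphore PERMITS = new Semaphore(60_000);

        static void run(Runnable libraryCall) throws InterruptedException {
            PERMITS.acquire();
            try {
                libraryCall.run();   // e.g. the logging/telemetry call that holds the read lock
            } finally {
                PERMITS.release();
            }
        }
    }
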
> > Kind regards,
> > Matt
> >
> > [1] for those unfamiliar with the tech, think of it as a distributed
> database for storing identities
>
>
--
- DML • he/him