<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<br>
<div class="moz-cite-prefix">On 2025-01-29 20:19, Dr Heinz M. Kabutz
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:279a810d-3100-42f4-b3e2-5e6ccd939e0b@javaspecialists.eu">
    <p>Once the write lock has been requested, no new read locks will
      be issued (since Java 6; in Java 5 there was an issue with
      starvation), so it could take a bit of time, depending on how
      long each of the operations is, but eventually it should do the
      write.</p>
<p>I'd investigate using StampedLock with tryOptimisticRead() and
then writeLock(). The idioms are a bit more complicated, but
this will hopefully work.<br>
</p>
<pre class="moz-signature" cols="72">Regards
Heinz
--
Dr Heinz M. Kabutz (PhD CompSci)
Author of "The Java™ Specialists' Newsletter" - <a
class="moz-txt-link-abbreviated"
href="http://www.javaspecialists.eu" moz-do-not-send="true">www.javaspecialists.eu</a>
Java Champion - <a class="moz-txt-link-abbreviated"
href="http://www.javachampions.org" moz-do-not-send="true">www.javachampions.org</a>
JavaOne Rock Star Speaker
Tel: +30 69 75 595 262
Skype: kabutz
</pre>
<div class="moz-cite-prefix">On 2025-01-29 20:12, robert engels
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:B2895CD5-7DBD-4302-86A2-1A6220985E56@ix.netcom.com">
      Given that, it still seems the writer (the configuration changer, I
      assume) could potentially be stalled for a long time.
<div><br>
</div>
      <div>In my experience, copy on write is ideal for configuration
        change management. It doesn’t work for things like db
        transactions, but I am not sure you would ever have millions
        of connections to a db; more likely a request queue would be
        used by the clients, so it wouldn’t be an issue.</div>
<div><br>
</div>
<div>Interestingly, Go doesn’t even have a reentrant lock in
their stdlib.<br>
<div><br>
<blockquote type="cite">
<div>On Jan 29, 2025, at 11:34 AM, Matthew Swift <a
class="moz-txt-link-rfc2396E"
href="mailto:matthew.swift@gmail.com"
moz-do-not-send="true"><matthew.swift@gmail.com></a>
wrote:</div>
<br class="Apple-interchange-newline">
<div>
<p dir="ltr">Just to be clear, the threads are not
blocked on the write lock here. They have all
successfully acquired the read lock. </p>
<p dir="ltr">But I agree, copy on write is an
alternative approach when available, otherwise it's
semaphores all the way down...</p>
<br>
<div class="gmail_quote gmail_quote_container">
<div dir="ltr" class="gmail_attr">On Wed 29 Jan 2025,
18:30 robert engels, <<a
href="mailto:rengels@ix.netcom.com"
moz-do-not-send="true"
class="moz-txt-link-freetext">rengels@ix.netcom.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">But
tbh, blocking that many threads seems doesn’t seem
efficient or performant. It is isn’t cheap. I would
think that a copy on write for the configuration
change would be a better solution.<br>
<br>
> On Jan 29, 2025, at 11:23 AM, robert engels
<<a href="mailto:rengels@ix.netcom.com"
target="_blank" rel="noreferrer"
moz-do-not-send="true"
class="moz-txt-link-freetext">rengels@ix.netcom.com</a>>
wrote:<br>
> <br>
> Nice catch! I am not sure you are going to get
a resolution on this other than using your own
implementation. <br>
> <br>
> The AbstractQueuedSynchronizer needs to be
changed to use a long to hold the state - which will
break subclasses - so it probably won’t happen.<br>
> <br>
>> On Jan 29, 2025, at 10:58 AM, Matthew Swift
<<a href="mailto:matthew.swift@gmail.com"
target="_blank" rel="noreferrer"
moz-do-not-send="true"
class="moz-txt-link-freetext">matthew.swift@gmail.com</a>>
wrote:<br>
>> <br>
>> Hi folks,<br>
>> <br>
>> As you may remember from a few months ago,
we converted our LDAP Directory server/proxy[1] over
to using virtual threads. It's been going pretty
well and we're lucky enough to be able to leverage
JDK21 as we have full control over most (not all) of
the code-base, which puts us in the enviable
position where we can convert code to avoid thread
pinning issues. That being said, we regularly test
using the latest JDK24 EA builds as well.<br>
>> <br>
>> We recently hit what I feel is quite a
major limitation in ReentrantReadWriteLock, which
was somewhat hidden before in the old world of
large-but-not-super-large platform thread pools:<br>
>> <br>
>> Error: Maximum lock count exceeded at
ReentrantReadWriteLock.java:535,494
AbstractQueuedSynchronizer.java:1078
ReentrantReadWriteLock.java:738 ...<br>
>> <br>
>> I'm sure that we're not alone in making
extensive use of RW locks for synchronizing
configuration changes to runtime components: the
write lock ensures that regular processing is paused
while the configuration change is applied. The
component in this case could be something that talks
to a remote microservice over HTTP, a logging
backend, etc. In this case, there is no
            configuration change - just a few hundred milliseconds of
            latency in the remote service for some reason (e.g.
GC pause?), which has caused many virtual threads to
get blocked inside the component while holding the
read lock. The RW lock then fails with the above
error once there are 64K concurrent threads holding
the read lock.<br>
>> <br>
>> Given that scaling IO to millions of
concurrent IO bound tasks was one of the key
motivations for vthreads, it seems a bit surprising
to me that a basic concurrency building block of
many applications is constrained to 64K concurrent
accesses. Are you aware of this limitation and its
implications? A workaround now is to go hunting for
            RW locks in our application and use alternative
approaches OR, where the lock is in a third party
library (e.g. logging / telemetry), wrapping the
library calls in a Semaphore limited to <64K
permits. It seems a bit unsatisfactory to me. What
do you think? Are there plans to implement a RW lock
based on AbstractQueuedLongSynchronizer?<br>
>> <br>
>> Kind regards,<br>
>> Matt<br>
>> <br>
            >> [1] for those unfamiliar with the tech, think
            of it as a distributed database for storing
            identities<br>
> <br>
<br>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</blockquote>
    <p>Following on from my suggestion to consider using StampedLock
      instead of ReentrantReadWriteLock: a word of warning that we
      cannot use it as a drop-in replacement via new
      StampedLock().asReadWriteLock(), because it does not have the
      writer starvation protection that RRWL has. In your example, with
      millions of readers and one occasional writer, the write lock
      might never become available. Here is a small example:</p>
    <pre>import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.*;

// Based on email discussion in loom-dev on 2025-01-29 entitled:
// Another virtual threads migration story: ReentrantReadWriteLock
public class StampedLockRWLockStarvation {
    public static void main(String... args) throws InterruptedException {
        var rwlocks = List.of(new ReentrantReadWriteLock(),
                new StampedLock().asReadWriteLock());

        for (ReadWriteLock rwlock : rwlocks) {
            if (checkForWriterStarvation(rwlock) > 1_000_000_000) {
                throw new AssertionError("Writer starvation occurred!!!");
            } else {
                System.out.println("No writer starvation");
            }
        }
    }

    private static long checkForWriterStarvation(ReadWriteLock rwlock)
            throws InterruptedException {
        System.out.println("Checking " + rwlock.getClass());
        try (var mainPool = Executors.newVirtualThreadPerTaskExecutor()) {
            mainPool.submit(() -> {
                System.out.println("Going to start readers ...");
                try (var pool = Executors.newVirtualThreadPerTaskExecutor()) {
                    for (int i = 0; i < 10; i++) {
                        int readerNumber = i;
                        pool.submit(() -> {
                            rwlock.readLock().lock();
                            try {
                                System.out.println("Reader " + readerNumber + " is reading ...");
                                Thread.sleep(1000);
                            } catch (InterruptedException e) {
                                throw new CancellationException("interrupted");
                            } finally {
                                rwlock.readLock().unlock();
                            }
                            System.out.println("Reader " + readerNumber + " is done");
                        });
                        try {
                            Thread.sleep(500);
                        } catch (InterruptedException e) {
                            throw new RuntimeException(e);
                        }
                    }
                }
            });
            Thread.sleep(1800);
            System.out.println("Going to try to write now ...");
            long timeToAcquireWriteLock = System.nanoTime();
            rwlock.writeLock().lock();
            try {
                timeToAcquireWriteLock = System.nanoTime() - timeToAcquireWriteLock;
                System.out.printf("time to acquire write lock = %dms%n",
                        (timeToAcquireWriteLock / 1_000_000));
                System.out.println("Writer is writing ...");
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                throw new CancellationException("interrupted");
            } finally {
                rwlock.writeLock().unlock();
            }
            System.out.println("Writer is done");
            return timeToAcquireWriteLock;
        }
    }
}</pre>
    <p>With ReentrantReadWriteLock, once we ask for the write lock, no
      more read locks are issued until the write request has been
      serviced.<br>
    </p>
    <p>Using the correct idiom for StampedLock with tryOptimisticRead()
      should avoid this starvation, but we do have to be careful: an
      optimistic read might observe an in-progress write, so we must
      validate the stamp afterwards and fall back to a real read lock
      if the validation fails.<br>
    </p>
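    <p>A minimal sketch of that idiom (the class and its fields below
      are just placeholders for whatever state the component guards,
      not code from the discussion above):</p>
    <pre>import java.util.concurrent.locks.StampedLock;

// Sketch of the StampedLock optimistic-read idiom: readers first try an
// optimistic read, which does not register as a reader at all, and only
// fall back to a full read lock if a write happened in between.
public class OptimisticReadSketch {
    private final StampedLock sl = new StampedLock();
    private double x, y; // placeholder state guarded by sl

    double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();   // non-blocking "read"
        double currentX = x, currentY = y;     // may observe an in-progress write
        if (!sl.validate(stamp)) {             // a write intervened: retry pessimistically
            stamp = sl.readLock();
            try {
                currentX = x;
                currentY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.hypot(currentX, currentY);
    }

    void move(double deltaX, double deltaY) {
        long stamp = sl.writeLock();           // exclusive
        try {
            x += deltaX;
            y += deltaY;
        } finally {
            sl.unlockWrite(stamp);
        }
    }
}</pre>
    <p>The important part is the validate(stamp) check: if it fails,
      the values read optimistically must be discarded and re-read
      under the real read lock.<br>
    </p>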
    <p>StampedLock would not have a practical limit on the number of
      concurrent read locks, AFAIK.<br>
    </p>
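    <p>To illustrate the difference in limits, here is a rough sketch
      (not from the discussion above) that simulates "too many
      concurrent readers" by taking many read holds from a single
      thread, since the 64K limit in ReentrantReadWriteLock is on the
      total number of read holds, not on the number of threads:</p>
    <pre>import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.concurrent.locks.StampedLock;

// Rough sketch comparing the read-hold limits of the two locks.
public class ReadHoldLimitDemo {
    public static void main(String[] args) {
        var rrwl = new ReentrantReadWriteLock();
        try {
            for (int i = 0; i < 100_000; i++) {
                rrwl.readLock().lock();   // the 65_536th acquisition throws
            }                             // Error("Maximum lock count exceeded")
        } catch (Error e) {
            System.out.println("ReentrantReadWriteLock: " + e);
        }

        var sl = new StampedLock();
        long[] stamps = new long[100_000];
        for (int i = 0; i < stamps.length; i++) {
            stamps[i] = sl.readLock();    // reader count overflows internally,
        }                                 // so there is no hard 64K cap
        System.out.println("StampedLock: acquired " + stamps.length + " read locks");
        for (long stamp : stamps) {
            sl.unlockRead(stamp);
        }
    }
}</pre>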
</body>
</html>