Question about using virtual thread

Attila Kelemen attila.kelemen85 at gmail.com
Tue Jun 27 20:38:28 UTC 2023


Did a lot more testing, and here are my findings:

I have checked the following 4 DB pools: DBCP2, Hikari, C3P0 and VIBUR. All
of them seem to be virtual thread friendly, though this is very hard to
measure: even if they were not, they are simply too fast for pinning to show
up. So, I mostly base this on an inspection of their source code, which
relies neither on synchronized blocks nor on native calls.

As for DB / JDBC drivers, it varies a lot more. For the testing setup, I
started 4 times as many parallel actions as there are carrier threads, and
wherever I write "sleep" I always mean a 60 ms sleep. So, when running the
jobs concurrently, then - without carrier thread pinning - the benchmark
should measure 60 ms (+ overhead), while complete pinning would serialize
the work into four waves and make it 240 ms (+ overhead).
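
To make the expected numbers concrete, here is a minimal sketch of the timing idea. This is not the actual harness from the repository; the class and method names are made up, and a plain Thread.sleep stands in for the DB-side SLEEP query:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class PinningBenchSketch {
    // Run `tasks` virtual threads that each sleep `sleepMs` milliseconds,
    // and return the elapsed wall-clock time in milliseconds.
    static long runMillis(int tasks, long sleepMs) {
        long start = System.nanoTime();
        try (var exec = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, tasks).forEach(i -> exec.submit(() -> {
                try {
                    // Stand-in for the DB-side SLEEP query.
                    Thread.sleep(Duration.ofMillis(sleepMs));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }));
        } // try-with-resources close() waits for all submitted tasks
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        int tasks = 4 * Runtime.getRuntime().availableProcessors();
        System.out.println("elapsed ~" + runMillis(tasks, 60) + " ms");
    }
}
```

With no pinning, all virtual threads sleep concurrently and the elapsed time stays close to 60 ms; if every task pinned its carrier, the 4x oversubscription would serialize the sleeps into four waves of roughly 60 ms each.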

## H2 (com.h2database:h2:2.1.214)

It is not virtual thread friendly, since it is full of synchronized blocks.
To demonstrate the problem empirically, I created a variant of H2 with a SLEEP
function, and indeed the benchmark measured 240+ ms. However, I have also
created a variant where I replaced every synchronized in H2 with Java 5
locks, and the measurement reported the expected 60+ ms.
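
For reference, the transformation amounts to swapping intrinsic locks for java.util.concurrent ones. This is a generic sketch with made-up class names, not actual H2 code; the point is that (as of JDK 20) a virtual thread blocking inside synchronized pins its carrier, while blocking on a ReentrantLock lets it unmount:

```java
import java.util.concurrent.locks.ReentrantLock;

class SessionBefore {
    // Intrinsic lock: a virtual thread that blocks while holding (or
    // waiting for) this monitor pins its carrier thread.
    synchronized Object query(String sql) {
        return execute(sql);
    }
    Object execute(String sql) { return sql; } // placeholder for real work
}

class SessionAfter {
    // java.util.concurrent lock: a virtual thread blocking here can
    // unmount, freeing the carrier for other virtual threads.
    private final ReentrantLock lock = new ReentrantLock();

    Object query(String sql) {
        lock.lock();
        try {
            return execute(sql);
        } finally {
            lock.unlock();
        }
    }
    Object execute(String sql) { return sql; } // placeholder for real work
}
```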

I have reported this issue to the devs:
https://github.com/h2database/h2database/issues/3824


## HSQLDB (org.hsqldb:hsqldb:2.7.2)

The same as with H2, but I didn't repeat the experiment of replacing all
the synchronized blocks with locks.

It seems the development for the fix is already on its way:
https://sourceforge.net/p/hsqldb/discussion/73673/thread/e003a3a566/


## MariaDB JDBC driver (org.mariadb.jdbc:mariadb-java-client:3.1.4)

Seems virtual thread friendly.


## PostgreSQL JDBC driver (org.postgresql:postgresql:42.6.0)

The newest version (42.6.0) seems to be virtual thread friendly, as
advertised. I have also checked an older version (42.4.3) from before the
fix, and indeed, that one completely pins the carrier thread.


## Derby (org.apache.derby:derby:10.16.1.1)

This one is a bit weird, because for the setup described above it completes
the benchmark in about 220 ms, which is less than 240 ms, and I don't know
what to make of it: if it pins the thread, then how could it partially
unpin during the sleep? If it doesn't pin, then 220 ms is awfully slow, yet
with a normal query there is no such obscene overhead.

So, it doesn't seem to be virtual thread friendly, but I'm not sure.
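
For ambiguous cases like this, one diagnostic (not part of the benchmark above; the jar name is a placeholder for your own test program) is to let the JVM report pinning directly:

```shell
# Print a stack trace whenever a virtual thread parks while pinned to its
# carrier (e.g. inside a synchronized block or a native call).
# "full" dumps the whole stack; "short" prints only the pinning frames.
java -Djdk.tracePinnedThreads=full -jar my-benchmark.jar
```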


## MsSQL JDBC driver (com.microsoft.sqlserver:mssql-jdbc:12.2.0.jre11)

Seems virtual thread friendly, though there is noticeably more overhead
here than with the other drivers, so maybe there is some small amount of
pinning somewhere?


## Oracle

Haven't checked it, because it is rather inconvenient to install, but maybe
someone wants to? :) That said, I remember that many years back, while
chasing a nasty bug in the driver, I had to step into the ojdbc jar and did
quite a bit of looking around. If memory serves me right, ojdbc is
unsynchronized and uses only Java code, so it should be safe.


## Bonus: log4j / log4j2

I haven't measured this fully, but I noticed that log4j2 is not virtual
thread friendly (it holds an intrinsic lock while writing to a file), which
is rather awkward given how common logging is (and log4j2 is quite widely
used). I have checked logback as well, and that seems to be using Java 5
locks, so luckily we can still rely on Ceki Gülcü :)


If anyone wants to add some additional tests, then of course that is
welcome. The repository is at <https://github.com/kelemen/loom-db-test>,
and should build out-of-the-box (assuming you have JDK 20 installed in a
well-known location). To reproduce the above, the most convenient way is to
run the following command:

`./jmh.sh
--testedDb="H2.SLEEP,H2.NOSYNC.SLEEP,HSQL.SLEEP,POSTGRES.SLEEP,POSTGRES.OLD.SLEEP,DERBY.SLEEP,MARIA.SLEEP,MSSQL.SLEEP"
--forkType="LIMITED_EXECUTOR,VIRTUAL_THREADS"`

Attila