Synchronized methods and virtual threads

Ron Pressler ron.pressler at oracle.com
Mon Jun 3 23:16:10 UTC 2024



> On 3 Jun 2024, at 21:23, Robert Engels <robaho at icloud.com> wrote:
> 
> Sure. People used synchronized to create “lock objects” long before java.util.concurrent - routinely for higher level debugging.
> 
> You have to also remember a bit of history regarding when Java was created and the relative performance of the machines then. Having an intrinsic lock that the JVM could optimize was highly beneficial - especially when there was no optimizing JIT. 

The reason for native monitors wasn’t performance (they predate the JIT compiler) but what, at the time, was believed to be a natural programming model. It was that early ubiquitous use of synchronized (Vector, Hashtable, StringBuffer, and quite a few old IO methods) that necessitated special optimisation, not the other way around. Over the years, native monitors have caused quite a bit of pain to the VM’s implementors — to this day — and the ultimate goal is to replace them with a Java implementation (i.e. effectively redirect their implementation to j.u.c).

> There is a LOT of code in the JVM/JDK that uses synchronized… Implementing j.u.c requires compiler/intrinsics/atomics/LockSupport - basically making these objects special anyway. I don’t know of any Java developer that had a problem using synchronized. That being said, if j.u.c can achieve the performance of monitors then the only reason to support it is backward compatibility.

Replacing synchronized with j.u.c locks may be unnecessary busy-work. In some cases it’s easier to rely on some native VM functionality in JDK classes because they’re loaded early in the process’s initialisation (JDK classes are sometimes under more severe restrictions than user classes for that reason), and in some cases the synchronisation object is public and so part of the API, so changing it could cause binary incompatibilities. But we certainly don’t consider native monitors superior. We would encourage new code to use j.u.c locks (that’s where current and future enhancements will be focused) unless there’s a very good reason not to.
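[As an aside for readers following along: a minimal sketch of what such a migration looks like. The Counter class and its methods are hypothetical, invented for illustration; the point is simply that a synchronized method maps to a ReentrantLock acquired in a try/finally, which, at the time of this thread, also avoided pinning a virtual thread to its carrier while blocked.]

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical example: a counter guarded by a j.u.c lock instead of
// the intrinsic monitor, i.e. instead of `synchronized void increment()`.
class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count = 0;

    void increment() {
        lock.lock();           // explicit acquire replaces the monitor entry
        try {
            count++;
        } finally {
            lock.unlock();     // always release in finally, even on exception
        }
    }

    long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```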

> 
> Tbh, I am not a fan of the loosey-goosey release every 6 months change anything in Java the way the wind blows mentality. The only aspects that allowed Java to succeed were backwards compatibility and cross platform (and then the JIT for performance) - my opinion of course.

The way backward compatibility works in Java — and it works very effectively — is that anything that is part of the specification (or non-SE JDK APIs) enjoys a very high degree of backward compatibility (and removals undergo the deprecation process), while anything that is not explicitly specified must not be relied upon and is subject to change at any time. That is the only way users can enjoy both backward compatibility and enhancements.

In particular, the specific optimisations or techniques employed by the JVM can and do change drastically over time. There are macro- and micro-benchmarks that should not regress on average, and performance over a wide range of programs should generally improve over time, but code should absolutely not rely on the JVM optimising things tomorrow the same way it does today. Writing code that deliberately relies on unspecified behaviour, i.e. behaviour that is not now, nor ever has been, subject to any promise of backward compatibility — whether it’s a specific optimisation or some functionality — means accepting the responsibility to maintain and modify that code as the unspecified behaviour changes (just as we strive to do with such code in the JDK).

— Ron
