RFR: 8040140 System.nanoTime() is slow and non-monotonic on OS X
staffan.larsen at oracle.com
Mon Apr 14 14:55:20 UTC 2014
The current implementation of System.nanoTime() on OS X uses gettimeofday(), which has only microsecond resolution and no guarantee of being monotonic.
The proposal is to use the system call mach_absolute_time() instead of gettimeofday() and to add a safeguard that guarantees the time is monotonic (similar to what we already have on Solaris).
mach_absolute_time() is essentially a direct call to RDTSC, but with a conversion factor applied to compensate for system sleeps and frequency changes. The call returns a tick count that can be converted to nanoseconds using the numerator and denominator from mach_timebase_info(). Calls to mach_absolute_time() do not enter the kernel and are very fast. The resulting time has nanosecond precision and as good accuracy as one can get.
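As a rough sketch of the conversion described above: mach_timebase_info() supplies a numer/denom fraction, and nanoseconds = ticks * numer / denom. The helper below shows only that arithmetic; the timebase values used in the comment are hypothetical examples, not values from any real machine.

```c
#include <stdint.h>

/* Convert raw mach_absolute_time() ticks to nanoseconds.
 * On OS X, numer and denom come from mach_timebase_info();
 * e.g. a hypothetical timebase of 125/3 means each tick is
 * 125/3 ns. On many x86 Macs the timebase is simply 1/1. */
uint64_t ticks_to_nanos(uint64_t ticks, uint32_t numer, uint32_t denom) {
    return ticks * (uint64_t)numer / denom;
}
```

Note that the multiplication is done before the division to avoid losing precision when the fraction is not 1/1.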
Since the value from RDTSC can drift between CPUs, we implement a safeguard to make sure we never return a value lower than a previously returned one. This adds some overhead to nanoTime() but guards us against possible bugs in the OS. For users who are willing to trust the OS and need the fastest possible calls to System.nanoTime(), we add a flag to disable this safeguard: -XX:+AssumeMonotonicOSTimers.
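The safeguard amounts to clamping each reading against the highest value seen so far, published via a compare-and-swap so concurrent readers stay consistent. The sketch below uses C11 atomics for illustration; the actual HotSpot code uses its own Atomic primitives, and the function name here is made up.

```c
#include <stdint.h>
#include <stdatomic.h>

/* Highest timestamp handed out so far. */
static _Atomic uint64_t max_seen = 0;

/* Return a never-decreasing time, even if the raw reading
 * (e.g. from a drifting TSC) goes backwards. Illustrative
 * sketch only, not the actual HotSpot implementation. */
uint64_t monotonic_time(uint64_t raw) {
    uint64_t prev = atomic_load(&max_seen);
    while (raw > prev) {
        /* Try to publish the newer reading; on failure, prev is
         * reloaded with the value another thread just stored. */
        if (atomic_compare_exchange_weak(&max_seen, &prev, raw))
            return raw;
    }
    /* raw <= prev: clamp to the highest value already returned. */
    return prev;
}
```

This is the overhead the -XX:+AssumeMonotonicOSTimers flag removes: with the flag set, the raw OS reading would be returned directly.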
This change also adds support for AssumeMonotonicOSTimers to the Solaris code (see JDK-6864866) and removes the use of Atomic::load() for a 64-bit value, which was only necessary on 32-bit platforms (which we don't support any longer).
This change has been proposed earlier at: http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-May/009496.html
More information about the hotspot-runtime-dev mailing list