Reviving JEP 230: Microbenchmark Suite

Claes Redestad claes.redestad at oracle.com
Thu Sep 6 14:57:46 UTC 2018


Hi,

JEP 230: Microbenchmark Suite[1] was proposed during JDK 9 development, 
but was put on hold for various mundane reasons. I think the time has 
come to revive this effort, and I've volunteered to take over ownership 
of this JEP.

Some time after JEP 230 was temporarily abandoned, the 
jmh-jdk-microbenchmarks project[2] was conceived to ensure some of the 
preparatory work didn't go to waste. A set of previously closed-source 
microbenchmarks was open-sourced and contributed there, and artifacts 
from this project are actively being used to track performance. This 
situation is not ideal, however.
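To give a flavour of what these micros look like, here's a minimal JMH 
sketch of the kind of benchmark hosted there (the package, class and 
workload below are made up for illustration; real micros in the project 
follow the same shape):

  package org.openjdk.bench.java.lang;

  import java.util.concurrent.TimeUnit;

  import org.openjdk.jmh.annotations.Benchmark;
  import org.openjdk.jmh.annotations.BenchmarkMode;
  import org.openjdk.jmh.annotations.Mode;
  import org.openjdk.jmh.annotations.OutputTimeUnit;
  import org.openjdk.jmh.annotations.Scope;
  import org.openjdk.jmh.annotations.Setup;
  import org.openjdk.jmh.annotations.State;

  @BenchmarkMode(Mode.AverageTime)
  @OutputTimeUnit(TimeUnit.NANOSECONDS)
  @State(Scope.Thread)
  public class StringConcat {

      private String prefix;
      private int value;

      @Setup
      public void setup() {
          prefix = "id-";
          value = 42;
      }

      // Average time per concatenation; the JIT-compiled result is
      // what we want to track from build to build.
      @Benchmark
      public String concat() {
          return prefix + value;
      }
  }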

A side-effect of the more rapid release cadence is that the bulk of new 
feature development is being done in project repositories. This creates 
some very specific challenges when developing benchmarks out-of-tree, 
especially those that aim to track new and emerging APIs.

For starters, we're pushed toward setting up branches that mirror the 
layout of the various OpenJDK projects (valhalla, amber, ...), then we 
need to set up automated builds from each such branch, then have some 
automated means to match these artifacts with appropriate builds of the 
project-under-test, and so on. And how do we even deal with the case 
where the changes we really want to test are in javac? Nothing is 
impossible, of course, but the workarounds and added work are turning 
out to be costly.

By co-locating the microbenchmarks with the JDK sources, matching the 
JDK-under-test with an appropriate microbenchmark bundle would be 
trivial and automatic in most cases. And while one always needs to be 
wary of subtle changes creeping into benchmark bundles and the JDK 
between builds, this is something we already test for automatically as 
regressions are detected.
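As a rough sketch of what that matching amounts to in practice: JMH 
already lets a harness point a benchmark run at a specific JDK build, 
so the main thing co-location buys us is that the right bundle for a 
given build is produced from the same source tree. The paths and the 
selection regex below are hypothetical:

  import org.openjdk.jmh.runner.Runner;
  import org.openjdk.jmh.runner.RunnerException;
  import org.openjdk.jmh.runner.options.Options;
  import org.openjdk.jmh.runner.options.OptionsBuilder;

  public class RunMicros {
      public static void main(String[] args) throws RunnerException {
          Options opts = new OptionsBuilder()
                  .include("StringConcat.*")               // hypothetical benchmark selection
                  .jvm("/path/to/jdk-under-test/bin/java") // forked benchmark JVMs use this build
                  .forks(3)                                // several forks to smooth out run-to-run noise
                  .build();
          new Runner(opts).run();
      }
  }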

Also, we wouldn't really lose the ability to pick up an existing 
microbenchmark artifact when appropriate (a "golden" bundle, useful 
when comparing library and VM changes over a longer period of time). A 
standalone project can be considered a good enough fit for that case, 
so one alternative to moving all of jmh-jdk-microbenchmarks into the 
JDK would be to keep maintaining the standalone project for benchmarks 
that are considered mature and stable. My preference, though, is to 
make jmh-jdk-microbenchmarks redundant and then shut it down. I think 
most would typically build and keep a "golden" bundle of a specific 
version around for longer-term regression tests, so a separate 
standalone project of JDK micros doesn't make much sense.

Also of note is that one of the more controversial aspects of JEP 230 
was the question of where in the source tree to put the benchmarks. 
We've since consolidated the OpenJDK source repositories into a single 
one, so much of what was discussed back then no longer applies. Mainly, 
this simplifies the steps needed to integrate a microbenchmark suite 
into the JDK, and I've updated the JEP text accordingly.

Thanks!

/Claes

[1] http://openjdk.java.net/jeps/230
[2] http://openjdk.java.net/projects/code-tools/jmh-jdk-microbenchmarks/
