Reviving JEP 230: Microbenchmark Suite
Martin Buchholz
martinrb at google.com
Mon Sep 10 04:47:07 UTC 2018
I am in agreement with most of what Claes says.
My personal experience comes from maintaining the benchmarks in
test/jdk/java/util/Collection. They are hacky home-brew
microbenchmarks, but they get the job done. They double as actual
correctness tests, which have found at least one bug not found by any
other test. I have never joined the jmh world, waiting for jmh to
become available for the jdk repo.
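To illustrate the style (this is a hypothetical sketch in the spirit of those tests, not code copied from test/jdk/java/util/Collection): a home-brew microbenchmark can time a hot loop with System.nanoTime and, in the same pass, assert the result it computes - so a regression in correctness fails the run just as a regression in speed shows up in the numbers.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a home-brew microbenchmark that doubles as a
// correctness test, in the spirit of test/jdk/java/util/Collection.
public class DequeBench {
    static final int SIZE = 100_000;

    // Times one addLast/pollFirst pass and checks the result.
    static long timedPass(Deque<Integer> deque) {
        long start = System.nanoTime();
        for (int i = 0; i < SIZE; i++) deque.addLast(i);
        long sum = 0;
        for (int i = 0; i < SIZE; i++) sum += deque.pollFirst();
        long elapsed = System.nanoTime() - start;
        // Correctness check: the benchmark run verifies behavior too.
        long expected = (long) SIZE * (SIZE - 1) / 2;
        if (sum != expected)
            throw new AssertionError("sum=" + sum + " expected=" + expected);
        if (!deque.isEmpty())
            throw new AssertionError("deque not empty after pass");
        return elapsed;
    }

    public static void main(String[] args) {
        Deque<Integer> deque = new ArrayDeque<>();
        // Warm up so the JIT compiles the hot loops before measuring.
        for (int i = 0; i < 5; i++) timedPass(deque);
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 10; i++) best = Math.min(best, timedPass(deque));
        System.out.println("ArrayDeque best ns/pass: " + best);
        System.out.println("OK");
    }
}
```

Hacky, yes - no forking, no statistics, vulnerable to JIT surprises that jmh is designed to defeat - but it lives in the test tree and runs like any other test.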
Colocation is a huge social advantage - I'd like the developer's
workflow to be the same as when writing a test - simply creating the
file in the right directory as part of the same changeset should be
enough. I'd like benchmarking to be as similar as possible to
testing - so much so that benchmarking should perhaps be a special
mode supported by jtreg, as e.g. junit is today. The existence of
jmh-jdk-microbenchmarks as a separate repo is a barrier.
I would hope that jmh's API is stable by now, so that benchmarks
should rarely need porting to new versions of jmh.
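For what it's worth, the annotation surface a colocated benchmark would depend on is small. A minimal jmh benchmark looks like the sketch below - sketch only, since it requires the jmh-core dependency and so is not a standalone program; the class name and workload are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

// Sketch only: requires the jmh-core dependency to compile.
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class ArrayDequeBenchmark {
    ArrayDeque<Integer> deque;

    @Setup(Level.Iteration)
    public void setup() {
        deque = new ArrayDeque<>();
        for (int i = 0; i < 1000; i++) deque.addLast(i);
    }

    @Benchmark
    public void addPoll(Blackhole bh) {
        deque.addLast(42);
        bh.consume(deque.pollFirst()); // Blackhole defeats dead-code elimination
    }
}
```

If jmh's annotations stay this stable, a benchmark checked in alongside its changeset should survive framework upgrades with little or no porting.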
Version skew between jdks and microbenchmarks is a problem, but no
different from jtreg tests. In both cases you are likely to be using
some new API just added to the jdk (in the same changeset!).
Designing a good benchmark framework is very hard, especially when the
target is adoption by ordinary programmers.
More information about the jdk-dev mailing list