JMH and continuous integration
Nitsan Wakart
nitsanw at yahoo.com
Fri Nov 11 09:43:19 UTC 2016
I've been using JMH in CI for a while now. There's nothing about JMH that precludes it from CI, but the notion of running benchmarks and tracking their results is quite foreign to a lot of QA organizations/environments.

JMH produces data; to get usable data, consider the following (no doubt a partial list):

- A lab quiet enough for the sort of measurement you have in mind. Goes without saying perhaps, but it happens. Do you need real machines? CPUs set to a fixed frequency? Benchmark and other processes isolated from each other? The less noise in your lab, the better.
- Correct fork/warmup/iteration counts, so you measure what you think you measure (see the sketch after this list). Verify them occasionally as the code changes. It is tempting to shorten test times, and you may run up against the finite capacity/budget of your lab.
- Treat nanosecond-level benchmarks with suspicion. Measuring at this level is not always constructive and may not be a good fit for most people.
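To make the fork/warmup/iteration point concrete, here is a minimal sketch of pinning the run configuration in annotations so every CI run measures the same thing. The benchmark itself and all the counts are illustrative, not a recommendation; validate them against your own workload and lab:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @State(Scope.Thread)
    @Fork(3)                            // multiple forks expose run-to-run variance
    @Warmup(iterations = 10, time = 1)  // seconds per iteration; re-verify as code changes
    @Measurement(iterations = 10, time = 1)
    public class CopyBench {
        int[] src;

        @Setup
        public void setup() {
            src = new int[1024];
        }

        @Benchmark
        public int[] copy() {
            return src.clone(); // return the result so it isn't dead-code eliminated
        }
    }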
The data itself is no longer the old pass/fail type and requires a different approach:

- Define regression. This is a quantitative rather than boolean judgement. Some variance is expected; choose your margins wisely. Do you compare to the last build? The all-time best? A golden standard?
- Capture the score as well as the error. A regression in the error can indicate a newly introduced iteration-to-iteration or run-to-run variance problem.
- Differentiate inter-version and intra-version regressions.
- Deal with regression backlogs and tracking. Etc., etc.

There's no popular open solution in this area; lots of people have brewed their own, some more than once. Good luck.
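P.S. For illustration only, here is roughly what the core of such a home-grown regression check might look like. This is not a JMH API; the margin logic, parameter names, and numbers below are assumptions you would choose yourself, and the baseline (last build, all-time best, golden standard) is up to the caller:

    public final class RegressionCheck {

        // Lower-is-better metric (e.g. average time per op). Flag a regression
        // only when the new score is worse than the baseline by more than the
        // two reported errors plus a chosen relative tolerance.
        static boolean isRegression(double baselineScore, double baselineError,
                                    double currentScore, double currentError,
                                    double tolerancePct) {
            double margin = baselineError + currentError
                    + baselineScore * (tolerancePct / 100.0);
            return currentScore > baselineScore + margin;
        }

        public static void main(String[] args) {
            // Made-up numbers: baseline 100us +/- 2us, current 112us +/- 3us, 5% tolerance.
            System.out.println(isRegression(100.0, 2.0, 112.0, 3.0, 5.0)); // prints true
        }
    }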
On Friday, November 11, 2016 2:13 AM, Leonardo Gomes <leonardo.f.gomes at gmail.com> wrote:
Hi,
I'm looking for feedback on how people use JMH as part of their development
process.
I've played a bit with https://github.com/blackboard/jmh-jenkins and am
wondering whether people on this discussion list use any sort of automation
(Jenkins or other) to detect performance regressions in their software, on a
pull-request basis.
I've seen JMH used in different open-source projects, but the
benchmarks seem to have been introduced mostly to compare different
implementations (when introducing/validating changes) and not as part of
CI.
Any feedback would be appreciated.
Thank you,
Leonardo.