FR: summary stats to detect multi-modal behaviour of benchmarks

Nitsan Wakart nitsanw at yahoo.com
Mon Sep 8 07:53:16 UTC 2014


Hi,
In my work I come across benchmarks that have 'modes' of performance. For instance, a benchmark exercising code that suffers from false sharing typically has some probability of that false sharing actually manifesting in any given run. The run-to-run variance is very indicative of these sorts of issues when they happen, but the summary of all the runs put together hides this behaviour from me. I've hit similar issues with unstable compilation results and timing-sensitive benchmarks.
Currently I parse the output by hand/script to detect these anomalies and watch for large error indicators in the summary, but I was wondering whether we could make the summary statistics pluggable so such behaviours can be detected directly, or whether other people have better solutions to this problem.
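To illustrate the kind of check I mean (a rough sketch only; the class name and threshold are made up and this is not JMH API): given the per-fork scores scraped from the output, compare the variance of the per-fork means against the typical within-fork variance and flag a large ratio, which is what mode-switching between forks tends to produce.

import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of a run-to-run "mode" check: a large ratio of between-fork
 * variance to within-fork variance suggests the benchmark settles into
 * different modes per fork (false sharing hit or missed, different
 * compilation outcome, etc.). Illustrative only, not part of JMH.
 */
public class ModeDetector {

    static double mean(List<Double> xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.size();
    }

    /** Unbiased sample variance. */
    static double variance(List<Double> xs) {
        double m = mean(xs);
        double ss = 0;
        for (double x : xs) ss += (x - m) * (x - m);
        return ss / (xs.size() - 1);
    }

    /** Variance of per-fork means divided by the average within-fork variance. */
    static double betweenToWithinRatio(List<List<Double>> perForkScores) {
        List<Double> forkMeans = new ArrayList<>();
        double withinSum = 0;
        for (List<Double> fork : perForkScores) {
            forkMeans.add(mean(fork));
            withinSum += variance(fork);
        }
        double within = withinSum / perForkScores.size();
        return variance(forkMeans) / within;
    }

    public static void main(String[] args) {
        // Two forks around 100 ops/ms and two around 60: a suspect benchmark.
        List<List<Double>> runs = List.of(
                List.of(101.0, 99.5, 100.2, 100.8),
                List.of(100.4, 99.9, 101.1, 100.0),
                List.of(61.2, 59.8, 60.5, 60.1),
                List.of(60.7, 60.0, 59.4, 60.9));
        double ratio = betweenToWithinRatio(runs);
        System.out.printf("between/within variance ratio = %.1f%n", ratio);
        if (ratio > 10) { // threshold is arbitrary and would need tuning
            System.out.println("WARNING: per-fork means look multi-modal");
        }
    }
}

A pluggable summary statistic could compute something like this (or a proper bimodality test) over the raw per-iteration data instead of only reporting the pooled mean and error.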
Thanks,
Nitsan
