how to randomize test in JMH

Nitsan Wakart nitsanw at yahoo.com
Sun Apr 10 16:51:10 UTC 2016


Having the loop in place offers the compiler 'unfair' optimisation opportunities (loop unrolling, hoisting, etc.; see the JMH samples on the risks of loops in benchmarks). I tend to allocate a range of data to work on and loop through it, processing one element per invocation. The variance between data points should average out. If the work under measurement is small, you might want to use a data set whose size is a power of 2, so that you can avoid '%' and use '&' instead for the index wrap-around. I would also add the data-point selection method as a benchmark on its own, to give you some idea of its overhead.
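To make the '%' vs. '&' point concrete, here is a minimal plain-Java sketch (without the JMH annotations or dependency, so it stands alone; the class and field names are illustrative, not from JMH). For a power-of-2 size, masking with (size - 1) wraps the index exactly like modulo does, but avoids the comparatively expensive division on the hot path. In a real benchmark the array and index would live in a @State(Scope.Thread) object and the masked lookup would be the body of a @Benchmark method.

```java
// Illustrative sketch: index wrap-around by mask vs. modulo.
// Requires size to be a power of 2, so that (size - 1) is an all-ones mask.
class PowerOfTwoSelect {
    public static void main(String[] args) {
        final int size = 8;          // must be a power of 2
        final int mask = size - 1;   // 0b111 for size 8
        int[] data = new int[size];
        for (int i = 0; i < size; i++) data[i] = i * 10;

        // For any non-negative i: i & mask == i % size when size is a power of 2.
        for (int i = 0; i < 20; i++) {
            int byMod  = data[i % size];
            int byMask = data[i & mask];
            if (byMod != byMask) throw new AssertionError("mismatch at i=" + i);
        }
        System.out.println("mask selection matches modulo");
    }
}
```

In a JMH @State object you would keep `int next;` alongside the array and advance it with `next = (next + 1) & mask;` in the benchmark method, which is the per-invocation selection whose overhead is worth measuring separately.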



> On 10 Apr 2016, at 17:53, J Kinable <j.kinable at gmail.com> wrote:
> 
> I've got 2 algorithms which I would like to compare. An algorithm takes a
> data instance as input and produces a certain result. To do a fair but
> thorough comparison, I would like to run my algorithms on several different
> data instances. Obviously, to make it a fair comparison, both algorithms
> should use the same pool of data instances. How would you create such a
> test? I could do the following:
> 
> @Benchmark
> public void testAlgorithm1(){
>  for(Instance instance : instances)  algorithm1.run(instance);
> }
> 
> @Benchmark
> public void testAlgorithm2(){
>  for(Instance instance : instances)  algorithm2.run(instance);
> }
> 
> Does this make sense, or would you use a different approach, e.g. use some
> of the Setup parameters/annotations?
> 
> thanks,
> 
> Joris


More information about the jmh-dev mailing list