Usage of Blackhole in a loop distorts benchmark results

Nitsan Wakart nitsanw at yahoo.com
Wed Jan 17 21:07:01 UTC 2018


I appreciate your curiosity. Now, if you really want to know why things are the way they are, I suggest you profile A and B with -prof perfasm and keep digging ;-)
You have not run into an issue with JMH, but into the reality of the JVM.
Have fun!
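For reference, the perfasm profiler mentioned above is attached on the JMH command line. A minimal invocation might look like this (the jar path and benchmark name pattern are assumptions; adjust them to your build):

```shell
# Run benchmarks A and B with the perf+assembly profiler.
# Requires Linux perf and -XX:+PrintAssembly support (hsdis) on the JVM.
java -jar target/benchmarks.jar 'BenchmarkA|BenchmarkB' -prof perfasm
```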

> On 17 Jan 2018, at 22:43, Сергей Цыпанов <sergei.tsypanov at yandex.ru> wrote:
> 
> @Nitsan
> 
> thank you for your explanation, I've read your article and found it quite helpful.
> 
> One thing, however, is odd to me: the different ratio between the benchmarks. Assume the volatile reads inside Blackhole impose some steady overhead on the benchmarks; then it should be the same for all iterations, right?
> If this assumption is correct, then the ratio must remain the same even though the absolute values differ. In my case the ratio is different.
> 
> @Alexey
> 
> 1) The JMH-generated code wraps the @Benchmark-annotated method in a loop like this one:
> 
> -------
> do {
>   blackhole.consume(l_iteratorfromstreambenchmark0_0.iteratorFromCollectedList(l_data1_1));
>   operations++;
> } while(!control.isDone);
> -------
> 
> Here we also have an instance of org.openjdk.jmh.infra.Blackhole swallowing the value returned from my method, and the volatile-read effects should apply to the preceding code just as they do for the 'goodOldLoopReturns' method in Nitsan's example, preventing DCE. But the results indicate the opposite.
> 
> 2) As mentioned in the comment to JMHSample_34_SafeLooping.measureWrong_2, HotSpot does loop unrolling, so I pass -XX:LoopUnrollLimit=0 via @Fork(jvmArgsAppend = ...) on my "accumulating" benchmark. The result of 'iteratorFromStream' remains the same, but 'forEach' gets almost 3 times slower (740 -> 1971 ns). Both do looping, but unrolling affects only one of them. What's the matter?
> 
> Best regards!
> 
> 
> 
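On the ratio question in the quoted mail: a steady per-invocation cost is additive, and an additive constant preserves differences between benchmarks but not their ratios, so the ratio is in fact expected to shift. A small arithmetic sketch (the nanosecond figures here are invented purely for illustration):

```java
public class RatioSketch {
    // Ratio of two per-op costs after adding the same constant overhead to both.
    static double ratio(double a, double b, double overhead) {
        return (b + overhead) / (a + overhead);
    }

    public static void main(String[] args) {
        double a = 100.0, b = 200.0;            // hypothetical "true" costs, ns/op
        System.out.println(ratio(a, b, 0.0));   // no overhead: ratio is 2.0
        System.out.println(ratio(a, b, 50.0));  // constant 50 ns overhead: ratio shrinks to ~1.67
        // The ratio moved even though the overhead was identical for both benchmarks.
    }
}
```

So an identical fixed overhead on two benchmarks with different base costs changes their ratio; only a multiplicative slowdown would leave it unchanged.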



More information about the jmh-dev mailing list