Value returned by benchmark method.

Artem Barger artem at bargr.net
Wed Aug 10 19:22:24 UTC 2016


Hi,


> Somewhat. However, we would need to explore whether there are already
> less painful ways to achieve this result. For example, if you want to
> assert that @Benchmark methods return the same result, you may construct
> a custom @Setup method that will call the methods of interest directly,
> compare their results, and throw exceptions when results disagree.
>
>
Not sure I follow your explanation here. :{



> I think this is less painful than turning JMH into full-fledged assert
> machine. Some workloads would indeed benefit from checking that results
> returned from @Benchmark are the same across different @Benchmark-s, but
> many would require scripting to spell out the invariants to test. This
> seems like something @Setup/@TearDown methods are already providing.
>
> See e.g.:
>  http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_05_StateFixtures.java
>
> Alternatively, you may want to structure your project in such a way that
> there is a shared implementation, separate benchmarks that benchmark
> those implementations, and separate unit tests that assert
> implementation correctness.
>
>
No-no, I think you got my question wrong. Let me try to explain, and let's
assume we are considering only numeric cases.
All I am trying to achieve is to be able to show, for example, the variance
and the average of the values returned across the
benchmark. For example, I can think of different numeric approximations of
an integral computation which I'd like to compare:
one could be more performant than another, but due to the natural
trade-off it returns an approximation with a
greater error or a bigger variance. Therefore all I need is to be able
to gather and print this information in addition to
the regular JMH benchmark output.
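To make it concrete, here is a sketch of what I have in mind (the
`RunningStats` class and its method names are my own illustration, not
part of the JMH API): accumulate each returned value in a `@State`
object, then print the mean and variance from a `@TearDown(Level.Trial)`
method next to the regular JMH output. The accumulator itself is plain
Java, using Welford's online algorithm:

```java
// Online mean/variance accumulator (Welford's algorithm). A @State
// object could call record() with each @Benchmark return value, and a
// @TearDown(Level.Trial) method could print the results alongside the
// regular JMH output. Names are illustrative, not part of the JMH API.
public class RunningStats {
    private long n = 0;
    private double mean = 0.0;
    private double m2 = 0.0;   // running sum of squared deviations

    public void record(double value) {
        n++;
        double delta = value - mean;
        mean += delta / n;
        m2 += delta * (value - mean);
    }

    public double mean() {
        return mean;
    }

    // Sample variance; 0.0 until at least two values are recorded.
    public double variance() {
        return n > 1 ? m2 / (n - 1) : 0.0;
    }

    public static void main(String[] args) {
        RunningStats stats = new RunningStats();
        for (double v : new double[] {2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0}) {
            stats.record(v);
        }
        System.out.println("mean=" + stats.mean() + " variance=" + stats.variance());
    }
}
```

The online formulation matters here because a benchmark can return
millions of values per trial, so storing them all just to compute the
statistics afterwards would distort the measurement.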



> (To the extreme point, JMH-annotated tests and JUnit/TestNg tests are
> known to coexist well, which makes it possible to run the functional and
> performance tests off the same compilation unit -- JMH's own tests
> exploit that all the time)
>

Again, I wasn't considering asserting anything in my tests at all; I just
need to compare performance and accuracy at the same time.

