Value returned by benchmark method.
Aleksey Shipilev
aleksey.shipilev at gmail.com
Wed Aug 10 19:46:28 UTC 2016
On 08/10/2016 10:22 PM, Artem Barger wrote:
> Somewhat. However, we would need to explore whether there are already
> less painful ways to achieve this result. For example, if you want to
> assert that @Benchmark methods return the same result, you may construct
> a custom @Setup method that calls the methods of interest directly,
> compares their results, and throws an exception when the results disagree.
>
> Not sure I get your explanation here :{
Ah, if you want to record each and every result coming from a
@Benchmark, then we have a problem: we do not record the computation
results (this is a massive performance feature, not a bug), and we
probably never will.
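(For illustration, the @Setup-based check suggested in the quoted text
above could look roughly like this sketch; the implementation methods,
the tolerance, and the class name here are made up:)

import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class IntegralBench {

    static final double EPS = 1e-6; // hypothetical tolerance

    // Runs once before measurement; fails fast if the two implementations
    // under comparison disagree on the result.
    @Setup
    public void verifyImplementationsAgree() {
        double a = fastApproximation();
        double b = preciseApproximation();
        if (Math.abs(a - b) > EPS) {
            throw new IllegalStateException(
                "Implementations disagree: " + a + " vs " + b);
        }
    }

    @Benchmark
    public double fast() { return fastApproximation(); }

    @Benchmark
    public double precise() { return preciseApproximation(); }

    // Placeholders for the actual implementations under test.
    private double fastApproximation()    { return 0.0; }
    private double preciseApproximation() { return 0.0; }
}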
> Alternatively, you may want to structure your project in such a way that
> there is a shared implementation, separate benchmarks that exercise
> those implementations, and separate unit tests that assert
> implementation correctness.
>
>
> No-no, I think you got my question wrong. Let me try to explain, and
> let's assume we are considering only numeric cases. All I'm trying to
> achieve is to be able to show, for example, the variance and the
> average of the values returned across the benchmark. For instance, I
> can think of different numeric approximations of an integral
> computation which I'd like to compare: one could be more performant
> than another, but due to the natural trade-off it returns an
> approximation with a greater error or a bigger variance. Therefore,
> all I need is to be able to gather and print this information in
> addition to the regular JMH benchmark output.
OK! My answer stands: run the workload code separately, and compute
whatever you need from the data returned by the workload methods. JMH
measures performance, not arbitrary user metrics (with a few exceptions
we occasionally regret).
We wouldn't do better than recording results in a list/array and dumping
it at the end of the run anyway. You might do that yourself by putting
"recording" code into the @Benchmark method and dumping the buffer at
@TearDown. But I think it would be less painful to avoid dragging JMH
into this.
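(As a rough sketch of that recording approach; the integrate() call and
the class name are placeholders, and note that the recording itself
becomes part of the measured work:)

import java.util.ArrayList;
import java.util.DoubleSummaryStatistics;
import java.util.List;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class RecordingBench {

    private final List<Double> results = new ArrayList<>();

    @Benchmark
    public double approximate() {
        double r = integrate();  // hypothetical code under measurement
        results.add(r);          // "recording" code; perturbs the measurement
        return r;                // still return it to keep dead-code elimination away
    }

    // Dump simple statistics over the recorded values after each iteration.
    @TearDown(Level.Iteration)
    public void dumpStats() {
        DoubleSummaryStatistics s = results.stream()
                .mapToDouble(Double::doubleValue)
                .summaryStatistics();
        System.out.printf("n=%d avg=%.6f min=%.6f max=%.6f%n",
                s.getCount(), s.getAverage(), s.getMin(), s.getMax());
        results.clear();
    }

    private double integrate() {
        return Math.random(); // placeholder for the numeric approximation
    }
}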
Thanks,
-Aleksey