Value returned by benchmark method.

Aleksey Shipilev aleksey.shipilev at gmail.com
Wed Aug 10 20:44:38 UTC 2016


On 08/10/2016 11:32 PM, Artem Barger wrote:
> 
> On Wed, Aug 10, 2016 at 10:46 PM, Aleksey Shipilev
> <aleksey.shipilev at gmail.com <mailto:aleksey.shipilev at gmail.com>> wrote:
> 
>     OK! My answer stands: run the workload code separately, and compute
>     whatever you need from the data returned by the workload methods. JMH
>     measures performance, not arbitrary user metrics (with a few
>     exceptions we occasionally regret).
> 
> Yeah, I agree. I just thought that introducing a new annotation that
> allows numerical results to be collected from the workloads, and later
> showing stats on these results, could be somewhat interesting.

Analyzing code for non-performance metrics normally requires bringing in
other tools. E.g. we regularly bring in JOL to weigh instances or peek
into their internals. It is completely fine to have JMH as one of the
tools in a toolbelt, but it would be odd to add more tangential features
to JMH, turning it into a bad Swiss Army knife replica.
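For reference, a minimal sketch of weighing an instance with JOL (the
target object and class name here are just placeholders):

  import org.openjdk.jol.info.ClassLayout;
  import org.openjdk.jol.info.GraphLayout;

  public class WeighIt {
      public static void main(String[] args) {
          Object o = new java.util.ArrayList<Integer>();

          // shallow layout of this one instance
          System.out.println(ClassLayout.parseInstance(o).toPrintable());

          // deep size of everything reachable from it, in bytes
          System.out.println(GraphLayout.parseInstance(o).totalSize());
      }
  }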

>     We wouldn't do better than recording results in a list/array and dumping
>     it at the end of the run anyway. You might do that yourself by putting
>     "recording" code into the @Benchmark method, and dumping the buffer at
>     @TearDown. But I think it would be less painful to avoid dragging JMH
>     into this.
> 
> Yes, this can work; I am just afraid that adding these workaround
> manipulations might introduce an additional performance hit. :/

There is no magic on the JMH side: it would have to do the same! If you
cannot see how to do this fast in your own code, then chances are
infrastructure support would not help you much.
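For completeness, a minimal sketch of the record-and-dump pattern from
above (everything besides the JMH annotations is a made-up name):

  import org.openjdk.jmh.annotations.*;
  import java.util.ArrayList;
  import java.util.List;

  @State(Scope.Thread)
  public class RecordingBench {
      // buffer for per-invocation values we want to look at afterwards
      private List<Long> recorded;

      @Setup(Level.Iteration)
      public void setup() {
          recorded = new ArrayList<>();
      }

      @Benchmark
      public long work() {
          long v = computeSomething();  // the actual workload
          recorded.add(v);              // the "recording" part
          return v;                     // still return it, so JMH keeps it alive
      }

      @TearDown(Level.Iteration)
      public void dump() {
          // post-process or print the collected values outside the measured loop
          System.out.println("collected " + recorded.size() + " samples");
      }

      private long computeSomething() {
          return System.nanoTime() & 0xFF;  // stand-in for real work
      }
  }

The add() in the measured loop is exactly the overhead the infrastructure
would have to pay as well.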

Thanks,
-Aleksey
