Question to JMH and JIT

Sergey Ponomarev stokito at gmail.com
Tue Feb 5 06:36:25 UTC 2019


Hi David,

You are right, and it's a fair point: maybe it's worth creating some
kind of user group list or forum topic where users can discuss
benchmarking results?
It might also be good to create an FAQ or a documentation topic with
links to articles about the performance optimizations done by the JIT.


On Tue, 5 Feb 2019 at 07:59, David Holmes <david.holmes at oracle.com> wrote:

> Folks,
>
> This is not a suitable topic for the discuss list, please take technical
> discussions elsewhere.
>
> Thanks,
> David
>
> On 5/02/2019 5:37 am, Ngor wrote:
> > In addition I highly recommend reading the paper on the topic:
> >
> > Statistically Rigorous Java Performance Evaluation
> > https://dri.es/files/oopsla07-georges.pdf
> >
> > On Mon, Feb 4, 2019 at 7:44 AM Herr Knack <koehlerkokser at gmail.com>
> wrote:
> >
> >> Hey there,
> >>
> >> I am currently trying to find a mathematical model that computes the
> >> approximate runtime of a function term from its basic structure on my
> >> machine (Ubuntu 16.04, i7-5500U, 2.40 GHz × 4). For the first
> >> microbenchmark measurements I used JMH and measured the throughput of
> >> some simple functions (4 separate forks, 3 warmup and 5 measurement
> >> iterations per fork, 10 seconds per iteration).
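
The JMH setup described above can be sketched roughly as follows; the class
and method names are illustrative (not from the thread), and the class needs
the jmh-core and jmh-generator-annprocess dependencies on the classpath to
actually run:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Sketch of a JMH benchmark with the settings described above:
// 4 forks, 3 warmup + 5 measurement iterations, 10 s per iteration,
// throughput mode (runs/sec).
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@Fork(4)
@Warmup(iterations = 3, time = 10)
@Measurement(iterations = 5, time = 10)
@State(Scope.Thread)
public class TermBenchmark {
    double x = 2.0;  // illustrative input value

    @Benchmark
    public double expr1() {
        // Returning the result keeps the JIT from dead-code-eliminating
        // the whole computation.
        return (((0.5 * x) * 0.5)
                * (((((0.5 * x) * 0.5) / 0.5) + (((0.5 * x) * 0.5) * 0.5))
                   * (((0.5 * x) * 0.5) + x))) + x;
    }
}
```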
> >>
> >> I found that it is really difficult to see the connection between the
> >> structure of the function term and its runtime in the microbenchmark.
> >> I assume that the JIT compiler performs some optimizations on the
> >> functions, but I can't see which optimizations are done in detail.
> >>
> >> Could the JIT be the problem? Does anyone know whether the workflow of
> >> JIT optimizations is documented anywhere?
> >>
> >> I hope someone can help, because this is driving me crazy. Here are
> >> some results I got. I converted the throughput (runs/sec) to an average
> >> time per run in ns. The standard deviation is smaller than you might
> >> think, so the average time is most likely accurate to within +/- 0.3 ns:
> >>
> >> - (((0.5 * x) * 0.5) * (((((0.5 * x) * 0.5) / 0.5) + (((0.5 * x) * 0.5)
> >> * 0.5)) * (((0.5 * x) * 0.5) + x))) + x,
> >> 15 operations (3+, 11*, 1/), 28.09 ns
> >>
> >> - x + ((((0.5 * x) * (0.5 * x)) * (0.5 * x)) + ((0.5 * x) * ((((0.5 * x)
> >> * (0.5 * x)) * (0.5 * x)) * (((0.5 * x) * (0.5 * x)) / ((((0.5 * x) *
> >> ((0.5 * x) * (0.5 * x))) / (0.5 + ((0.5 * x) * (0.5 * x)))) + (((0.5 * x)
> >> * (0.5 * x)) * (0.5 * x))))))),
> >> 35 operations (4+, 29*, 2/), 38.61 ns
> >>
> >> - (((x / (x / (0.5 * x))) * (x / (x / (0.5 * x)))) * (0.5 * x)) + (x +
> >> ((((x / (x / (0.5 * x))) * (x / (x / (0.5 * x)))) * (0.5 * x)) / (((x /
> >> (0.5 * x)) + 0.5) - ((((x / (x / (0.5 * x))) * (x / (x / (0.5 * x)))) *
> >> (0.5 * x)) * (x / (x / (0.5 * x))))))),
> >> 38 operations (4+-, 18*, 16/), 40.70 ns
> >>
> >> I assume that there is a constant offset for method calls and so on. In
> >> addition, I suppose that already-calculated term pieces don't have to be
> >> calculated again. But these conclusions did not help me either.
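
The intuition that already-calculated term pieces are not recalculated is
common subexpression elimination. A minimal plain-Java sketch (class and
method names are mine, not from the thread) of the first expression above,
written naively versus with the repeated subterm (0.5 * x) * 0.5 factored
out the way an optimizer could rewrite it:

```java
public class CseSketch {
    // Naive form: every occurrence of (0.5 * x) * 0.5 is spelled out,
    // 15 arithmetic operations as counted in the results above.
    static double naive(double x) {
        return (((0.5 * x) * 0.5)
                * (((((0.5 * x) * 0.5) / 0.5) + (((0.5 * x) * 0.5) * 0.5))
                   * (((0.5 * x) * 0.5) + x))) + x;
    }

    // Factored form: the shared subterm is computed once, leaving
    // 9 operations (5*, 1/, 3+) instead of 15.
    static double factored(double x) {
        double a = (0.5 * x) * 0.5;  // common subexpression, computed once
        return a * ((a / 0.5 + a * 0.5) * (a + x)) + x;
    }

    public static void main(String[] args) {
        // Both forms apply the same floating-point operations in the same
        // order, so they agree on every input.
        for (double x = -3.0; x <= 3.0; x += 0.5) {
            if (naive(x) != factored(x)) {
                throw new AssertionError("mismatch at x = " + x);
            }
        }
        System.out.println("naive and factored forms agree");
    }
}
```

The JIT may go further still, e.g. replacing the division by 0.5 with a
multiplication by 2.0, which is one reason raw operation counts correlate
poorly with the measured runtimes.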
> >>
> >> Regards,
> >>
> >> KKokser
> >>
>


-- 
Sergey Ponomarev <https://linkedin.com/in/stokito>, skype:stokito

