Question about JMH and JIT
Herr Knack
koehlerkokser at gmail.com
Mon Feb 4 13:53:55 UTC 2019
Hello Aleksey, and thanks for your quick reply!
Sorry for the lack of clarity. In my benchmark methods I just execute an
arbitrary function like the examples above and consume the result with a
JMH Blackhole. Before each invocation I compute random input values
between -10 and 10, which are passed into the benchmark method via a
static state class. The random-generation code is not measured together
with the code in the benchmark method. So the benchmark looks like this:
package org.sample;

import java.util.Random;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@Fork(4)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
public class MyBenchmark {

    @State(Scope.Thread)
    public static class MyState {
        Random random;
        double[] inputs = new double[2];
        double a;
        double b;

        // Runs before every single benchmark invocation and is excluded
        // from the measurement: draws fresh random inputs in [-10, 10).
        @Setup(Level.Invocation)
        public void setup() {
            random = new Random();
            inputs = random.doubles(inputs.length, -10, 10).toArray();
            a = inputs[0];
            b = inputs[1];
        }
    }

    @Benchmark
    public void test1(MyState state, Blackhole blackhole) {
        // One of the generated arithmetic expressions; the Blackhole
        // prevents the JIT from dead-code-eliminating the result.
        blackhole.consume((((state.a / (state.a / (0.5 * state.a)))
                * (state.a / (state.a / (0.5 * state.a)))) * (0.5 * state.a))
                + (state.a + ((((state.a / (state.a / (0.5 * state.a)))
                * (state.a / (state.a / (0.5 * state.a)))) * (0.5 * state.a))
                / (((state.a / (0.5 * state.a)) + 0.5)
                - ((((state.a / (state.a / (0.5 * state.a)))
                * (state.a / (state.a / (0.5 * state.a)))) * (0.5 * state.a))
                * (state.a / (state.a / (0.5 * state.a))))))));
    }

    // ...
}

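One more thing I could try (a hypothetical baseline, not something from my
original runs): a trivial benchmark that only consumes state.a, so I can
see how much of test1's score is invocation and Blackhole overhead rather
than the arithmetic itself:

    @Benchmark
    public void baseline(MyState state, Blackhole blackhole) {
        // Consumes the input unchanged; the gap between this score and
        // test1's score should be attributable to the arithmetic itself.
        blackhole.consume(state.a);
    }
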
Thank you for your hints; I will also try profiling the benchmark, as
sketched below. Does anyone have any other ideas?
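For the profiling run, I would invoke it roughly like this (a minimal
sketch using the JMH Runner API; "perfasm" is the same profiler as the
-prof perfasm flag, and it needs Linux perf plus the hsdis disassembler
to show the generated assembly):

    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.RunnerException;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class ProfiledRun {
        public static void main(String[] args) throws RunnerException {
            Options opts = new OptionsBuilder()
                    .include(MyBenchmark.class.getSimpleName())
                    .addProfiler("perfasm") // dump hottest generated code
                    .build();
            new Runner(opts).run();
        }
    }
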
Regards,
KKokser
On Mon, Feb 4, 2019 at 2:15 PM Aleksey Shipilev <shade at redhat.com> wrote:
> On 2/4/19 1:42 PM, Herr Knack wrote:
> > Could JIT be the problem?
>
> Yes, it could.
>
> > Does anyone know if the workflow of JIT
> > optimizations is documented anywhere?
>
> Even if it were, it is unlikely to help you. You need to drill into what
> is going on in the benchmark. That is hard to do based on your current
> explanation, because there is no benchmark code.
>
> > I hope someone can help, because it's driving me crazy.
>
> Profile the benchmark with -prof perfasm at different workload sizes, and
> that should highlight the final generated code, which would give you some
> idea of what might go wrong.
>
> Thanks,
> -Aleksey