Peer-review JNI Construction tests please

Aleksey Shipilev ashipile at redhat.com
Fri Jan 3 15:54:08 UTC 2020


Hi,

On 12/22/19 11:33 AM, Adam Retter wrote:
> Would someone be able to peer-review my 3 simple benchmarks and tell
> me if I am taking the correct approach, or if I am doing something
> wrong?

A profiler would give you the answer. For tiny benchmarks like this, -prof perfnorm would give you
an idea of what you are hunting for: excess instructions, excess cycles, excess memory accesses, or
any combination of the above. Then, a regular "perf record" / "perf report" would give you a profile
of the native code that you can study.
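If it helps, here is a minimal sketch of attaching that profiler through the JMH runner API instead
of the command line (the include pattern "JNIConstruction.*" is a placeholder for your benchmark
classes):

    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class ProfiledRun {
        public static void main(String[] args) throws Exception {
            Options opts = new OptionsBuilder()
                    .include("JNIConstruction.*")  // placeholder: match your benchmark classes
                    .addProfiler("perfnorm")       // hardware counters, normalized per operation
                    .build();
            new Runner(opts).run();
        }
    }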

For example, it is no surprise that Invoke loses to Static, as it does an additional SetLong upcall.

Conceptually, it is really the job of the person doing the benchmarking to explain the result.
Reviewers should only be tasked with checking whether those explanations make sense against both the
benchmarking and profiling data. That is, it is not enough for your benchmarking results to say "A
appears faster than B"; they should conclude with "A is faster than B, because A does <thing1>, and
B does <thing2>". Hardly anybody cares that A is faster than B; most care _why_ it is faster, and
whether that "why" explanation makes any sense.

Stylistically, I would write benchmarks without explicit Blackholes:

    @Benchmark
    public Object fooByCall() {
        return new FooByCall(); // returning the result lets JMH consume it
    }
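
That is, instead of the explicit-Blackhole variant, which presumably looks something like:

    @Benchmark
    public void fooByCall(Blackhole bh) {
        bh.consume(new FooByCall()); // org.openjdk.jmh.infra.Blackhole
    }

Returned values are fed into a Blackhole by the generated JMH harness anyway, so both forms prevent
dead-code elimination equally well; the return style is just shorter.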

Also, AverageTime mode with a nanosecond time unit seems enough, and 5x1s warmup plus 5x1s
measurement iterations should suffice for a tiny benchmark like this.
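
Expressed as JMH annotations, that setup would look roughly like this (the class name is a
placeholder; @Warmup and @Measurement default to seconds):

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5, time = 1)       // 5 x 1s warmup iterations
    @Measurement(iterations = 5, time = 1)  // 5 x 1s measurement iterations
    public class FooBench {
        @Benchmark
        public Object fooByCall() {
            return new FooByCall();
        }
    }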

-Aleksey
