Is it reasonable to compare outputs between JMH and hprof?
Aleksey Shipilev
aleksey.shipilev at oracle.com
Thu Dec 4 20:39:02 UTC 2014
Hi Wang,
On 04.12.2014 07:09, Wang Weijun wrote:
> I am comparing the difference of SHA-1 and SHA-256. First I wrote a JMH benchmark:
>
> @Benchmark
> public void sig1(Blackhole bh) throws Exception {
> bh.consume(sig("SHA-1"));
> }
>
> @Benchmark
> public void sig2(Blackhole bh) throws Exception {
> bh.consume(sig("SHA-256"));
> }
You can just return byte[] from the @Benchmark methods here for brevity.
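For example, a minimal sketch of that simplification (class and method names are my own; sig(...) is copied from your code) could look like:

import java.security.MessageDigest;
import org.openjdk.jmh.annotations.Benchmark;

public class DigestBench {

    @Benchmark
    public byte[] sha1() throws Exception {
        return sig("SHA-1");
    }

    @Benchmark
    public byte[] sha256() throws Exception {
        return sig("SHA-256");
    }

    // same helper as in your main()
    static byte[] sig(String alg) throws Exception {
        MessageDigest md = MessageDigest.getInstance(alg);
        md.update(new byte[10000]);
        return md.digest();
    }
}

JMH consumes the returned value as if it were passed to a Blackhole, so dead-code elimination is still prevented.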
> public static void main(String args[]) throws Exception {
> int i = Arrays.hashCode(sig("SHA-1"));
> i += Arrays.hashCode(sig("SHA-256"));
> System.out.println(i);
> }
>
> static byte[] sig(String alg) throws Exception {
> MessageDigest md = MessageDigest.getInstance(alg);
> md.update(new byte[10000]);
> return md.digest();
> }
> Why is the output so different from JMH?
Step back from the profiler for a bit. What you just did is write a custom
benchmark, which you then tried to "measure" with a profiler. If you
"just" measure the time spent in different parts of your custom main()
with System.nanoTime(), you will probably see the same "weird" distribution.
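For illustration, a naive nanoTime version of that (just a sketch, reusing your sig(...) helper and imports) might be:

public static void main(String[] args) throws Exception {
    long t0 = System.nanoTime();
    int i = Arrays.hashCode(sig("SHA-1"));
    long t1 = System.nanoTime();
    i += Arrays.hashCode(sig("SHA-256"));
    long t2 = System.nanoTime();
    System.out.println(i);
    System.out.println("SHA-1:   " + (t1 - t0) + " ns");
    System.out.println("SHA-256: " + (t2 - t1) + " ns");
}

Run once like this, without warmup, the two deltas are dominated by class loading, interpreter time, and JIT compilation rather than by the digests themselves.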
That is because you have committed a number of benchmarking sins that JMH
was trying to protect you from (see the sketch after this list):
1. The absence of warmup.
2. Mixing the compilation profiles of both workloads in the same run.
3. High-overhead result consumption.
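To make that concrete, here is a hedged sketch of how JMH counters each of these; the annotation values are illustrative, not recommendations:

import java.security.MessageDigest;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)      // 1. explicit warmup iterations
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Fork(3)                                                            // 2. each benchmark runs in fresh JVMs, so profiles never mix
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class DigestBenchConfigured {

    @Benchmark
    public byte[] sha1() throws Exception {                         // 3. the returned value is consumed cheaply by JMH
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(new byte[10000]);
        return md.digest();
    }
}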
Once you realize that, your question transforms into: "Is it reasonable
to compare the JMH-driven benchmark and this custom benchmark?" The
answer should be obvious once you digest the JMH samples.
> Is it reasonable comparing them?
In addition to what Bernd said, what are you trying to compare?
Profilers are not the tools for measuring time. Profilers are the tools
for assessing where the time is spent. In other words, they tell you how
the time is distributed around the code, not the absolute timings. This
is because profilers normally incur overhead of their own, and they avoid
even greater overhead by sampling probabilistically. We just cross our
fingers and believe the profiler overhead is uniform across all methods,
lines of code, etc.
If anything, the saner idea would be to attach the profiler to a JMH
benchmark in order to explain the benchmark numbers.
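For example, a sketch using the Runner API (the benchmark class name "DigestBench" is an assumption carried over from the sketch above; the stack profiler is just one of the bundled options):

import org.openjdk.jmh.profile.StackProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ProfiledRun {
    public static void main(String[] args) throws Exception {
        Options opts = new OptionsBuilder()
                .include("DigestBench")               // which benchmarks to run
                .addProfiler(StackProfiler.class)     // sample thread stacks during measurement
                .build();
        new Runner(opts).run();
    }
}

The same effect is available from the command line with "-prof stack" on the benchmark jar.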
Thanks,
-Aleksey.