Partial -prof gc results (churn and time missing for some parameters)
Aleksey Shipilev
aleksey.shipilev at oracle.com
Thu Apr 30 09:23:19 UTC 2015
Hi,
On 30.04.2015 03:18, Bernd Eckenfels wrote:
> if I run the following test project, the results for the GC profiler are
> incomplete for the parameter runs with the largest bufsize. I could understand
> that the large objects are directly promoted, but shouldn't there be some churn anyway?
>
> churn and count are present for smaller sizes and consistently absent for all benchmarks with the 1 MB buffer.
>
>
> https://github.com/ecki/jmhbench-cryptospeed
>
>> java -jar target\jmhbench-cryptospeed.jar -t 4 -wi 1 -prof gc \.MD5$
> # JMH 1.9.1 (released 6 days ago)
> # VM invoker: C:\Program Files\Java\jdk1.8.0_40\jre\bin\java.exe
>
> # Run complete. Total time: 00:01:06
>
> Benchmark (bufsize) Mode Cnt Score Error Units
> HashBenchmark.MD5 0 avgt 3 1.245 ± 2.054 us/op
> HashBenchmark.MD5:·gc.alloc.rate 0 avgt 3 1274.045 ± 3040.383 MB/sec
> HashBenchmark.MD5:·gc.alloc.rate.norm 0 avgt 3 414.026 ± 315.179 B/op
> HashBenchmark.MD5:·gc.churn.PS_Eden_Space 0 avgt 3 1277.885 ± 2983.771 MB/sec
> HashBenchmark.MD5:·gc.churn.PS_Eden_Space.norm 0 avgt 3 415.364 ± 314.255 B/op
> HashBenchmark.MD5:·gc.churn.PS_Survivor_Space 0 avgt 3 0.143 ± 1.813 MB/sec
> HashBenchmark.MD5:·gc.churn.PS_Survivor_Space.norm 0 avgt 3 0.047 ± 0.617 B/op
> HashBenchmark.MD5:·gc.count 0 avgt 3 76.000 counts
> HashBenchmark.MD5:·gc.time 0 avgt 3 90.000 ms
> ...
> HashBenchmark.MD5 1048576 avgt 3 10289.990 ± 2722.121 us/op
> HashBenchmark.MD5:·gc.alloc.rate 1048576 avgt 3 0.179 ± 0.042 MB/sec
> HashBenchmark.MD5:·gc.alloc.rate.norm 1048576 avgt 3 482.475 ± 222.011 B/op
> HashBenchmark.MD5:·gc.count 1048576 avgt 3 ≈ 0 counts
Note that "gc.count" is at zero: means no GCs are reported to MX beans
altogether. Which probably indicates the test does not trigger GCs much.
With 0.179 MB/sec allocation rate, that is not surprising. I was able to
produce a few GC events with longer iterations and -jvmArgs "-Xmx16m
-Xms16m".
I would guess that nothing cheesy is going on. On small datasets, you produce
the same 400+ bytes per benchmark op, but every microsecond, which adds up to
400+ MB/(sec*thread) of GC pressure. On large datasets, you produce the same
~400 bytes only once every 10 milliseconds, so the GC pressure is four orders
of magnitude smaller.
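Back-of-the-envelope, using the scores quoted above (rounded, 4 benchmark threads):

  bufsize = 0:       ~414 B/op every ~1.2 us  ->  ~330 MB/sec per thread, ~1300 MB/sec total
  bufsize = 1048576: ~482 B/op every ~10.3 ms ->  ~0.05 MB/sec per thread, ~0.19 MB/sec total

Both totals line up with the gc.alloc.rate numbers in your output.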
In other words, the GC churn metrics are sampled from the observed GC events,
and are thus susceptible to sampling errors under low GC pressure and on short runs.
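For illustration, a rough sketch of that mechanism (not the actual profiler
code, just the idea): subscribe to GC notifications and accumulate, per memory
pool, how much usage dropped across each observed collection. If no collection
happens inside the measured window, there is nothing to accumulate, and the
churn/time lines have nothing to report:

import com.sun.management.GarbageCollectionNotificationInfo;
import javax.management.NotificationEmitter;
import javax.management.openmbean.CompositeData;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ChurnSketch {
    // bytes reclaimed per memory pool, accumulated over observed GC events only
    static final Map<String, LongAdder> churn = new ConcurrentHashMap<>();

    public static void install() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // on HotSpot, the GC MX beans are NotificationEmitters
            ((NotificationEmitter) gc).addNotificationListener((notification, handback) -> {
                if (!GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                        .equals(notification.getType())) {
                    return;
                }
                GarbageCollectionNotificationInfo info = GarbageCollectionNotificationInfo
                        .from((CompositeData) notification.getUserData());
                Map<String, MemoryUsage> before = info.getGcInfo().getMemoryUsageBeforeGc();
                Map<String, MemoryUsage> after  = info.getGcInfo().getMemoryUsageAfterGc();
                for (String pool : before.keySet()) {
                    long freed = before.get(pool).getUsed() - after.get(pool).getUsed();
                    if (freed > 0) {
                        churn.computeIfAbsent(pool, p -> new LongAdder()).add(freed);
                    }
                }
            }, null, null);
        }
    }
}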
Thanks,
-Aleksey