Would like to understand why gc time grows

Yu Zhang yu.zhang at oracle.com
Mon Feb 2 19:40:21 UTC 2015


Barak,

Thanks for the gc log.

The stop-the-world pauses in your log include young, mixed, remark and
cleanup phases.
For details on how to understand GC logs, please see
https://blogs.oracle.com/g1gc/date/20140801
https://blogs.oracle.com/g1gc/date/20140807

Attached is a chart of the GC pauses from your log. I cannot conclude
that the pauses are getting longer.
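Each total pause duration appears at the end of a GC record in the log, e.g. ", 0.2319995 secs]". A minimal sketch of pulling those values out for charting (illustrative only, not the tooling used for the chart; the class name is made up):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: extract the total pause duration from a G1 log
// record's trailing ", N secs]" so the pauses can be charted over time.
// (The "[Times: ...]" line is not matched, because "real=0.23 secs]"
// does not have the ", <number> secs]" form.)
public class PauseDurations {
    private static final Pattern PAUSE = Pattern.compile(", ([0-9.]+) secs\\]");

    public static void main(String[] args) {
        String record = ", 0.2319995 secs]";  // sample line from the attached log
        Matcher m = PAUSE.matcher(record);
        if (m.find()) {
            double pauseMs = Double.parseDouble(m.group(1)) * 1000.0;
            System.out.printf("pause = %.1f ms%n", pauseMs);  // prints "pause = 232.0 ms"
        }
    }
}
```

Feeding every line of the log through the same pattern gives the series plotted in the attached chart.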
The default MaxGCPauseMillis is 200 ms. Does that work for you, or do
you need a lower pause time?
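For example, a lower target can be requested on the command line (the 100 ms value is only illustrative, and "MyApp" stands in for your application's main class; whether G1 can actually meet the target depends on the workload):

```shell
# Illustrative: ask G1 for a 100 ms pause target instead of the 200 ms default.
# A lower target may cause G1 to shrink the young generation, trading
# shorter pauses for more frequent collections.
java -Xms6g -Xmx6g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 \
     -XX:+PrintGCDetails -XX:+PrintGCDateStamps MyApp
```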

Thanks,
Jenny
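The user/real ratio mentioned in the earlier reply comes straight from the "[Times: ...]" line: user is CPU time summed over all GC worker threads, real is the wall-clock pause. A quick check with the numbers from the log below (the class name is made up for illustration):

```java
// Quick check of the user/real ratio from the "[Times: user=2.94
// sys=0.00, real=0.23 secs]" line in the log below. A ratio close to
// the GC worker thread count means the workers ran in parallel for
// essentially the whole pause, so user time tracks the total copying
// work done, not the wall-clock pause length.
public class UserRealRatio {
    public static void main(String[] args) {
        double user = 2.94;   // CPU seconds summed across all GC workers
        double real = 0.23;   // wall-clock seconds the application was paused
        int gcThreads = 13;   // "GC Workers: 13" reported in the log
        double ratio = user / real;
        System.out.printf("user/real = %.1f (GC threads: %d)%n", ratio, gcThreads);
    }
}
```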

On 2/2/2015 10:58 AM, Barak Yaish wrote:
> Sure, upload the file to 
> https://www.dropbox.com/sh/kinl4zo8apg6kh2/AADlh9b-X_Zk6L_tgPL_lyyma?dl=0
>
> Can you explain which messages indicate the time the app was paused 
> due to GC?
>
> Thanks!
>
> On Mon, Feb 2, 2015 at 6:23 PM, Yu Zhang <yu.zhang at oracle.com> wrote:
>
>     Barak,
>
>     If you can share the entire GC log, we can better understand what
>     is going on. From the snippet, the user/real ratio is 12.8, and you
>     have 13 GC threads. That looks normal.
>
>     Thanks,
>     Jenny
>
>     On 2/2/2015 7:39 AM, Barak Yaish wrote:
>>     Hi,
>>
>>     My application runs on Java 8 with G1GC enabled. I've enabled GC
>>     logs, and I can see that GC time is growing (at least that is
>>     what I think..)
>>
>>     VM flags:
>>     -Xms6g -Xmx6g
>>     -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:G1HeapRegionSize=32M -XX:+PrintAdaptiveSizePolicy
>>
>>
>>     A snippet from the gc log:
>>
>>     {Heap before GC invocations=342 (full 1):
>>      garbage-first heap   total 6291456K, used 5257482K
>>     [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
>>       region size 32768K, 104 young (3407872K), 10 survivors (327680K)
>>      Metaspace       used 31200K, capacity 31664K, committed 31872K,
>>     reserved 1077248K
>>       class space    used 3496K, capacity 3596K, committed 3712K,
>>     reserved 1048576K
>>     2015-02-02T17:36:48.237+0200: 4067.021: [GC pause (G1 Evacuation
>>     Pause) (young) 4067.021: [G1Ergonomics (CSet Construction) start
>>     choosing CSet, _pending_cards: 23499, predicted base time: 11.30
>>     ms, remaining time: 188.70 ms, target pause time: 200.00 ms]
>>      4067.021: [G1Ergonomics (CSet Construction) add young regions to
>>     CSet, eden: 94 regions, survivors: 10 regions, predicted young
>>     region time: 159.99 ms]
>>      4067.021: [G1Ergonomics (CSet Construction) finish choosing
>>     CSet, eden: 94 regions, survivors: 10 regions, old: 0 regions,
>>     predicted pause time: 171.29 ms, target pause time: 200.00 ms]
>>     , 0.2319995 secs]
>>        [Parallel Time: 229.0 ms, GC Workers: 13]
>>           [GC Worker Start (ms): Min: 4067021.3, Avg: 4067024.1, Max:
>>     4067029.4, Diff: 8.0]
>>           [Ext Root Scanning (ms): Min: 0.0, Avg: 0.5, Max: 1.3,
>>     Diff: 1.3, Sum: 6.9]
>>           [Update RS (ms): Min: 0.6, Avg: 5.3, Max: 7.5, Diff: 6.9,
>>     Sum: 69.2]
>>              [Processed Buffers: Min: 1, Avg: 10.2, Max: 33, Diff:
>>     32, Sum: 132]
>>           [Scan RS (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1,
>>     Sum: 1.0]
>>           [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0,
>>     Diff: 0.0, Sum: 0.0]
>>           [Object Copy (ms): Min: 219.7, Avg: 219.9, Max: 220.0,
>>     Diff: 0.3, Sum: 2858.1]
>>           [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1,
>>     Sum: 0.9]
>>           [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff:
>>     0.2, Sum: 1.2]
>>           [GC Worker Total (ms): Min: 220.7, Avg: 225.9, Max: 228.8,
>>     Diff: 8.0, Sum: 2937.3]
>>           [GC Worker End (ms): Min: 4067250.0, Avg: 4067250.1, Max:
>>     4067250.2, Diff: 0.2]
>>        [Code Root Fixup: 0.2 ms]
>>        [Code Root Migration: 0.0 ms]
>>        [Code Root Purge: 0.0 ms]
>>        [Clear CT: 1.4 ms]
>>        [Other: 1.4 ms]
>>           [Choose CSet: 0.0 ms]
>>           [Ref Proc: 0.8 ms]
>>           [Ref Enq: 0.0 ms]
>>           [Redirty Cards: 0.2 ms]
>>           [Free CSet: 0.1 ms]
>>        [Eden: 3008.0M(3008.0M)->0.0B(2976.0M) Survivors:
>>     320.0M->320.0M Heap: 5134.3M(6144.0M)->2168.1M(6144.0M)]
>>     Heap after GC invocations=343 (full 1):
>>      garbage-first heap   total 6291456K, used 2220163K
>>     [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
>>       region size 32768K, 10 young (327680K), 10 survivors (327680K)
>>      Metaspace       used 31200K, capacity 31664K, committed 31872K,
>>     reserved 1077248K
>>       class space    used 3496K, capacity 3596K, committed 3712K,
>>     reserved 1048576K
>>     }
>>      [Times: user=2.94 sys=0.00, real=0.23 secs]
>>
>>     And the "user" part keeps growing... Can you explain what this
>>     actually means? What exact part of the GC is contributing to
>>     this? And what can be done in order to minimize this time?
>>
>>     Thanks.
>>
>>
>>
>>
>>     _______________________________________________
>>     hotspot-gc-use mailing list
>>     hotspot-gc-use at openjdk.java.net
>>     http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: hihgjffh.png
Type: image/png
Size: 15756 bytes
Desc: not available
URL: <http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20150202/83c03e59/hihgjffh-0001.png>

