G1 root cause and tuning
Yu Zhang
yu.zhang at oracle.com
Tue Mar 31 02:23:12 UTC 2015
Charlie,
Thanks for the comments. Please see my responses inline.
Thanks,
Jenny
On 3/30/2015 6:41 PM, charlie hunt wrote:
> Hi Jenny,
>
> One possibility is that there are not enough available contiguous
> regions to satisfy a 300+ MB humongous allocation.
>
> If we assume a 22 GB Java heap (a little larger than the 22480M shown
> in the log), with 2048 G1 regions (the default, as you know), the
> region size would be about 11 MB. That implies there need to be about
> 30 contiguous G1 regions available to satisfy the humongous allocation
> request.
Good point!
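For reference, spelling out that arithmetic (a rough sketch; real G1
ergonomics rounds the region size to a power of two):

    // Back-of-the-envelope check of the region math, values from the log.
    public class RegionMath {
        public static void main(String[] args) {
            long heap    = 22480L << 20;  // 22480 MB heap, as reported in the log
            long region  = heap / 2048;   // ~11 MB per region before rounding
            long request = 305776936L;    // the failing humongous request
            // ceiling division: contiguous regions needed for the request
            System.out.println((request + region - 1) / region); // prints 27
        }
    }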
>
> An unrelated question … do other GCs have a similar pattern of a
> rather large percentage of time in Ref Proc relative to the overall
> pause time, i.e. 24.7 ms / 120 ms ~ 20% of the pause time? If that’s
> the case, then if -XX:+ParallelRefProcEnabled is not already set,
> there may be some low-hanging tuning fruit. But it is not going to
> address the frequent humongous allocation problem. It is also
> interesting that the pause time goal is 2500 ms, yet the actual
> pause time is 120 ms, and eden is being sized at less than 1 GB out of
> a 22 GB Java heap. Are the frequent humongous allocations messing
> with the heap sizing heuristics?
Most of the time Ref Proc is below 10 ms, but it jumps to 20-60 ms, so
enabling -XX:+ParallelRefProcEnabled might help. I do not remember
whether it is on by default in JDK 7.
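If it is off, it can be turned on through the wrapper configuration
alongside the existing flags; something like the following (the index
57 is just a placeholder for the next free slot in the config file):

    wrapper.java.additional.57=-XX:+ParallelRefProcEnabled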
This log is really strange: most of the time the heap usage is ~9 GB
out of 22 GB, and then the humongous allocations jump in. As the log
entries are mangled, it is hard to connect the dots.
>
> hths,
>
> charlie
>
>> On Mar 30, 2015, at 7:19 PM, Yu Zhang <yu.zhang at oracle.com> wrote:
>>
>> Medan,
>>
>> Thanks for the logs. The log messages are somewhat mangled; some of
>> the records are incomplete.
>> There is 1 full GC in wrapper.log.14. The others do not have a full GC.
>> This workload has a lot of humongous object allocations, up to 10 GB
>> in size. Though G1 can reclaim some humongous objects at a young GC,
>> a lot of the reclamation is done during a full GC.
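>>
>> For context, G1 treats any single allocation larger than half a
>> region as humongous and satisfies it from contiguous free regions.
>> A minimal sketch of the kind of allocation involved (the size is
>> made up to match the ~300 MB failing request below):
>>
>>     public class Humongous {
>>         public static void main(String[] args) {
>>             // With ~11 MB regions, anything over ~5.5 MB is humongous;
>>             // a ~300 MB array needs ~27 contiguous free regions.
>>             byte[] big = new byte[300 * 1024 * 1024];
>>             System.out.println(big.length);
>>         }
>>     }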
>>
>> But this log snippet does not make sense to me.
>>
>> 152549.805: [GC pause (young) 152549.805: [G1Ergonomics (CSet
>> Construction) start choosing CSet, predicted base time: 115.49 ms,
>> remaining time: 2384.51 ms, target pause time: 2500.00 ms]
>> 152549.805: [G1Ergonomics (CSet Construction) add young regions to
>> CSet, eden: 155 regions, survivors: 23 regions, predicted young
>> region time: 71.62 ms]
>> 152549.805: [G1Ergonomics (CSet Construction) finish choosing CSet,
>> eden: 155 regions, survivors: 23 regions, old: 0 regions, predicted
>> pause time: 187.12 ms, target pause time: 2500.00 ms]
>> , 0.12006414 secs]
>> [Parallel Time: 93.1 ms]
>> [GC Worker Start (ms): 152549804.8 152549804.8 152549804.8
>> 152549804.8 152549804.8 152549804.8 152549804.8 152549804.8
>> 152549804.8 152549804.9 152549804.9 152549805.0 152549805.0
>> Avg: 152549804.8, Min: 152549804.8, Max: 152549805.0, Diff: 0.2]
>> [Ext Root Scanning (ms): 13.3 13.4 18.6 13.2 17.0 18.5
>> 15.3 17.2 14.8 0.1 17.0 17.2 11.4
>> Avg: 14.4, Min: 0.1, Max: 18.6, Diff: 18.5]
>> [Update RS (ms): 51.9 52.5 48.9 52.0 51.3 48.4 50.8
>> 50.0 50.7 47.1 49.1 49.6 52.2
>> Avg: 50.3, Min: 47.1, Max: 52.5, Diff: 5.4]
>> [Processed Buffers : 27 20 14 29 21 18 18 29 17 8 17 24 24
>> Sum: 266, Avg: 20, Min: 8, Max: 29, Diff: 21]
>> [Scan RS (ms): 0.3 0.0 0.0 0.2 0.0 0.0 0.0 0.0 0.0
>> 0.0 0.0 0.0 0.0
>> Avg: 0.1, Min: 0.0, Max: 0.3, Diff: 0.3]
>> [Object Copy (ms): 21.7 21.4 19.7 21.8 18.9 20.3 21.1
>> 20.0 21.6 18.2 21.0 20.2 23.5
>> Avg: 20.7, Min: 18.2, Max: 23.5, Diff: 5.3]
>> [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
>> 0.0 0.0 0.0 0.0
>> Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0]
>> [Termination Attempts : 41 40 33 38 51 32 43 31 32 45 34 1 42
>> Sum: 463, Avg: 35, Min: 1, Max: 51, Diff: 50]
>> [GC Worker End (ms): 152549892.1 152549892.0 152549892.1
>> 152549892.0 152549892.1 152549892.1 152549892.0 152549892.1
>> 152549892.0 152549892.0 152549892.0 152549892.1 152549892.1
>> Avg: 152549892.1, Min: 152549892.0, Max: 152549892.1, Diff: 0.1]
>> [GC Worker (ms): 87.3 87.3 87.3 87.3 87.3 87.3 87.2
>> 87.3 87.2 87.1 87.1 87.1 87.1
>> Avg: 87.2, Min: 87.1, Max: 87.3, Diff: 0.2]
>> [GC Worker Other (ms): 5.9 5.9 5.9 5.9 5.9 5.9 5.9 5.9
>> 5.9 27.8 6.0 6.1 6.1
>> Avg: 7.6, Min: 5.9, Max: 27.8, Diff: 21.9]
>> [Clear CT: 0.2 ms]
>> [Other: 26.7 ms]
>> [Choose CSet: 0.0 ms]
>> [Ref Proc: 24.7 ms]
>> [Ref Enq: 0.1 ms]
>> [Free CSet: 1.0 ms]
>> [Eden: 620M(932M)->0B(960M) Survivors: 92M->64M Heap:
>> 7037M(22480M)->6476M(22480M)]
>> [Times: user=1.34 sys=0.00, real=0.12 secs]
>> 152549.925: [G1Ergonomics (Heap Sizing) attempt heap expansion,
>> reason: humongous allocation request failed, allocation request:
>> 305776936 bytes]
>> 152549.925: [G1Ergonomics (Heap Sizing) expand the heap, requested
>> expansion amount: 260046848 bytes, attempted expansion amount:
>> 260046848 bytes]
>> 152549.925: [G1Ergonomics (Heap Sizing) did not expand the heap,
>> reason: heap expansion operation failed]
>> Total time for which application threads were stopped: 0.1240664 seconds
>> 152549.958: [G1Ergonomics (Heap Sizing) attempt heap expansion,
>> reason: humongous allocation request failed, allocation request:
>> 305776936 bytes]
>> 152549.958: [G1Ergonomics (Heap Sizing) expand the heap, requested
>> expansion amount: 260046848 bytes, attempted expansion amount:
>> 260046848 bytes]
>> 152549.958: [G1Ergonomics (Heap Sizing) did not expand the heap,
>> reason: heap expansion operation failed]
>> 152549.958: [G1Ergonomics (Heap Sizing) attempt heap expansion,
>> reason: allocation request failed, allocation request: 305776936 bytes]
>> 152549.958: [G1Ergonomics (Heap Sizing) expand the heap, requested
>> expansion amount: 305776936 bytes, attempted expansion amount:
>> 306184192 bytes]
>> 152549.958: [G1Ergonomics (Heap Sizing) did not expand the heap,
>> reason: heap expansion operation failed]
>> 152549.958: [Full GC
>>
>> Before allocating the humongous object, about 6 GB of the 22 GB heap
>> is used, but allocating a ~300 MB object caused a full GC? I do not
>> have an explanation for this.
>>
>> Thanks,
>> Jenny
>> On 3/27/2015 8:30 AM, Medan Gavril wrote:
>>> Hi,
>>>
>>> I saw your G1 presentation and found it good and interesting. I am
>>> new to G1 tuning and would need your suggestions if you have time.
>>>
>>> In our app, when we have a FULL GC:
>>> 1. it restarts the application
>>> 2. we cannot get the right data to understand the root cause
>>>
>>> We switched from CMS to G1 in order to avoid long FULL GCs.
>>>
>>> JRE 1.7 update 17 is being used.
>>>
>>> GC params:
>>>
>>> wrapper.java.additional.1=-server
>>> wrapper.java.additional.2=-XX:+PrintCommandLineFlags
>>> wrapper.java.additional.3=-XX:+UseG1GC
>>> wrapper.java.additional.7=-XX:MaxGCPauseMillis=2500
>>> wrapper.java.additional.8=-Dsun.rmi.dgc.client.gcInterval=0x7ffffffffffffffe
>>> wrapper.java.additional.9=-Dsun.rmi.dgc.server.gcInterval=0x7ffffffffffffffe
>>> wrapper.java.additional.10=-XX:+HeapDumpOnOutOfMemoryError
>>> wrapper.java.additional.11=-verbose:gc
>>> wrapper.java.additional.12=-XX:+PrintGCDetails
>>> wrapper.java.additional.13=-Ducmdb.home=%SERVER_BASE_DIR%
>>>
>>> wrapper.java.additional.52=-XX:+PrintGCTimeStamps
>>> wrapper.java.additional.53=-XX:+PrintGCApplicationStoppedTime
>>> wrapper.java.additional.55=-XX:+PrintAdaptiveSizePolicy
>>>
>>> The error from wrapper log is:
>>>
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | [Eden:
>>> 692M(968M)->0B(972M) Survivors: 56M->52M Heap:
>>> 8127M(22480M)->7436M(22480M)]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | [Times: user=1.51
>>> sys=0.02, real=0.19 secs]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.265:
>>> [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
>>> humongous allocation request failed, allocation request: 189267984
>>> bytes]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.265:
>>> [G1Ergonomics (Heap Sizing) expand the heap, requested expansion
>>> amount: 188743680 bytes, attempted expansion amount: 188743680 bytes]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.265:
>>> [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap
>>> expansion operation failed]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | Total time for which
>>> application threads were stopped: 0.2031307 seconds
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.285:
>>> [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
>>> humongous allocation request failed, allocation request: 189267984
>>> bytes]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.285:
>>> [G1Ergonomics (Heap Sizing) expand the heap, requested expansion
>>> amount: 188743680 bytes, attempted expansion amount: 188743680 bytes]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.285:
>>> [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap
>>> expansion operation failed]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.285:
>>> [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
>>> allocation request failed, allocation request: 189267984 bytes]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.285:
>>> [G1Ergonomics (Heap Sizing) expand the heap, requested expansion
>>> amount: 189267984 bytes, attempted expansion amount: 192937984 bytes]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.285:
>>> [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap
>>> expansion operation failed]
>>> INFO | jvm 1 | 2015/03/25 15:23:43.268 | 93238.285: [Full GC
>>> ERROR | wrapper | 2015/03/25 15:25:57.694 | JVM appears hung:
>>> Timed out waiting for signal from JVM.
>>> ERROR | wrapper | 2015/03/25 15:25:58.021 | JVM did not exit
>>>
>>>
>>> INFO | jvm 1 | 2015/03/25 15:14:41.289 | 92696.335:
>>> [G1Ergonomics (Concurrent Cycles) request concurrent cycle
>>> initiation, reason: occupancy higher than threshold, occupancy:
>>> 10603200512 bytes, allocation request: 14584896 bytes, threshold:
>>> 10607394780 bytes (45.00 %), source: concurrent humongous allocation]
>>> INFO | jvm 1 | 2015/03/25 15:14:41.289 | 92696.337:
>>> [G1Ergonomics (Concurrent Cycles) request concurrent cycle
>>> initiation, reason: requested by GC cause, GC cause: G1 Humongous
>>> Allocation]
>>> INFO | jvm 1 | 2015/03/25 15:14:41.289 | 92696.337:
>>> [G1Ergonomics (Concurrent Cycles) initiate concurrent cycle, reason:
>>> concurrent cycle initiation requested]
>>> INFO | jvm 1 | 2015/03/25 15:14:41.289 | 92696.337: [GC pause
>>> (young) 92696.338: [G1Ergonomics (CSet Construction) start choosing
>>> CSet, predicted base time: 165.76 ms, remaining time: 2334.24 ms,
>>> target pause time: 2500.00 ms]
>>> INFO | jvm 1 | 2015/03/25 15:14:41.289 | 92696.338:
>>> [G1Ergonomics (CSet Construction) add young regions to CSet, eden:
>>> 114 regions, survivors: 8 regions, predicted young region time:
>>> 32.04 ms]
>>> INFO | jvm 1 | 2015/03/25 15:14:41.289 | 92696.338:
>>> [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 114
>>> regions, survivors: 8 regions, old: 0 regions, predicted pause time:
>>> 197.80 ms, target pause time: 2500.00 ms]
>>> INFO | jvm 1 | 2015/03/25 15:14:41.398 | (initial-mark),
>>> 0.15117107 secs]
>>>
>>> We increased the wrapper timeout, but still got no useful data about
>>> the FULL GC.
>>>
>>> Any suggestion is highly appreciated. Currently I have suggested
>>> adding -XX:+PrintHeapAtGCExtended.
>>>
>>> Best Regards,
>>> Gabi Medan
>