Why does G1GC do Full GC when it only needs 40 bytes?
yao
yaoshengzhe at gmail.com
Mon Jun 9 18:42:48 UTC 2014
Hi Martin,
I just quickly went through your GC log, and the following lines caught my
attention.
2014-06-06T17:11:29.109+0300: 34003.013: [GC pause (young) 34003.013:
[G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards:
127096, predicted base time: 293.06 ms, remaining time: 206.94 ms,
target pause time: 500.00 ms]
34003.013: [G1Ergonomics (CSet Construction) add young regions to
CSet, eden: 872 regions, survivors: 113 regions, predicted young
region time: 880.35 ms]
34003.013: [G1Ergonomics (CSet Construction) finish choosing CSet,
eden: 872 regions, survivors: 113 regions, old: 0 regions, predicted
pause time: 1173.41 ms, target pause time: 500.00 ms]
34003.014: [G1Ergonomics (Heap Sizing) attempt heap expansion,
reason: region allocation request failed, allocation request: 4194304
bytes]
34003.014: [G1Ergonomics (Heap Sizing) expand the heap, requested
expansion amount: 4194304 bytes, attempted expansion amount: 8388608
bytes]
34003.014: [G1Ergonomics (Heap Sizing) did not expand the heap,
reason: heap expansion operation failed]
34021.371: [G1Ergonomics (Heap Sizing) attempt heap expansion,
reason: recent GC overhead higher than threshold after GC, recent GC
overhead: 12.44 %, threshold: 10.00 %, uncommitted: 0 bytes,
calculated expansion amount: 0 bytes (20.00 %)]
34021.371: [G1Ergonomics (Concurrent Cycles) request concurrent cycle
initiation, reason: occupancy higher than threshold, occupancy:
32212254720 bytes, allocation request: 0 bytes, threshold: 0 bytes
(0.00 %), source: end of GC]
(to-space exhausted), 18.3580340 secs]
[Parallel Time: 15011.2 ms, GC Workers: 13]
[GC Worker Start (ms): Min: 34003013.8, Avg: 34003018.9, Max:
34003045.1, Diff: 31.3]
[Ext Root Scanning (ms): Min: 0.0, Avg: 8.4, Max: 16.7, Diff:
16.7, Sum: 109.6]
[Update RS (ms): Min: 0.0, Avg: 2.7, Max: 6.8, Diff: 6.8, Sum: 34.9]
[Processed Buffers: Min: 0, Avg: 39.9, Max: 153, Diff: 153, Sum: 519]
[Scan RS (ms): Min: 0.2, Avg: 22.6, Max: 35.7, Diff: 35.5, Sum: 294.2]
[Object Copy (ms): Min: 14949.9, Avg: 14968.9, Max: 15009.4,
Diff: 59.5, Sum: 194595.3]
[Termination (ms): Min: 0.1, Avg: 3.1, Max: 5.1, Diff: 5.0, Sum: 40.9]
[GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0]
[GC Worker Total (ms): Min: 14979.6, Avg: 15005.8, Max: 15011.0,
Diff: 31.3, Sum: 195075.8]
[GC Worker End (ms): Min: 34018024.7, Avg: 34018024.7, Max:
34018024.8, Diff: 0.1]
[Code Root Fixup: 0.0 ms]
[Clear CT: 2.0 ms]
[Other: 3344.9 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 199.9 ms]
[Ref Enq: 0.9 ms]
[Free CSet: 1.0 ms]
[Eden: 6976.0M(11.1G)->0.0B(11.1G) Survivors: 904.0M->0.0B Heap:
30.0G(30.0G)->30.0G(30.0G)]
Typically, this means there is a burst of object allocation in a short
amount of time and G1 thinks it needs more heap. However, the following Full
GC indicates the live object size is only ~15G, half of your heap. Combining
these two observations, we probably need to ask G1 to do more cleanup for your
application, to make sure there is enough free space to handle such allocation
traffic.
2014-06-06T13:48:17.164+0300: 21811.068: [Full GC 29G->15G(30G),
53.9089520 secs]
[Eden: 0.0B(5456.0M)->0.0B(1536.0M) Survivors: 0.0B->0.0B Heap:
29.9G(30.0G)->15.1G(30.0G)]
To avoid Full GC, here are two important G1 parameters you can tune
(suggested numbers are based on my personal experience):
-XX:*G1ReservePercent*=20 (default: 10)
Sets the percentage of reserve memory to keep free so as to reduce the risk
of to-space overflows. The default is 10 percent. When you increase or
decrease the percentage, make sure to adjust the total Java heap by the
same amount. This setting is not available in Java HotSpot VM, build 23.
-XX:*G1HeapWastePercent*=5 (default: 10)
Sets the percentage of heap that you are willing to waste. The Java HotSpot
VM does not initiate the mixed garbage collection cycle when the
reclaimable percentage is less than the heap waste percentage. The default
is 10 percent. This setting is not available in Java HotSpot VM, build 23.
In this way, G1 will reserve some heap (20%) for emergencies and do more
cleanup, which should help with your situation.
In addition, even if you want to make sure GC is always "on", it is still
better to set InitiatingHeapOccupancyPercent slightly higher than your live
object ratio. For example, you may want to set it to 55% (slightly larger
than 15.1 / 30, which is about 50%).
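Putting these suggestions together, a command line might look roughly like the
sketch below. This is only an illustration of the flags discussed above, not a
drop-in configuration; "YourApp.jar" is a placeholder, and the heap size and
pause target are taken from Martin's existing settings.

```shell
# Sketch of the suggested G1 tuning, assuming a fixed 30G heap and the
# existing 500 ms pause target. Values are starting points to experiment
# with, not verified settings.
java -Xms30G -Xmx30G \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=500 \
     -XX:G1ReservePercent=20 \
     -XX:G1HeapWastePercent=5 \
     -XX:InitiatingHeapOccupancyPercent=55 \
     -jar YourApp.jar
```

Note that this sets Xms equal to Xmx; with the current Xms20G/Xmx30G the heap
expansion failures in the log suggest the JVM was already at its ceiling anyway.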
The good news is, we have one of our production services running with a 100GB
heap under G1 with no Full GC at all, and its throughput is also better than
under CMS. Please try tuning these parameters and see if it helps. I also
found this page super useful:
http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html
Best
Shengzhe
On Mon, Jun 9, 2014 at 7:29 AM, Martin Makundi <
martin.makundi at koodaripalvelut.com> wrote:
> 1. "-XX:InitiatingHeapOccupancyPercent=0 should trigger concurrent gcs
> too much. "
>
> We use InitiatingHeapOccupancyPercent to make sure the gc is always
> active. How is InitiatingHeapOccupancyPercent related to Full GC:s?
>
> 2. "What is the jdk version?
>
> java version "1.7.0_55"
> Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
>
>
> 3. Can you share the gc log?"
>
> Log is available at 81.22.250.165/log
>
> 4. I wonder if it is possible to have mixed set of different/automatically
> varying sizes of G1HeapRegionSize's small for smaller objects and larger
> for larger objects?
>
>
> **
> Martin
>
>
> 2014-06-07 11:03 GMT+03:00 Martin Makundi <
> martin.makundi at koodaripalvelut.com>:
>
> Why does G1GC do Full GC when it needs only 40 bytes?
>>
>> Is there a way to tune this so that it would try to free up "some" chunk
>> of memory and escape the Full GC once enough memory has been freed? That
>> would lower the freeze time.
>>
>> See logs:
>>
>> [Times: user=12.89 sys=0.16, real=1.25 secs]
>> 2014-06-06T17:11:48.727+0300: 34022.631: [GC
>> concurrent-root-region-scan-start]
>> 2014-06-06T17:11:48.727+0300: 34022.631: [GC
>> concurrent-root-region-scan-end, 0.0000180 secs]
>> 2014-06-06T17:11:48.727+0300: 34022.631: [GC concurrent-mark-start]
>> 34022.632: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
>> allocation request failed, allocation request: 40 bytes]
>> 34022.632: [G1Ergonomics (Heap Sizing) expand the heap, requested
>> expansion amount: 8388608 bytes, attempted expansion amount: 8388608 bytes]
>> 34022.632: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
>> heap expansion operation failed]
>> {Heap before GC invocations=1867 (full 1):
>> garbage-first heap total 31457280K, used 31453781K
>> [0x00007f1724000000, 0x00007f1ea4000000, 0x00007f1ea4000000)
>> region size 8192K, 0 young (0K), 0 survivors (0K)
>> compacting perm gen total 524288K, used 166271K [0x00007f1ea4000000,
>> 0x00007f1ec4000000, 0x00007f1ec4000000)
>> the space 524288K, 31% used [0x00007f1ea4000000, 0x00007f1eae25fdc8,
>> 0x00007f1eae25fe00, 0x00007f1ec4000000)
>> No shared spaces configured.
>> 2014-06-06T17:11:48.728+0300: 34022.632: [Full GC 29G->13G(30G),
>> 47.7918670 secs]
>>
>>
>> Settings:
>>
>> -server -XX:InitiatingHeapOccupancyPercent=0 -Xss4096k
>> -XX:MaxPermSize=512m -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc
>> -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX:+UseFastAccessorMethods
>> -XX:ReservedCodeCacheSize=128m -XX:-UseStringCache -XX:+UseGCOverheadLimit
>> -Duser.timezone=EET -XX:+UseCompressedOops -XX:+DisableExplicitGC
>> -XX:+AggressiveOpts -XX:CMSInitiatingOccupancyFraction=70
>> -XX:+ParallelRefProcEnabled -XX:+UseAdaptiveSizePolicy
>> -XX:MaxGCPauseMillis=500 -XX:+UseG1GC -XX:G1HeapRegionSize=8M
>> -XX:GCPauseIntervalMillis=10000 -XX:+PrintGCDetails -XX:+PrintHeapAtGC
>> -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -XX:+PrintGC
>> -Xloggc:gc.log
>>
>
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
>