G1GC compaction algorithm
Martin Makundi
martin.makundi at koodaripalvelut.com
Wed Jul 16 19:03:43 UTC 2014
Hi!
Now I get lots of huge drops:
1. [Times: user=2.47 sys=0.02, real=0.29 secs]
5775.731: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
allocation request failed, allocation request: 160 bytes]
5775.731: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion
amount: 4194304 bytes, attempted expansion amount: 4194304 bytes]
5775.731: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
heap expansion operation failed]
{Heap before GC invocations=711 (full 0):
garbage-first heap total 31457280K, used 31077261K [0x00007f9d8c000000,
0x00007fa50c000000, 0x00007fa50c000000)
region size 4096K, 0 young (0K), 0 survivors (0K)
compacting perm gen total 524288K, used 159575K [0x00007fa50c000000,
0x00007fa52c000000, 0x00007fa52c000000)
the space 524288K, 30% used [0x00007fa50c000000, 0x00007fa515bd5fe8,
0x00007fa515bd6000, 0x00007fa52c000000)
No shared spaces configured.
2014-07-16T10:41:15.638+0300: 5775.731: [Full GC 29G->12G(30G), 47.1685170
secs]
2. [Times: user=3.24 sys=0.04, real=0.32 secs]
9555.819: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
humongous allocation request failed, allocation request: 22749376 bytes]
9555.819: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion
amount: 25165824 bytes, attempted expansion amount: 25165824 bytes]
9555.819: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
heap expansion operation failed]
9555.819: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
humongous allocation request failed, allocation request: 22749376 bytes]
9555.819: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion
amount: 25165824 bytes, attempted expansion amount: 25165824 bytes]
9555.819: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
heap expansion operation failed]
9555.819: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
allocation request failed, allocation request: 22749376 bytes]
9555.819: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion
amount: 22749376 bytes, attempted expansion amount: 25165824 bytes]
9555.819: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
heap expansion operation failed]
{Heap before GC invocations=1035 (full 1):
garbage-first heap total 31457280K, used 28101287K [0x00007f9d8c000000,
0x00007fa50c000000, 0x00007fa50c000000)
region size 4096K, 31 young (126976K), 31 survivors (126976K)
compacting perm gen total 524288K, used 166022K [0x00007fa50c000000,
0x00007fa52c000000, 0x00007fa52c000000)
the space 524288K, 31% used [0x00007fa50c000000, 0x00007fa516221960,
0x00007fa516221a00, 0x00007fa52c000000)
No shared spaces configured.
2014-07-16T11:44:15.727+0300: 9555.819: [Full GC 26G->13G(30G), 50.5019980
secs]
3. [Times: user=4.33 sys=0.06, real=0.48 secs]
14858.255: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
allocation request failed, allocation request: 24 bytes]
14858.255: [G1Ergonomics (Heap Sizing) expand the heap, requested
expansion amount: 4194304 bytes, attempted expansion amount: 4194304 bytes]
14858.255: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
heap expansion operation failed]
{Heap before GC invocations=1422 (full 2):
garbage-first heap total 31457280K, used 31174168K [0x00007f9d8c000000,
0x00007fa50c000000, 0x00007fa50c000000)
region size 4096K, 0 young (0K), 0 survivors (0K)
compacting perm gen total 524288K, used 176349K [0x00007fa50c000000,
0x00007fa52c000000, 0x00007fa52c000000)
the space 524288K, 33% used [0x00007fa50c000000, 0x00007fa516c37428,
0x00007fa516c37600, 0x00007fa52c000000)
No shared spaces configured.
2014-07-16T13:12:38.163+0300: 14858.255: [Full GC 29G->10G(30G), 41.2695750
secs]
Is there some parameter I can tune so that the phases preceding the Full
GC already do most of this cleaning (and thus avoid the Full GC)? From my
perspective it looks like the GC has been very lazy, and a full gc is
needed to do the job right...
My parameters:
-server -XX:InitiatingHeapOccupancyPercent=0
-XX:+UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10
-XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m
-XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc
-XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX:+UseFastAccessorMethods
-XX:ReservedCodeCacheSize=128m -XX:-UseStringCache -XX:+UseGCOverheadLimit
-Duser.timezone=EET -XX:+UseCompressedOops -XX:+DisableExplicitGC
-XX:+AggressiveOpts -XX:CMSInitiatingOccupancyFraction=90
-XX:+ParallelRefProcEnabled -XX:+UseAdaptiveSizePolicy
-XX:MaxGCPauseMillis=75 -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC
-XX:G1HeapRegionSize=5M -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails
-XX:+PrintHeapAtGC -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps
-XX:+PrintGC -Xloggc:gc.log
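One detail worth noting in the flags above: the logs report a region size of
4096K even though -XX:G1HeapRegionSize=5M was requested, because G1 region
sizes must be a power of two (between 1M and 32M), so 5M gets rounded. A
minimal sketch of an invocation that requests the size G1 actually uses
(app.jar is a placeholder):

```shell
# G1 rounds -XX:G1HeapRegionSize to a power of two between 1M and 32M;
# the 5M requested above becomes the 4096K regions seen in the log,
# so asking for 4M directly makes the configuration match reality.
java -XX:+UseG1GC -XX:G1HeapRegionSize=4M -Xms20G -Xmx30G -jar app.jar
```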
Martin
2014-07-16 15:00 GMT+03:00 Thomas Schatzl <thomas.schatzl at oracle.com>:
> Hi,
>
> On Wed, 2014-07-16 at 09:45 +0200, Pas wrote:
>
>
> > On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi
> > <martin.makundi at koodaripalvelut.com> wrote:
>
> > I can add a comment, but what do you mean with
> > "continuous parallel
> > compaction" if I may ask, and what exact purpose does
> > it serve?
>
> > The terms are too generic to me to discern any
> > particular functionality.
>
> > I mean that currently compacting occurs on full gc meaning
> > stop-the-world. Would be nice if compacting would occur in
> > parallel while app is running and taking into account all
> > timing targets such as MaxGCPauseMillis etc.
>
> You mean in-place space reclamation during young gc as it occurs during
> full gc. Generally, as long as there are free regions, the current
> scheme of evacuating into other areas is sufficient and simpler (read:
> faster).
> This does not say that we won't think about it in the future if it is
> useful (e.g. to recover more nicely from evacuation failures).
>
> I agree with Yaoshengzhe that JDK-8038487 is what you should look out
> for. About the 60-70% occupied heap: the value might be so low because
> of fragmentation at the end of humongous objects; see
> https://bugs.openjdk.java.net/browse/JDK-8031381 for a discussion.
> However, since full gc can reclaim enough space to fit these objects,
> this does not seem to be the case. It may also mean that these large
> objects are simply very short-lived, given that the heap shrinks
> dramatically after full gc, so something like JDK-8027959 will help.
>
> The most appropriate solution for your case can only be determined from
> PrintHeapAtGC/Extended (at every GC) or G1PrintRegionLivenessInfo (at
> the end of every marking) output.
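For reference, G1PrintRegionLivenessInfo is a diagnostic option, so it needs
-XX:+UnlockDiagnosticVMOptions; a sketch of enabling both kinds of output
(app.jar is a placeholder):

```shell
# Per-GC heap dumps plus per-region liveness at the end of each marking.
# G1PrintRegionLivenessInfo is diagnostic and must be unlocked first.
java -XX:+UseG1GC \
     -XX:+PrintHeapAtGC \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:+G1PrintRegionLivenessInfo \
     -Xloggc:gc.log -jar app.jar
```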
>
> If you have lots of arrays that just straddle a region boundary, and so
> basically occupy twice their size, an alternative way to get back some
> memory would be to size those arrays slightly smaller.
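The array-sizing idea above can be sketched as follows, under assumed
values: the 4 MB region size from the log, and a HEADER_SLACK allowance of
64 bytes for the array header plus alignment (an assumption, not a measured
value). An array whose total size ends just past a region boundary occupies
an extra, almost-empty region, so backing arrays are sized to land just
under the boundary instead:

```java
// Sketch: choose a byte[] length that stays just below a whole number
// of G1 regions, so the array does not spill into one more region.
public class RegionSizedArrays {
    static final long REGION_SIZE = 4L * 1024 * 1024; // from the log: 4096K
    static final long HEADER_SLACK = 64;              // assumed header + padding

    // Largest byte[] length whose object fits within `regions` G1 regions.
    static int maxLengthWithin(int regions) {
        return (int) (regions * REGION_SIZE - HEADER_SLACK);
    }

    public static void main(String[] args) {
        int len = maxLengthWithin(2);
        byte[] backing = new byte[len]; // fits in 2 regions, not 2 + a sliver
        System.out.println(backing.length);
    }
}
```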
>
> > In case of G1, compacting occurs in the mixed phase too. The
> > documentation
> > (http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) is
> > unclear, but we can assume that even unreachable (dead) humongous
> > objects are collected in that phase (for example the description of
>
> No. At the moment only marking or full gc clears unreachable (dead)
> humongous objects.
>
> Only JDK-8027959 (which is out for review) removes this restriction.
> There are some restrictions to this at this time, but they are purely
> due to cost/benefit tradeoffs.
>
> They are sort of logically assigned to the old generation, which means
> that humongous region treatment breaks pure generational thinking with
> that change.
>
> > this bug https://bugs.openjdk.java.net/browse/JDK-8049332 implies
> > that we're correct), they're just not moved (evicted, as they
> > basically always represent one region).
>
> JDK-8049332 only mentions full GCs so I cannot follow that line of
> thought.
>
> Thanks,
> Thomas
More information about the hotspot-gc-use mailing list