G1gc compaction algorithm

Martin Makundi martin.makundi at koodaripalvelut.com
Thu Jul 17 10:31:16 UTC 2014


>
>
> > -server -XX:InitiatingHeapOccupancyPercent=0
> > -XX:+UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10
>
> G1MixedGCLiveThresholdPercent is the upper threshold for determining
> whether old gen regions can be collected. I.e. only old regions less
> than 10% occupied are collected ever.
>
> Which means, you are going for an "expected" heap size of 120G (12G *
> 100 / G1MixedGCLiveThresholdPercent) - which obviously does not fit into
> 30G of heap. The result is inevitable full gcs.
>
> (The documentation can be read both ways I think)
>

Thanks, we are trying the value 90 today; there have already been a couple
of full GCs.
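
If we read the formula the same way, a threshold of 90 should put the
"expected" heap size at roughly 12G * 100 / 90 = ~13.3G, which at least
fits comfortably into the 30G heap.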


> Just setting the default (65) will give a more reasonable "expected"
> heap size. (~20G)
>
> Depending on length and amount of humongous allocation bursts, you also
> want to increase -XX:InitiatingHeapOccupancyPercent to something larger
> than G1MixedGCLiveThresholdPercent, otherwise concurrent marking will
> run all the time.


Is it a problem that concurrent marking runs all the time? It's a bit
unclear what that means; our goal is to force the GC to earn its keep all
the time, with no idle time. However, we are unsure whether such a low
G1MixedGCLiveThresholdPercent will adversely affect some other GC features.
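
Just to make sure we understand the suggestion: with for example the
default G1MixedGCLiveThresholdPercent of 65, I assume it would mean
something along the lines of (a sketch only, not tested):

  -XX:InitiatingHeapOccupancyPercent=70 -XX:+UnlockExperimentalVMOptions
  -XX:G1MixedGCLiveThresholdPercent=65

instead of our current InitiatingHeapOccupancyPercent=0, so that concurrent
marking only starts once heap occupancy reaches 70% instead of running
continuously.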


> You may also need to increase G1MixedGCLiveThresholdPercent if this
> buffer of 10G is too small.
>
> >  -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m
> > -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc
>
> -Xnoclassgc disables all class unloading, even during full gc.


Class unloading halted the whole system for several minutes every time
classes were unloaded. It happened very often and made the system
practically unusable, so we disabled it completely.


> If you
> notice increasing "Ext root scan time" over time, this setting is set
> wrongly. Note that 7u55 can only do class unloading at full gc. Only
> 8u40 and later will also do this at concurrent mark.


> >  -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA
> > -XX:+UseFastAccessorMethods -XX:ReservedCodeCacheSize=128m
> > -XX:-UseStringCache -XX:+UseGCOverheadLimit -Duser.timezone=EET
> > -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:+AggressiveOpts
> > -XX:CMSInitiatingOccupancyFraction=90 -XX:+ParallelRefProcEnabled
>
> You can remove CMSInitiatingOccupancyFraction. It does not have an
> effect with G1.
>

Ok, good to know.


>
> > -XX:+UseAdaptiveSizePolicy -XX:MaxGCPauseMillis=75
> > -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC -XX:G1HeapRegionSize=5M
>
> G1HeapRegionSize must be a power of two. I think G1 will either round
> this to either 4M or 8M - check with -XX:+PrintFlagsFinal.
>

Ok.
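
For the record, I assume we can check what the region size actually becomes
with something like:

  java -XX:+UseG1GC -XX:G1HeapRegionSize=5M -XX:+PrintFlagsFinal -version | grep HeapRegionSize

and then set 4M or 8M explicitly if needed.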

>
> >  -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails -XX:+PrintHeapAtGC
>
> MaxGCPauseMillis=75 in combination with GCPauseIntervalMillis=1000 seems
> to be a tough target, at least for 7uX. Does your application really
> need such a low pause time? It may be achievable.
>

It's a web application, so basically users will experience significant
inconvenience from recurring pauses of over 100-200 ms. I am not sure how
this target time translates to actual user experience, so 75 is a somewhat
safe choice.
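
If I understand the combination correctly, MaxGCPauseMillis=75 together
with GCPauseIntervalMillis=1000 asks G1 to keep pauses to about 75 ms in
any 1000 ms window, i.e. roughly 7.5% of wall-clock time spent in pauses,
which felt like a reasonable budget for interactive use.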


> From the log, already collecting the young generation breaks that pause
> time goal. Try -XX:G1NewSizePercent=1 to allow smaller young generation
> sizes.
>

How does this work together with the adaptive sizing?
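In other words, I assume we would add something like (sketch only):

  -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=1

and the adaptive policy would then be free to shrink the young generation
down to 1% of the heap, while still growing it when the pause time goal
allows. Is that right?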

Martin


>
> >  -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -XX:+PrintGC
> > -Xloggc:gc.log
>
> Thanks,
>   Thomas
>
>
>
>