G1OldCSetRegionThresholdPercent under ExperimentalFlag
Thomas Schatzl
thomas.schatzl at oracle.com
Fri Jun 23 12:29:17 UTC 2017
Hi,
On Thu, 2017-06-22 at 15:11 -0700, Sundara Mohan M wrote:
> Thanks for the insights on Ergo.
>
> I was trying to migrate from CMS to G1GC. The app has a low-memory
> handler (a thread that checks memory utilization via
> Runtime.freeMemory() and evicts some in-memory data if utilization
> exceeds a threshold).
>
> With CMS this handler was not invoked frequently (for example: with
> 60K objects it would kick in, remove ~5K LRU objects and continue
> regular operation). When I moved to G1GC the handler started kicking
> in frequently (for example: with 60K objects it removes 5K LRU
> objects, then kicks in again shortly afterwards to remove another 5K,
> and keeps going until only 10K objects are left).
>
> So I was trying to find out why the mixed GCs don't clean up quickly
> enough before my low-memory handler kicks in.
As Bernd mentioned, G1 reclaims space occupied by dead objects only
very lazily, so such an approach has its limits.
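For reference, the pattern described above looks roughly like the
following. This is only a sketch of my understanding; the cache type,
the threshold and the eviction count are assumptions since the actual
code was not posted:

  import java.util.Iterator;
  import java.util.Map;

  // Rough sketch of the low-memory handler described above; names,
  // threshold and eviction count are invented for illustration only.
  final class LowMemoryHandler implements Runnable {
      private static final double THRESHOLD = 0.85;  // assumed utilization limit
      private final Map<Long, Object> lruCache;      // assumed LRU-ordered map

      LowMemoryHandler(Map<Long, Object> lruCache) { this.lruCache = lruCache; }

      public void run() {
          Runtime rt = Runtime.getRuntime();
          while (!Thread.currentThread().isInterrupted()) {
              // freeMemory() only reflects space the collector has actually
              // reclaimed, so with G1's lazy reclamation the heap can look
              // "full" for longer than it did with CMS.
              long used = rt.totalMemory() - rt.freeMemory();
              if ((double) used / rt.maxMemory() > THRESHOLD) {
                  Iterator<Long> it = lruCache.keySet().iterator();
                  for (int i = 0; i < 5_000 && it.hasNext(); i++) {  // drop ~5K LRU entries
                      it.next();
                      it.remove();
                  }
              }
              try { Thread.sleep(1_000); } catch (InterruptedException e) { return; }
          }
      }
  }

This is exactly the kind of handler that lazy reclamation confuses:
used = totalMemory() - freeMemory() can stay high between mixed GCs
even though a lot of that memory may already be garbage.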
I think CMS has this CMSTriggerInterval option that periodically starts
a background collection, which, afaik, immediately reclaims space at
the end of the cycle (updating its free lists).
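For illustration only (the interval is in milliseconds afaik, and the
value here is just an example):

  java -XX:+UseConcMarkSweepGC -XX:CMSTriggerInterval=60000 <your-app>

That would ask CMS to start a background cycle at least once a minute.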
Currently one could get updated liveness information by regularly
starting marking via jcmd/System.gc() with
-XX:+ExplicitGCInvokesConcurrent (a sketch follows after the caveats
below), but it has a few drawbacks of its own:
- starts liveness analysis/marking immediately, potentially messing
with your pause time requirements
- unknown impact on prediction
- does not do space reclamation on its own, as reclamation will be
piggy-backed on the next few GCs
- will interrupt a currently running space reclamation (mixed GC)
phase, i.e. if you spam these, G1 will never reclaim any memory.
- "creatively" reuses System.gc(), which might not be possible or
advisable in many cases.
- all of the above is implementation-defined behavior.
There may be other caveats.
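For completeness, such a setup would look roughly like the following
sketch. The one-minute period is made up for illustration, and all of
the caveats above apply:

  import java.util.concurrent.Executors;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.TimeUnit;

  // Assumes the VM was started with
  //   -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent
  // so that System.gc() starts a concurrent marking cycle instead of a
  // full GC. Space reclamation still only happens piggy-backed on the
  // following (mixed) GCs.
  final class PeriodicMarking {
      static ScheduledExecutorService start() {
          ScheduledExecutorService ses =
              Executors.newSingleThreadScheduledExecutor();
          // Arbitrary example period; calling this too often will keep
          // interrupting the mixed GC phase as described above.
          ses.scheduleAtFixedRate(System::gc, 1, 1, TimeUnit.MINUTES);
          return ses;
      }
  }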
In a VM where, by design, you do not have a lot of control over memory
management, it is definitely problematic to layer another memory
manager on top when one of them does not know anything about the other.
Such an algorithm may also interact badly with future changes, e.g. the
adaptive IHOP [1] feature in JDK 9.
> Though I see the number of young-gen collections and the time taken
> to clean has come down by ~40%.
>
> Another issue (maybe this is expected): after increasing
> G1OldCSetRegionThresholdPercent from 10% to 20% I started seeing a
> few mixed GCs taking 1s (most of the time is spent on UpdateRS,
> MaxPause=500ms). Will get back once I have a better understanding of
> what is happening.
The option allows G1 to add more regions to the set of regions to be
collected. This implies potentially longer pauses if the predictions
are incorrect in the first place.
That is one reason why this is an experimental option.
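For reference, since it is experimental the option has to be unlocked
explicitly; an example invocation (values only illustrative, matching
the ones you mentioned):

  java -XX:+UseG1GC -XX:MaxGCPauseMillis=500 \
       -XX:+UnlockExperimentalVMOptions \
       -XX:G1OldCSetRegionThresholdPercent=20 <your-app>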
Thanks,
Thomas
[1] https://docs.oracle.com/javase/9/gctuning/garbage-first-garbage-collector.htm#GUID-572C9203-AB27-46F1-9D33-42BA4F3C6BF3