Odd G1GC behavior on 8u91

Thomas Schatzl thomas.schatzl at oracle.com
Mon Aug 29 08:34:18 UTC 2016


Hi Vitaly,

  just some random comments, trying to answer the questions in a single
thread:

On Wed, 2016-08-24 at 14:43 -0400, Vitaly Davidovich wrote:
> Hi guys,
> 
> Hoping someone could shed some light on G1 behavior (as seen from the
> gc log) that I'm having a hard time understanding.  The root problem
> is that G1 enters a Full GC that takes many tens of seconds, and I
> need some advice on what could be causing it.
>
>   [Eden: 0.0B(30.0G)->0.0B(30.0G) Survivors: 0.0B->0.0B Heap:
> 95.2G(96.0G)->95.2G(96.0G)]
> [Times: user=0.08 sys=0.00, real=0.01 secs] 

As mentioned by Jenny, this odd-looking log line appears because a
preceding evacuation failure used up all space. That is, an evacuation
failure turns the regions containing objects that could not be copied
into old gen regions.
Since the heap is full after these evacuation failures, any allocation
in eden fails because there is not enough space, even though eden was
intended to be 30G, as you specified in your options.

The log output is indeed confusing, and actually this (very short) GC
is superfluous.
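
To put numbers on it, using the quoted log line:

  heap after the failed collection : 95.2G used of 96.0G  ->  ~0.8G free
  eden target                      : 30.0G (from your options)
  0.8G free < 30.0G                ->  the eden allocation fails immediately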

On Wed, 2016-08-24 at 18:36 -0400, Vitaly Davidovich wrote:
> Hi Jenny,
> 
> Very happy that you and Charlie got wind of this thread -- could use
> your expertise :).  I will email you the log directly (it's a bit
> verbose with all the safepoint + gc logging) as I believe the mailing
> list software will strip it.  To answer/comment on your email ...
> 
> I believe fixing the young gen size (and turning off adaptive
> sizing) was done intentionally.  The developers reported that letting
> G1 manage this ergonomically caused problems, although that may be
> because the max pause time goal is too aggressive (300ms for such a
> large heap).  This is something we're also looking at revisiting, but
> we're trying to get a handle on the other issues first.

Please do. Note that specifying a min/max young gen size overrides the
use of the pause time goal for sizing the young gen.
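
As a hypothetical illustration (the actual command line is not shown
in this thread), a configuration along these lines pins the young gen
and takes the pause time goal out of young gen sizing:

  java -XX:+UseG1GC -Xmx96g -Xms96g \
       -XX:NewSize=30g -XX:MaxNewSize=30g \
       -XX:MaxGCPauseMillis=300 ...

With NewSize equal to MaxNewSize, the young gen stays at 30g no matter
what the 300ms goal would suggest; removing (or widening) those two
bounds lets G1 size the young gen from the pause time goal again.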

> As for humongous objects, I don't see any trace of them in the log. 
> We actually saw some other poor G1 behavior with some older GC
> settings whereby the "Finalize Marking" phase was taking hundreds of
> seconds (same total heap size, but with a 15GB young), and those gc
> logs did indicate very large humongous object allocations.  I can certainly
> try sharing that log with you as well, but I think that's likely a
> different issue (it's possible it's related to the G1 worker threads
> marking through large arrays fully, but I'm not sure).  

Sounds like a combination of JDK-8057003 and JDK-8159422.
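
For reference, whether an allocation counts as humongous in G1 depends
only on the region size. A minimal sketch, assuming the default region
sizing for a 96G heap (heap size / 2048, capped at 32m):

  // Hypothetical illustration, not taken from the log in this thread.
  // With a 96G heap, G1's default region size is min(96G/2048, 32m) = 32m,
  // so any single object of at least half a region (16m) is allocated as
  // humongous, placed directly in old gen regions and bypassing eden.
  public class HumongousDemo {
      public static void main(String[] args) {
          byte[] big = new byte[17 * 1024 * 1024]; // ~17m >= 16m -> humongous
          System.out.println("allocated " + big.length + " bytes");
      }
  }

Setting -XX:G1HeapRegionSize explicitly moves that threshold
accordingly.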

Thanks,
  Thomas


