Strange G1 behavior

Kirk Pepperdine kirk.pepperdine at gmail.com
Wed Oct 11 17:05:31 UTC 2017


Hi,

I should add that Charlie Hunt contributed some invaluable information on our Friends of jClarity list. He looked at the adaptive sizing records in the log and came to the conclusion that the regions in tenured were not collected because they are over the 85% live mark. According to the AS records this is true, but then one has to ask: how does a Full GC recover so much memory if so many tenured regions are more than 85% live? I have other logs from older JVMs where heap occupancy after each mixed collection is greater than the occupancy after the previous one, and then you get the Full GC that collects everything. I didn’t spam the list with those logs because they didn’t have print adaptive sizing turned on and they were from older JVMs, meaning the problem may have already been solved. This JVM is a 141 build, so while it behaves somewhat differently, it’s the same pattern: occupancy spikes up until only a Full GC can help, the Full GC runs, and life goes on as if nothing had happened.
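
For anyone who wants to cross-check this against their own logs: the 85% figure corresponds to the experimental G1MixedGCLiveThresholdPercent flag (default 85 on the 8 series), and the AS records come from -XX:+PrintAdaptiveSizePolicy. A minimal flag set to capture the same data on an 8u JVM would look something like the following (my assumption, not taken from the customer’s actual command line):

    java -XX:+UseG1GC \
         -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -XX:+PrintAdaptiveSizePolicy \
         -XX:+UnlockExperimentalVMOptions \
         -XX:G1MixedGCLiveThresholdPercent=85 \
         MyApp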

Kind regards,
Kirk

> On Oct 11, 2017, at 11:48 AM, Kirk Pepperdine <kirk.pepperdine at gmail.com> wrote:
> 
> Hi,
> 
> I’ve got a GC log that contains records where the “to-space” is resized to 0 regions. This setting persists, causing overall heap occupancy to spike, which then triggers a Full GC. This is behavior I’ve observed in many versions of the JVM, from 7.x to 8.x. I was waiting for a recent build to display it, and to see if I could sort out more information, before raising the issue. I have a customer (Wikipedia) that can reproduce this behavior and is willing to work with us to help resolve the issue. I’ve attached their GC log.
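> 
> To make the pattern easy to spot in a log this size, a quick scan for the two markers is enough. Here’s a minimal throwaway sketch (my own helper, nothing standard), assuming a -XX:+PrintGCDetails style log:
> 
>     import java.io.IOException;
>     import java.nio.file.Files;
>     import java.nio.file.Paths;
>     
>     public class ToSpaceScan {
>         public static void main(String[] args) throws IOException {
>             long exhausted = 0, fulls = 0;
>             for (String line : Files.readAllLines(Paths.get(args[0]))) {
>                 // evacuation failures: pauses that ran out of to-space
>                 if (line.contains("to-space exhausted")) exhausted++;
>                 // the Full GCs that the occupancy spike eventually forces
>                 if (line.contains("[Full GC")) fulls++;
>             }
>             System.out.println(exhausted + " to-space exhausted, " + fulls + " Full GCs");
>         }
>     }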
> 
> It all starts with the young gen collection….
> 
> 2017-10-08T04:19:26.417+0000: 240984.824
> …. snip
> (to-space exhausted), 0.2808928 secs].    <—— bad; generally means survivor is too small, which is why (Charlie) we still need the age data.
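> 
> (For the next capture: the age data would come from -XX:+PrintTenuringDistribution, which this log doesn’t have. If survivor really is too small, the knobs to experiment with would be along the lines of the following; the values are just the defaults written out, not a verified recommendation:
> 
>     -XX:+PrintTenuringDistribution -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=50
> 
> The point is mainly to get the tenuring ages into the log.)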
> …. SNIP
> 
>   [Eden: 0.0B(608.0M)->0.0B(612.0M) Survivors: 4096.0K->0.0B Heap: 11.9G(12.0G)->11.9G(12.0G)].    <—— you are now painted into a corner that most likely needs a Full GC to get out of...
> …. SNIP to the next young…
>   [Eden: 0.0B(612.0M)->0.0B(612.0M) Survivors: 0.0B->0.0B Heap: 11.9G(12.0G)->11.9G(12.0G)].          <—— now a Full GC is coming, no questions.
> ….SNIP….
> 2017-10-08T04:19:26.714+0000: 240985.121: [Full GC                                                                               <—— and here it is….
> [Times: user=52.17 sys=0.12, real=26.26 secs] 
> 
> After which everything is normal, but the cycle may repeat itself. The condition generally takes several days to show up, so I haven’t yet seen it reproduced in a test environment. Hopefully there is a test case that can run in a reasonable amount of time.
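> 
> As a starting point for such a test case, and purely a guess at the trigger: an allocator that churns short-lived garbage while slowly growing a set of long-lived objects, so tenured regions end up mostly but not entirely live, might compress those several days into minutes. A sketch, with every size and ratio being an assumption:
> 
>     import java.util.ArrayList;
>     import java.util.List;
>     import java.util.concurrent.ThreadLocalRandom;
>     
>     public class G1ToSpaceRepro {
>         public static void main(String[] args) {
>             List<byte[]> retained = new ArrayList<>();  // long-lived data that should fill tenured
>             ThreadLocalRandom rnd = ThreadLocalRandom.current();
>             while (true) {
>                 // churn: short-lived allocations that should die in young
>                 byte[] garbage = new byte[rnd.nextInt(1024, 64 * 1024)];
>                 garbage[0] = 1;
>                 // slow leak: retain roughly 1 in 64 allocations so live data keeps growing
>                 if (rnd.nextInt(64) == 0) {
>                     retained.add(new byte[64 * 1024]);
>                 }
>                 // drop a little now and then so old regions stay mostly, not fully, live
>                 if (retained.size() > 12_000) {
>                     retained.subList(0, 500).clear();
>                 }
>             }
>         }
>     }
> 
> Run it against a deliberately small heap, e.g. java -Xmx1g -XX:+UseG1GC -XX:+PrintGCDetails G1ToSpaceRepro, and watch for the to-space exhausted records.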
> 
> Kind regards,
> Kirk Pepperdine
> 
> <wdqs-blazegraph_jvm_gc.pid29188.log.0.zip>
> 



