Strange Full GC and OutOfMemory issue
Aleksey Shipilev
shade at redhat.com
Tue Apr 10 20:03:50 UTC 2018
Thanks!
On 04/10/2018 09:19 PM, Omar Kilani wrote:
> We had another one of these OOM situations this morning, I've posted
> the full GC log here:
>
> https://storage.googleapis.com/rtm-misc/20180410-gc.log
I see two issues:
$ grep -E "(free threshold|Degenerated GC)" 20180410-gc.log
; (1) This repeats often
[448587.698s][info][gc,ergo ] GC(4505) Adjusting free threshold to: 13% (3194M)
[448898.119s][info][gc,ergo ] GC(4509) Adjusting free threshold to: 8% (1966M)
[449126.580s][info][gc,ergo ] GC(4513) Adjusting free threshold to: 3% (737M)
[452465.971s][info][gc,start ] GC(4551) Pause Degenerated GC (Mark)
[452466.108s][info][gc ] GC(4551) Pause Degenerated GC (Mark) 22914M->3559M(24576M) 136.934ms
[452466.108s][info][gc,ergo ] GC(4551) Adjusting free threshold to: 13% (3194M)
[452795.943s][info][gc,ergo ] GC(4555) Adjusting free threshold to: 8% (1966M)
[453129.735s][info][gc,ergo ] GC(4559) Adjusting free threshold to: 3% (737M)
[453268.499s][info][gc,start ] GC(4564) Pause Degenerated GC (Outside of Cycle)
[453268.611s][info][gc ] GC(4564) Pause Degenerated GC (Outside of Cycle) 22622M->4088M(24576M) 111.339ms
I think this is caused by a way-too-low minimum free threshold: the heuristics bounce the threshold
back up to 13% free to start GC, a few successful concurrent cycles happen while the heuristics keep
lowering the threshold, until it gets too low, a cycle degenerates, and the free threshold bounces
back up again. Rinse and repeat. The real fix would be to reconsider the adaptive heuristics control
a bit. The workaround is a higher -XX:ShenandoahMinFreeThreshold=#; I would say around 10.
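As a sketch (assuming your 24G heap; depending on the build, the Shenandoah tuning flags may need
-XX:+UnlockExperimentalVMOptions), the workaround command line would look something like:

  $ java -Xmx24g -Xms24g \
         -XX:+UseShenandoahGC \
         -XX:+UnlockExperimentalVMOptions \
         -XX:ShenandoahMinFreeThreshold=10 \
         <your usual flags and main class>

That should keep the adaptive heuristics from driving the free threshold low enough to degenerate
the cycle.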
; (2) This is odd:
[475141.485s][info][gc] GC(4859) Pause Degenerated GC (Outside of Cycle) 6793M->6781M(24576M) 161.363ms
[475142.501s][info][gc] GC(4860) Pause Full (Allocation Failure) 6793M->5587M(24576M) 1014.266ms
[475142.671s][info][gc] GC(4861) Pause Degenerated GC (Outside of Cycle) 5594M->5587M(24576M) 167.807ms
Which is weird, because heap occupancy is not high. I would have said this is fragmentation due
to humongous allocations, but Full GC is supposed to fix that. And we still get a degenerated
cycle right after the Full GC! We would need to add more diagnostics to make the logs more
informative. I shall take a look at that tomorrow.
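In the meantime (just a suggestion on the reporting side, not a fix), a debug-level GC log would
give us more to correlate against, something like:

  $ java ... -Xlog:gc*=debug:file=gc.log:uptime,level,tags ...

where "..." stands for the rest of your command line. That keeps the same decorators as the current
log, but also records the ergonomics and heap details at debug level.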
> This one was with jdk10-b200
> (https://builds.shipilev.net/openjdk-shenandoah-jdk10/openjdk-shenandoah-jdk10-b200-x86_64-release.tar.xz)
> with an increased heap size (16Gb -> 24Gb).
Was there any previous build where it ran successfully?
If we give you the patches, would you be able to compile and run sh/jdk10 test builds?
Thanks,
-Aleksey