OOM without a full GC with G1GC
Per Liden
per.liden at oracle.com
Mon May 8 07:51:01 UTC 2017
Hi,
On 2017-05-06 00:12, nezih yigitbasi wrote:
> This JVM runs the Presto distributed SQL engine and some threads use
> native codecs to read from compressed files (I see from our logs that
> GZIP is being used). Is it possible that JNI threads delay the full GC
> and the JVM hits the GCLockerRetryAllocationCount limit?
Yes, heavy use of JNI GetPrimitiveArrayCritical (used by libzip) can
cause allocation starvation and premature OOM, which might be what
you're seeing here. Try setting GCLockerRetryAllocationCount to a large
number. That would at least help avoid the premature OOM.
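To make the mechanism concrete, here is a minimal sketch of the pattern
involved (my own illustration, not actual libzip or Presto code; the
Example class and sumBytes method are made up). Between
GetPrimitiveArrayCritical and ReleasePrimitiveArrayCritical the VM must
not move objects, so a needed GC is deferred until every thread has left
its critical region; with enough threads doing this continuously,
allocating threads can stall long enough to give up:

    #include <jni.h>

    /*
     * Hypothetical native method on a made-up "Example" class, shown
     * only to illustrate the JNI critical-region pattern.
     */
    JNIEXPORT jlong JNICALL
    Java_Example_sumBytes(JNIEnv *env, jclass clazz, jbyteArray array)
    {
        jlong sum = 0;
        jsize len = (*env)->GetArrayLength(env, array);

        /* The GC locker is engaged from here... */
        jbyte *data = (*env)->GetPrimitiveArrayCritical(env, array, NULL);
        if (data == NULL) {
            return 0;  /* allocation failure or pending exception */
        }

        for (jsize i = 0; i < len; i++) {
            sum += data[i];
        }

        /* ...until here. While any thread is inside such a region, a
         * pending GC is delayed and allocating threads may stall. */
        (*env)->ReleasePrimitiveArrayCritical(env, array, data, JNI_ABORT);
        return sum;
    }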
There's already a bug logged for this:
https://bugs.openjdk.java.net/browse/JDK-8137099
cheers,
Per
>
> 2017-05-05 14:46 GMT-07:00 Kirk Pepperdine <kirk.pepperdine at gmail.com>:
>
> Hi Nezih,
>
> What I see are some issues with TTSP (time to safepoint). I can also
> see that you have more than 3100 threads running, which would explain
> why there may be an issue with TTSP. I’d certainly strongly consider
> reducing thread counts. Sync durations are also quite long in many
> cases. There is an evacuation failure and then about 8 back-to-back
> young collections with tenured being completely full. I’ve no idea
> why there is no Full GC at that time. What I can see is that an
> unusually high number of collections are started after a GC Locker
> event, and the trace contains a complaint about a thread stalled in
> a JNI critical section. Do you have your own native library installed?
>
> Kind regards,
> Kirk
>
>> On May 5, 2017, at 11:27 PM, nezih yigitbasi
>> <nezihyigitbasi at gmail.com> wrote:
>>
>> Hi,
>> In one of our systems I have recently seen a JVM (1.8.0_112-b15)
>> go OOM without triggering a full GC. The interesting thing is
>> that even when the heap is almost full, a full GC is NOT
>> triggered, and then an OutOfMemoryError is thrown. Adding to
>> that, when I look at the heap dump the total retained size of
>> all the objects is ~60G, which suggests that there was plenty of
>> reclaimable space.
>>
>> Any ideas about what may be going on here?
>>
>> Here are the GC logs:
>> https://gist.github.com/nezihyigitbasi/b5e86f1f429fae57fbdaa62fd91a4195
>> Here is a list of our JVM args:
>> https://gist.github.com/nezihyigitbasi/909b8f3d7134d3a7ad6f9ceb3c65f747
>>
>> Thanks,
>> Nezih
>
>