java.lang.OutOfMemoryError: GC overhead limit exceeded

Jose Otavio Carlomagno Filho jocf83 at gmail.com
Mon Mar 10 16:19:18 UTC 2014


Luca,

I believe these timeout messages are related to MQ. We used to get them
quite a lot when we were using JBoss 4 with MQ; if I remember correctly,
they appeared because we were keeping connections and sessions to some JMS
queues open.
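
For what it's worth, the fix on our side (again, if I remember correctly)
was simply making sure every connection and session was closed once we
were done with it. A minimal sketch of the pattern, with placeholder JNDI
names rather than your actual configuration:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class JmsSendExample {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // Placeholder JNDI names - use whatever your JBoss config binds.
            ConnectionFactory cf =
                (ConnectionFactory) ctx.lookup("ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("queue/exampleQueue");

            Connection conn = null;
            try {
                conn = cf.createConnection();
                Session session =
                    conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("ping"));
            } finally {
                // Per the JMS spec, closing the connection also closes its
                // sessions, producers and consumers; leaving connections
                // open is what caused the timeout warnings for us.
                if (conn != null) {
                    conn.close();
                }
            }
        }
    }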

I don't think this has any relation with your problem, but it might be
worth checking anyway.

Jose


On Fri, Mar 7, 2014 at 5:09 AM, luke <luke.bike at gmail.com> wrote:

> Thanks Jose,
> I don't think there are any "System.gc()" calls in my application, but I'm
> working on a very large application, so I'll check whether an explicit GC
> call has been introduced somewhere.
>
> The problem does seem to be exactly what you described: "GC is running but
> is unable to free space in the heap". Is it possible that the GC cannot
> get native CPU threads and so cannot run correctly?
>
> A few minutes before this OutOfMemoryError, in my JBoss log I see a
> SocketTimeoutException:
>
> Caused by: java.net.SocketTimeoutException: Read timed out
>
> Could this be related to my problem?
> Thanks,
> luca
>
>
>
> 2014-03-06 20:53 GMT+01:00 Jose Otavio Carlomagno Filho <jocf83 at gmail.com>:
>
>> If I'm not mistaken, "GC overhead limit exceeded" means the GC is running
>> almost constantly but is unable to free significant space in the heap (by
>> default it is thrown when more than 98% of total time is spent in GC
>> while less than 2% of the heap is recovered).
>>
>> In many cases, this is caused by the application repeatedly calling
>> "System.gc()", which normally triggers a full GC. You should check your
>> application code and remove these calls if they exist.
>>
>> Additionally, you can add "-XX:+DisableExplicitGC" to your startup
>> parameters; that way, a full GC will not run when your application calls
>> "System.gc()".
>>
>> Jose
>>
>>
>> On Thu, Mar 6, 2014 at 12:13 PM, luke <luke.bike at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I'm getting java.lang.OutOfMemoryError in a Java application running on
>>> JBoss AS.
>>> Strangely, the OutOfMemoryError happens when the application is not
>>> under heavy load.
>>> In my application log I found this exception:
>>>
>>> WARN  [org.jboss.mq.Connection] Connection failure, use
>>> javax.jms.Connection.setExceptionListener() to handle this error and
>>> reconnect
>>> org.jboss.mq.SpyJMSException: Exiting on IOE; - nested throwable:
>>> (java.net.SocketTimeoutException: Read timed out)
>>>     at org.jboss.mq.SpyJMSException.getAsJMSException(SpyJMSException.java:72)
>>>     at org.jboss.mq.Connection.asynchFailure(Connection.java:423)
>>>     at org.jboss.mq.il.uil2.UILClientILService.asynchFailure(UILClientILService.java:174)
>>>     at org.jboss.mq.il.uil2.SocketManager$ReadTask.handleStop(SocketManager.java:466)
>>>     at org.jboss.mq.il.uil2.SocketManager$ReadTask.run(SocketManager.java:395)
>>>     at java.lang.Thread.run(Thread.java:619)
>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>>
>>> and after some minutes:
>>>
>>> 2014-03-06 01:09:32,173 WARN  [org.jboss.mq.Connection] Exception
>>> listener ended abnormally:
>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>     at java.lang.ThreadLocal.createInheritedMap(ThreadLocal.java:217)
>>>     at java.lang.Thread.init(Thread.java:358)
>>>     at java.lang.Thread.<init>(Thread.java:445)
>>>     at org.jboss.mq.SpyMessageConsumer.setMessageListener(SpyMessageConsumer.java:237)
>>>     at it.oneans.iemx.qf.ejb.QueueService$QueueServiceExceptionListener.onException(QueueService.java:193)
>>>     at org.jboss.mq.Connection$ExceptionListenerRunnable.run(Connection.java:1356)
>>>     at java.lang.Thread.run(Thread.java:619)
>>>
>>> In my GC log I can see a rapid increase in heap usage:
>>>
>>> 54967.049: [GC [PSYoungGen: 171815K->3032K(2024448K)]
>>> 1716963K->1583328K(8315904K), 0.0466930 secs] [Times: user=0.20 sys=0.09,
>>> real=0.04 secs]
>>>
>>> 54967.097: [Full GC (System) [PSYoungGen: 3032K->0K(2024448K)] [ParOldGen:
>>> 1580296K->1501278K(6291456K)] 1583328K->1501278K(8315904K) [PSPermGen:
>>> 230071K->229632K(239744K)], 4.5397660 secs] [Times: user=18.01 sys=2.81,
>>> real=4.53 secs]
>>>
>>> ...
>>>
>>> 55546.522: [GC [PSYoungGen: 1883953K->129792K(1929216K)]
>>> 6315956K->4689948K(8220672K), 0.7681860 secs] [Times: user=8.76 sys=0.61,
>>> real=0.77 secs]
>>>
>>> 55561.317: [GC [PSYoungGen: 1890304K->124543K(1928448K)]
>>> 6450460K->4814699K(8219904K), 1.8698640 secs] [Times: user=3.30 sys=0.26,
>>> real=1.87 secs]
>>>
>>> ...
>>>
>>> 55754.485: [GC [PSYoungGen: 1753886K->116213K(1881920K)]
>>> 7755780K->6232689K(8173376K), 0.5959420 secs] [Times: user=4.34 sys=0.30,
>>> real=0.60 secs]
>>>
>>> 55755.083: [Full GC [PSYoungGen: 116213K->0K(1881920K)] [ParOldGen:
>>> 6116476K->6031245K(6291456K)] 6232689K->6031245K(8173376K) [PSPermGen:
>>> 229665K->222795K(231488K)], 36.6400980 secs] [Times: user=160.17 sys=8.40,
>>> real=36.63 secs]
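>>>
>>> Reading that last full collection: even after 36 seconds of GC work the
>>> old generation only went from 6116476K to 6031245K out of 6291456K, so
>>> it is still about 96% full, compared to roughly 24% (1501278K) after the
>>> full GC at 54967s. The heap is filling up with something the collector
>>> cannot reclaim.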
>>>
>>> Could the OutOfMemoryError be a side effect of not having enough free
>>> sockets on the server, or is it something else?
>>>
>>>
>>> Thanks in advance for any suggestions,
>>> luca
>>> P.S.: my GC flags:
>>>     -Xms6g -Xmx6g -XX:MaxPermSize=512m
>>>     -Dsun.rmi.dgc.client.gcInterval=2100000
>>>     -Dsun.rmi.dgc.server.gcInterval=2100000
>>>     -XX:+UseParallelOldGC -XX:+UseParallelGC
>>>     -XX:MaxHeapFreeRatio=70 -XX:MinHeapFreeRatio=40
>>>     -Xverify:none -XX:+BindGCTaskThreadsToCPUs
>>>     -XX:NewSize=2g -XX:MaxNewSize=2g -XX:SurvivorRatio=4
>>>     -Djava.awt.headless=true
>>>
>>>
>>>
>>
>