Strange behaviour of G1 GC

Alexander Bulaev alexbool at yandex-team.ru
Mon Jul 28 07:48:51 UTC 2014


Hi Charlie,
thanks for your reply.

Yes, the application is doing humongous allocations, which also cause Full GCs, but at least that part is understandable. I’ll try the options you mentioned. Also, AFAIK there are some improvements to humongous allocation handling coming in 8u20.
But the really mysterious thing is that never-ending concurrent mark. Do you know anything about it?
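
For context, here is a minimal sketch of G1’s humongous-allocation rule (the class name and the assumed 4 MB region size are illustrative, not taken from our logs): G1 treats any single allocation of at least half a region as humongous, which is why the 4 MB, 16 MB and 83 MB allocations seen in the log all take the humongous path at small region sizes.

```java
// Illustrative sketch (assumed 4 MB region size; not measured from our logs):
// G1 classifies an allocation as humongous when it is >= half a region.
public class HumongousThreshold {
    public static void main(String[] args) {
        long regionSize = 4L * 1024 * 1024;  // assumed -XX:G1HeapRegionSize=4m
        long threshold = regionSize / 2;     // humongous threshold: 2 MB
        long[] sizes = {4L << 20, 16L << 20, 83L << 20}; // sizes seen in the log
        for (long size : sizes) {
            System.out.println((size >> 20) + " MB humongous: " + (size >= threshold));
        }
    }
}
```

A larger region size raises the threshold, though allocations as large as 16 MB and 83 MB would remain humongous in any case.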

On 25.07.2014, at 23:03, charlie hunt <charlie.hunt at oracle.com> wrote:

> Hi Alexander,
> 
> Looks like your app is doing frequent large (humongous) object allocations.  Some of them are as large as 83 MB, some are 16 MB and some are 4 MB.  You could try increasing your G1 region size to 8 MB using -XX:G1HeapRegionSize=8m. That may help with some of the Full GCs.
> 
> If you are frequently allocating 83 MB and 16 MB objects (increasing the region size enough for those is likely not practical), your alternatives (for now) may be limited to any one of, or any combination of:
> - Lowering InitiatingHeapOccupancyPercent to run the concurrent cycle more frequently. Currently, humongous objects are not collected until a concurrent cycle is executed.
> - Increasing the overall Java heap size so that more heap is available for humongous objects and Full GCs occur less frequently; assuming the concurrent cycle keeps running as frequently as it does now, this may also require tuning InitiatingHeapOccupancyPercent.
> - Refactoring the application to reduce the frequency of those large object allocations, either by allocating smaller objects, or by allocating and re-using the (really) large objects rather than creating new ones.
> 
> You might find it useful to take a look at this JavaOne 2013 session: https://www.parleys.com/share_channel.html#play/525528dbe4b0a43ac12124d7/about start at about the 17:15 mark with the G1 GC Analysis slide and listen through about the 28:51 mark.  This will help you understand the humongous object allocations.
> 
> hths,
> 
> charlie
> 
> On Jul 25, 2014, at 9:50 AM, Alexander Bulaev <alexbool at yandex-team.ru> wrote:
> 
>> Full log file is available at https://www.dropbox.com/s/w17iyy2cxsmhgyo/web-gc.log
>> 
>> On 25.07.2014, at 11:30, Alexander Bulaev <alexbool at yandex-team.ru> wrote:
>> 
>>> Hello!
>>> 
>>> I am writing to you about strange behaviour of G1 GC that I have encountered in our production environment.
>>> Sometimes Full GCs happen that clean up a lot of garbage:
>>> 2014-07-24T14:27:57.020+0400: 94749.771: [Full GC (Allocation Failure)  11G->5126M(12G), 13.7944745 secs]
>>> 
>>> I suppose that this is garbage in the old generation. I expect it to be cleaned during mixed and concurrent GCs, but, according to the logs, the last concurrent phase started over half an hour prior to that Full GC:
>>> 2014-07-24T13:49:43.228+0400: 92455.979: [GC concurrent-mark-start]
>>> 
>>> And there is no evidence in the logs that this concurrent mark ever ended. It seems the concurrent GC just hung somewhere.
>>> The same thing happens with mixed GCs:
>>> 2014-07-24T13:42:47.425+0400: 92040.176: [GC pause (G1 Evacuation Pause) (mixed)
>>> 
>>> Please help me understand this problem and find a solution if possible.
>>> We are using Java 8u5. I can supply these GC logs if needed.
>>> Thanks.
>>> 
>>> Best regards,
>>> Alexander Bulaev
>>> Java developer, Yandex LLC
>> 
>> Best regards,
>> Alexander Bulaev
>> Java developer, Yandex LLC
>> 
>> 
>> 
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
> 

Best regards,
Alexander Bulaev
Java developer, Yandex LLC


