G1gc compaction algorithm

Martin Makundi martin.makundi at koodaripalvelut.com
Wed Aug 20 08:35:34 UTC 2014


Hi!

Here is one more recent log:

http://81.22.250.165/log/gc-2014-08-20.log

with a couple of Full GCs.

Martin


2014-08-19 20:11 GMT+03:00 Martin Makundi <martin.makundi at koodaripalvelut.com>:

> Hi!
>
> I suspect the application does not make the humongous allocations itself;
> I suspect it's the GC that does. We have an allocation recorder that never
> sees these humongous allocations within the application... assuming they
> are in a form that the allocation hook can detect.
>
> Within the application we have lots of objects kept in ehcache, so ehcache
> manages their lifetime. I am not familiar with ehcache internals, but I
> don't think it uses humongous objects in any way.
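>
> To make the terminology concrete, here is a minimal sketch of the
> humongous test, assuming the documented "at least half a region" rule and
> a 16m region size (hinted at by the log file names); an illustration, not
> the actual HotSpot code:
>
>   class HumongousCheck {
>       // Assumed -XX:G1HeapRegionSize=16m; adjust to the real region size.
>       static final long REGION_SIZE = 16L * 1024 * 1024;
>       static final long THRESHOLD = REGION_SIZE / 2; // 8 MB here
>
>       static boolean isHumongous(long allocationBytes) {
>           // The 48M and 109M requests in the logs easily exceed this.
>           return allocationBytes >= THRESHOLD;
>       }
>   }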
>
> The max allocation request varies with the objects that are loaded; it is
> possible that some request is 48m, so yes, it can vary depending on who
> logs in and what he/she does... not very reproducible.
>
> My main concern is that IF a full GC can clean up the memory, there should
> be a mechanism that does the same work as a full GC but without blocking
> for a long time: a concurrent full GC that performs the 30-60 second
> operation at, say, 10% overhead until the whole full GC is done (that
> would take 30-60 s / 10% = 300-600 seconds).
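>
> As a sketch of the idea (all names here are hypothetical illustrations of
> the proposal; G1 offers no such mode):
>
>   class ConcurrentFullGcSketch {
>       public static void main(String[] args) throws InterruptedException {
>           long workLeftMs = 45_000;          // 30-60 s of full-GC work
>           final long sliceMs = 10;           // one small slice at a time
>           while (workLeftMs > 0) {
>               workLeftMs -= sliceMs;         // stand-in for real GC work
>               Thread.sleep(sliceMs * 9);     // idle 90% => ~10% overhead
>           }
>           // Total wall clock: 45_000 ms / 0.10 = 450 s, i.e. 300-600 s
>           // for 30-60 s of work, matching the estimate above.
>       }
>   }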
>
> And if this is not feasible at the moment: what can we tune to mitigate
> the peaks in garbage accumulation and thus avoid the Full GCs?
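>
> For example (hypothetical values, only to make the knobs concrete; nothing
> in this thread has settled on them): a larger region size keeps mid-sized
> objects out of the humongous path, and a larger reserve leaves headroom
> for the 48M+ requests that stay humongous at any region size:
>
>   java -Xms32255M -Xmx32255M -XX:+UseG1GC \
>        -XX:G1HeapRegionSize=32m \
>        -XX:G1ReservePercent=20 ...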
>
> Help =)
>
> Martin
>
>
> 2014-08-19 19:56 GMT+03:00 Yu Zhang <yu.zhang at oracle.com>:
>
>> Martin,
>>
>> Comparing the two logs, 08-15 vs 08-11, the allocation pattern seems to
>> have changed. In 08-15 there are 4 allocation requests of 109M:
>> '112484.582: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
>> humongous allocation request failed, allocation request: 109816768 bytes]'
>> But in 08-11 the max allocation request is 48M:
>> '117595.621: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
>> humongous allocation request failed, allocation request: 47906832 bytes]'
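>>
>> For instance, the failed request sizes can be pulled out of a log with an
>> illustrative one-liner like this (adjust the file name):
>>
>>   grep -o 'allocation request: [0-9]* bytes' gc.log | sort -n -k3 | tail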
>>
>> Does the application's allocation change when the performance changes?
>>
>>
>> Thanks,
>> Jenny
>>
>>
>> On 8/15/2014 3:20 AM, Martin Makundi wrote:
>>
>>> Hi!
>>>
>>> Here are our latest logs with a bit more heap and the suggested
>>> parameters. We tried an eden max of 85% for this run because 30% was
>>> unsuccessful earlier.
>>>
>>> Here is the log: http://81.22.250.165/log/gc-16m-2014-08-15.log
>>>
>>> It has a couple of Full GC hits during a busy day; any new ideas?
>>>
>>> Martin
>>>
>>>
>>> 2014-08-13 9:39 GMT+03:00 Martin Makundi <martin.makundi at koodaripalvelut.com>:
>>>
>>>>> Thanks. At the end, the system CPU is very high. I guess there are
>>>>> page faults due to the heap expansion around timestamp 12002.692. Is
>>>>> the memory tight on your system?
>>>>>
>>>> For performance's sake we use CompressedOops, which limits the upper
>>>> memory bound; this means we can go to a bit under 32g, i.e. 32255M.
>>>> Going above the CompressedOops limit would increase the memory footprint
>>>> and slow down processing, so we would prefer to tune the GC while
>>>> staying within CompressedOops.
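>>>>
>>>> A quick sketch of where that ceiling comes from (standard CompressedOops
>>>> arithmetic, nothing specific to this setup):
>>>>
>>>>   // 32-bit compressed oops are scaled by the 8-byte object alignment:
>>>>   //   2^32 slots * 8 bytes = 32 GB of addressable heap,
>>>>   // so the heap must stay a bit below 32g (hence 32255M) for references
>>>>   // to remain compressed.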
>>>>
>>>>> Can you afford to start with -Xms30g -Xmx30g -XX:+AlwaysPreTouch?
>>>>>
>>>> Thanks, I'll try -Xms32255M -Xmx32255M -XX:+AlwaysPreTouch
>>>>
>>>> Martin
>>>>
>>>>> Thanks,
>>>>> Jenny
>>>>>
>>>>> On 8/11/2014 11:29 PM, Martin Makundi wrote:
>>>>>
>>>>> Hi!
>>>>>
>>>>> I tried the new parameters:
>>>>>
>>>>>> Based on this one, can you do one with -XX:G1MaxNewSizePercent=30
>>>>>> -XX:InitiatingHeapOccupancyPercent=20 added?
>>>>>>
>>>>> This seems to hang the whole system... We have lots of mostly
>>>>> short-lived (ehcache timeToIdleSeconds="900") large Java object trees,
>>>>> 1M-10M each (data reports loaded into the cache).
>>>>>
>>>>> Maybe eden should be even bigger instead of smaller?
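>>>>>
>>>>> (If those short-lived report trees are the bulk of the garbage, a
>>>>> larger eden would let them die in young collections instead of being
>>>>> promoted; illustratively, something like -XX:G1MaxNewSizePercent=85
>>>>> rather than 30, as tried in the later 08-15 run. A hypothetical
>>>>> direction, not a tested recommendation.)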
>>>>>
>>>>> Here is the log from today; it hung quite early, and I suspect the GC:
>>>>> http://81.22.250.165/log/gc-16m-2014-08-12.log
>>>>>
>>>>> The process ate most of the CPU capacity, and we had to kill it and
>>>>> restart without -XX:G1MaxNewSizePercent=30.
>>>>>
>>>>> What do you suggest?
>>>>>
>>>>> Martin
>>>>>
>>>>>> The reason for G1MaxNewSizePercent (default 60) is to set an upper
>>>>>> limit on the Eden size. It seems the Eden size grows to 17g before the
>>>>>> Full GC, then a bunch of humongous allocations happen, and there is
>>>>>> not enough old gen.
>>>>>>
>>>>>> The following log entry does not seem right: the Eden size is over
>>>>>> 60% of the heap.
>>>>>> "2014-08-11T11:13:05.487+0300: 193238.308: [GC pause (young) (initial-mark)
>>>>>> 193238.308: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 769041, predicted base time: 673.25 ms, remaining time: 326.75 ms, target pause time: 1000.00 ms]
>>>>>> 193238.308: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 1 regions, survivors: 21 regions, predicted young region time: 145.63 ms]
>>>>>> 193238.308: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 1 regions, survivors: 21 regions, old: 0 regions, predicted pause time: 818.88 ms, target pause time: 1000.00 ms], 0.7559550 secs]
>>>>>>    [Parallel Time: 563.9 ms, GC Workers: 13]
>>>>>>       [GC Worker Start (ms): Min: 193238308.1, Avg: 193238318.0, Max: 193238347.6, Diff: 39.5]
>>>>>>       [Ext Root Scanning (ms): Min: 0.0, Avg: 13.0, Max: 35.8, Diff: 35.8, Sum: 168.4]
>>>>>>       [Update RS (ms): Min: 399.2, Avg: 416.8, Max: 442.8, Diff: 43.6, Sum: 5418.0]
>>>>>>          [Processed Buffers: Min: 162, Avg: 232.0, Max: 326, Diff: 164, Sum: 3016]
>>>>>>       [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
>>>>>>       [Object Copy (ms): Min: 79.9, Avg: 104.8, Max: 152.4, Diff: 72.5, Sum: 1363.0]
>>>>>>       [Termination (ms): Min: 0.0, Avg: 19.1, Max: 27.3, Diff: 27.3, Sum: 248.9]
>>>>>>       [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.3]
>>>>>>       [GC Worker Total (ms): Min: 524.1, Avg: 553.8, Max: 563.7, Diff: 39.6, Sum: 7198.8]
>>>>>>       [GC Worker End (ms): Min: 193238871.7, Avg: 193238871.8, Max: 193238871.8, Diff: 0.1]
>>>>>>    [Code Root Fixup: 0.0 ms]
>>>>>>    [Clear CT: 0.3 ms]
>>>>>>    [Other: 191.7 ms]
>>>>>>       [Choose CSet: 0.0 ms]
>>>>>>       [Ref Proc: 190.1 ms]
>>>>>>       [Ref Enq: 0.3 ms]
>>>>>>       [Free CSet: 0.2 ms]
>>>>>>    [Eden: 16.0M(2464.0M)->0.0B(22.9G) Survivors: 336.0M->240.0M Heap: 14.1G(28.7G)->14.1G(28.7G)]
>>>>>>    [Times: user=8.45 sys=0.04, real=0.75 secs]"
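>>>>>>
>>>>>> (The arithmetic behind that claim: the new Eden target on the
>>>>>> [Eden: ...] line is 22.9G out of a 28.7G heap, i.e. roughly 80%,
>>>>>> well past the 60% G1MaxNewSizePercent default.)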
>>>>>>
>>>>>> The reason for increasing InitiatingHeapOccupancyPercent from 10 to
>>>>>> 20 is that we are wasting some concurrent cycles.
>>>>>>
>>>>>> We will see how this goes. If that is not enough, we might increase
>>>>>> G1ReservePercent to handle this kind of allocation.
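>>>>>>
>>>>>> In flag form, the plan sketched here would look something like this
>>>>>> (the reserve value is a hypothetical illustration; the thread has not
>>>>>> fixed one, and the default is 10):
>>>>>>
>>>>>>   -XX:InitiatingHeapOccupancyPercent=20   (start marking cycles later)
>>>>>>   -XX:G1ReservePercent=20                 (extra headroom for spikes)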
>>>>>>
>>>>>> Thanks,
>>>>>> Jenny
>>>>>>
>>>>>> On 8/11/2014 10:46 AM, Martin Makundi wrote:
>>>>>>
>>>>>> Hi!
>>>>>>
>>>>>> Here is our latest log with one Full GC @ 2014-08-11T11:20:02, caused
>>>>>> by the heap being full and an allocation request of 144 bytes.
>>>>>>
>>>>>> http://81.22.250.165/log/gc-16m-2014-08-11.log
>>>>>>
>>>>>> Any ideas how to mitigate this kind of situation? The Full GC makes
>>>>>> quite a difference, but it also causes a painful pause.
>>>>>>
>>>>>> Martin
>>>>>>
>>>>>>
>>>>>>
>>
>