G1 root cause and tuning
charlie hunt
charlie.hunt at oracle.com
Tue Mar 31 12:52:13 UTC 2015
Just as a clarification, -XX:+ParallelRefProcEnabled will help reduce the time spent in reference processing. It will not help address the issue of seeing Full GCs as a result of frequent humongous object allocations, or of a humongous allocation for which there are not sufficient contiguous regions available to satisfy the request.
Thomas’s suggestion to increase the region size may help with the Full GCs caused by humongous object allocations.
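For example, a minimal sketch of what the combined settings might look like (the 22 GB heap matches the discussion below, the jar name is a placeholder, and the exact region size should be validated against your own GC logs):

    java -Xms22g -Xmx22g \
         -XX:+UseG1GC \
         -XX:G1HeapRegionSize=8m \
         -XX:+ParallelRefProcEnabled \
         -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -jar YourApp.jar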
thanks,
charlie
> On Mar 31, 2015, at 7:42 AM, Medan Gavril <gabi_io at yahoo.com> wrote:
>
> Hi Charlie,
>
> Currently we can only go to Java 7 update 7x (latest).
>
> We will try the following changes:
> 1. -XX:G1HeapRegionSize=8M (then increase)
> 2. -XX:+ParallelRefProcEnabled
>
> Please let me know if you have any other suggestions.
>
> Best Regards,
> Gabi Medan
>
>
> On Tuesday, March 31, 2015 3:35 PM, charlie hunt <charlie.hunt at oracle.com> wrote:
>
>
> To add to Thomas’s good suggestions, I suppose one other alternative is to make application changes that break up the 300+ MB allocation into several smaller allocations. This would offer a better opportunity for the allocation request to be satisfied.
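>
> For illustration, a minimal sketch of that idea, assuming the 300+ MB
> allocation is a single large byte array (the names below are hypothetical,
> not taken from your application):
>
>     import java.util.ArrayList;
>     import java.util.List;
>
>     public class ChunkedBuffer {
>         // 1 MB chunks stay well below half of any G1 region size,
>         // so no single chunk is treated as a humongous allocation.
>         private static final int CHUNK_SIZE = 1024 * 1024;
>
>         // Allocate totalBytes as a list of small arrays instead of
>         // one contiguous humongous array.
>         static List<byte[]> allocateChunked(long totalBytes) {
>             List<byte[]> chunks = new ArrayList<byte[]>();
>             long remaining = totalBytes;
>             while (remaining > 0) {
>                 int size = (int) Math.min(CHUNK_SIZE, remaining);
>                 chunks.add(new byte[size]);
>                 remaining -= size;
>             }
>             return chunks;
>         }
>     }
>
> Instead of a single new byte[300 * 1024 * 1024], the application would call
> allocateChunked(300L * 1024 * 1024) and index into the chunks, which removes
> the need for ~30 contiguous free regions.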
>
> hths,
>
> charlie
>
>> On Mar 31, 2015, at 6:30 AM, Thomas Schatzl <thomas.schatzl at oracle.com> wrote:
>>
>> Hi all,
>>
>> On Mon, 2015-03-30 at 20:41 -0500, charlie hunt wrote:
>>> Hi Jenny,
>>>
>>> One possibility is that there are not enough contiguous
>>> regions available to satisfy a 300+ MB humongous allocation.
>>>
>>> If we assume a 22 GB Java heap (a little larger than the 22480M shown
>>> in the log) with 2048 G1 regions (the default, as you know), the region
>>> size would be about 11 MB. That implies about 30 contiguous G1 regions
>>> need to be available to satisfy the humongous allocation request.
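>>>
>>> (Back of the envelope: 22 GB = 22528 MB, and 22528 MB / 2048 regions
>>> = 11 MB per region; 300 MB / 11 MB ≈ 27.3, hence roughly 28-30
>>> contiguous regions.)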
>>>
>>> An unrelated question … do other GCs show a similar pattern of a
>>> rather large percentage of time spent in Ref Proc relative to the
>>> overall pause time, i.e. 24.7 ms / 120 ms ≈ 20% of the pause? If
>>> that’s the case, and -XX:+ParallelRefProcEnabled is not already set,
>>> there may be some low-hanging tuning fruit. But it is not going to
>>> address the frequent humongous allocation problem. It is also
>>> interesting that the pause time goal is 2500 ms, yet the actual
>>> pause time is 120 ms, and eden is being sized at less than 1 GB out of
>>> a 22 GB Java heap. Are the frequent humongous allocations interfering
>>> with the heap sizing heuristics?
>>
>> While I have no solution for the problem, we are aware of these related issues:
>>
>> - https://bugs.openjdk.java.net/browse/JDK-7068229 for dynamically
>> enabling MT reference processing
>>
>> - https://bugs.openjdk.java.net/browse/JDK-8038487 to use mixed GC
>> instead of Full GC to clear out space for failing humongous object
>> allocations.
>>
>> I am not sure what JDK release "JRE 1.17 update 17" actually is.
>> From the given strings in the PrintGCDetails output, it seems to be
>> something quite old; I would guess JDK 6?
>>
>> In that case, if possible, I would recommend trying a newer version,
>> which improves humongous object handling significantly (e.g. 8u40 is
>> the latest official release).
>>
>> Another option that works in all versions I am aware of is increasing
>> the heap region size with -XX:G1HeapRegionSize=<X>M, where X is 8, 16,
>> or 32; it seems that a 4M region size has been chosen by ergonomics.
>> Start with the smallest of the suggested values.
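>>
>> For example, a sketch of how to confirm what ergonomics picked and then
>> override it (the 22 GB heap size is the one from this thread, and the
>> trailing application arguments are omitted):
>>
>>    # print the ergonomically selected region size
>>    java -XX:+UseG1GC -Xmx22g -XX:+PrintFlagsFinal -version | grep G1HeapRegionSize
>>
>>    # then set it explicitly, starting with 8M
>>    java -XX:+UseG1GC -Xmx22g -XX:G1HeapRegionSize=8M ...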
>>
>> Thanks,
>> Thomas