G1-GC - Full GC [humongous allocation request failed]

yu.zhang at oracle.com
Fri Oct 7 18:21:00 UTC 2016


Vitaly,

I am cc'ing this to the dev list.

My comments are inline.


On 10/07/2016 10:27 AM, Vitaly Davidovich wrote:
> Hi Jenny,
>
> On Fri, Oct 7, 2016 at 1:15 PM, yu.zhang at oracle.com 
> <yu.zhang at oracle.com> wrote:
>
>     Hi, Vitaly,
>
>     Here is what happens in jdk9 (I think the logic is the same as in
>     jdk8).
>
>     _reserve_regions = (G1ReservePercent / 100) * number of regions in the heap
>
>     When deciding how many regions the young gen can take, we look at
>     the free regions at the end of the collection and try to honor the
>     reserve_regions:
>
>     if (available_free_regions > _reserve_regions) {
>         base_free_regions = available_free_regions - _reserve_regions;
>     }
>
>     And there are other constraints to consider: user-defined
>     constraints and the pause time goal.
>
>     This is what I meant by 'try to honor' the reserve.
>     If there are enough available_free_regions, it will reserve those
>     regions. Those regions can be used as old or young.
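>
>     To make that concrete with illustrative numbers (this is my
>     paraphrase, not the actual jdk9 source; the helper name, the 20 GB
>     heap and the 2 MB region size are assumptions for the example):
>
>     // Illustrative paraphrase only; the name is made up for the example.
>     // 20 GB heap with 2 MB regions -> 10240 regions total.
>     // G1ReservePercent=40 -> a reserve of 4096 regions.
>     uint compute_reserve_regions(uint total_regions, uintx reserve_percent) {
>         return (uint)((double)total_regions * reserve_percent / 100.0);
>     }
>
>     // With, say, 1500 free regions at the end of a collection,
>     // available_free_regions (1500) <= _reserve_regions (4096), so
>     // base_free_regions stays 0 and the young gen gets the minimum.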
>
> Ok, thanks.  As you say, G1 *tries* to honor it, but may not.  The 
> docs I've come across online make it sound like this reservation is a 
> guarantee, or at least they don't mention that the reservation may not 
> hold.  I don't know if it's worth clarifying that point or not, but my 
> vote would be for the docs to err on the side of "more info" rather than less.
Agree.
>
> The second part is what I mentioned to Charlie in my last reply - can 
> humongous *allocations* be satisfied out of the reserve, or are the 
> reserved regions only used to hold evacuees (when base_free_regions 
> are not available).
That is a good question. Here is my understanding, which needs to be 
confirmed by a G1 developer. In this code:

HeapWord* G1CollectedHeap::humongous_obj_allocate(size_t word_size, 
AllocationContext_t context)

G1 tries to find regions on the _free_list that can hold the humongous 
object. The reserved regions are also on the _free_list (again, this 
needs to be confirmed by a developer). So my understanding is that those 
reserved regions can be used for humongous allocations.

But I might be missing something.
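
To make the question concrete, here is an illustrative pseudo-C++ sketch
of the path I am describing (the helper name and the structure are made
up for illustration; only humongous_obj_allocate is the real entry point):

// Illustrative sketch only, not the actual HotSpot source.
// word_size is the request size; obj_regions is how many contiguous
// regions the object needs.
uint obj_regions = (uint)((word_size + words_per_region - 1) / words_per_region);

// The search walks the free region set. If the regions held back by
// G1ReservePercent sit on this same free list, they are candidates
// here too - which is exactly the point that needs confirmation.
HeapRegion* first = find_contiguous_free_regions(obj_regions); // hypothetical helper
if (first == NULL) {
    // No run of obj_regions contiguous free regions:
    // "humongous allocation request failed".
    return NULL;
}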

>
> Thanks
>
>
>     Jenny
>
>     On 10/07/2016 09:51 AM, Vitaly Davidovich wrote:
>>     Hi Charlie,
>>
>>     On Fri, Oct 7, 2016 at 12:46 PM, charlie hunt
>>     <charlie.hunt at oracle.com> wrote:
>>
>>         Hi Vitaly,
>>
>>         Just to clarify things in case there might be some confusion
>>         … one of the terms in G1 can be confused with a term used in
>>         Parallel GC, Serial GC and CMS GC, and that is “to-space”.
>>         In the latter collectors, “to-space” is a survivor space. In
>>         G1, “to-space” is any space G1 is evacuating objects to. So
>>         “to-space exhausted” means that during an evacuation of live
>>         objects from a G1 region (which could be an eden, survivor or
>>         old region), there was no available region to evacuate those
>>         live objects into; this constitutes a “to-space failure”.
>>
>>         I may be wrong, but my understanding is that once a humongous
>>         object is allocated, it is not evacuated. It stays in the
>>         same allocated region(s) until it is marked as unreachable
>>         and can be reclaimed. (For example, with a 2 MB region size,
>>         any single object larger than 1 MB, half a region, is
>>         allocated directly into contiguous regions as a humongous
>>         object and is never copied afterwards.)
>>
>>     Right, I understand the distinction in terminology.
>>
>>     What I'm a bit confused by is when Jenny said "I agree the
>>     ReservePercent=40 is too high, but that should not prevent
>>     allocating to the old gen. G1 tries to honor ReservePercent."
>>      Specifically, the "G1 tries to honor ReservePercent".  It wasn't
>>     clear to me whether that implies humongous allocations can look
>>     for contiguous regions in the reserve, or not.  That's what I'm
>>     hoping to get clarification on since other sources online don't
>>     mention G1ReservePercent playing a role for HO specifically.
>>
>>     Thanks
>>
>>
>>         charlie
>>
>>>         On Oct 7, 2016, at 11:00 AM, Vitaly Davidovich
>>>         <vitalyd at gmail.com> wrote:
>>>
>>>         Hi Jenny,
>>>
>>>         On Fri, Oct 7, 2016 at 11:52 AM, yu.zhang at oracle.com
>>>         <yu.zhang at oracle.com> wrote:
>>>
>>>             Prasanna,
>>>
>>>             In addition to what Vitaly said, I have some comments
>>>             about your question:
>>>
>>>             1) Humongous allocation request for 72 MB failed; from
>>>             the logs we can also see we have free space of around 3
>>>             GB. Does this mean our application is encountering a
>>>             high amount of fragmentation?
>>>
>>>             It is possible. What it means is that G1 cannot find 36
>>>             consecutive regions for that 72 MB object.
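>>>
>>>             To spell out the arithmetic (assuming the 2 MB region
>>>             size these numbers imply):
>>>
>>>                 regions needed = ceil(72 MB / 2 MB) = 36
>>>
>>>             All 36 must be simultaneously free and adjacent, so
>>>             ~3 GB of total free space can still fail to contain one
>>>             such run.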
>>>
>>>             I agree the ReservePercent=40 is too high, but that
>>>             should not prevent allocating to the old gen. G1 tries
>>>             to honor ReservePercent.
>>>
>>>         So just to clarify - is the space (i.e. regions) reserved by
>>>         G1ReservePercent available for humongous object
>>>         allocations? All docs/webpages I found describe this space
>>>         as being for holding survivors (i.e. evac failure/to-space
>>>         exhaustion mitigation).  It sounds like you're saying these
>>>         reserved regions should also be used to satisfy HO allocs?
>>>
>>>         Thanks
>>>         _______________________________________________
>>>         hotspot-gc-use mailing list
>>>         hotspot-gc-use at openjdk.java.net
>>>         http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>
>>
>
>
