G1-GC - Full GC [humongous allocation request failed]
yu.zhang at oracle.com
Fri Oct 7 17:15:54 UTC 2016
Hi, Vitaly,
Here is what happens in jdk9 (I think the logic is the same as in jdk8):

_reserve_regions = reserve percent * (number of regions in the heap)

When deciding how many regions the young gen gets, we look at the free
regions at the end of the collection and try to honor the reserve regions:

if (available_free_regions > _reserve_regions) {
  base_free_regions = available_free_regions - _reserve_regions;
}

And there are other constraints to consider: user-defined constraints and
the pause time goal.
This is what I meant by 'try to honor' the reserve.
If there are enough available_free_regions, G1 will reserve those
regions. Those regions can be used as old or young.
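As a rough illustration of the arithmetic above (this is not HotSpot's
actual code; the heap size, region size, and free-region count below are
made-up values chosen to resemble the scenario in this thread):

```java
public class G1ReserveSketch {
    // Hypothetical layout: 8 GB heap, 2 MB regions (assumed values).
    static final long HEAP_BYTES = 8L * 1024 * 1024 * 1024;
    static final long REGION_BYTES = 2L * 1024 * 1024;
    static final int RESERVE_PERCENT = 40;   // -XX:G1ReservePercent=40

    public static void main(String[] args) {
        long totalRegions = HEAP_BYTES / REGION_BYTES;   // 4096 regions

        // _reserve_regions = reserve percent * regions of the heap
        long reserveRegions = (totalRegions * RESERVE_PERCENT + 99) / 100;

        // Suppose ~3 GB is free at the end of a collection, as in the log.
        long availableFreeRegions = 1536;
        long baseFreeRegions = 0;
        if (availableFreeRegions > reserveRegions) {
            baseFreeRegions = availableFreeRegions - reserveRegions;
        }

        // A humongous allocation needs contiguous regions: a 72 MB object
        // needs ceil(72 MB / 2 MB) = 36 consecutive free regions.
        long humongousBytes = 72L * 1024 * 1024;
        long regionsNeeded =
            (humongousBytes + REGION_BYTES - 1) / REGION_BYTES;

        System.out.println("total=" + totalRegions
            + " reserve=" + reserveRegions
            + " baseFree=" + baseFreeRegions
            + " humongousRegions=" + regionsNeeded);
    }
}
```

With these assumed numbers, the reserve (1639 regions) exceeds the 1536
free regions, so base_free_regions is 0, and the 72 MB object still needs
36 contiguous free regions somewhere in the heap.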
Jenny
On 10/07/2016 09:51 AM, Vitaly Davidovich wrote:
> Hi Charlie,
>
> On Fri, Oct 7, 2016 at 12:46 PM, charlie hunt <charlie.hunt at oracle.com> wrote:
>
> Hi Vitaly,
>
>     Just to clarify things in case there might be some confusion … one
>     of the terms in G1 can be a little confusing because the same term
>     is used in Parallel GC, Serial GC and CMS GC, and that is
>     “to-space”. In those collectors, “to-space” is a survivor space. In
>     G1, “to-space” is any space that G1 is evacuating objects to. So
>     if, during an evacuation of live objects from a G1 region (which
>     could be an eden, survivor or old region), there is not an
>     available region to evacuate those live objects into, this
>     constitutes a “to-space failure”, reported as “to-space exhausted”.
>
> I may be wrong, but my understanding is that once a humongous
> object is allocated, it is not evacuated. It stays in the same
> allocated region(s) until it is marked as being unreachable and
> can be reclaimed.
>
> Right, I understand the distinction in terminology.
>
> What I'm a bit confused by is when Jenny said "I agree the
> ReservePercent=40 is too high, but that should not prevent allocating
> to the old gen. G1 tries to honor ReservePercent." Specifically, the
> "G1 tries to honor ReservePercent". It wasn't clear to me whether that
> implies humongous allocations can look for contiguous regions in the
> reserve, or not. That's what I'm hoping to get clarification on since
> other sources online don't mention G1ReservePercent playing a role for
> HO specifically.
>
> Thanks
>
>
> charlie
>
>> On Oct 7, 2016, at 11:00 AM, Vitaly Davidovich <vitalyd at gmail.com> wrote:
>>
>> Hi Jenny,
>>
>> On Fri, Oct 7, 2016 at 11:52 AM, yu.zhang at oracle.com wrote:
>>
>> Prasanna,
>>
>> In addition to what Vitaly said, I have some comments about
>> your question:
>>
>> 1) Humongous allocation request for 72 mb failed; from the
>> logs we can also see we have free space of around 3 GB. Does
>> this mean our application is encountering a high amount of
>> fragmentation?
>>
>> It is possible. What it means is G1 cannot find 36
>> consecutive free regions for that 72 mb object.
>>
>> I agree the ReservePercent=40 is too high, but that should
>> not prevent allocating to the old gen. G1 tries to honor
>> ReservePercent.
>>
>> So just to clarify - is the space (i.e. regions) reserved by
>> G1ReservePercent allocatable to humongous object allocations? All
>> docs/webpages I found talk about this space being for holding
>> survivors (i.e. evac failure/to-space exhaustion mitigation). It
>> sounds like you're saying these reserved regions should also be
>> used to satisfy HO allocs?
>>
>> Thanks
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
>