[G1GC] Evacuation failures with bursts of humongous object allocations
Thomas Schatzl
thomas.schatzl at oracle.com
Thu Dec 3 09:29:56 UTC 2020
Hi Charlie,
On 02.12.20 23:57, Charlie Gracie wrote:
> Hi,
>
> Sorry for the delayed response.
>
> I applied your suggestions to my prototype and things are working well. I am ready to
> open a PR to help me capture and resolve further enhancements. You can find a log [1]
Great!
> that contains most of the extra information you were looking for. Basically, 100% of
> the time spent in "Evacuation Failure" is in "Remove self forwards". There are 4 cases
> of "To-space exhausted" in the log I uploaded.
>
Thanks. Some observations:
- the generational hypothesis works very well for this application, as
you already indicated, i.e. in non-failing GCs the promotion is
negligible. So there is a high likelihood that the failing regions are
always almost empty.
- all or almost all young regions have failures, which explains the long
evacuation failure handling. Unfortunately, the current algorithm needs
to iterate over all (live and dead) objects during self-forward removal.
Something like JDK-8254739 could certainly do wonders; see the sketch
below. Also, being less conservative about reclaiming failed regions
could help in subsequent GCs.
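
To illustrate what I mean (a purely hypothetical sketch with made-up
names, not the actual HotSpot code): today the removal phase has to walk
every object in a failed region just to find the self-forwarded ones,
while a JDK-8254739-style approach would record the failed objects as
evacuation fails and then visit only those:

  // Hypothetical sketch only; names and structure are illustrative,
  // not the real implementation.

  // Current approach: walk *every* object (live and dead) in a region
  // that saw an evacuation failure; object sizes are needed just to
  // advance to the next object.
  void remove_self_forwards_full_walk(HeapRegion* r) {
    HeapWord* cur = r->bottom();
    while (cur < r->top()) {
      oop obj = cast_to_oop(cur);
      if (obj->is_forwarded() && obj->forwardee() == obj) {
        // Undo the self-forward: restore the header, then do the usual
        // per-object fix-up (liveness, BOT, ...).
        obj->init_mark();
      }
      cur += obj->size();
    }
  }

  // JDK-8254739-style approach: remember failed objects as they occur,
  // then visit only those; much cheaper when regions are nearly empty.
  void remove_self_forwards_recorded(GrowableArray<oop>* failed_objs) {
    for (int i = 0; i < failed_objs->length(); i++) {
      failed_objs->at(i)->init_mark();  // same per-object fix-up
    }
  }

With almost-empty failing regions, the cost of the second variant is
proportional to the number of failed objects instead of the region size.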
>>> I believe this could be calculated at the end of a pause (young or mixed)
>>> if the next GC will be a mixed collection; otherwise it is 0. Do you agree?
>
>> Unfortunately, I need to disagree :) because some survivors will get
>> spilled into old all the time (in a non-optimized general application).
>>
>> However, old gen allocation is already tracked per GC as a side effect
>> of the existing PLAB allocation tracking mechanism. See
>> G1CollectedHeap::record_obj_copy_mem_stats() for more detail about
>> these values.
>
> This is the only thing I am not sure I have addressed properly in my current changes.
> Hopefully, we can discuss this further in the PR once it is opened.
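
For reference, roughly what I mean, as a hypothetical sketch (the
accessor names are made up; the real bookkeeping is in
G1CollectedHeap::record_obj_copy_mem_stats()):

  // Hypothetical sketch; G1EvacStats exists, but the accessor names
  // and this helper are illustrative only.
  size_t old_gen_bytes_allocated_this_gc(const G1EvacStats& old_stats) {
    // PLAB allocations into old plus direct (out-of-PLAB) allocations,
    // converted from words to bytes. Survivors spilled into old show
    // up here on every GC, not only on mixed ones.
    return (old_stats.allocated() + old_stats.direct_allocated())
           * HeapWordSize;
  }

I.e. the value you need is already available at the end of every pause,
not just before a mixed collection.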
>
> I will file a JBS issue for this and get the PR opened so that I can work towards a
> final solution.
Okay.
Thanks,
Thomas