Discussion: improve humongous objects handling for G1
Thomas Schatzl
thomas.schatzl at oracle.com
Mon Jan 20 10:46:06 UTC 2020
Hi,
On 18.01.20 11:13, Thomas Schatzl wrote:
> Hi,
>
> On Fri, 2020-01-17 at 20:08 -0800, Man Cao wrote:
>> Thanks for the in-depth responses!
>>
[...]
>
>> of BigRamTester. A possible concern is that the "humongous
>> BigRamTester" is not representative of the production workload's
>> problem with humongous objects.
>> The humongous objects in the production workload are more likely
>> short-lived, whereas they are long-lived in "humongous BigRamTester".
>
> For short-lived humongous objects eager reclaim can work miracles. If
> your objects are non-objArrays, you could check why they are not
> eagerly reclaimed - maybe the threshold on the number of remembered
> set entries up to which a humongous object stays eligible for eager
> reclaim is too low, and increasing it would just make things work.
> Enabling gc+humongous=debug can give more information.
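
As a quick way to see eager reclaim (or the reason it does not happen)
in the log, a minimal sketch - class name, region size and array sizes
are mine, chosen only for illustration: with -XX:G1HeapRegionSize=2m
any object of at least half a region (1 MB) is allocated humongous, and
byte arrays are typeArrays, i.e. eligible for eager reclaim:

    // HumongousDemo.java - illustrative sketch only.
    public class HumongousDemo {
        public static void main(String[] args) {
            byte[][] keep = new byte[8][]; // keep only a few objects live
            for (int i = 0; i < 10_000; i++) {
                // Mostly short-lived humongous allocations; the
                // allocation rate alone triggers the young GCs in
                // which eager reclaim runs.
                keep[i % keep.length] = new byte[2 * 1024 * 1024];
            }
        }
    }

Run it e.g. with

    java -XX:+UseG1GC -Xmx256m -XX:G1HeapRegionSize=2m \
         -Xlog:gc+humongous=debug HumongousDemo

and the log should tell, per humongous region, whether it has been
considered a reclaim candidate and why.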
>
> Note that in JDK13 we (implicitly) increased this threshold, and in
> JDK14 we removed the main reason why the threshold is as low as it is
> (calculating the number of remembered set entries).
>
> It is likely possible to increase this threshold by one or even two
> orders of magnitude now, potentially increasing its effectiveness
> significantly with a one-liner change. I will file a CR for that; I
> thought of it but forgot when doing the JDK14 modification.
Filed as JDK-8237500.
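
Until that lands you can already experiment with the threshold from the
command line: if I remember the current code correctly, the eager
reclaim check compares remembered set occupancy against
G1RSetSparseRegionEntries. So something like

    java -XX:+UseG1GC -XX:G1RSetSparseRegionEntries=128 \
         -Xlog:gc+humongous=debug ...

(the value 128 is just an example) should show whether a larger
threshold makes eager reclaim kick in for your objects. Note that this
flag also sizes the sparse remembered set tables, so it is not a
completely free knob; -XX:+PrintFlagsFinal shows the ergonomically
chosen default.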
>>
>> For OOMs due to fragmentation and ideas related to full GC (JDK-
>> 8191565, JDK-8038487), I'd like to point out that the near-OOM cases
>> are less of a concern for our production applications. Their heap
>> sizes were sufficiently large to keep GC overhead low with CMS in the
>> past. When they move to G1, they almost never trigger full GCs even
>> with a non-trivial number of humongous allocations.
>> The problem is the high frequency of concurrent cycles and mixed
>> collections as a result of humongous allocations. Fundamentally it is
>
> Which indicates that eager reclaim does not work in this application
> for some reason.
Note that it would be appreciated if we were all able to discuss issues
based on an actual log (gc+heap=debug,gc+humongous=debug; some rough
comparison of the GCs performed with G1 and CMS, with some distribution
of G1 GC pauses) rather than trying to guess what each other's actual
problems are.
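
I.e. something like (file name and decorators are only an example)

    java -Xlog:gc*,gc+heap=debug,gc+humongous=debug:file=gc.log:time,uptime,level,tags ...

run on the actual workload would give us a common basis for the
discussion.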
>> also due to fragmentation, but only addressing the near-OOM cases
>> would not solve the problem. Doing more active defragmentation could
>> indeed help.
>
> To me, spending the effort on combating internal fragmentation
> (allowing allocation in tail ends) and external fragmentation (by
> actively defragmenting) seems at least worth comparing to the other
> options.
>
> It could help with all problems except cases where you allocate a
> very large number of humongous objects and you can't keep the
> humongous object tails filled. This option still keeps the invariant
> that humongous objects need to be allocated at a region boundary.
>
> Most of the other ideas you propose below also (seem to) retain this
> property.
After some more thought, all these solutions actually seem to do so.
Even arraylets would suffer from the same internal fragmentation for
the last arraylet leaf as G1 does now, since the leaves seem to stay
humongous to avoid constant copying and remapping.
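
Back-of-the-envelope, that internal fragmentation is bounded by one
region per humongous object (the helper below is only illustrative,
not actual G1 code):

    // Illustrative only, not G1 code.
    final class TailWaste {
        // Bytes unusable in the last region of a humongous object
        // that starts at a region boundary.
        static long tailWaste(long objectBytes, long regionBytes) {
            long regions = (objectBytes + regionBytes - 1) / regionBytes;
            return regions * regionBytes - objectBytes;
        }
    }

E.g. a 9 MB array with 4 MB regions spans three regions and wastes
around 3 MB; averaged over many object sizes the waste is roughly half
a region per humongous object.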
There is a remark in a tech paper about arraylets
(https://www.ibm.com/developerworks/websphere/techjournal/1108_sciampacone/1108_sciampacone.html)
that indicates that the balanced collector does not move the arraylet
leaves either ([...] Additionally, the balanced collector never needs
to move an arraylet leaf once it has been allocated. The cost of
relocating an array is limited to the cost of relocating the spine, so
large arrays do not contribute to higher defragmentation times. [...]).
Thanks,
Thomas