Further discussion on Adaptable Heap Sizing with G1

Thomas Schatzl thomas.schatzl at oracle.com
Tue Oct 8 15:34:55 UTC 2024


Hi Jonathan,

On 05.10.24 00:39, Jonathan Joo wrote:
> Hi All,
> 
> As Kirk mentioned in his email "Aligning the Serial collector with ZGC 
> <https://mail.openjdk.org/pipermail/hotspot-gc-dev/2024-September/049616.html>", we are also working on adding Adaptable Heap Sizing (AHS) to G1.
> 
> I created a draft Pull Request 
> <https://github.com/openjdk/jdk/pull/20783> and received some comments 
> on it already, including the following points:
>  
>  1. I should convert CurrentMaxExpansionSize to CurrentMaxHeapSize.
> 
>  2. SoftMaxHeapSize, as implemented in the PR, is different from the
>     original intent.
>
>  3. We need some sort of global memory pressure to enforce heap shrinkage.
> 
> 
> The first point I already addressed on the pull request, and I agree 
> that CurrentMaxHeapSize works well :)
>  
> Regarding the second point, we had some discussions already outside of 
> this mailing list, but if I were to summarize the main ideas, they are:
> 
>  1. The intent of SoftMaxHeapSize initially was for the GC to use this
>     value as a guide for when to start concurrent GC.
> 
>  2. Our implementation of SoftMaxHeapSize (in the PR) currently behaves
>     more like a ProposedHeapSize, where whenever we shrink and expand
>     the heap, we try to set the heap size to ProposedHeapSize,
>     regardless of the value of MinHeapSize.
> 
>  3. We need to ensure that the heap regularly approaches the value of
>     ProposedHeapSize by introducing some sort of periodic GC, which we
>     have a Google-internal patch for, and is not yet present in the PR.
>     If we are in alignment that this makes sense, I can try adding this
>     as a separate PR.
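For context, mainline G1 already has a periodic GC mechanism of this general shape from JEP 346 (-XX:G1PeriodicGCInterval, gated by -XX:G1PeriodicGCSystemLoadThreshold). A minimal sketch of such a trigger as a pure decision function - the names and policy below are illustrative, not G1's internal code or the Google patch:

```java
public class PeriodicGc {
    // Sketch of a JEP 346-style periodic GC trigger: fire a cycle if no
    // GC has run for `intervalMs`, optionally only while system load is
    // below a threshold (a threshold of 0 disables the load check, an
    // interval of 0 disables the feature). Illustrative only.
    static boolean shouldTrigger(long nowMs, long lastGcMs, long intervalMs,
                                 double load, double loadThreshold) {
        if (intervalMs == 0) return false;               // feature disabled
        if (nowMs - lastGcMs < intervalMs) return false; // too soon since last GC
        return loadThreshold == 0 || load < loadThreshold;
    }

    public static void main(String[] args) {
        System.out.println(shouldTrigger(70_000, 0, 60_000, 0.5, 1.0)); // true
        System.out.println(shouldTrigger(30_000, 0, 60_000, 0.5, 1.0)); // false
    }
}
```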

I think what this suggestion overlooks is that a SoftMaxHeapSize that 
guides the used heap size will automatically guide the committed size 
as well: i.e. if G1 shrinks the used heap, G1 will automatically shrink 
(and keep down) the committed size.

So ProposedHeapSize seems to be very similar to SoftMaxHeapSize.
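To make the connection concrete, a resize step can derive its committed-size target directly from a soft used-size goal. A toy sketch - the names, the fixed headroom, and the policy are assumptions for illustration, not actual G1 code:

```java
public class CommitTarget {
    // Hypothetical policy: keep a little headroom above the used-size
    // goal, but never commit more than the hard maximum heap size.
    static long targetCommitted(long softMaxUsed, long headroom, long maxHeap) {
        return Math.min(softMaxUsed + headroom, maxHeap);
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // Lowering the used-size goal from 4 GB to 2 GB pulls the
        // committed-size target down with it.
        System.out.println(targetCommitted(4096 * mb, 256 * mb, 8192 * mb) / mb); // 4352
        System.out.println(targetCommitted(2048 * mb, 256 * mb, 8192 * mb) / mb); // 2304
    }
}
```

Under a policy of this shape, a used-size goal and a committed-size goal move together, which is why the two flags end up so similar.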


> As a separate point - Kirk mentioned in his email that he aims to 
> introduce an adaptive size policy where "Heap should be large enough to 
> minimize GC overhead but not large enough to trigger OOM". I think from 
> our experience in G1, we don't actively try to minimize GC overhead, as 
> we find that maintaining a higher GC overhead often results in overall 
> RAM savings >> CPU usage.
> 

I.e., if I understand this correctly: allowing a higher GC overhead 
automatically shrinks the heap.
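That tradeoff can be sketched as a simple controller: if the measured GC CPU overhead is below the overhead one is willing to pay, the heap can shrink; if above, it must grow. A toy sketch - the proportional step and all names are assumptions, not Google's or G1's actual sizing logic:

```java
public class OverheadSizing {
    // Toy controller: nudge the heap toward the size where GC overhead
    // matches the target. A higher targetOverhead therefore yields a
    // smaller heap (RAM savings traded against CPU).
    static long resize(long currentHeap, double gcOverhead, double targetOverhead,
                       long minHeap, long maxHeap) {
        double step = 0.1; // proportional gain, arbitrary for the sketch
        double adjust = 1.0 + step * (gcOverhead - targetOverhead) / targetOverhead;
        long next = (long) (currentHeap * adjust);
        return Math.max(minHeap, Math.min(next, maxHeap));
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // Measured overhead (2%) below target (5%): shrink.
        System.out.println(resize(4096 * mb, 0.02, 0.05, 512 * mb, 8192 * mb) < 4096 * mb); // true
        // Measured overhead (10%) above target (5%): grow.
        System.out.println(resize(4096 * mb, 0.10, 0.05, 512 * mb, 8192 * mb) > 4096 * mb); // true
    }
}
```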

> I think as a general summary - the way I see it, there's value in 
> creating a simplified system where we control the majority of JVM 
> behavior simply with two flags - the maximum heap size (to prevent 
> OOMs), and a target heap size, which is our calculation of an "optimal" 
> size based on our understanding of the environment. The exact 
> calculations for this optimal size may change depending on 
> workload/preference, but what we are trying to do at this point in time 
> is allow for a way to pass in some calculation for "optimal heap size" 
> and have G1 react to it in a meaningful way. I acknowledge that the 
> current JVM behavior (as implemented in my PR) may be suboptimal in 
> terms of getting the heap to get to and stay at this "optimal heap 
> size". However, even with the basic implementation of passing this value 
> to shrinks/expands and only triggering resizes on Remarks/Full GCs, 
> we've seen dramatic improvements in heap behavior at Google, compared to 
> the current G1.

I noticed the same with the patch attached to the SoftMaxHeapSize CR 
(https://bugs.openjdk.org/browse/JDK-8236073), once the effects of 
Min/MaxHeapFreeRatio are discounted (i.e. if you remove their influence; 
https://bugs.openjdk.org/browse/JDK-8238686 explains the issue).
In practice, these two flags prohibit G1 from adjusting the heap unless 
the SoftMaxHeapSize change is very large.

So I would prefer to only consider an alternative to SoftMaxHeapSize 
once it has been shown that SoftMaxHeapSize does not work.

There is the nit that, unlike this implementation of ProposedHeapSize, 
SoftMaxHeapSize will not cause uncommit below MinHeapSize. What to do 
about this issue is a separate discussion - a comment in 
https://bugs.openjdk.org/browse/JDK-8236073 proposes making MinHeapSize 
manageable.

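The difference is only in the lower clamp of the uncommit target. A minimal sketch of the two behaviors side by side - names hypothetical, policies reduced to the clamp itself:

```java
public class UncommitFloor {
    // SoftMaxHeapSize-style target: uncommit never goes below MinHeapSize.
    static long softMaxTarget(long goal, long minHeap) {
        return Math.max(goal, minHeap);
    }

    // ProposedHeapSize-style target (as in the PR): the goal wins even
    // below MinHeapSize. A manageable MinHeapSize would let the two
    // behaviors converge.
    static long proposedTarget(long goal) {
        return goal;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        long minHeap = 1024 * mb, goal = 512 * mb;
        System.out.println(softMaxTarget(goal, minHeap) / mb);  // 1024
        System.out.println(proposedTarget(goal) / mb);          // 512
    }
}
```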
> I know there was some disagreement about the addition of this new 
> "optimal heap size" flag, and I agree that SoftMaxHeapSize is probably 
> not the right flag to represent this value. But I'd like to get some 
> thoughts on whether the above summary seems like a reasonable way of 
> reasoning about G1 AHS. If we agree, then we can always iteratively 
> improve the JVM logic to better adhere to the optimal heap size. But 
> it's not yet clear to me whether people are onboard the idea of having 
> this "optimal heap size" calculation at all, since perhaps this 
> functionality could be covered in other, existing ways.

I (still) believe that AHS and SoftMaxHeapSize/ProposedHeapSize are 
somewhat orthogonal.

AHS (https://openjdk.org/jeps/8329758) is about finding a reasonable 
heap size and adjusting it based on external "pressure". 
SoftMax/ProposedHeapSize are manual external tunings.

Wdyt?

   Thomas



More information about the hotspot-gc-dev mailing list