Further discussion on Adaptable Heap Sizing with G1
Jonathan Joo
jonathanjoo at google.com
Fri Oct 11 07:16:43 UTC 2024
Hi Thomas,
I think what this suggestion overlooks is that a SoftMaxHeapSize that
> guides used heap size will automatically guide committed size: i.e. if
> G1 shrinks the used heap, G1 will automatically shrink (and keep) the
> committed size.
>
> So ProposedHeapSize seems to be very similar to SoftMaxHeapSize.
>
If I'm understanding this correctly, both ProposedHeapSize and (the
proposed version of) SoftMaxHeapSize have similar semantics but influence
the heap in different ways: SoftMaxHeapSize helps us determine when
to start a concurrent mark, whereas ProposedHeapSize doesn't directly
trigger any GC, but affects the size of the heap after a GC. Is
that correct? Would it make sense then to have both flags, where one helps
set a trigger point for a GC, and one helps us determine the heap size we
are targeting after the GC? I might also be missing some nuances here.
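To make sure we are talking about the same split, here is a hypothetical sketch of how I'm picturing the two semantics. This is illustrative Java, not actual HotSpot code; the class, method names, and thresholds are all made up for the example:

```java
// Hypothetical sketch contrasting the two flag semantics as I understand
// them. Not actual HotSpot logic; names are invented for illustration.
public class HeapSizingSketch {
    long softMaxHeapSize;   // guides *when* to start a concurrent mark
    long proposedHeapSize;  // guides the heap size *target* after a GC

    // SoftMaxHeapSize semantics: a trigger point for concurrent marking.
    boolean shouldStartConcurrentMark(long usedBytes) {
        return usedBytes > softMaxHeapSize;
    }

    // ProposedHeapSize semantics: the committed-size target applied when
    // the heap is resized after a GC, clamped to the min/max bounds.
    long resizeTargetAfterGc(long minHeapSize, long maxHeapSize) {
        return Math.max(minHeapSize, Math.min(proposedHeapSize, maxHeapSize));
    }

    public static void main(String[] args) {
        HeapSizingSketch s = new HeapSizingSketch();
        s.softMaxHeapSize = 1L << 30;   // 1 GiB
        s.proposedHeapSize = 2L << 30;  // 2 GiB
        System.out.println(s.shouldStartConcurrentMark(3L << 30)); // true
        System.out.println(s.resizeTargetAfterGc(1L << 29, 4L << 30));
    }
}
```

With that framing, the two flags answer different questions (when to do work vs. where to end up), which is why having both does not seem redundant to me.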
I.e. if I understand this correctly: allowing a higher GC overhead,
> automatically shrinks the heap.
Exactly - in practice, tuning this one parameter (the target GC CPU
overhead) upward correlates with decreasing both the average and the
maximum heap usage of a Java program.
I noticed the same with the patch attached to the SoftMaxHeapSize CR
> (https://bugs.openjdk.org/browse/JDK-8236073) discounting effects of
> Min/MaxHeapFreeRatio (i.e. if you remove it,
> https://bugs.openjdk.org/browse/JDK-8238686 explains the issue).
> In practice, these two flags prohibit G1 from adjusting the heap unless
> the SoftMaxHeapSize change is very large.
> So I would prefer to only think of an alternative to SoftMaxHeapSize if
> it has been shown that it does not work.
Given that you have a much stronger mental model than I do of how all these
flags fit together in the context of G1 GC, perhaps it would be helpful to
schedule some time to chat in person! I think that would help clarify
things much more quickly than email. To be clear - I have no reason to
believe that SoftMaxHeapSize does not work. On the other hand, could we
possibly make use of both flags? For example, could SoftMaxHeapSize
potentially be a good replacement for our periodic GC?
There is the nit that unlike in this implementation of ProposedHeapSize,
> SoftMaxHeapSize will not cause uncommit below MinHeapSize. This is
> another discussion on what to do about this issue - in a comment in
> https://bugs.openjdk.org/browse/JDK-8236073 it is proposed to make
> MinHeapSize manageable.
How useful is MinHeapSize in practice? Do we need it, or can we just set it
to zero to avoid having to deal with it at all?
I (still) believe that AHS and SoftMaxHeapSize/ProposedHeapSize are
> somewhat orthogonal.
AHS (https://openjdk.org/jeps/8329758) is about finding a reasonable
> heap size, and adjusting on external "pressure". SoftMax/ProposedHeapSize
> are manual external tunings.
> Wdyt?
I agree with the general idea - for us, we used a manual external flag like
ProposedHeapSize because we did not implement any of the AHS logic in the
JVM. (We had a separate AHS thread reading in container information and
then doing the calculations, then setting ProposedHeapSize as a manageable
flag.) The way I see it, SoftMax/ProposedHeapSize is the "output" of
AHS and the "input" to the JVM, which then uses that value to adjust its
behavior accordingly. Does that align with how you see things?
If we do indeed implement AHS logic fully within the JVM, then we could
internally manage the sizing of the heap without exposing a manageable
flag. That being said, it seems to me that exposing this as a manageable
flag brings the additional benefit that one could plug in their own AHS
implementation that calculates target heap sizes with whatever data they
want (and then passes it into the JVM via the manageable flag).
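For instance, here is a minimal sketch of that plumbing, assuming only the existing HotSpotDiagnosticMXBean API and the existing manageable SoftMaxHeapSize flag. The sizing policy here (half of MaxHeapSize) is just a placeholder for whatever the external AHS logic would actually compute:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Sketch of an external AHS component feeding a computed target heap size
// into the JVM through a manageable flag. The policy is a placeholder.
public class AhsFlagFeeder {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean diag =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        long maxHeap =
            Long.parseLong(diag.getVMOption("MaxHeapSize").getValue());
        long target = maxHeap / 2; // placeholder policy: half of MaxHeapSize

        try {
            // Manageable flags can be written at runtime via the MXBean.
            diag.setVMOption("SoftMaxHeapSize", Long.toString(target));
            System.out.println("SoftMaxHeapSize = "
                + diag.getVMOption("SoftMaxHeapSize").getValue());
        } catch (IllegalArgumentException e) {
            // Older JDKs, or a value the constraint rejects.
            System.out.println("Could not set SoftMaxHeapSize: " + e);
        }
    }
}
```

In our internal setup the equivalent of this ran on its own thread, periodically recomputing the target from container information; the point is only that a manageable flag is enough of an interface for an out-of-JVM policy.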
Again, I wonder if meeting to discuss this would be more efficient; we can
then update the mailing list with the results of our discussion. Let me know
your thoughts!
Best,
~ Jonathan
On Tue, Oct 8, 2024 at 8:35 AM Thomas Schatzl <thomas.schatzl at oracle.com>
wrote:
> Hi Jonathan,
>
> On 05.10.24 00:39, Jonathan Joo wrote:
> > Hi All,
> >
> > As Kirk mentioned in his email "Aligning the Serial collector with ZGC
> > <
> https://mail.openjdk.org/pipermail/hotspot-gc-dev/2024-September/049616.html>",
> we are also working on adding Adaptable Heap Sizing (AHS) to G1.
> >
> > I created a draft Pull Request
> > <https://github.com/openjdk/jdk/pull/20783> and received some comments
> > on it already, including the following points:
> >
> > 1. I should convert CurrentMaxExpansionSize to CurrentMaxHeapSize.
> >
> > 2. SoftMaxHeapSize, as implemented in the PR, is different from the
> > original intent.
> >
> > 3. We need some sort of global memory pressure to enforce heap
> shrinkage.
> >
> >
> > The first point I already addressed on the pull request, and I agree
> > that CurrentMaxHeapSize works well :)
> >
> > Regarding the second point, we had some discussions already outside of
> > this mailing list, but if I were to summarize the main ideas, they are:
> >
> > 1. The intent of SoftMaxHeapSize initially was for the GC to use this
> > value as a guide for when to start concurrent GC.
> >
> > 2. Our implementation of SoftMaxHeapSize (in the PR) currently behaves
> > more like a ProposedHeapSize, where whenever we shrink and expand
> > the heap, we try to set the heap size to ProposedHeapSize,
> > regardless of the value of MinHeapSize.
> >
> > 3. We need to ensure that the heap regularly approaches the value of
> > ProposedHeapSize by introducing some sort of periodic GC, which we
> > have a Google-internal patch for, and is not yet present in the PR.
> > If we are in alignment that this makes sense, I can try adding this
> > as a separate PR.
>
> I think what this suggestion overlooks is that a SoftMaxHeapSize that
> guides used heap size will automatically guide committed size: i.e. if
> G1 shrinks the used heap, G1 will automatically shrink (and keep) the
> committed size.
>
> So ProposedHeapSize seems to be very similar to SoftMaxHeapSize.
>
>
> > As a separate point - Kirk mentioned in his email that he aims to
> > introduce an adaptive size policy where "Heap should be large enough to
> > minimize GC overhead but not large enough to trigger OOM". I think from
> > our experience in G1, we don't actively try to minimize GC overhead, as
> > we find that maintaining a higher GC overhead often results in overall
> > RAM savings >> CPU usage.
> >
>
> I.e. if I understand this correctly: allowing a higher GC overhead,
> automatically shrinks the heap.
>
> > I think as a general summary - the way I see it, there's value in
> > creating a simplified system where we control the majority of JVM
> > behavior simply with two flags - the maximum heap size (to prevent
> > OOMs), and a target heap size, which is our calculation of an "optimal"
> > size based on our understanding of the environment. The exact
> > calculations for this optimal size may change depending on
> > workload/preference, but what we are trying to do at this point in time
> > is allow for a way to pass in some calculation for "optimal heap size"
> > and have G1 react to it in a meaningful way. I acknowledge that the
> > current JVM behavior (as implemented in my PR) may be suboptimal in
> > terms of getting the heap to get to and stay at this "optimal heap
> > size". However, even with the basic implementation of passing this value
> > to shrinks/expands and only triggering resizes on Remarks/Full GCs,
> > we've seen dramatic improvements in heap behavior at Google, compared to
> > the current G1.
>
> I noticed the same with the patch attached to the SoftMaxHeapSize CR
> (https://bugs.openjdk.org/browse/JDK-8236073) discounting effects of
> Min/MaxHeapFreeRatio (i.e. if you remove it,
> https://bugs.openjdk.org/browse/JDK-8238686 explains the issue).
> In practice, these two flags prohibit G1 from adjusting the heap unless
> the SoftMaxHeapSize change is very large.
>
> So I would prefer to only think of an alternative to SoftMaxHeapSize if
> it has been shown that it does not work.
>
> There is the nit that unlike in this implementation of ProposedHeapSize,
> SoftMaxHeapSize will not cause uncommit below MinHeapSize. This is
> another discussion on what to do about this issue - in a comment in
> https://bugs.openjdk.org/browse/JDK-8236073 it is proposed to make
> MinHeapSize manageable.
> > I know there was some disagreement about the addition of this new
> > "optimal heap size" flag, and I agree that SoftMaxHeapSize is probably
> > not the right flag to represent this value. But I'd like to get some
> > thoughts on whether the above summary seems like a reasonable way of
> > reasoning about G1 AHS. If we agree, then we can always iteratively
> > improve the JVM logic to better adhere to the optimal heap size. But
> > it's not yet clear to me whether people are onboard the idea of having
> > this "optimal heap size" calculation at all, since perhaps this
> > functionality could be covered in other, existing ways.
>
> I (still) believe that AHS and SoftMaxHeapSize/ProposedHeapSize are
> somewhat orthogonal.
>
> AHS (https://openjdk.org/jeps/8329758) is about finding a reasonable
> heap size, and adjusting on external "pressure". SoftMax/ProposedHeapSize
> are manual external tunings.
>
> Wdyt?
>
> Thomas
>
>