Moving Forward with AHS for G1

Man Cao manc at google.com
Thu Apr 10 09:30:54 UTC 2025


Re Eric's comments:

> Sorry to butt in. A high level question about the AHS plan for G1… are we
> interested in the intermediate functionality (SoftMaxHeapSize and
> CurrentMaxHeapSize), or is it AHS that we are interested in?


No worries, and I appreciate the comment.
The high-level rationale is that the JVM should provide at least one of
SoftMaxHeapSize or CurrentMaxHeapSize as a high-precedence, manageable
flag, so that the JVM can take a customized input signal for heap-sizing
decisions.
Even a fully developed AHS algorithm cannot satisfy every deployment
environment, e.g. a custom container system or a custom OS in which the
JVM cannot detect system memory pressure via standard approaches. So these
flags are not necessarily intermediate solutions; they could allow more
deployment environments to use AHS.
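
To make the "customized input signal" idea concrete, here is a minimal
sketch of how an external agent running inside the JVM could push a heap
target through a manageable flag, using the existing
HotSpotDiagnosticMXBean.setVMOption() API. It assumes SoftMaxHeapSize is
writable at runtime and honored by the collector in use (it is already a
manageable flag for some collectors; for G1, making it effective is what
the PR is about). The class and method names are only for illustration:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapTargetUpdater {
        // HotSpotDiagnosticMXBean exposes writable ("manageable") -XX flags.
        private static final HotSpotDiagnosticMXBean DIAG =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // Push an externally computed heap target into the running JVM.
        // Assumes SoftMaxHeapSize is manageable and honored by the collector
        // in use; for G1 that is what the proposal would provide.
        static void setSoftMaxHeapSize(long bytes) {
            DIAG.setVMOption("SoftMaxHeapSize", Long.toString(bytes));
        }

        public static void main(String[] args) {
            // Example: a container agent detected memory pressure and asks
            // the heap to shrink toward 2 GiB.
            setSoftMaxHeapSize(2L * 1024 * 1024 * 1024);
        }
    }

The same mechanism would apply to CurrentMaxHeapSize if it is introduced
as a manageable flag.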

For SoftMaxHeapSize for G1, based on the discussion in
https://github.com/openjdk/jdk/pull/24211, it will likely become just a
hint to trigger concurrent marking, which is unlikely to interfere with
other parts of G1 AHS.
For my original proposal of a high-precedence SoftMaxHeapSize (as currently
implemented in the PR), the guidance for users is that they should either
provide a mechanism to adjust SoftMaxHeapSize dynamically to prevent GC
thrashing, or only set it temporarily and accept the risk of GC thrashing.
It is not intended as a static value that the user "sets and forgets".
CurrentMaxHeapSize has similar issues to a high-precedence SoftMaxHeapSize,
in that it is not "set and forget" either. However, I can see that a
clearly specified OutOfMemoryError behavior from CurrentMaxHeapSize could
be more favorable than the hard-to-define GC-thrashing condition that a
high-precedence SoftMaxHeapSize could cause.
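
As an illustration of the "adjust SoftMaxHeapSize dynamically" guidance
above, here is a rough sketch of an in-process controller that relaxes the
soft limit when GC overhead suggests thrashing. The 10% overhead threshold,
the 25% back-off factor, and the sampling interval are arbitrary
assumptions made for the example, not anything specified by the PR:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class SoftMaxController implements Runnable {
        private final HotSpotDiagnosticMXBean diag =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        private long lastGcTimeMs;
        private long lastSampleMs = System.currentTimeMillis();
        private long currentTargetBytes;

        SoftMaxController(long initialTargetBytes) {
            this.currentTargetBytes = initialTargetBytes;
        }

        // Cumulative time spent in GC across all collectors, in milliseconds.
        private static long totalGcTimeMs() {
            long sum = 0;
            for (GarbageCollectorMXBean gc :
                     ManagementFactory.getGarbageCollectorMXBeans()) {
                sum += gc.getCollectionTime();
            }
            return sum;
        }

        @Override
        public void run() {
            long now = System.currentTimeMillis();
            long gcTime = totalGcTimeMs();
            double overhead =
                (double) (gcTime - lastGcTimeMs) / Math.max(1, now - lastSampleMs);
            lastGcTimeMs = gcTime;
            lastSampleMs = now;

            if (overhead > 0.10) {
                // GC consumed more than 10% of wall time since the last
                // sample: raise the soft limit instead of letting the heap
                // keep shrinking into a thrashing regime.
                currentTargetBytes = (long) (currentTargetBytes * 1.25);
                diag.setVMOption("SoftMaxHeapSize",
                                 Long.toString(currentTargetBytes));
            }
        }

        public static void main(String[] args) {
            // Sample GC overhead every 10 seconds, starting from a 2 GiB target.
            Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
                new SoftMaxController(2L * 1024 * 1024 * 1024),
                10, 10, TimeUnit.SECONDS);
        }
    }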

Re Kirk's comments:

> Might I suggest that an entirely new (experimental?) adaptive size policy
> be introduced that makes use of current flags in a manner that is
> appropriate to the new policy. That policy would calculate a size of Eden
> to control GC frequency, a size of survivor to limit promotion of
> transients, and a tenured large enough to accommodate the live set as well
> as manage the expected number of humongous allocations. If global heap
> pressure won’t support the ensuing max heap size, then the cost could be
> smaller eden implying higher GC overhead due to increased frequency.
> Metrics to support eden sizing would be allocation rate. The age table
> with premature promotion rates would be used to estimate the size of
> survivor. Live set size with a recent history of humongous allocations
> would be used for tenured.
> There will need to be a dampening strategy in play. My current (dumb) idea
> for Serial is to set an overhead threshold delta that needs to be exceeded
> to trigger a resize.


I don't quite understand how this adaptive size policy (ASP) solves the
problems AHS tries to solve.
AHS tries to solve the problem of reaching an appropriate target *total*
heap size, based on multiple inputs (JVM flags, environment circumstances).
Once a total heap size is determined, G1 uses its existing algorithms to
determine young-gen and old-gen sizes. The ASP, however, seems to focus on
determining young-gen and old-gen sizes using a new algorithm.
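
To make that distinction concrete, here is a purely conceptual sketch, with
hypothetical names and not actual HotSpot/G1 code, of how the two layers
relate: AHS combines its inputs into one total-heap target, and the
generation-sizing policy (which is what the ASP proposal targets) only
splits that total:

    // Conceptual sketch only; hypothetical names, not HotSpot/G1 code.
    final class HeapSizingLayersSketch {
        // AHS layer: combine the available signals into one total-heap target.
        static long totalHeapTarget(long maxHeapSize, long softMaxHeapSize,
                                    long environmentLimit) {
            long target = Math.min(maxHeapSize, environmentLimit); // environment bound
            return Math.min(target, softMaxHeapSize);              // operator bound
        }

        // Generation-sizing layer: split the chosen total between young and old.
        // G1 already has policies for this step; the ASP proposal appears to
        // replace this step rather than the one above.
        static long[] splitGenerations(long totalHeap, double youngFraction) {
            long young = (long) (totalHeap * youngFraction);
            return new long[] { young, totalHeap - young };
        }
    }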

-Man