Moving Forward with AHS for G1
Kirk Pepperdine
kirk at kodewerk.com
Wed Apr 9 16:14:18 UTC 2025
> On Apr 9, 2025, at 8:22 AM, Erik Osterlund <erik.osterlund at oracle.com> wrote:
>
> Hi Man,
>
> Sorry to butt in. A high level question about the AHS plan for G1… are we interested in the
> intermediate functionality (SoftMaxHeapSize and CurrentMaxHeapSize), or is it AHS that
> we are interested in?
>
> The reason I ask is that each incremental feature comes with some baggage due to being
> a (somewhat) static and manually set limit, which the AHS solution won’t need to deal with.
>
> For example, it’s unclear how a *static* SoftMaxHeapSize should behave when the live set
> is larger than the limit. While that can maybe be solved in some reasonable way, it’s worth
> noting that AHS won’t need the solution, because there it’s a dynamic limit that the GC simply
> won’t set lower than the memory usage after GC. It will however get in the way because the
> user can now also set a SoftMaxHeapSize that conflicts with the AHS soft heap size that
> the JVM wants to use, and then we gotta deal with that.
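
To make that concrete, here is a minimal sketch (all names are hypothetical; this is not HotSpot code) of why a dynamic soft limit sidesteps the question a static SoftMaxHeapSize raises: the GC simply clamps its own limit to post-GC occupancy, whereas a user-set static value can land below the live set and needs an explicit conflict rule.

    final class SoftLimitSketch {
        // Dynamic limit as AHS might compute it: never set below what survived
        // the last GC, so there is no "limit below the live set" case to specify.
        static long dynamicSoftLimit(long desiredSoftBytes, long usedAfterLastGC) {
            return Math.max(desiredSoftBytes, usedAfterLastGC);
        }

        // With a user-supplied static SoftMaxHeapSize the JVM must also pick a
        // rule for when the user value and its own value conflict; honouring
        // the larger of the two is just one possible choice.
        static long reconcile(long userSoftMaxBytes, long jvmSoftMaxBytes) {
            return Math.max(userSoftMaxBytes, jvmSoftMaxBytes);
        }
    }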
>
> Similarly, the CurrentMaxHeapSize adds another way for users to control (read: mess up)
> the JVM behaviour that we need to respect. In the end, AHS will compute this dynamically
> instead depending on environment circumstances. I suspect that the fact that it can also be
> manually set in a way that conflicts with what the JVM wants to do will end up being a pain.
I would agree, and to this point, I’ve rarely found ratios to be useful. In general, eden, survivor, and old each play a different role in the object life cycle, and as such each should be tuned separately from the others. The min/max heap is then the sum of the needs of the parts. Expecting to meet the needs of eden, survivor, and old simply by setting a max heap and relying on ratios is wishful thinking that only sometimes comes true.
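
As a quick illustration of the difference (the helper below is hypothetical and not HotSpot code, though -XX:NewRatio and -XX:SurvivorRatio are the real flags), the ratio approach carves a fixed -Xmx top-down regardless of what each space actually needs, whereas the sum-of-the-parts view would start from per-space needs and derive the heap size:

    final class RatioCarvingSketch {
        // NewRatio = old/young, SurvivorRatio = eden/survivor (two survivor spaces).
        static long[] carve(long maxHeapBytes, int newRatio, int survivorRatio) {
            long young = maxHeapBytes / (newRatio + 1);
            long old = maxHeapBytes - young;
            long survivor = young / (survivorRatio + 2);
            long eden = young - 2 * survivor;
            return new long[] { eden, survivor, old };
        }
    }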
Might I suggest that an entirely new (experimental?) adaptive size policy be introduced, one that makes use of the current flags in a manner appropriate to the new policy. That policy would calculate an eden size to control GC frequency, a survivor size to limit promotion of transients, and a tenured space large enough to accommodate the live set as well as the expected number of humongous allocations. If global heap pressure won’t support the resulting max heap size, then the cost would be a smaller eden, implying higher GC overhead due to increased collection frequency.
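
A rough sketch of what that top-level decision might look like (all names and constants below are hypothetical placeholders, not HotSpot code):

    final class AdaptivePolicySketch {
        static final long MIN_EDEN_BYTES = 16L << 20; // floor so eden never vanishes

        // Combine independently computed targets; if global heap pressure caps
        // the total, keep survivor and tenured (the live set has to fit) and
        // give ground on eden, accepting more frequent young collections.
        static long[] decide(long edenTarget, long survivorTarget, long tenuredTarget,
                             long pressureLimitedMaxBytes) {
            long total = edenTarget + 2 * survivorTarget + tenuredTarget;
            if (total <= pressureLimitedMaxBytes) {
                return new long[] { edenTarget, survivorTarget, tenuredTarget };
            }
            long eden = Math.max(MIN_EDEN_BYTES,
                    pressureLimitedMaxBytes - 2 * survivorTarget - tenuredTarget);
            return new long[] { eden, survivorTarget, tenuredTarget };
        }
    }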
The metric to support eden sizing would be allocation rate. The age table, together with premature promotion rates, would be used to estimate the size of survivor. The live set size, along with a recent history of humongous allocations, would be used for tenured.
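
Sketching how those metrics could feed the per-space targets (hypothetical names and placeholder maths, not HotSpot code):

    final class SpaceEstimatorsSketch {
        // Eden sized so that, at the observed allocation rate, it fills roughly
        // once per desired interval between young collections.
        static long edenFor(double allocBytesPerSec, double targetSecsBetweenGCs) {
            return (long) (allocBytesPerSec * targetSecsBetweenGCs);
        }

        // Survivor sized from the age table, padded when premature promotion
        // has recently been observed.
        static long survivorFor(long survivingBytesFromAgeTable,
                                double prematurePromotionRate) {
            double pad = 1.0 + Math.min(prematurePromotionRate, 1.0);
            return (long) (survivingBytesFromAgeTable * pad);
        }

        // Tenured sized from the live set plus headroom for the humongous
        // allocations seen in a recent history window.
        static long tenuredFor(long liveSetBytes, long recentHumongousBytes) {
            return liveSetBytes + recentHumongousBytes;
        }
    }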
There will need to be a dampening strategy in play. My current (dumb) idea for Serial is to set an overhead threshold delta that needs to be exceeded to trigger a resize.
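
For Serial that dampener could be as simple as the following (hypothetical names; the 2% threshold is just a placeholder):

    final class ResizeDampenerSketch {
        private static final double OVERHEAD_DELTA_THRESHOLD = 0.02; // 2 percentage points
        private double overheadAtLastResize;

        // Only act on a proposed resize when measured GC overhead has moved by
        // more than the threshold since the last resize; otherwise treat it as noise.
        boolean shouldResize(double currentGcOverhead) {
            double delta = Math.abs(currentGcOverhead - overheadAtLastResize);
            if (delta < OVERHEAD_DELTA_THRESHOLD) {
                return false;
            }
            overheadAtLastResize = currentGcOverhead;
            return true;
        }
    }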
>
> I’m not against the plan of building these incremental features, especially if we want them
> in isolation. But if it’s AHS we want, then I wonder if it would be easier to go straight for what
> we need for AHS without the intermediate user exposed steps, because they might introduce
> unnecessary problems along the way.
I would agree with this, and I would suggest that the way to achieve it is to introduce a new experimental adaptive size policy (ASP).
>
> My 50c, no strong opinion though.