RFR: 8324995: Shenandoah: Skip to full gc for humongous allocation failures [v3]

Y. Srinivas Ramakrishna ysr at openjdk.org
Thu Feb 8 22:32:03 UTC 2024


On Thu, 8 Feb 2024 22:13:09 GMT, Kelvin Nilsen <kdnilsen at openjdk.org> wrote:

> There isn't enough memory right now. But there may have been at the end of the most recent GC. The question: Is a normal GC likely to reclaim enough contiguous memory to satisfy the humongous allocation request. If the previous normal GC was successful, then the new one is also likely to be successful.

In other words, you are saying here that if this test passes, there is a high likelihood that the failure to allocate the humongous request at this time is because non-humongous allocations have fragmented (or reduced) this space, and that the space will reappear in contiguous form as soon as the ongoing concurrent GC (albeit degenerated now?) completes, without resorting to a full GC. Does this then lead to any policy parameter change in terms of the maintenance & preservation of contiguous regions in the next epoch between GCs? In other words, I am asking whether this has any interactions with your changes in https://github.com/openjdk/jdk/pull/17561, potentially changing the performance equation in specific directions, and whether those interactions have been considered in the data generated above for this PR.
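
To make the question concrete, here is a rough sketch of the check I understand you to be describing (all names and signatures here are hypothetical, mine rather than the actual Shenandoah code):

    #include <cstddef>

    // Hypothetical sketch: on a humongous allocation failure, prefer a
    // degenerated cycle only if the most recent normal cycle ended with a
    // contiguous run of free regions large enough to have satisfied a
    // request of this size; otherwise skip straight to full GC.
    bool should_degenerate_for_humongous(size_t words_requested,
                                         size_t region_size_words,
                                         size_t max_contiguous_free_regions_after_last_gc) {
      // Number of regions a humongous object of this size would span.
      size_t regions_needed =
          (words_requested + region_size_words - 1) / region_size_words;
      // If the last normal GC left a contiguous run at least this large,
      // a new normal (degenerated) cycle is likely to recover it again.
      return regions_needed <= max_contiguous_free_regions_after_last_gc;
    }

If that is roughly the shape of the test, then its predictive power depends on how well the post-GC contiguous free space is preserved between cycles, which is what prompted my question about #17561 above.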

Also wondering whether the decision of full vs. degenerated might also be driven by the recent history of the durations of full vs. degenerated cycles, both successful & unsuccessful (especially if a map from degeneration points is maintained, although that might be overkill). As you can tell, I am just waving my hands here, but that could also conceivably constitute a signal that informs the decision, perhaps...
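
To sketch the kind of signal I have in mind (again, purely hypothetical names, and quite possibly overkill):

    // Hypothetical sketch: keep decayed averages of recent degenerated and
    // full cycle durations, plus the recent success rate of degenerated
    // cycles, and fall back to full GC when the expected cost of trying a
    // degenerated cycle first (including a likely subsequent full GC on
    // failure) exceeds the cost of going straight to full GC.
    struct GCCycleHistory {
      double avg_degen_ms = 0.0;       // decayed average degenerated-cycle time
      double avg_full_ms = 0.0;        // decayed average full-cycle time
      double degen_success_rate = 1.0; // fraction of recent degens that sufficed

      static constexpr double alpha = 0.3; // weight given to the newest sample

      void record_degen(double ms, bool succeeded) {
        avg_degen_ms = alpha * ms + (1 - alpha) * avg_degen_ms;
        degen_success_rate =
            alpha * (succeeded ? 1.0 : 0.0) + (1 - alpha) * degen_success_rate;
      }

      void record_full(double ms) {
        avg_full_ms = alpha * ms + (1 - alpha) * avg_full_ms;
      }

      // Expected cost of the degenerate-first path: the degenerated cycle
      // itself, plus a full cycle when the degenerated one does not free
      // enough contiguous space.
      bool prefer_full_gc() const {
        double expected_degen_path_ms =
            avg_degen_ms + (1.0 - degen_success_rate) * avg_full_ms;
        return expected_degen_path_ms > avg_full_ms;
      }
    };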

-------------

PR Comment: https://git.openjdk.org/jdk/pull/17638#issuecomment-1935034421

