RFR: 8365880: Shenandoah: Unify memory usage accounting in ShenandoahFreeSet [v32]
Kelvin Nilsen
kdnilsen at openjdk.org
Fri Oct 24 20:58:17 UTC 2025
On Fri, 24 Oct 2025 19:50:31 GMT, Kelvin Nilsen <kdnilsen at openjdk.org> wrote:
>> This PR eliminates redundant bookkeeping that had been carried out by both ShenandoahGeneration and ShenandoahFreeSet. In the new code, we keep a single tally of relevant information within ShenandoahFreeSet.
>> Queries serviced by ShenandoahGeneration are now delegated to ShenandoahFreeSet.
>>
>> This change eliminates rare and troublesome assertion failures that were raised when the ShenandoahFreeSet tallies did not match the ShenandoahGeneration tallies. These assertion failures arose because the two sets of books were updated at different times, using different synchronization mechanisms.
>>
>> The other benefit of this change is that we have less synchronization overhead because we only have to maintain a single set of books.
>
> Kelvin Nilsen has updated the pull request with a new target base due to a merge or a rebase. The pull request now contains 154 commits:
>
> - Merge remote-tracking branch 'jdk/master' into freeset-has-authoritative-tallies
> - Rework implementation of CompressedClassSpaceSizeInJmapHeap.java
> - fix errors in CompressedClassSpaceSizeInJmapHeap.java
> - Add debug instrumentation to CompressedClassSpaceSizeInJmapHeap.java
> - Add sleep to CompressedClassSpaceSizeInJmapHeap.java test
> - Fix up vmstructs and other infrastructure for jmap heap dump
> - After initialization, check for SoftMaxHeapSize changed by constraints enforcement
> - clamp SoftMaxHeapSize during initialization
> - revert change to SoftMaxHeapSizeConstraintFunc
> - fix another override declaration
> - ... and 144 more: https://git.openjdk.org/jdk/compare/97e5ac6e...abf9b1f8
> I'm not convinced the change above to move the constraints check sooner is needed in order to use the value of SoftMaxHeapSize in ShenandoahHeap::initialize().
>
> What's the error you see without this change?
I did some testing after reverting my change to the shared code, and I wasn't able to identify exactly how this change relates to particular regression failures. I've committed new changes to mitigate the issues without changing shared code:
1. Inside ShenandoahHeap::initialize(), we clamp the assignment to _soft_max_size between min_capacity() and max_capacity().
2. In ShenandoahHeap::post_initialize(), we call check_soft_max_changed(). This updates _soft_max_size if the value of SoftMaxHeapSize, which by this point has had its constraints enforced, differs from the current _soft_max_size. Constraints are enforced after initialize() but before post_initialize(). A rough sketch of these two steps follows below.
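To make these two steps concrete, here is a small standalone C++ sketch of the logic. It is only a model: the real code lives in ShenandoahHeap, and the plain types and structure below are simplifications rather than the actual HotSpot declarations.

```c++
#include <algorithm>
#include <cstddef>
#include <cstdio>

// Standalone model of the two-step mitigation. Names mirror the Shenandoah
// ones (_soft_max_size, check_soft_max_changed, min/max capacity), but this
// is a sketch, not the actual HotSpot code.
struct HeapModel {
  size_t _min_capacity;
  size_t _max_capacity;
  size_t _soft_max_size;

  // Step 1: during initialize(), clamp the initial assignment of
  // _soft_max_size to [min_capacity(), max_capacity()].
  void initialize(size_t soft_max_heap_size) {
    _soft_max_size = std::clamp(soft_max_heap_size, _min_capacity, _max_capacity);
  }

  // Step 2: during post_initialize(), by which time the flag's constraint
  // function has run, adopt the (possibly adjusted) flag value if it differs.
  void check_soft_max_changed(size_t soft_max_after_constraints) {
    if (soft_max_after_constraints != _soft_max_size) {
      _soft_max_size = std::clamp(soft_max_after_constraints,
                                  _min_capacity, _max_capacity);
    }
  }
};

int main() {
  HeapModel heap{/*min*/ 8u << 20, /*max*/ 256u << 20, /*soft*/ 0};
  heap.initialize(1024u << 20);            // flag larger than max: clamped to max
  printf("after initialize:      %zu\n", heap._soft_max_size);
  heap.check_soft_max_changed(128u << 20); // constraint-enforced value adopted
  printf("after post_initialize: %zu\n", heap._soft_max_size);
  return 0;
}
```

The point the sketch illustrates is that step 1 only protects the initial assignment, while step 2 reconciles _soft_max_size with whatever value the constraint machinery settles on between initialize() and post_initialize().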
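For the broader bookkeeping change described at the top of this thread, the shape of the design is roughly the following. Again, this is a simplified standalone sketch rather than the actual ShenandoahFreeSet/ShenandoahGeneration sources; the member and method names are illustrative.

```c++
#include <cstddef>
#include <cstdio>

// Sketch only: models the "single set of books" idea. The real classes are
// ShenandoahFreeSet and ShenandoahGeneration in the HotSpot sources.
class FreeSetModel {
  size_t _available_bytes;   // the single authoritative tally
  size_t _used_bytes;
public:
  FreeSetModel() : _available_bytes(0), _used_bytes(0) {}
  void reset(size_t capacity) {
    _available_bytes = capacity;
    _used_bytes = 0;
  }
  void account_allocation(size_t bytes) {
    // In the real code, accounting happens while allocating under the heap lock.
    _used_bytes += bytes;
    _available_bytes -= bytes;
  }
  size_t available() const { return _available_bytes; }
  size_t used() const { return _used_bytes; }
};

class GenerationModel {
  const FreeSetModel& _free_set;
public:
  explicit GenerationModel(const FreeSetModel& fs) : _free_set(fs) {}
  // Queries are delegated instead of maintaining a second, independently
  // updated tally that can drift out of sync with the free set.
  size_t available() const { return _free_set.available(); }
  size_t used() const { return _free_set.used(); }
};

int main() {
  FreeSetModel free_set;
  free_set.reset(64 * 1024 * 1024);
  GenerationModel gen(free_set);
  free_set.account_allocation(1024 * 1024);
  printf("available=%zu used=%zu\n", gen.available(), gen.used());
  return 0;
}
```

The generation no longer carries its own independently updated counters, so there is no second set of books that can disagree with the free set and trip the assertions mentioned above.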
-------------
PR Comment: https://git.openjdk.org/jdk/pull/26867#issuecomment-3444885624