RFR: 8343782: G1: Use one G1CardSet instance for multiple old gen regions [v10]

Ivan Walulya iwalulya at openjdk.org
Thu Dec 19 22:26:58 UTC 2024


> Hi all,
> 
> Please review this change to assign multiple collection candidate regions to a single G1CardSet instance. Currently, we maintain a 1:1 mapping of old-gen regions to G1CardSet instances, assuming these regions are collected independently. However, regions are actually collected in batches for performance reasons, to meet the G1MixedGCCountTarget.
> 
> In this change, at the end of the Remark phase, while selecting remembered set rebuild candidates, we batch regions that we anticipate will be collected together into collection groups. Regions in a collection group must be evacuated at the same time because they share a single G1CardSet instance. This implies that we do not need to maintain cross-region remembered set entries for regions within the same collection group (a minimal sketch of this grouping idea follows the quoted description below).
> 
> The benefit is a reduction in the memory overhead of the remembered set and in the remembered set merge time during the collection pause. One disadvantage is that this approach reduces flexibility during evacuation: all regions that share a particular G1CardSet must be evacuated at the same time. Another downside is that pinned regions that are part of a collection group have to be partially evacuated when the group is selected for evacuation. This removes the optimization in the mainline implementation where pinned regions are skipped, allowing for potential unpinning before evacuation.
> 
> This change also significantly reworks the collection set implementation, as we switch from selecting individual regions to selecting groups. Consequently, many of the changes in the PR are about moving from region-centered to group-centered collection set selection.
> 
> Note: The batching is based on the candidates' sort order by reclaimable bytes, which may change the order in which regions are evacuated compared to the previous sort by GC efficiency (see the second sketch below the quoted description).
> 
> We have not observed any regressions on internal performance testing platforms. Memory comparisons for the Cachestress benchmark for different heap sizes are attached below.
> 
> Testing: Mach5 Tier1-6
> 
> ![16GB](https://github.com/user-attachments/assets/3224c2f1-172d-4d76-ba28-bf483b1b1c95)
> ![32G](https://github.com/user-attachments/assets/abd10537-41a9-4cf9-b668-362af12fe949)
> ![64GB](https://github.com/user-attachments/assets/fa87eefc-cf8a-4fb5-9fc4-e7151498bf73)
> ![128GB](https://github.com/user-attachments/assets/c3a59e32-6bd7-43e3-a3e4-c472f71aa544)
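
As a rough illustration of the grouping idea in the quoted description, here is a minimal C++ sketch. It is not the actual HotSpot code: `CandidateGroup`, `CardSet`, `RegionId` and `record_reference` are hypothetical stand-ins for G1's real data structures, kept only to show why cross-region entries inside a group are unnecessary.

```cpp
// Hypothetical sketch, not HotSpot code: several old-gen candidate regions
// share one card set and are always evacuated together.
#include <cstdint>
#include <set>
#include <vector>

using RegionId = uint32_t;

// Stand-in for G1CardSet: remembers cards outside the group that may hold
// references into any of the group's regions.
struct CardSet {
  std::set<uint64_t> cards;
  void add_card(uint64_t card) { cards.insert(card); }
};

// A collection group: a batch of candidate regions that will be evacuated in
// the same pause and therefore share a single CardSet.
struct CandidateGroup {
  std::vector<RegionId> regions;
  CardSet card_set;

  bool contains(RegionId r) const {
    for (RegionId id : regions) {
      if (id == r) { return true; }
    }
    return false;
  }

  // References between regions of the same group need no remembered set
  // entry, because every member is evacuated in the same pause anyway.
  void record_reference(RegionId from_region, uint64_t card) {
    if (!contains(from_region)) {
      card_set.add_card(card);
    }
  }
};
```

The only point of the sketch is the shared card set per group; the actual PR ties this into G1's existing remembered set rebuild and collection set machinery.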

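A second sketch illustrates the batching by reclaimable bytes mentioned in the note above. Again, this is only an assumed, simplified model: the `Candidate` type, `build_groups` and the fixed `group_size` parameter are hypothetical; in the actual change the group sizing follows G1's policy (e.g. G1MixedGCCountTarget) rather than a constant.

```cpp
// Hypothetical sketch, not HotSpot code: partition rebuild candidates,
// sorted by reclaimable bytes, into fixed-size collection groups.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Candidate {
  int region_id;
  std::size_t reclaimable_bytes;
};

using Group = std::vector<Candidate>;

// Sort candidates by decreasing reclaimable bytes, then cut the sorted list
// into groups of at most group_size regions; each group later shares one
// card set and is evacuated in a single pause.
std::vector<Group> build_groups(std::vector<Candidate> candidates,
                                std::size_t group_size) {
  std::sort(candidates.begin(), candidates.end(),
            [](const Candidate& a, const Candidate& b) {
              return a.reclaimable_bytes > b.reclaimable_bytes;
            });

  std::vector<Group> groups;
  for (std::size_t i = 0; i < candidates.size(); i += group_size) {
    std::size_t end = std::min(i + group_size, candidates.size());
    groups.emplace_back(candidates.begin() + i, candidates.begin() + end);
  }
  return groups;
}
```

Because the cut-off points are determined by the reclaimable-bytes order, a region can end up in a different batch than it would under a GC-efficiency sort, which is the evacuation-order caveat the note points out.
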
Ivan Walulya has updated the pull request with a new target base due to a merge or a rebase. The incremental webrev excludes the unrelated changes brought in by the merge/rebase. The pull request contains 29 additional commits since the last revision:

 - Merge remote-tracking branch 'upstream/master' into OldGenRemsetGroupsV1
 - Albert review
 - Merge remote-tracking branch 'upstream/master' into OldGenRemsetGroupsV1
 - Merge remote-tracking branch 'upstream/master' into OldGenRemsetGroupsV1
 - fix type
 - fix space issues
 - cleanup
 - assert
 - Thomas Review
 - Merge remote-tracking branch 'upstream/master' into OldGenRemsetGroupsV1
 - ... and 19 more: https://git.openjdk.org/jdk/compare/f270c0d2...6a8039df

-------------

Changes:
  - all: https://git.openjdk.org/jdk/pull/22015/files
  - new: https://git.openjdk.org/jdk/pull/22015/files/6194442d..6a8039df

Webrevs:
 - full: https://webrevs.openjdk.org/?repo=jdk&pr=22015&range=09
 - incr: https://webrevs.openjdk.org/?repo=jdk&pr=22015&range=08-09

  Stats: 6194 lines in 221 files changed: 3920 ins; 1574 del; 700 mod
  Patch: https://git.openjdk.org/jdk/pull/22015.diff
  Fetch: git fetch https://git.openjdk.org/jdk.git pull/22015/head:pull/22015

PR: https://git.openjdk.org/jdk/pull/22015

