RFR: 8361099: Shenandoah: Improve heap lock contention by using CAS for memory allocation [v20]
Kelvin Nilsen
kdnilsen at openjdk.org
Wed Jan 7 00:36:14 UTC 2026
On Mon, 5 Jan 2026 21:04:45 GMT, Xiaolong Peng <xpeng at openjdk.org> wrote:
>> Shenandoah always allocates memory under the heap lock; we have observed heavy heap lock contention on the memory allocation path in performance analysis of a service in which we tried to adopt Shenandoah. This change proposes an optimization of the memory allocation code path to reduce heap lock contention. Along with the optimization, a better object-oriented design (OOD) is applied to Shenandoah memory allocation so that the majority of the code is reused:
>>
>> * ShenandoahAllocator: base class of the allocators, most of the allocation code is in this class.
>> * ShenandoahMutatorAllocator: allocator for the mutator; inherits from ShenandoahAllocator, overriding only `alloc_start_index`, `verify`, `_alloc_region_count` and `_yield_to_safepoint` to customize the allocator for the mutator.
>> * ShenandoahCollectorAllocator: allocator for collector allocation in the Collector partition; similar to ShenandoahMutatorAllocator, with only a few lines of code to customize the allocator for the Collector.
>> * ShenandoahOldCollectorAllocator: allocator for collector allocation in the OldCollector partition. It doesn't inherit the logic from ShenandoahAllocator for now; the `allocate` method has been overridden to delegate to `FreeSet::allocate_for_collector` due to the special allocation considerations for `plab` in the old generation. We will rewrite this part later and move the code out of `FreeSet::allocate_for_collector`.
>>
>> I'm not expecting a significant performance impact in most cases, since the contention on the heap lock is usually not high enough to cause a performance issue, but in some cases it may improve latency/performance:
>>
>> 1. Dacapo lusearch test on an EC2 host with 96 CPU cores: p90 improved from 500+us to less than 150us, p99 from 1000+us to ~200us.
>>
>> java -XX:-TieredCompilation -XX:+AlwaysPreTouch -Xms31G -Xmx31G -XX:+UseShenandoahGC -XX:+UnlockExperimentalVMOptions -XX:+UnlockDiagnosticVMOptions -XX:-ShenandoahUncommit -XX:ShenandoahGCMode=generational -XX:+UseTLAB -jar ~/tools/dacapo/dacapo-23.11-MR2-chopin.jar -n 10 lusearch | grep "metered full smoothing"
>>
>>
>> Openjdk TIP:
>>
>> ===== DaCapo tail latency, metered full smoothing: 50% 241098 usec, 90% 402356 usec, 99% 411065 usec, 99.9% 411763 usec, 99.99% 415531 usec, max 428584 usec, measured over 524288 events =====
>> ===== DaCapo tail latency, metered full smoothing: 50% 902 usec, 90% 3713 usec, 99% 5898 usec, 99.9% 6488 usec, 99.99% 7081 usec, max 8048 usec, measured over 524288 events =====
>> ===== DaCapo tail ...
>
> Xiaolong Peng has updated the pull request with a new target base due to a merge or a rebase. The pull request now contains 265 commits:
>
> - Merge branch 'openjdk:master' into cas-alloc-1
> - Fix build error after merging from tip
> - Merge branch 'master' into cas-alloc-1
> - Merge branch 'master' into cas-alloc-1
> - Some comments updates as suggested in PR review
> - Fix build failure after merge
> - Expend promoted from ShenandoahOldCollectorAllocator
> - Merge branch 'master' into cas-alloc-1
> - Address PR comments
> - Merge branch 'openjdk:master' into cas-alloc-1
> - ... and 255 more: https://git.openjdk.org/jdk/compare/de81d389...cf13b7b5
This is a huge PR. Thanks for working through all the details to get this working. I've identified several issues that I believe require some further attention. We can discuss in a meeting if that would be helpful.
src/hotspot/share/gc/shenandoah/heuristics/shenandoahGenerationalHeuristics.cpp line 80:
> 78: for (size_t i = 0; i < num_regions; i++) {
> 79: ShenandoahHeapRegion* region = heap->get_region(i);
> 80: assert(!region->is_active_alloc_region(), "Not expecting any active alloc region at the time");
Might change comment to: "Should be no active alloc regions when choosing collection set"
src/hotspot/share/gc/shenandoah/heuristics/shenandoahHeuristics.cpp line 102:
> 100: for (size_t i = 0; i < num_regions; i++) {
> 101: ShenandoahHeapRegion* region = heap->get_region(i);
> 102: assert(!region->is_active_alloc_region(), "Not expecting any active alloc region at the time");
Same suggestion here as with shenandoahGenerationalHeuristics.cpp.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 110:
> 108: }
> 109:
> 110: uint dummy = 0;
Don't call this "dummy". Call it regions_ready_for_refresh. Remember the value and pass it in as a new argument to attempt_allocation_slow() so that we don't have to recompute it later.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 114:
> 112: HeapWord* obj = attempt_allocation_in_alloc_regions(req, in_new_region, alloc_start_index(), dummy);
> 113: if (obj != nullptr) {
> 114: return obj;
Even in the case that we successfully fill our allocation request, if regions_ready_for_refresh is greater than some percentage of _alloc_region_count (e.g. > _alloc_region_count / 4), then we should grab the heap lock and refresh_alloc_regions() here. Otherwise, we will gradually degrade the number of directly_allocatable_regions until we are down to one before we refresh any of them.
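To make the suggestion concrete, here is a minimal standalone sketch of the heuristic I have in mind (kAllocRegionCount and should_refresh are illustrative names standing in for _alloc_region_count and the new check; this is not HotSpot code):

```cpp
#include <cassert>

// Hypothetical sketch: even when fast-path allocation succeeds, trigger a
// refresh once enough of the direct-allocation cache is ready for retirement,
// so the number of directly allocatable regions does not quietly dwindle.
constexpr unsigned kAllocRegionCount = 8;  // stands in for _alloc_region_count

bool should_refresh(unsigned regions_ready_for_refresh) {
  // Refresh when more than a quarter of the alloc-region cache is stale.
  return regions_ready_for_refresh > kAllocRegionCount / 4;
}
```

With this policy, a successful fast-path allocation would still grab the heap lock and call refresh_alloc_regions() whenever should_refresh() reports a stale cache.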
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 133:
> 131: ShenandoahHeapAccountingUpdater accounting_updater(_free_set, ALLOC_PARTITION);
> 132:
> 133: if (regions_ready_for_refresh > 0u) {
Since we've already taken the heap lock because we failed to allocate "fast", I'm ok to go ahead and refresh any regions that are ready right now, even if it's only 1 region.
I'm wondering if we can avoid thrashing in the case that there are no more regions available. We might want to keep a state variable that represents whether there exist free-set regions with which to refresh our cache. This could be updated whenever we "add to" or "rebuild" the free set, and whenever refresh_alloc_regions() finds there is insufficient supply to meet demand. We would want to avoid repeated calls to refresh_alloc_regions() if there are no "refresh_regions_available".
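A rough standalone sketch of the state-variable idea, using std::atomic in place of HotSpot primitives; all names here are hypothetical:

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>

// Hypothetical sketch: remember whether the free set still has regions to
// hand out, so repeated allocation failures don't keep taking the heap lock
// only to find nothing to refresh with.
struct RefreshState {
  std::atomic<bool> refresh_regions_available{true};

  // Called under the heap lock from refresh_alloc_regions().
  // Returns how many regions were granted.
  int try_refresh(int regions_in_free_set, int regions_wanted) {
    if (!refresh_regions_available.load(std::memory_order_relaxed)) {
      return 0;  // nothing to do; avoid thrashing on the heap lock
    }
    int granted = std::min(regions_in_free_set, regions_wanted);
    if (granted < regions_wanted) {
      // Supply fell short of demand: stop retrying until the free set
      // is rebuilt or regions are added back.
      refresh_regions_available.store(false, std::memory_order_relaxed);
    }
    return granted;
  }

  // Called whenever we "add to" or "rebuild" the free set.
  void on_free_set_replenished() {
    refresh_regions_available.store(true, std::memory_order_relaxed);
  }
};
```

The flag is only an optimization hint; a stale false value costs one skipped refresh, never correctness.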
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 192:
> 190: uint i = alloc_start_index;
> 191: do {
> 192: if (ShenandoahHeapRegion* r = nullptr; (r = _alloc_regions[i].address) != nullptr && r->is_active_alloc_region()) {
Note that there is a race (and performance overhead) with checking r->is_active_alloc_region(). Though a region might be active when we check it here, it may be inactive by the time we attempt to atomic_allocate_in().
This is one reason I prefer to use "volatile_top == end" to denote !is_active_alloc_region. This way, you only have to check once (rather than checking is_active() and then checking has_available()). And there is no race between when you check and when you attempt to allocate.
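Here is a standalone sketch of that convention, using std::atomic<size_t> offsets rather than HotSpot's HeapWord*/AtomicAccess; names and layout are illustrative only:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative sketch (not HotSpot code) of the "volatile_top == end"
// convention: a region is inactive exactly when its top has been pushed to
// end, so one CAS loop both checks activity and claims space, with no
// window between an is_active() check and the allocation attempt.
struct Region {
  std::atomic<size_t> top{0};
  size_t end = 1024;  // words

  // Returns the start offset of the allocation, or SIZE_MAX on failure.
  size_t allocate(size_t size) {
    size_t cur = top.load(std::memory_order_relaxed);
    while (true) {
      if (cur == end) return SIZE_MAX;        // retired: top was CAS'ed to end
      if (end - cur < size) return SIZE_MAX;  // insufficient space
      if (top.compare_exchange_weak(cur, cur + size,
                                    std::memory_order_relaxed)) {
        return cur;
      }
      // cur was reloaded by the failed CAS; loop and re-check retirement.
    }
  }

  void retire() { top.store(end, std::memory_order_relaxed); }
};
```

A retired region and a full region look identical to allocators, which is exactly the point: one load answers both questions.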
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 194:
> 192: if (ShenandoahHeapRegion* r = nullptr; (r = _alloc_regions[i].address) != nullptr && r->is_active_alloc_region()) {
> 193: bool ready_for_retire = false;
> 194: HeapWord* obj = atomic_allocate_in(r, true, req, in_new_region, ready_for_retire);
Insert before atomic_allocate_in: int contended
Pass this as 6th arg to atomic_allocate_in()
Add this code after atomic_allocate_in():
if ((i == alloc_start_index) && (contended > 1)) {
randomize_start_index(); // I think this is realized by setting _alloc_start_index to UINT_MAX
}
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 203:
> 201: }
> 202: } else if (r == nullptr || !r->is_active_alloc_region()) {
> 203: regions_ready_for_refresh++;
Add this code:
if (i == alloc_start_index) {
randomize_start_index();
}
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 214:
> 212:
> 213: template <ShenandoahFreeSetPartitionId ALLOC_PARTITION>
> 214: HeapWord* ShenandoahAllocator<ALLOC_PARTITION>::atomic_allocate_in(ShenandoahHeapRegion* region, bool const is_alloc_region, ShenandoahAllocRequest &req, bool &in_new_region, bool &ready_for_retire) {
Add argument: int &contended
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 219:
> 217: size_t actual_size = req.size();
> 218: if (req.is_lab_alloc()) {
> 219: obj = region->allocate_lab_atomic(req, actual_size, ready_for_retire);
Pass contended arg to allocate_lab_atomic()
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 221:
> 219: obj = region->allocate_lab_atomic(req, actual_size, ready_for_retire);
> 220: } else {
> 221: obj = region->allocate_atomic(actual_size, req, ready_for_retire);
Pass contended arg to allocate_atomic()
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 233:
> 231: // evacuation are not updated during evacuation. For both young and old regions r, it is essential that all
> 232: // PLABs be made parsable at the end of evacuation. This is enabled by retiring all plabs at end of evacuation.
> 233: region->concurrent_set_update_watermark(region->top());
There's a race here. Multiple mutators may be updating watermark in parallel. It may be that the mutator who most recently allocated is not the mutator who makes the "most recent" overwrite of set_update_watermark().
I think the better fix is to remove this code. Update refs should just assume that the update watermark equals top for any region in the old gen, and for any region that was in the Collector partition. It may not be easy to know which regions were "in the Collector partition". Maybe we use a sentinel value for update_watermark on all such regions. Just overwrite update_watermark(nullptr)? And check for this in update-refs? This needs a solution, and the solution needs to be documented in code comments.
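A tiny standalone sketch of the sentinel approach (names are made up; the real change would live in ShenandoahHeapRegion and the update-refs code):

```cpp
#include <cassert>

// Hypothetical sketch: Collector/OldCollector regions have no meaningful
// update watermark, so store nullptr there and have update-refs treat
// nullptr as "watermark == top". No mutator ever needs to race to publish
// top into the watermark field.
struct RegionSketch {
  int* top;
  int* update_watermark;  // nullptr is the sentinel meaning "use top"

  int* effective_watermark() const {
    return update_watermark == nullptr ? top : update_watermark;
  }
};
```

Because the sentinel is written once (when the region becomes a Collector/OldCollector alloc region), there is no last-writer race of the kind described above.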
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 254:
> 252: // Step 1: find out the alloc regions which are ready to refresh.
> 253: for (uint i = 0; i < _alloc_region_count; i++) {
> 254: ShenandoahAllocRegion* alloc_region = &_alloc_regions[i];
We've got the heap lock here. Why does this need to be atomic? Comments in the code should make this clear.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 263:
> 261: }
> 262: if (ALLOC_PARTITION == ShenandoahFreeSetPartitionId::Mutator) {
> 263: if (free_bytes > 0) {
We should have counted the entire region's available bytes as allocated when we made this a directly allocatable region. We should not need to further increase bytes allocated here.
I would like to see an assert(free_bytes < PLAB::min_size() * HeapWordSize) here. Eventually, I'd want to generalize this code so that we could refresh regions that are not yet ready to be retired. In this case, we would want to unretire the region here.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 280:
> 278: // Step 2: allocate region from FreeSets to fill the alloc regions or satisfy the alloc request.
> 279: ShenandoahHeapRegion* reserved[MAX_ALLOC_REGION_COUNT];
> 280: int reserved_regions = _free_set->reserve_alloc_regions(ALLOC_PARTITION, refreshable_alloc_regions,
I request we get rid of the min_free_words argument to free_set->reserve_alloc_regions(). See comments in the called function.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 304:
> 302: log_debug(gc, alloc)("%sAllocator: Storing heap region %li to alloc region %i",
> 303: _alloc_partition_name, reserved[i]->index(), refreshable[i]->alloc_region_index);
> 304: AtomicAccess::store(&refreshable[i]->address, reserved[i]);
Should not need to perform AtomicAccess because we hold the heap lock here.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 316:
> 314: HeapWord* ShenandoahAllocator<ALLOC_PARTITION>::allocate(ShenandoahAllocRequest &req, bool &in_new_region) {
> 315: #ifdef ASSERT
> 316: verify(req);
Insert a comment above verify(): "Confirm that req corresponds to ALLOC_PARTITION"
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 338:
> 336: for (uint i = 0; i < _alloc_region_count; i++) {
> 337: ShenandoahAllocRegion& alloc_region = _alloc_regions[i];
> 338: ShenandoahHeapRegion* r = AtomicAccess::load(&alloc_region.address);
We've got the heap lock and are at a safepoint. We do not need AtomicAccess here; it is more costly than necessary. I prefer to use a regular fetch. If you prefer to keep AtomicAccess, please provide a comment in the code explaining why and we will revisit.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 345:
> 343: r->unset_active_alloc_region();
> 344: }
> 345: AtomicAccess::store(&alloc_region.address, static_cast<ShenandoahHeapRegion*>(nullptr));
Same here. We do not need AtomicAccess.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 350:
> 348: total_free_bytes += free_bytes;
> 349: total_regions_to_unretire++;
> 350: _free_set->partitions()->unretire_to_partition(r, ALLOC_PARTITION);
When we reserved this directly allocatable region, we increased bytes allocated() if the ALLOC_PARTITION was mutator. Here, we need to undo that:
if (ALLOC_PARTITION == ShenandoahFreeSetPartitionId::Mutator) {
decrease_bytes_allocated(free_bytes);
}
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 353:
> 351: if (!r->has_allocs()) {
> 352: log_debug(gc, alloc)("%sAllocator: Reverting heap region %li to FREE due to no alloc in the region",
> 353: _alloc_partition_name, r->index());
This code looks suspect to me. Maybe it works as is only because we are currently doing this immediately before rebuilding the free set. If that's the case, there should be some documentation and maybe even some asserts that confirm it is true.
When we release_alloc_regions(), we should be adjusting the range for the associated partitions. The code that most closely resembles this functionality is in ShenandoahFreeSet::move_regions_from_collector_to_mutator(). This is the code that moves collector and old-collector partitions to the mutator partition after evacuation is done.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 360:
> 358: }
> 359: }
> 360: assert(AtomicAccess::load(&alloc_region.address) == nullptr, "Alloc region is set to nullptr after release");
Do not need AtomicAccess here
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 364:
> 362: _free_set->partitions()->decrease_used(ALLOC_PARTITION, total_free_bytes);
> 363: _free_set->partitions()->increase_region_counts(ALLOC_PARTITION, total_regions_to_unretire);
> 364: accounting_updater._need_update = true;
Here is where you know which tallies have been affected by this operation. This is where you should specialize the calls to freeset recompute_total_used() and recompute_total_affiliated(). Either call those from here, or add parameters to your accounting_updater object so that you do not have to recompute more than necessary on each operation.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 376:
> 374: }
> 375:
> 376: THREAD_LOCAL uint ShenandoahMutatorAllocator::_alloc_start_index = UINT_MAX;
I raised questions about this in a previous review. Have I overlooked your response? What is the tradeoff between declaring this THREAD_LOCAL vs. creating a new field in ShenandoahThreadLocal? I believe we need to use fields of ShenandoahThreadLocal so that we do not incur an overhead on all threads when JVM is not configured for Shenandoah GC.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 423:
> 421: _yield_to_safepoint = false;
> 422: }
> 423:
I suppose ShenandoahCollectorAllocator::randomize_start_index() might be a no-op. On the other hand, it would probably be better to use a random index for ShenandoahCollectorAllocator as well. We don't want to hobble one GC worker more than the others just because its preferred start index happens to hold a retire-ready region.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 428:
> 426: }
> 427:
> 428: HeapWord* ShenandoahOldCollectorAllocator::allocate(ShenandoahAllocRequest& req, bool& in_new_region) {
Confer with William Kemper about this. He is working on a change that may simplify the handling of PLABs, in which case ShenandoahOldCollectorAllocator can behave the same as ShenandoahCollector.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.cpp line 436:
> 434: // Make sure the old generation has room for either evacuations or promotions before trying to allocate.
> 435: auto old_gen = ShenandoahHeap::heap()->old_generation();
> 436: if (req.is_old() && !old_gen->can_allocate(req)) {
This test for req.is_old() appears to be unnecessary. The verify(req) assert above requires that req.is_old().
Perhaps the verify() method is too abstract. Add a comment there that says: "Confirm that req.is_old()"
src/hotspot/share/gc/shenandoah/shenandoahAllocator.hpp line 56:
> 54: virtual uint alloc_start_index() { return 0u; }
> 55:
> 56: // Attempt to allocate
Comment needs to make clear that this is the main entry point for fast-path allocation from a directly allocatable region. This function delegates to slow-path allocation if it is unable to allocate from the directly allocatable regions. Not sure I like the name "attempt_allocation()". All of our allocation routines attempt to allocate and return a sentinel value (nullptr) if the allocation fails. This is no different. Just call it allocate_work(), and clarify that this is the helper routine of allocate() which does the work of allocating from a directly allocatable region without acquiring the heap lock if that is possible, and otherwise does a slow-path allocation which requires acquisition of the heap lock.
I see that your comments are trying to say this. But the comments as written are not easy to understand.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.hpp line 69:
> 67:
> 68: // Attempt to allocate in a shared alloc region using atomic operation without holding the heap lock.
> 69: // Returns nullptr and overwrites regions_ready_for_refresh with the number of shared alloc regions that are ready
Suggest this edit:
// Overwrites regions_ready_for_refresh with a lower bound on the number of shared alloc regions that are ready
// to be retired during execution of this "do_fast_allocation" function. Returns nullptr if the allocation request could
// not be fulfilled after a single traversal of directly allocatable regions.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.hpp line 79:
> 77: int refresh_alloc_regions(ShenandoahAllocRequest* req = nullptr, bool* in_new_region = nullptr, HeapWord** obj = nullptr);
> 78: #ifdef ASSERT
> 79: virtual void verify(ShenandoahAllocRequest& req) { }
Need a comment to explain what verify does. Is this simply checking to make sure the req is "properly formatted"? I think the intention is to enforce that req affiliation corresponds to ALLOC_PARTITION. Would be good to clarify this in the comment.
Do we need this to be virtual? It seems like a single templated implementation would suffice.
src/hotspot/share/gc/shenandoah/shenandoahAllocator.hpp line 91:
> 89: virtual HeapWord* allocate(ShenandoahAllocRequest& req, bool& in_new_region);
> 90: virtual void release_alloc_regions();
> 91: virtual void reserve_alloc_regions();
Need comments on these functions. Clarify pre-conditions and post-conditions. I think the intention is:
1. allocate(): Caller does not hold the heap lock. All allocations by mutator or GC are fulfilled by this function. This function tries to perform a CAS allocation without obtaining the global heap lock. If that fails, it will obtain the global heap lock and do a free-set allocation. As a side effect of doing a free-set allocation, some number of directly allocatable regions may be retired and replaced with new directly allocatable regions.
2. release_alloc_regions(): Caller must hold the heap lock. This causes all directly allocatable regions to be placed into the appropriate ShenandoahFreeSet partition. We do this in preparation for choosing a collection set and/or rebuilding the freeset.
3. reserve_alloc_regions(): Caller must hold the heap lock. This causes us to set aside N regions as directly allocatable by removing these regions from the relevant ShenandoahFreeSet partitions. Explain what happens if there are not N regions available.
Clarify: do these three functions represent the entirety of the "public mutation API" that is exercised by mutators and GC workers as they interact with the free set? (There is another set of functions that could be characterized as the read-only API for obtaining state information about the free set. This provides information such as available memory, allocated bytes since GC start, etc.)
src/hotspot/share/gc/shenandoah/shenandoahFreeSet.cpp line 3045:
> 3043: }
> 3044:
> 3045: int ShenandoahFreeSet::reserve_alloc_regions(ShenandoahFreeSetPartitionId partition, int regions_to_reserve, size_t min_free_words, ShenandoahHeapRegion** reserved_regions) {
I request that we not enforce min_free_words when reserving allocation regions. This defeats the purpose of allocation bias. The objective is to consume fragmented memory early in the GC cycle (when we have more mitigation options if an allocation request ever fails). Note that every region that is in any partition has at least PLAB::min_size() available memory.
By requiring that MUTATOR regions have PLAB::max_size() words, we are forcing ourselves to never consume the fragmented memory regions. (Towards the end of GC, when memory is in short supply, we will be unable to find directly allocatable MUTATOR regions. This will force us to obtain the heap lock for every allocation, and these allocations will be inefficient because the remaining memory is highly fragmented.)
src/hotspot/share/gc/shenandoah/shenandoahHeapRegion.inline.hpp line 167:
> 165: };
> 166:
> 167: HeapWord* ShenandoahHeapRegion::allocate_atomic(size_t size, const ShenandoahAllocRequest& req, bool &ready_for_retire) {
Suggest we add a fourth arg: int &contended
We initialize contended to zero
src/hotspot/share/gc/shenandoah/shenandoahHeapRegion.inline.hpp line 187:
> 185: return nullptr;
> 186: }
> 187: }
Before iterating, increment contended by 1
src/hotspot/share/gc/shenandoah/shenandoahHeapRegion.inline.hpp line 190:
> 188: }
> 189:
> 190: HeapWord* ShenandoahHeapRegion::allocate_lab_atomic(const ShenandoahAllocRequest& req, size_t &actual_size, bool &ready_for_retire) {
Suggest we add a fourth arg: int &contended
We initialize contended to zero
src/hotspot/share/gc/shenandoah/shenandoahHeapRegion.inline.hpp line 218:
> 216: return nullptr;
> 217: }
> 218: }
Before we iterate, we increment contended by 1
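To illustrate what I mean across these four comments, here is a standalone sketch of the contended counter in a CAS allocation loop (std::atomic stands in for the HotSpot atomics; all names are hypothetical):

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of the "contended" out-parameter: increment once per
// CAS retry so the caller can tell how crowded this region's fast path is
// and, e.g., randomize its start index when contention exceeds a threshold.
// Returns the start offset of the allocation, or SIZE_MAX on failure.
size_t cas_allocate(std::atomic<size_t>& top, size_t end, size_t size,
                    int& contended) {
  contended = 0;  // initialized to zero on entry, as suggested above
  size_t cur = top.load(std::memory_order_relaxed);
  while (cur + size <= end) {
    if (top.compare_exchange_strong(cur, cur + size,
                                    std::memory_order_relaxed)) {
      return cur;
    }
    contended++;  // lost a race with another allocator; iterate again
  }
  return SIZE_MAX;  // insufficient space remaining
}
```

The caller compares contended against a small threshold (e.g. > 1) to decide whether to randomize its preferred start index.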
src/hotspot/share/gc/shenandoah/shenandoahHeapRegion.inline.hpp line 304:
> 302: }
> 303:
> 304: inline void ShenandoahHeapRegion::concurrent_set_update_watermark(HeapWord* w) {
See comment elsewhere in my feedback. I think we may want to use a special sentinel value to denote the watermark for Collector and OldCollector regions. For both of these, there is essentially no watermark value. If we try to set the value to top() from within a CAS-allocating mutator thread, we can end up setting the watermark to a not-most-recent value of top(), which would result in misbehavior during update refs.
src/hotspot/share/gc/shenandoah/shenandoah_globals.hpp line 567:
> 565: "0 will allow back to back young collections to run during old " \
> 566: "collections.") \
> 567: \
Once we resolve the various issues identified in feedback comments, I would be interested in the results of experimenting with different values of these two parameters.
-------------
Changes requested by kdnilsen (Committer).
PR Review: https://git.openjdk.org/jdk/pull/26171#pullrequestreview-3628853514
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2663181721
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2663183357
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2665709301
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2665818148
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2665800328
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666506691
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666328083
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666332994
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666334248
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666334844
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666335529
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666360404
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666366038
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666526965
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666564758
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666566961
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666567671
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2663324871
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2663327493
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666583027
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666637281
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666628228
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2663337002
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2663279917
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666642051
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666643974
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2665511567
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2665632758
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666273440
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2663265276
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2663261232
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666553882
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666309782
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666309888
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666310835
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666311617
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666683738
PR Review Comment: https://git.openjdk.org/jdk/pull/26171#discussion_r2666691665
More information about the hotspot-gc-dev mailing list