RFR (S) 8241139: Shenandoah: distribute mark-compact work exactly to minimize fragmentation
Roman Kennke
rkennke at redhat.com
Mon Mar 23 12:36:57 UTC 2020
>> RFE:
>> https://bugs.openjdk.java.net/browse/JDK-8241139
>>
>> Was following up on why JLinkTest fails with Shenandoah. Figured out that the dynamic work distribution
>> in mark-compact leaves live regions in the middle of the heap. It is a generic problem with the current
>> mark-compact implementation, because which regions end up in each worker slice is time-dependent.
>>
>> Consider the worst-case scenario: two workers would have their slices interleaved, one slice
>> fully alive, and the other fully dead. In the end, mark-compact would finish with the same
>> interleaved heap, and a humongous allocation would then fail. We need to plan the parallel
>> sliding more accurately. See the code comments for what the new plan does.
>>
>> Webrev:
>> https://cr.openjdk.java.net/~shade/8241139/webrev.01/
>>
>> Testing: hotspot_gc_shenandoah; known-failing test; tier{1,2,3} (passed with previous version,
>> running with new version now); eyeballing shenandoah-visualizer
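
(For the archives: a rough, standalone C++ sketch of what liveness-driven slicing could look like. This is not the webrev code; all types and names below are made up for illustration, and the real distribution logic handles the tail differently, as discussed further down.)

// Rough standalone sketch (not the webrev code): hand out contiguous runs of
// regions so every worker gets roughly total_live / n_workers live words,
// instead of handing out regions in arrival order. Each worker then compacts
// within its own contiguous slice, so free space collects at the tail of each
// slice instead of being interleaved region by region.

#include <cstddef>
#include <cstdio>
#include <vector>

struct Region {
  size_t live_words;  // live data in the region, in words
};

std::vector<std::vector<size_t>> plan_slices(const std::vector<Region>& regions,
                                             size_t n_workers) {
  size_t total_live = 0;
  for (const Region& r : regions) total_live += r.live_words;

  // Target live data per worker; this simplified version just lets the last
  // worker absorb any remainder instead of redistributing the tail.
  size_t live_per_worker = (total_live + n_workers - 1) / n_workers;

  std::vector<std::vector<size_t>> slices(n_workers);
  size_t wid = 0;       // current worker
  size_t assigned = 0;  // live words already assigned to that worker

  for (size_t i = 0; i < regions.size(); i++) {
    if (assigned >= live_per_worker && wid + 1 < n_workers) {
      wid++;            // this worker is "full", move on to the next one
      assigned = 0;
    }
    slices[wid].push_back(i);
    assigned += regions[i].live_words;
  }
  return slices;
}

int main() {
  // The worst case from above: fully-alive and fully-dead regions interleaved.
  std::vector<Region> regions = {{100}, {0}, {100}, {0}, {100}, {0}};
  std::vector<std::vector<size_t>> slices = plan_slices(regions, 2);
  for (size_t w = 0; w < slices.size(); w++) {
    std::printf("worker %zu:", w);
    for (size_t idx : slices[w]) std::printf(" region %zu", idx);
    std::printf("\n");
  }
  return 0;
}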
>
> Found the issue with distributing the tail: we cannot blindly do round-robin selection after every
> worker is full, because that unbalances the work again! So I ditched that part in favor of:
>
> 607   if (old_wid == wid) {
> 608     // Circled back to the same worker? This means liveness data was
> 609     // miscalculated. Bump the live_per_worker limit so that
> 610     // everyone gets the piece of the leftover work.
> 611     live_per_worker += ShenandoahHeapRegion::region_size_words();
> 612   }
>
> Full webrev:
> https://cr.openjdk.java.net/~shade/8241139/webrev.02/
>
> Testing: hotspot_gc_shenandoah {fastdebug,release}; tier{1,2,3} in progress
Yep!
Probably better to say 'everyone gets *a* piece of the leftover work'?
No new webrev needed if you change this.
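
(Again just for the archives: a rough, standalone rendering of how the quoted check could sit inside the worker-selection loop. Apart from old_wid, wid and live_per_worker from the quoted snippet, every name here is made up for illustration; this is not the webrev code.)

// Rough standalone rendering (not the webrev code) of the quoted idea: scan
// the workers round-robin for one that still has room; if the scan circles
// back to where it started, everyone is "full" by the current estimate, so
// raise the per-worker limit instead of dumping the leftovers on one worker.

#include <cstddef>
#include <vector>

size_t pick_worker(std::vector<size_t>& assigned_live,  // live words per worker
                   size_t& live_per_worker,             // current per-worker limit
                   size_t region_size_words,            // bump granularity, > 0
                   size_t start_wid,                    // worker to try first
                   size_t live) {                       // live words to place
  size_t n_workers = assigned_live.size();
  size_t wid = start_wid;
  size_t old_wid = wid;
  for (;;) {
    if (assigned_live[wid] + live <= live_per_worker) {
      assigned_live[wid] += live;
      return wid;
    }
    wid = (wid + 1) % n_workers;
    if (old_wid == wid) {
      // Circled back to the same worker? The liveness estimate was off, so
      // bump the limit until everyone gets a piece of the leftover work.
      live_per_worker += region_size_words;
    }
  }
}

int main() {
  std::vector<size_t> assigned_live(4, 0);  // four workers, nothing assigned yet
  size_t live_per_worker = 1000;
  size_t region_size_words = 256;
  // A 1200-word chunk fits nowhere at first; the limit gets bumped until it does.
  size_t wid = pick_worker(assigned_live, live_per_worker, region_size_words, 0, 1200);
  return (int) wid;
}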
Roman