RFR: 8267703: runtime/cds/appcds/cacheObject/HeapFragmentationTest.java crashed with OutOfMemory [v2]
Thomas Schatzl
tschatzl at openjdk.java.net
Tue Jun 1 13:15:23 UTC 2021
On Tue, 1 Jun 2021 09:16:43 GMT, Stefan Johansson <sjohanss at openjdk.org> wrote:
>> Please review this change to lower fragmentation when heap usage is low.
>>
>> **Summary**
>> The above test case fails because the G1 Full GC fails to compact the single used region below the needed threshold. In this case the region needs to be compacted below region index 400 to make room for the large array. The compaction fails because the full GC uses many parallel workers, and when the system is under load the worker that finds the region to compact may not have been able to claim any regions low enough in the heap to move the live objects to.
>>
>> To fix this we can add a third input to the code calculating the number of workers to use for a given compaction. So far we only look at keeping heap waste down and at the default adaptive calculation. If we also consider how many regions are used, we get a lower worker count in cases like this, which makes the compaction much more likely to succeed. In this case success is guaranteed: with a single used region there will be only one worker, and it will compact that region to the bottom of the heap.
>>
>> **Testing**
>> Manually verified that this causes these collections to use only a single worker. Performance regression testing is also currently running to make sure this doesn't cause any big regressions.
>
> Stefan Johansson has updated the pull request incrementally with one additional commit since the last revision:
>
> Revised approach.
Not really requesting changes, just asking you to answer my comments below.
src/hotspot/share/gc/g1/g1FullCollector.cpp line 98:
> 96:
> 97: // Finally consider the amount of used regions.
> 98: uint used_worker_limit = MAX2(heap->num_used_regions(), 1u);
Should we ever actually end up here with zero used regions? The `MAX2` does not hurt, but it seems superfluous.
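For reference, a minimal standalone sketch (plain C++, not the actual HotSpot code; `calc_compaction_workers` and its parameter names are made up to mirror the identifiers in the log message quoted below) of how the three limits combine:

```cpp
// Minimal model (not HotSpot code) of combining the three worker limits.
#include <algorithm>
#include <cstdint>
#include <cstdio>

uint32_t calc_compaction_workers(uint32_t max_worker_count,
                                 uint32_t heap_waste_worker_limit,
                                 uint32_t active_worker_limit,
                                 uint32_t num_used_regions) {
  // Equivalent of MAX2(heap->num_used_regions(), 1u): never report a
  // limit of zero, even if a full GC somehow ran on an empty heap.
  uint32_t used_worker_limit = std::max<uint32_t>(num_used_regions, 1u);

  // The chosen count must respect all three limits and the pool size.
  return std::min({heap_waste_worker_limit,
                   active_worker_limit,
                   used_worker_limit,
                   max_worker_count});
}

int main() {
  // The failing test's scenario: one used region forces a single worker,
  // which can then compact that region to the bottom of the heap.
  printf("%u\n", (unsigned)calc_compaction_workers(/*max*/ 16, /*waste*/ 8,
                                                   /*adaptive*/ 12,
                                                   /*used*/ 1)); // prints 1
}
```

With the single used region from the failing test, the used-region limit wins and the compaction runs with one worker.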
src/hotspot/share/gc/g1/g1FullCollector.cpp line 106:
> 104: worker_count, heap_waste_worker_limit, active_worker_limit, used_worker_limit);
> 105: worker_count = heap->workers()->update_active_workers(worker_count);
> 106: log_info(gc, task)("Using %u workers of %u for full compaction", worker_count, max_worker_count);
That's pre-existing, but this will change the number of active workers for the rest of the garbage collection. That made some sense previously, as `G1FullCollector::calc_active_workers()` typically was very aggressive, but now it may limit other phases a bit, particularly marking, which distributes work on a per-reference basis.
Overall it might not make much difference though, as we are talking about the case of a heap with very little occupancy.
I.e. some rough per-full-gc-phase worker count might be better, and it could probably be derived easily too; a sketch of the idea follows.
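To make that concrete, a rough sketch of the per-phase idea (entirely hypothetical; the phase names and the split between per-reference and per-region phases are illustrative only, not the actual G1 code structure):

```cpp
// Hypothetical sketch: choose a worker count per full-GC phase instead of
// once for the whole collection. Phase names are illustrative.
#include <algorithm>
#include <cstdint>

enum class FullGCPhase { Mark, PrepareCompact, AdjustPointers, Compact };

uint32_t workers_for_phase(FullGCPhase phase,
                           uint32_t max_workers,
                           uint32_t num_used_regions) {
  switch (phase) {
    case FullGCPhase::Mark:
    case FullGCPhase::AdjustPointers:
      // These phases distribute work per reference/pointer, so they can
      // keep all workers busy even when only a few regions are in use.
      return max_workers;
    case FullGCPhase::PrepareCompact:
    case FullGCPhase::Compact:
      // Compaction claims whole regions; more workers than used regions
      // only raises the risk of compacting into high regions.
      return std::min(max_workers,
                      std::max<uint32_t>(num_used_regions, 1u));
  }
  return max_workers; // unreachable; keeps compilers happy
}
```

This way marking would keep the full worker pool while only the compaction phases get throttled by the used-region count.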
-------------
Changes requested by tschatzl (Reviewer).
PR: https://git.openjdk.java.net/jdk/pull/4225