RFR: 8267703: runtime/cds/appcds/cacheObject/HeapFragmentationTest.java crashed with OutOfMemory [v2]
Stefan Johansson
sjohanss at openjdk.java.net
Tue Jun 1 15:09:23 UTC 2021
On Tue, 1 Jun 2021 13:11:07 GMT, Thomas Schatzl <tschatzl at openjdk.org> wrote:
>> Stefan Johansson has updated the pull request incrementally with one additional commit since the last revision:
>>
>> Revised approach.
>
> src/hotspot/share/gc/g1/g1FullCollector.cpp line 106:
>
>> 104: worker_count, heap_waste_worker_limit, active_worker_limit, used_worker_limit);
>> 105: worker_count = heap->workers()->update_active_workers(worker_count);
>> 106: log_info(gc, task)("Using %u workers of %u for full compaction", worker_count, max_worker_count);
>
> That's pre-existing, but this will change the number of active workers for the rest of the garbage collection. That made some sense previously, as `G1FullCollector::calc_active_workers()` typically was very aggressive, but now it may limit other phases a bit, particularly marking, which distributes work on a per-reference basis.
> Overall it might not make much difference though, as we are talking about the case of a heap with very low occupancy.
> I.e. some rough per-full-gc-phase worker count might be better, and it could probably be derived easily too.
This was one of the reasons I went with using just the number of used regions and dropped the requirement that each worker handle a fixed set of regions. In most cases looking at used regions will not limit the worker count much, and when it does, we don't have much work to do anyway. I've done some benchmarking and not seen any significant regressions with this patch. The biggest problem was not using enough workers for the bitmap work.
Calculating workers per phase might be a good improvement to consider, but that would require some more refactoring.
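As a rough standalone illustration of the limiting discussed above (hypothetical names, not the actual `G1FullCollector::calc_active_workers()` implementation): the worker count is taken as the minimum of a heap-waste limit, the adaptive policy limit, and the number of used regions, clamped to at least one worker. On a mostly empty heap the used-region limit dominates.

  #include <algorithm>
  #include <cstdio>

  // Hypothetical sketch of the limiting discussed in this thread; the real
  // logic lives in G1FullCollector::calc_active_workers().
  unsigned calc_full_gc_workers(unsigned heap_waste_limit, // from G1HeapWastePercent
                                unsigned adaptive_limit,   // from the worker policy
                                unsigned used_regions) {   // occupied heap regions
      // A worker without a region to compact is pure overhead, so the
      // number of used regions bounds the useful parallelism.
      unsigned worker_count =
          std::min({heap_waste_limit, adaptive_limit, used_regions});
      // Always keep at least one worker.
      return std::max(worker_count, 1u);
  }

  int main() {
      // Nearly empty heap: 3 used regions cap the count at 3 workers.
      std::printf("%u\n", calc_full_gc_workers(12u, 8u, 3u));   // prints 3
      // Well-populated heap: used regions no longer limit the count.
      std::printf("%u\n", calc_full_gc_workers(12u, 8u, 500u)); // prints 8
      return 0;
  }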
-------------
PR: https://git.openjdk.java.net/jdk/pull/4225