RFR: 8310031: Parallel: Implement better work distribution for large object arrays in old gen [v8]
Richard Reingruber
rrich at openjdk.org
Mon Sep 18 19:57:43 UTC 2023
On Mon, 18 Sep 2023 19:54:10 GMT, Richard Reingruber <rrich at openjdk.org> wrote:
>> This PR introduces parallel scanning of large object arrays in the old generation that contain roots for young collections of Parallel GC. This allows for better distribution of the actual work (following the array references), as opposed to "stealing" from other task queues, which can lead to inverse scaling, as demonstrated by small tests (attached to JDK-8310031) and also observed on Gerrit production systems.
>>
>> The algorithm to share scanning large arrays is supposed to be a
>> straightforward extension of the scheme implemented in
>> `PSCardTable::scavenge_contents_parallel`.
>>
>> - A worker scans the part of a large array located in its stripe
>>
>> - The exception is the end of a large array that reaches into the next stripe: it is scanned by the thread owning the previous stripe. This matches what the current implementation does: it skips objects crossing into the stripe.
>>
>> - For this it is necessary that large arrays cover at least 3 stripes (see `PSCardTable::large_obj_arr_min_words`)
>>
>> The implementation also makes use of the precise card marks for arrays. Only dirty regions are actually scanned.
>>
>> #### Performance testing
>>
>> ##### BigArrayInOldGenRR.java
>>
>> [BigArrayInOldGenRR.java](https://bugs.openjdk.org/secure/attachment/104422/BigArrayInOldGenRR.java) is a micro benchmark that assigns new objects to a large array in a loop. Creating new array elements triggers young collections. In each collection the large array is scanned because of its references to the new elements in the young generation. The benchmark score is the geometric mean of the durations of the last 5 young collections (lower is better).
>>
>> [BigArrayInOldGenRR.pdf](https://cr.openjdk.org/~rrich/webrevs/8310031/BigArrayInOldGenRR.pdf)([BigArrayInOldGenRR.ods](https://cr.openjdk.org/~rrich/webrevs/8310031/BigArrayInOldGenRR.ods)) presents the benchmark results with 1 to 64 gc threads.
>>
>> Observations
>>
>> * JDK22 scales inversely. Adding gc threads prolongs young collections. With 32 threads young collections take ~15x longer than single-threaded.
>>
>> * Fixed JDK22 scales well. Adding gc threads reduces the duration of young collections. With 32 threads young collections are 5x shorter than single-threaded.
>>
>> * With just 1 gc thread there is a regression. Young collections are 1.5x longer with the fix. I assume the reason is that the iteration over the array elements is interrupted at the end of a stripe, which makes it less efficient. The price for parallelization is paid ...
>
> Richard Reingruber has updated the pull request incrementally with one additional commit since the last revision:
>
> Scan large array stripe from first dirty card to stripe end
card_scan_scarce.java (attached to the JBS item) is a variant of your last test that allows having either just one very large array or a bunch of smaller ones. I created it to see whether scanning for dirty cards could remain precise for smaller arrays, but that still showed a regression.
With the latest version (https://github.com/openjdk/jdk/pull/14846/commits/3e6c1b74e7caf0aa44a9688e18b7c710e3d0cb42) we assume that, within the current stripe, all cards of the large array after the first dirty card are dirty too. With this, the regression goes away.
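As an illustration, the per-stripe rule described above (treat everything from the first dirty card of the large array in a stripe up to the stripe end as dirty) could look like the following hypothetical Java sketch. This is not HotSpot code; the stripe size and the boolean card array are assumptions made purely for illustration.

```java
// Hypothetical sketch (not HotSpot code): which card range a worker
// scans for its stripe of a large object array. Per the heuristic,
// everything from the first dirty card in the stripe up to the
// stripe end is treated as dirty.
public class StripeScanSketch {
    static final int CARDS_PER_STRIPE = 128; // assumed value for illustration

    // cards[i] == true means card i is dirty.
    // Returns the half-open card range [begin, end) to scan for the
    // given stripe, or null if the stripe contains no dirty card.
    static int[] scanRange(boolean[] cards, int stripeIndex) {
        int begin = stripeIndex * CARDS_PER_STRIPE;
        int end = Math.min(begin + CARDS_PER_STRIPE, cards.length);
        for (int c = begin; c < end; c++) {
            if (cards[c]) {
                return new int[] { c, end }; // first dirty card to stripe end
            }
        }
        return null; // clean stripe: nothing to scan
    }

    public static void main(String[] args) {
        boolean[] cards = new boolean[4 * CARDS_PER_STRIPE];
        cards[130] = true; // a single dirty card inside stripe 1
        int[] r = scanRange(cards, 1);
        System.out.println("stripe 1 scans cards [" + r[0] + ", " + r[1] + ")");
        System.out.println("stripe 0 clean: " + (scanRange(cards, 0) == null));
    }
}
```

The trade-off sketched here is the one discussed above: precision is given up after the first dirty card of a stripe in exchange for avoiding the cost of scanning for further dirty cards.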
Baseline
--------
$ ./jdk-baseline/bin/java -Xms3g -Xmx3g -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Xlog:gc=trace -Xlog:gc+scavenge=trace card_scan_scarce 1000 1
[0.002s][warning][logging] No tag set matches selection: gc+scavenge. Did you mean any of the following? gc* gc+exit* gc+load gc+reloc gc+unmap
[0.007s][info ][gc ] Using Parallel
### bigArrLen:1000M bigArrCount:1
### System.gc
[0.500s][trace ][gc ] GC(0) PSYoung generation size at maximum: 1048576K
[0.500s][info ][gc ] GC(0) Pause Young (System.gc()) 1047M->1001M(2944M) 208.600ms
[0.932s][info ][gc ] GC(1) Pause Full (System.gc()) 1001M->1001M(2944M) 431.232ms
[1.396s][trace ][gc ] GC(2) PSYoung generation size at maximum: 1048576K
[1.396s][info ][gc ] GC(2) Pause Young (Allocation Failure) 1769M->1001M(2944M) 209.498ms
[1.756s][trace ][gc ] GC(3) PSYoung generation size at maximum: 1048576K
[1.757s][info ][gc ] GC(3) Pause Young (Allocation Failure) 1769M->1001M(2944M) 206.165ms
[2.110s][trace ][gc ] GC(4) PSYoung generation size at maximum: 1048576K
[2.110s][info ][gc ] GC(4) Pause Young (Allocation Failure) 1769M->1001M(2944M) 199.424ms
New
---
$ ./jdk-new/bin/java -Xms3g -Xmx3g -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Xlog:gc=trace -Xlog:gc+scavenge=trace card_scan_scarce 1000 1
[0.006s][info][gc] Using Parallel
### bigArrLen:1000M bigArrCount:1
### System.gc
[0.293s][trace][gc,scavenge] stripe count:200 stripe size:5125K
[0.386s][trace][gc ] GC(0) PSYoung generation size at maximum: 1048576K
[0.386s][info ][gc ] GC(0) Pause Young (System.gc()) 1047M->1001M(2944M) 93.863ms
[0.802s][info ][gc ] GC(1) Pause Full (System.gc()) 1001M->1001M(2944M) 415.417ms
[1.048s][trace][gc,scavenge] stripe count:200 stripe size:5126K
[1.215s][trace][gc ] GC(2) PSYoung generation size at maximum: 1048576K
[1.215s][info ][gc ] GC(2) Pause Young (Allocation Failure) 1769M->1001M(2944M) 166.850ms
[1.362s][trace][gc,scavenge] stripe count:200 stripe size:5126K
[1.516s][trace][gc ] GC(3) PSYoung generation size at maximum: 1048576K
[1.516s][info ][gc ] GC(3) Pause Young (Allocation Failure) 1769M->1001M(2944M) 154.607ms
[1.679s][trace][gc,scavenge] stripe count:200 stripe size:5126K
[1.835s][trace][gc ] GC(4) PSYoung generation size at maximum: 1048576K
[1.835s][info ][gc ] GC(4) Pause Young (Allocation Failure) 1769M->1001M(2944M) 156.783ms
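For reference, a score in the style described above (the geometric mean of young-pause durations) can be computed from the pauses in the two logs. The sketch below uses the four young pauses visible in each log purely as an illustration; the actual benchmark uses the last 5 collections.

```java
// Sketch: benchmark-style score as the geometric mean of the
// young-collection pause times (in ms) taken from the logs above.
public class GeoMeanScore {
    static double geoMean(double[] pauses) {
        double logSum = 0.0;
        for (double p : pauses) logSum += Math.log(p);
        return Math.exp(logSum / pauses.length);
    }

    public static void main(String[] args) {
        double[] baseline = { 208.600, 209.498, 206.165, 199.424 }; // ms
        double[] fixed    = {  93.863, 166.850, 154.607, 156.783 }; // ms
        System.out.printf("baseline: %.1f ms, fixed: %.1f ms%n",
                          geoMean(baseline), geoMean(fixed));
    }
}
```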
-------------
PR Comment: https://git.openjdk.org/jdk/pull/14846#issuecomment-1724277752
More information about the hotspot-gc-dev
mailing list