RFR: Exponential thread-local GCLAB sizing

Roman Kennke rkennke at redhat.com
Tue Jul 17 15:15:52 UTC 2018


Excellent! Patch looks good to me.

Thanks,
Roman

> http://cr.openjdk.java.net/~shade/shenandoah/gclabs-sizing/webrev.01/
> 
> Three changes, two preparatory:
> 
>  *) Trace and report total allocation latency and sizes
>     This updates the allocation tracing report to highlight the metrics we are after.
> 
>  *) -XX:-UseTLAB should disable GCLABs too
>     This allows to turn any smart thing about GCLABs off, in case there are bugs.
> 
>  *) Exponential thread-local GCLAB sizing
>     This replaces the global PLABStats tracking with per-thread sizing that grows exponentially
>     and decays quite aggressively. This keeps GCLAB sizes at bay for threads that seldom use
>     GCLABs, and provides large enough GCLABs for threads that need them.
> 
>     We used to draw the line between mutator and GC threads here, but that does not work all
>     that well. There are Java threads that evacuate quite a lot (e.g. ~1M with current 2K GCLABs),
>     and there are GC threads that evacuate almost nothing (e.g. all parallel threads that evacuate
>     roots during final mark, but do not participate in concurrent evac). The new mechanics
>     automatically adapt to either case. See the alloc tracing data from a short SPECjbb run below:
>     we have fewer GCLAB allocations, a smaller footprint, and generally larger sizes.
> 
> I think the same could be done with TLABs, but the interface there is much, much messier.
> 
> Testing: tier3_gc_shenandoah, benchmarks
> 
> Thanks,
> -Aleksey
> 
> 
> === Before:
> 
>                              Shared   Shared GC        TLAB       GCLAB
>  Counts:
>                       #         500         514       52010        8078
> 
>  Latency summary:
>                sum, ms:           0           0          38          10
> 
>  Sizes summary:
>                 sum, M:         291         113     1007348       20101
> 
>  Latency histogram (time in microseconds):
>          0 -         1:         120         508       21031        6649
>          1 -         2:         344           3       28752         689
>          2 -         4:          31           1        1952          90
>          4 -         8:           3           2         142         449
>          8 -        16:           1           0          60          38
>         16 -        32:           0           0           9          49
>         32 -        64:           1           0          35          87
>         64 -       128:           0           0          12          25
>        128 -       256:           0           0          17           2
> 
>  Sizes histogram (size in bytes):
>       2048 -      4096:          21           3           1        6045
>       4096 -      8192:         145          60           0         216
>       8192 -     16384:           3          45           0          45
>      16384 -     32768:           1          51           4         481
>      32768 -     65536:           2          16        2548          16
>      65536 -    131072:          19         242        1728         242
>     131072 -    262144:           8           0        3166           0
>     262144 -    524288:          18           7        1962          23
>     524288 -   1048576:          34          18        1310          18
>    1048576 -   2097152:         246          64        1151          64
>    2097152 -   4194304:           2           6        1217         343
>    4194304 -   8388608:           0           2        1367           2
>    8388608 -  16777216:           0           0        7655           0
>   16777216 -  33554432:           1           0        9131           0
>   33554432 -  67108864:           0           0       20770         583
> 
> === After:
> 
>                              Shared   Shared GC        TLAB       GCLAB
>  Counts:
>                       #         492         713       50644        3770
> 
>  Latency summary:
>                sum, ms:           0           0          38           3 <--- !!!
> 
>  Sizes summary:
>                 sum, M:         251         113     1023531       14344 <--- !!!
> 
>  Latency histogram (time in microseconds):
>          0 -         1:         166         696       19597        2808
>          1 -         2:         306          14       28232         383
>          2 -         4:          19           1        2539         100
>          4 -         8:           1           1         125         458
>          8 -        16:           0           1          76          16
>         16 -        32:           0           0          10           0
>         32 -        64:           0           0          41           4
>         64 -       128:           0           0          14           0
>        128 -       256:           0           0           9           1
>        256 -       512:           0           0           1           0
> 
>  Sizes histogram (size in bytes):
>       2048 -      4096:          25           0           1        1146 <--- !!!
>       4096 -      8192:         221           0           2         480
>       8192 -     16384:           0          97           2         483
>      16384 -     32768:           1         113           6         385
>      32768 -     65536:           3          28         445         338
>      65536 -    131072:           1         381        1969         210
>     131072 -    262144:           4           0        2766          95
>     262144 -    524288:           0          16        2268          38
>     524288 -   1048576:           9          26        1285          31
>    1048576 -   2097152:         225          37        1080          31
>    2097152 -   4194304:           2          15        1280          30
>    4194304 -   8388608:           0           0        1352          29
>    8388608 -  16777216:           0           0        7596          28
>   16777216 -  33554432:           1           0        7477          28
>   33554432 -  67108864:           0           0       23115         418
> 