RFR: Adaptive CSet selection for Traversal

Roman Kennke rkennke at redhat.com
Wed Aug 15 17:30:45 UTC 2018


This is great stuff! The patch looks good. Does it affect performance?

Cheers, Roman

On 15 August 2018 19:17:22 CEST, Aleksey Shipilev <shade at redhat.com> wrote:
>http://cr.openjdk.java.net/~shade/shenandoah/adaptive-cset-traversal/webrev.01/
>
>This patch implements Adaptive CSet selection for Traversal, which
>serves the same goal as the one in adaptive: pick up enough regions to
>satisfy the free threshold, but not so many that we walk into
>allocation failure. The caveat for Traversal is that we have regions
>we cannot trust until they are marked. So, we do the selection from
>trustworthy regions first, and then pick up some non-trustworthy ones
>for the ride.
>
>This asymmetry produces a peculiar behavior: cycles come in pairs. The
>first cycle in a pair marks a lot of already-allocated regions; then
>memory gets depleted fast because we do not recycle much, and the
>second cycle recycles a lot and marks a little. However, experiments
>show the system self-balances:
>
>; initial cycle
>[17.265s][info][gc]  GC(0) Concurrent cleanup 8032M->224M(102400M)
>0.774ms
>
>; mark a lot, recycle a little
>[61.711s][info][gc]  GC(1) Concurrent cleanup 80575M->58239M(102400M)
>63.949ms
>
>; recycle a lot, mark a little
>[67.944s][info][gc]  GC(2) Concurrent cleanup 78402M->8125M(102400M)
>44.452ms
>
>; <repeat>
>[91.213s][info][gc]  GC(3) Concurrent cleanup 78751M->56044M(102400M)
>79.351ms
>[99.706s][info][gc]  GC(4) Concurrent cleanup 82112M->15713M(102400M)
>51.978ms
>[121.822s][info][gc] GC(5) Concurrent cleanup 81920M->54432M(102400M)
>73.059ms
>[131.019s][info][gc] GC(6) Concurrent cleanup 83141M->16839M(102400M)
>55.372ms
>[152.824s][info][gc] GC(7) Concurrent cleanup 82816M->54848M(102400M)
>71.579ms
>[163.469s][info][gc] GC(8) Concurrent cleanup 87808M->25516M(102400M)
>68.975ms
>[183.984s][info][gc] GC(9) Concurrent cleanup 87618M->54274M(102400M)
>112.172ms
>[194.910s][info][gc] GC(10) Concurrent cleanup 88002M->25833M(102400M)
>67.987ms
>[215.882s][info][gc] GC(11) Concurrent cleanup 89058M->55394M(102400M)
>102.555ms
>[228.314s][info][gc] GC(12) Concurrent cleanup 94274M->35906M(102400M)
>97.175ms
>[246.978s][info][gc] GC(13) Concurrent cleanup 93041M->53969M(102400M)
>121.889ms
>[259.877s][info][gc] GC(14) Concurrent cleanup 94530M->36227M(102400M)
>142.702ms
>
>; from now on, cycles are almost evenly spread out
>[278.797s][info][gc] GC(15) Concurrent cleanup 95330M->55554M(102400M)
>88.855ms
>[292.650s][info][gc] GC(16) Concurrent cleanup 99906M->44702M(102400M)
>103.988ms
>[309.954s][info][gc] GC(17) Concurrent cleanup 99842M->55138M(102400M)
>150.080ms
>[324.174s][info][gc] GC(18) Concurrent cleanup 99618M->44525M(102400M)
>92.232ms
>[341.826s][info][gc] GC(19) Concurrent cleanup 99426M->54275M(102400M)
>93.808ms
>[355.849s][info][gc] GC(20) Concurrent cleanup 99778M->44744M(102400M)
>112.171ms
>[373.398s][info][gc] GC(21) Concurrent cleanup 100423M->55271M(102400M)
>96.984ms
>[387.209s][info][gc] GC(22) Concurrent cleanup 99623M->44615M(102400M)
>92.764ms
>
>
>Testing: tier3_gc_shenandoah, eyeballing gc logs on specjbb
>
>Thanks,
>-Aleksey
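
For readers unfamiliar with the heuristic, the two-phase selection described in the quoted message can be sketched roughly as below. This is a minimal illustration with invented names (`Region`, `AdaptiveCSet`, `freeTarget`, `maxUntrusted`), not the actual HotSpot C++ code from the webrev:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical model of a heap region; the real heuristic lives in
// HotSpot's C++ Shenandoah heuristics code.
class Region {
    final long garbage;      // reclaimable bytes if this region is collected
    final boolean trusted;   // true once the region is marked, so liveness data is valid

    Region(long garbage, boolean trusted) {
        this.garbage = garbage;
        this.trusted = trusted;
    }
}

class AdaptiveCSet {
    // Select regions until roughly 'freeTarget' bytes would be reclaimed:
    // trusted (already-marked) regions first, then at most 'maxUntrusted'
    // untrusted regions taken along "for the ride".
    static List<Region> select(List<Region> regions, long freeTarget, int maxUntrusted) {
        List<Region> trusted = new ArrayList<>();
        List<Region> untrusted = new ArrayList<>();
        for (Region r : regions) {
            (r.trusted ? trusted : untrusted).add(r);
        }
        // Greedy pick, most garbage first: trusted liveness counts are reliable.
        trusted.sort(Comparator.comparingLong((Region r) -> r.garbage).reversed());

        List<Region> cset = new ArrayList<>();
        long reclaimed = 0;
        for (Region r : trusted) {
            if (reclaimed >= freeTarget) break;
            cset.add(r);
            reclaimed += r.garbage;
        }
        // Untrusted regions get marked during this traversal cycle, so their
        // garbage becomes reclaimable next cycle; cap how many we drag along.
        int taken = 0;
        for (Region r : untrusted) {
            if (taken++ >= maxUntrusted) break;
            cset.add(r);
        }
        return cset;
    }
}
```

Untrusted regions contribute nothing to the reclaimed-bytes target here, which models the pairing behavior in the logs: the cycle that picks them mostly marks, and only the following cycle recycles their garbage.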

-- 
This message was sent from my Android device with K-9 Mail.
