RFR: "Compact" heuristics for dense footprint scenarios
Roman Kennke
rkennke at redhat.com
Fri Mar 2 18:24:05 UTC 2018
It looks good to me.
Is "continuous" still useful? Maybe we want to remove it?
Thanks, Roman
On Fri, Mar 2, 2018 at 2:09 PM, Aleksey Shipilev <shade at redhat.com> wrote:
> http://cr.openjdk.java.net/~shade/shenandoah/compact-heuristics/webrev.01/
>
> We have assorted options to control the footprint story. For users running Shenandoah in
> dense footprint scenarios (capitalizing on its less intrusive, periodic concurrent GCs), we might
> want a heuristics that configures Shenandoah for low footprint, at the expense of throughput.
>
> We have the "continuous" heuristics, but it only handles back-to-back cycles. We can evolve
> "continuous" into a "compact" heuristics that does:
> a) Frequent cycles, based on the amount of allocated data (setting that back to 0 gives us our old
> full-crazy continuous);
> b) Denser compaction target, compacting the regions that usual heuristics would leave untouched;
> c) More frequent periodic GC to kick out lingering garbage;
> d) More prompt heap uncommitting;
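Items (a) and (c) amount to two trigger conditions: start a cycle once enough has been allocated since the last cycle, or once the guaranteed interval has elapsed. A minimal sketch of that combined trigger, with illustrative names and threshold handling (not the actual webrev code):

```cpp
#include <cstdint>

// Hypothetical sketch of the "compact" trigger; names like alloc_threshold_pct
// and guaranteed_interval_ms are assumptions for illustration.
struct CompactHeuristicsSketch {
  uint64_t capacity;               // total heap capacity, bytes
  uint64_t alloc_threshold_pct;    // trigger after this % of heap is allocated
  uint64_t guaranteed_interval_ms; // periodic GC interval

  // (a) Frequent cycles based on allocated bytes since the last cycle;
  //     a threshold of 0 degenerates into back-to-back "continuous" behavior.
  // (c) Guaranteed periodic cycle to kick out lingering garbage.
  bool should_start_gc(uint64_t allocated_since_last_gc,
                       uint64_t ms_since_last_gc) const {
    uint64_t threshold = capacity / 100 * alloc_threshold_pct;
    if (allocated_since_last_gc > threshold) return true;       // allocation trigger
    if (ms_since_last_gc > guaranteed_interval_ms) return true; // periodic trigger
    return false;
  }
};
```

With a 4096M heap and a 10% threshold, this reproduces the shape of the log below: an allocation trigger near 409M allocated, plus a periodic trigger just past 30000 ms.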
>
> This heuristics is milder than "continuous", because it waits for some allocation to happen before
> acting, but it is more aggressive than the other heuristics in compacting the heap. In bursty
> scenarios, this gives us very prompt footprint savings. E.g., garbage allocated by a single thread
> gets compacted to the minimum after 35 seconds:
>
> [3.614s][info][gc,ergo] Concurrent marking triggered. Free: 3673M, Allocated: 418M, Alloc Threshold:
> 409M
> [3.615s][info][gc ] GC(15) Pause Init Mark (process refs) 0.673ms
> [3.616s][info][gc ] GC(15) Concurrent marking (process refs) 423M->424M(4096M) 0.781ms
> [3.617s][info][gc ] GC(15) Concurrent precleaning 424M->425M(4096M) 0.301ms
> [3.617s][info][gc,ergo] GC(15) CSet selection: actual free = 4090M; max cset = 3067M
> [3.617s][info][gc,ergo] GC(15) Total Garbage: 422M
> [3.617s][info][gc,ergo] GC(15) Immediate Garbage: 420M, 420 regions (99% of total)
> [3.618s][info][gc ] GC(15) Pause Final Mark (process refs) 0.941ms
> [3.618s][info][gc ] GC(15) Concurrent cleanup 425M->7M(4096M) 0.523ms
> [3.620s][info][gc ] GC(15) Concurrent cleanup 7M->10M(4096M) 1.159ms
> <allocations stopped here>
> [5.108s][info][gc ] Uncommitted 1566M. Heap: 4096M reserved, 444M committed, 170M used
> [7.623s][info][gc ] Uncommitted 2M. Heap: 4096M reserved, 442M committed, 170M used
> [8.128s][info][gc ] Uncommitted 14M. Heap: 4096M reserved, 428M committed, 170M used
> [8.656s][info][gc ] Uncommitted 257M. Heap: 4096M reserved, 171M committed, 170M used
> [33.625s][info][gc,ergo] Periodic GC triggered. Time since last GC: 30006 ms, Guaranteed Interval:
> 30000 ms
> [33.626s][info][gc ] GC(16) Pause Init Mark 0.890ms
> [33.627s][info][gc ] GC(16) Concurrent marking 170M->170M(4096M) 0.572ms
> [33.627s][info][gc,ergo] GC(16) CSet selection: actual free = 4091M; max cset = 3068M
> [33.627s][info][gc,ergo] GC(16) Total Garbage: 169M
> [33.627s][info][gc,ergo] GC(16) Immediate Garbage: 166M, 167 regions (97% of total)
> [33.627s][info][gc ] GC(16) Pause Final Mark 0.457ms
> [33.628s][info][gc ] GC(16) Concurrent cleanup 170M->3M(4096M) 0.279ms
> [33.628s][info][gc ] GC(16) Concurrent cleanup 3M->3M(4096M) 0.539ms
> [38.959s][info][gc ] Uncommitted 167M. Heap: 4096M reserved, 4M committed, 3M used
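The denser compaction target in (b) can be pictured as a lower per-region garbage threshold for collection set selection: regions that the usual heuristics would leave untouched (too little garbage to be worth evacuating) now make it into the cset. A sketch under assumed region layout and threshold values, not the actual webrev code:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Illustrative region: only the garbage byte count matters for this sketch.
struct Region { uint64_t garbage; };

// Select the collection set: regions whose garbage exceeds the threshold.
// The usual heuristics would use a high garbage_threshold_pct; a "compact"
// policy lowers it, so sparsely-garbaged regions get evacuated and freed too.
std::vector<size_t> select_cset(const std::vector<Region>& regions,
                                uint64_t region_size,
                                uint64_t garbage_threshold_pct) {
  std::vector<size_t> cset;
  uint64_t threshold = region_size / 100 * garbage_threshold_pct;
  for (size_t i = 0; i < regions.size(); i++) {
    if (regions[i].garbage > threshold) cset.push_back(i);
  }
  return cset;
}
```

For example, with 1M regions holding 700K, 150K, and 0 bytes of garbage, a 60% threshold selects only the first region, while a 10% threshold also picks up the second.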
>
> In larger scenarios, like a Wildfly microservice, we manage to keep the footprint very low, and pay
> for that with additional CPU churn -- but as soon as the application stops allocating, we are back
> to idle:
> http://cr.openjdk.java.net/~shade/shenandoah/compact-heuristics/rss.pdf
> http://cr.openjdk.java.net/~shade/shenandoah/compact-heuristics/cpu.pdf
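The prompter uncommit in (d) boils down to a shorter delay before empty regions are returned to the OS, which is what drives the "Uncommitted ..." lines above. A sketch with assumed field names and delay handling, not the actual webrev code:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Illustrative empty-region bookkeeping for uncommit decisions.
struct EmptyRegion {
  bool empty;                // region holds no live data
  uint64_t ms_since_emptied; // how long it has been empty
};

// Count regions eligible for uncommit under a given delay; a "compact"
// policy would use a smaller uncommit_delay_ms than the default heuristics,
// returning memory to the OS sooner after a burst subsides.
size_t uncommit_candidates(const std::vector<EmptyRegion>& regions,
                           uint64_t uncommit_delay_ms) {
  size_t n = 0;
  for (const EmptyRegion& r : regions) {
    if (r.empty && r.ms_since_emptied >= uncommit_delay_ms) n++;
  }
  return n;
}
```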
>
> Testing: hotspot_gc_shenandoah, RSS tests, etc
>
> Thanks,
> -Aleksey
>
More information about the shenandoah-dev mailing list