RFR: Degenerating concurrent marking
Roman Kennke
rkennke at redhat.com
Fri Dec 16 14:55:06 UTC 2016
This patch implements what I call 'degenerating concurrent marking'.
If, during concurrent marking, we run out of memory, then instead of
stopping, throwing away all marking data and doing a full GC, we
gracefully hand over all existing marking work to the subsequent
final-mark pause, finish marking there, and carry on as normal. The
idea is that in most cases the OOM does not happen because we got into
a bad situation (a fragmented heap or the like), but merely because of
temporary allocation bursts, *and* chances are high that we're almost
done marking anyway.
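To illustrate, here is a minimal, simplified sketch of that decision as
a standalone C++ toy (all names and the fragmentation check are made
up; the real logic is in the webrev):

  // Hypothetical sketch only -- names and structure are simplified, not
  // the actual Shenandoah code. It illustrates the choice described
  // above: when an allocation failure cancels concurrent marking, either
  // keep all marking data and finish in the final-mark pause, or fall
  // back to a full GC.
  #include <cstdio>

  enum class Outcome { Completed, CancelledByAllocFailure };

  // Assumed heuristic: degenerate unless the heap is in genuinely bad
  // shape (e.g. badly fragmented), which is rare for a temporary
  // allocation burst.
  static bool should_degenerate(bool heap_badly_fragmented) {
    return !heap_badly_fragmented;
  }

  static void run_cycle(Outcome marking, bool heap_badly_fragmented) {
    if (marking == Outcome::CancelledByAllocFailure &&
        should_degenerate(heap_badly_fragmented)) {
      // Keep bitmaps, task queues, SATB buffers and weakref queues
      // intact; the final-mark pause finishes the remaining marking.
      std::printf("degenerated: finishing marking in final-mark pause\n");
    } else if (marking == Outcome::CancelledByAllocFailure) {
      // Bad situation: throw marking data away and do a full GC.
      std::printf("falling back to full GC\n");
    } else {
      std::printf("normal cycle: concurrent marking completed\n");
    }
  }

  int main() {
    run_cycle(Outcome::CancelledByAllocFailure, false); // degenerate
    run_cycle(Outcome::CancelledByAllocFailure, true);  // full GC
    run_cycle(Outcome::Completed, false);               // normal cycle
  }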
I made it so that existing mark bitmaps, task queues, SATB buffers and
weakref queues are left intact: if the heuristics decide to go into
degenerated concurrent marking, the final-mark pause carries on where
concurrent marking left off. Interestingly, the code for this is mostly
in place already ... in final marking we already finish off marking in
exactly the way we need.
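As a rough illustration of why the final-mark code already does what we
need, here is a toy drain loop (standalone C++, made-up names, not the
actual HotSpot code) that finishes whatever concurrent marking left
behind, whether that is a little (normal cycle) or a lot (degenerated
cycle):

  // Hypothetical sketch: a final-mark pause that drains SATB buffers
  // and task queues to a fixed point completes the leftover marking,
  // regardless of how far concurrent marking got.
  #include <deque>
  #include <cstdio>

  struct Oop { int id; };

  static std::deque<Oop> task_queue;   // stands in for per-worker queues
  static std::deque<Oop> satb_buffers; // stands in for queued SATB buffers

  static void mark_and_push(Oop o) {
    // Marking a live object may discover more work; omitted here.
    std::printf("marked %d\n", o.id);
  }

  static void final_mark_pause() {
    // Same drain loop as a normal final mark: flush SATB buffers into
    // the task queues, then drain the queues until everything is empty.
    while (!satb_buffers.empty() || !task_queue.empty()) {
      while (!satb_buffers.empty()) {
        task_queue.push_back(satb_buffers.front());
        satb_buffers.pop_front();
      }
      while (!task_queue.empty()) {
        Oop o = task_queue.front();
        task_queue.pop_front();
        mark_and_push(o);
      }
    }
  }

  int main() {
    // Pretend concurrent marking was cancelled with work still queued.
    task_queue   = {{1}, {2}};
    satb_buffers = {{3}};
    final_mark_pause(); // finishes the leftover marking in the pause
  }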
I needed to tweak the termination protocol in the taskqueue for that,
and to not clear the task queues on cancellation. Instead I added a
'shortcut' for the case where we need to terminate without draining the
task queues. Please look at this carefully; I am not totally sure I got
it right.
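For context, a much-simplified, single-threaded toy of the shortcut
idea (made-up names, not the real parallel terminator): normally a
worker only terminates once the queues are empty; on cancellation a
flag lets it terminate immediately, leaving the remaining work in the
queues for the final-mark pause instead of clearing it:

  // Hypothetical sketch of the termination shortcut; the real protocol
  // is the parallel task-queue terminator in HotSpot.
  #include <atomic>
  #include <deque>
  #include <cstdio>

  static std::deque<int> queue;                  // leftover marking work
  static std::atomic<bool> terminate_now{false}; // the shortcut flag

  static bool offer_termination() {
    if (terminate_now.load()) return true;       // shortcut: don't drain
    return queue.empty();                        // normal: drain first
  }

  static void worker() {
    while (!offer_termination()) {
      int task = queue.front();
      queue.pop_front();
      std::printf("processed task %d\n", task);
    }
  }

  int main() {
    queue = {1, 2, 3, 4};
    terminate_now.store(true);  // cancellation: stop without draining
    worker();
    std::printf("%zu tasks left for final mark\n", queue.size());
  }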
In addition, I also rewrote the adaptive heuristics. They start out
with a 10% free threshold (i.e. we start marking when 10% of available
space is left), lower that threshold if we have 5 successful markings
in a row, and bump it up if we fail to complete concurrent marking. We
limit the free threshold to 3 < free_threshold < 30. All parameters can
be configured.
These adaptive heuristics work very well for me, and I'm tempted to
make them the default soon. They make much better use of headroom,
which means fewer GC cycles, and thus better throughput.
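For illustration, a small standalone sketch of the threshold adjustment
(the names, the step size of 1% and the exact structure are my own
placeholders; the real flags and values are in the webrev below):

  // Hypothetical sketch: start at 10% free, lower the threshold after
  // 5 successful markings in a row, raise it when concurrent marking
  // fails to complete, and keep it within 3 < free_threshold < 30.
  #include <algorithm>
  #include <cstdio>

  struct AdaptiveHeuristics {
    int free_threshold   = 10; // percent free that triggers marking
    int successful_marks = 0;  // successes in a row since last change

    void record_success() {
      if (++successful_marks >= 5) {   // 5 good cycles: use more headroom
        free_threshold = std::max(3, free_threshold - 1);
        successful_marks = 0;
      }
    }

    void record_failure() {            // marking didn't finish: back off
      free_threshold = std::min(30, free_threshold + 1);
      successful_marks = 0;
    }

    bool should_start_marking(int percent_free) const {
      return percent_free <= free_threshold;
    }
  };

  int main() {
    AdaptiveHeuristics h;
    for (int i = 0; i < 5; i++) h.record_success();
    std::printf("after 5 successes: threshold = %d%%\n", h.free_threshold);
    h.record_failure();
    std::printf("after a failure:   threshold = %d%%\n", h.free_threshold);
  }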
http://cr.openjdk.java.net/~rkennke/degen-marking/webrev.00/
Ok? Opinions?
Roman