RFR (S): Avoid evacuation if concurrent GC was cancelled
Roman Kennke
rkennke at redhat.com
Mon Dec 5 19:53:42 UTC 2016
On Monday, 05.12.2016 at 20:14 +0100, Aleksey Shipilev wrote:
> On 12/05/2016 07:44 PM, Roman Kennke wrote:
> > On Monday, 05.12.2016 at 19:09 +0100, Aleksey Shipilev wrote:
> > > Okay! How about this then?
> > > http://cr.openjdk.java.net/~shade/shenandoah/cancel-no-evac/webrev.02/
> >
> > Hmm, you still don't check for cancelled gc after the final-mark pause.
> > Notice how initial-evacuation can, in theory, fail and cause full-gc.
>
> Right. Oops, the code is hairy, and prone to mishaps like that.
>
> > Not your fault, but I find the use of both heap->cancelled_gc() and
> > should_terminate() confusing. Not sure if it can be consolidated
> > somehow? Not necessarily in this patch though.
>
> Yes, let's rehash ShenandoahConcurrentThread::run_service into two methods,
> so that the code is cleaner and early returns make cancellation checks
> similar to our beloved ParallelTerminator:
> http://cr.openjdk.java.net/~shade/shenandoah/cancel-no-evac/webrev.03/
>
> Still passes hotspot_gc_shenandoah, and jcstress is running.
Looks great!
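For the archives, the shape of the loop I have in mind is roughly the
following (just a sketch; the helper names like service_normal_cycle,
service_fullgc_cycle, heuristics_want_cycle and the sleep are illustrative,
not necessarily what the webrev ends up with):

    void ShenandoahConcurrentThread::run_service() {
      while (!should_terminate()) {
        if (heap->cancelled_gc()) {
          // Last-ditch STW collection; must recover from any heap state.
          service_fullgc_cycle();
        } else if (heuristics_want_cycle()) {
          // Regular concurrent cycle: mark, then evacuate.
          service_normal_cycle();
        } else {
          // Nothing to do; wait for allocation pressure.
          sleep_a_bit();
        }
      }
    }

    void ShenandoahConcurrentThread::service_normal_cycle() {
      // init-mark pause + concurrent marking (elided)
      if (heap->cancelled_gc()) return;   // bail before final-mark

      // final-mark pause; initial evacuation inside the pause may fail
      // and cancel the cycle (elided)
      if (heap->cancelled_gc()) return;   // bail before concurrent evac

      // concurrent evacuation (elided)
    }

The important bit is the cancellation check between final-mark and
concurrent evacuation, so we never start evacuating regions when the
cycle has already been cancelled.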
> > The idea here is that if we fail during marking, in all likelihood we're
> > *almost* done with marking and don't necessarily need to mark everything
> > again. Downside would be that the mark bitmap is slightly pessimistic
> > because of SATB.
>
> No, I think Full GC should be our "last ditch" collection, and be able to
> recover from any legitimate heap situation. This mandates starting from
> scratch, to avoid spamming via e.g. SATB.
Yes ok. Future idea: also compact humongous objects ;-)
> We can probably do the "optimistic" STW collection that does reuse the
> concurrent mark data though.
Not exactly sure what you mean?
Green light for the patch!
Roman