RFR: Refactor conc-mark-in-progress/satb-active flags to use gc-phase-in-progress

Roman Kennke rkennke at redhat.com
Mon Oct 2 08:49:48 UTC 2017


Am 02.10.2017 um 10:16 schrieb Aleksey Shipilev:
> On 09/29/2017 04:59 PM, Roman Kennke wrote:
>> http://cr.openjdk.java.net/~rkennke/satb-concmark-flag/webrev.04/
>   *) This still touches the G1 codepath in g1_pre_barrier_slow_id, please revert. Is this the
> optimization you wanted to upstream? Upstream it then, and it will come back to us via the regular
> channel;
>
>   *) I don't understand the change of is_concurrent_mark_in_progress() to static function. This is
> inconsistent with other flags in the heap. Is this for performance? Because in most uses I can see
> ShenandoahHeap* is already available. ShenandoahBarrierSet::enqueue might just use
> ShenandoahHeap::heap(), because it does not seem very heavy.
>
>   *) I wonder if changing the poll from SATB_MQ_set().is_active() to is_conc_mark_in_progress() is
> actually correct? Can we have the disparity between these two flags, causing us to lose values when
> SATB is still supposed to be active? E.g. some weird weak references / keep-alive barrier interaction?
>
> Thanks,
> -Aleksey
>
I agree that it has become a little messy.

The original intention of the exercise was: when partial GC arrives, we 
need SATB buffers active both during conc-mark and conc-partial. 
Likewise, we need write barriers active both during evac and 
conc-partial. My idea was that instead of checking (and loading) both 
flags in hot paths (i.e. if (conc_partial_active || satb_active)), we 
could load one flag once and do a masking test (i.e. if 
((gc_phase_active & (CONC_MARK | CONC_PARTIAL)) != 0), and similar for 
evac). Another intended side-effect is that when we get to the GC 
interface, and the GC barrier gets to handle the complete store, for oop 
stores we'd end up with 1 write barrier and 1 SATB barrier during 
conc-mark/evac, or even 2 write barriers and maybe SATB-kind-of-like 
barriers during conc-partial. In this big block we'd then only need to 
load the flag once, and check it a bunch of times in a row, for 
different bits.
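
To make the masking idea concrete, something along these lines (sketch 
only, all names are made up and not what's in the tree):

// Hypothetical phase bits; names are illustrative only.
enum ShenandoahPhaseFlags {
  NONE         = 0,
  CONC_MARK    = 1 << 0,
  CONC_PARTIAL = 1 << 1,
  EVAC         = 1 << 2
};

// One byte (per thread or global), loaded once per oop-store slow path.
static char _gc_phase = NONE;

static inline bool satb_barrier_needed() {
  // SATB buffers must be active during conc-mark and conc-partial.
  return (_gc_phase & (CONC_MARK | CONC_PARTIAL)) != 0;
}

static inline bool write_barrier_needed() {
  // Write barriers must be active during evac and conc-partial.
  return (_gc_phase & (EVAC | CONC_PARTIAL)) != 0;
}

The point is that barrier code loads the flag once and then tests 
different bits against it.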

However, it turned out that the SATB barrier code has this satb_active 
flag checking everywhere. I see two ways out of it:

1. Refactor the upstream SATB buffer code to be independent of the 
satb_active check, do the satb_active check only in G1 paths, and do 
our own flag checking in our paths. This is probably the cleanest 
solution, but it requires significant upstream-first work.

2. We could make the GC phase flags independent from the SATB-active and 
write-barrier-active flags. Each barrier would then check its own flag, 
and we'd need to turn the correct barriers on/off for each phase (a 
small sketch follows below). GC-phase-active would not need anything in 
thread.hpp either. We'd lose the benefit of not having to load the 
flags twice in oop stores, but that is probably not very important. 
It's a fairly clean solution (more or less what we had before I started 
this flag-masking work, with a little bit of touch-up required here and 
there), and certainly the path of least resistance.
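
As a sketch of what option 2 would look like (again, made-up names, not 
actual code):

// Hypothetical per-barrier flags; phase transitions flip the right set.
class ShenandoahBarrierFlags {
public:
  ShenandoahBarrierFlags() : _satb_active(false), _write_barrier_active(false) {}

  // Checked by the SATB/pre-barrier fast path.
  bool satb_active()          const { return _satb_active; }
  // Checked by the write-barrier fast path.
  bool write_barrier_active() const { return _write_barrier_active; }

  // Each phase turns on exactly the barriers it needs.
  void set_conc_mark()    { _satb_active = true;  _write_barrier_active = false; }
  void set_evac()         { _satb_active = false; _write_barrier_active = true;  }
  void set_conc_partial() { _satb_active = true;  _write_barrier_active = true;  }
  void set_idle()         { _satb_active = false; _write_barrier_active = false; }

private:
  bool _satb_active;
  bool _write_barrier_active;
};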

Opinions? Do you see other ways out? What should I do?

Roman


