RFR (M): 8034842: Parallelize the Free CSet phase in G1

Thomas Schatzl thomas.schatzl at oracle.com
Wed May 11 21:05:28 UTC 2016


Hi Derek,

On Wed, 2016-05-11 at 16:19 -0400, Derek White wrote:
> Hi Thomas,
> 
> The changes in webrev.3 look good.
> 
> I have some comments on bigger picture now. I have one possible bug,
> and several increasingly annoying "you did it this way, why didn't
> you do it this way instead" comments :-)
> 
> src/share/vm/gc/g1/g1CollectedHeap.cpp:
> 
> - Is _regions_freed ever set? Can use local_free_list.length()
> instead.

I will look into this.

> - Could add evacuation_info parameter to complete_work(), and remove
> the 
> _evacuation_info field from G1FreeCollectionSetTask.

Fine with me.

> - do_serial_work() is done in parallel with do_parallel_work(). Is
> that safe?  hr->rem_set()->clear_locked() or 
> _g1h->_hot_card_cache->reset_card_counts(hr) can be called before or 
> after _g1h->free_region(hr, &_local_free_list, true, true, true).

It is. The rem sets and the card counts are pretty independent data
structures and can be cleared at any time, before or after freeing the
region.
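
As a toy illustration of why the order does not matter, here is a
simplified, self-contained model (made-up types and fields, not the
actual G1 data structures): the rem set and the hot card cache count
live in state that is disjoint from the free-list linkage, so clearing
them commutes with freeing the region.

#include <cstdio>

// Simplified model of a heap region: rem_set_size and hot_card_count
// stand in for the real (independent) G1 structures; on_free_list
// stands in for what free_region() manipulates.
struct Region {
  int  rem_set_size   = 100;
  int  hot_card_count = 5;
  bool on_free_list   = false;
};

// "Serial" cleanup: clears the auxiliary structures only.
static void clear_aux_structures(Region* r) {
  r->rem_set_size   = 0;
  r->hot_card_count = 0;
}

// "Parallel" cleanup: frees the region only.
static void free_region(Region* r) {
  r->on_free_list = true;
}

int main() {
  Region a, b;
  free_region(&a);          clear_aux_structures(&a);  // free first
  clear_aux_structures(&b); free_region(&b);           // clear first
  // Both orders end in the same state because the two operations touch
  // disjoint fields.
  std::printf("a: %d %d %d\n", a.rem_set_size, a.hot_card_count, (int)a.on_free_list);
  std::printf("b: %d %d %d\n", b.rem_set_size, b.hot_card_count, (int)b.on_free_list);
  return 0;
}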

> Line 4928:
> - Is using the  _serial_work_claim mechanism safer than just testing
> for (worker_id == 0)? Because it's not simpler :-)

Because worker 0 might not actually be the first one to reach the
do_work() method, which could unnecessarily delay the entire process
until that particular thread has finished spinning up. While we always
need to wait for all threads to spin up before the task can end, using
worker_id == 0 means the critical path is at least the spin-up of
thread 0 plus the serial work; with the claim mechanism the serial work
starts as soon as the first worker arrives.
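
A minimal, self-contained sketch of the claim idea (not the actual
HotSpot code; HotSpot uses its own Atomic and WorkGang primitives
rather than std::atomic/std::thread, and the class and member names
below are made up): whichever worker reaches do_work() first claims the
serial part with an atomic exchange, so the serial work never waits for
one particular thread to spin up.

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Illustrative stand-in for a gang task: the first worker to arrive
// claims the serial part; every worker then helps with the parallel part.
class SerialAndParallelTask {
  std::atomic<bool> _serial_work_claimed{false};

  void do_serial_work(unsigned worker_id) {
    std::printf("worker %u does the serial work\n", worker_id);
  }

  void do_parallel_work(unsigned worker_id) {
    std::printf("worker %u does a share of the parallel work\n", worker_id);
  }

public:
  void do_work(unsigned worker_id) {
    // Claim the serial work with one atomic exchange: unlike testing
    // worker_id == 0, this never stalls the serial work until a specific
    // thread has spun up.
    if (!_serial_work_claimed.exchange(true)) {
      do_serial_work(worker_id);
    }
    do_parallel_work(worker_id);
  }
};

int main() {
  SerialAndParallelTask task;
  std::vector<std::thread> workers;
  for (unsigned i = 0; i < 4; ++i) {
    workers.emplace_back([&task, i] { task.do_work(i); });
  }
  for (std::thread& t : workers) {
    t.join();
  }
  return 0;
}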

> - I was going to question sorting the regions into young and old
> lists, and managing two lists instead of having just one. But that
> does simplify the G1GCPhaseTimes recording.

This is the main reason.

I will provide a new webrev tomorrow.

Thanks,
  Thomas



