RFR: Parallelize safepoint cleanup
Robbin Ehn
robbin.ehn at oracle.com
Wed May 31 20:06:26 UTC 2017
Hi Roman,

I agree that this is really needed, but:
On 05/31/2017 10:27 AM, Roman Kennke wrote:
> I realized that sharing workers with GC is not so easy.
>
> We need to be able to use the workers at a safepoint during concurrent
> GC work (which also uses the same workers). This does not only require
> that those workers be suspended, like e.g.
> SuspendibleThreadSet::yield(), but they need to be idle, i.e. have
> finished their tasks. This needs some careful handling to work without
> races: it requires a SuspendibleThreadSetJoiner around the corresponding
> run_task() call and also the tasks themselves need to join the STS and
> handle requests for safepoints not by yielding, but by leaving the task.
> This is far too peculiar for me to make the call to hook up GC workers
> for safepoint cleanup, and I thus removed those parts. I left the API in
> CollectedHeap in place. I think GC devs who know better about G1 and CMS
> should make that call, or else just use a separate thread pool.
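
The "leave the task instead of yielding" protocol described above can be sketched as follows. This is a self-contained, hypothetical stand-in, not the real SuspendibleThreadSet API: when a safepoint is requested mid-task, a worker must not park inside the task; it finishes its current slice and returns, so the gang becomes idle and reusable for safepoint cleanup.

```cpp
#include <atomic>

// Illustrative stand-in for "a safepoint has been requested"
// (in HotSpot this would go through SuspendibleThreadSet).
static std::atomic<bool> safepoint_requested{false};

struct ConcurrentGCTask {
  static constexpr int kItems = 1000;
  std::atomic<int> items_done{0};

  void work(unsigned /*worker_id*/) {
    for (int i = 0; i < kItems; i++) {
      if (safepoint_requested.load()) {
        return;                 // leave the task rather than yield in place
      }
      items_done.fetch_add(1);  // process one unit of concurrent GC work
    }
  }
};
```

The point of returning (rather than blocking) is that run_task() can then complete, leaving the work gang genuinely idle and safe to hand to the safepoint cleanup code.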
>
> http://cr.openjdk.java.net/~rkennke/8180932/webrev.05/
>
> Is it ok now?
I still think you should put the "Parallel Safepoint Cleanup" workers inside Shenandoah,
so the SafepointSynchronizer only calls get_safepoint_workers, e.g.:
_cleanup_workers = heap->get_safepoint_workers();
_num_cleanup_workers = _cleanup_workers != NULL ? _cleanup_workers->total_workers() : 1;

ParallelSPCleanupTask cleanup(_cleanup_subtasks);
StrongRootsScope srs(_num_cleanup_workers);
if (_cleanup_workers != NULL) {
  _cleanup_workers->run_task(&cleanup, _num_cleanup_workers);
} else {
  cleanup.work(0);
}
That way you don't even need your new flags; it will be up to the other GCs to make their workers available, or to cheat with a separate work gang.

Otherwise this looks reasonable.
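
For illustration, the dispatch pattern above can be sketched as a self-contained program. WorkGang, CleanupTask and run_cleanup below are hypothetical stand-ins for the HotSpot classes, just to show the shape: use the gang the heap hands out, or fall back to serial cleanup on the calling thread.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Hypothetical stand-in for an AbstractGangTask: work(worker_id)
// may be run by several workers in parallel.
struct CleanupTask {
  std::atomic<int> chunks_done{0};
  void work(unsigned /*worker_id*/) {
    chunks_done.fetch_add(1);  // pretend to clean one subtask
  }
};

// Hypothetical stand-in for a WorkGang owned by the GC.
struct WorkGang {
  unsigned total_workers() const { return 4; }
  void run_task(CleanupTask* task, unsigned n) {
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < n; i++) {
      threads.emplace_back([task, i] { task->work(i); });
    }
    for (auto& t : threads) t.join();
  }
};

// The dispatch pattern: run on the gang if the heap provides one,
// otherwise do the cleanup serially; no extra flags are needed.
int run_cleanup(WorkGang* gang) {
  CleanupTask cleanup;
  unsigned n = (gang != NULL) ? gang->total_workers() : 1;
  if (gang != NULL) {
    gang->run_task(&cleanup, n);
  } else {
    cleanup.work(0);  // serial fallback on the calling thread
  }
  return cleanup.chunks_done.load();
}
```

A GC that does not want to share its workers can simply return NULL from get_safepoint_workers() and get the serial path.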
Thanks for doing this!
/Robbin
>
> Roman
>
> On 30.05.2017 at 03:59, David Holmes wrote:
>> Hi Roman,
>>
>> On 29/05/2017 6:23 PM, Roman Kennke wrote:
>>> Hi David,
>>>
>>> the patch applied cleanly, i.e. was not affected by any hotspot changes.
>>> Most of your suggestions applied to the GC part that I excluded from the
>>> scope of this change, except for the following:
>>>
>>>>> I introduced a dedicated worker thread pool in SafepointSynchronize.
>>>>> This is only initialized when -XX:+ParallelSafepointCleanup is
>>>>> enabled,
>>>>> and uses -XX:ParallelSafepointCleanupThreads=X threads, defaulting
>>>>> to 8
>>>>> threads (just a wild guess, open for discussion). With
>>>>> -XX:-ParallelSafepointCleanup turned off (the default) it will use the
>>>>> old serial safepoint cleanup processing (with optional GC hooks, see
>>>>> below).
>>>>
>>>> I think the default needs to be ergonomically set, as for other
>>>> "thread pools". Both flags should be experimental at this stage.
>>>
>>> I implemented this using the following:
>>>
>>> if (ParallelSafepointCleanup &&
>>>     FLAG_IS_DEFAULT(ParallelSafepointCleanupThreads)) {
>>>   // This will pick up ParallelGCThreads if set...
>>>   FLAG_SET_DEFAULT(ParallelSafepointCleanupThreads,
>>>                    Abstract_VM_Version::parallel_worker_threads());
>>> }
>>>
>>> As noted, it will pick up ParallelGCThreads, which might be a reasonable
>>> number. Otherwise we'd need to re-work the logic in vm_version.cpp to
>>> calculate the number of worker threads without considering
>>> ParallelGCThreads.
>>
>> I think this is way too big a number of threads to consider for this
>> situation. GC has a huge amount of work to do, but the safepoint
>> cleanup should be much less work.
>>
>> I'm more inclined now to set a small hard-wired default initially -
>> say 4 - and for future work look at generalising the existing fixed GC
>> thread pool into a more general purpose, dynamic thread pool
>> (something similar to java.util.concurrent.ThreadPoolExecutor.) I note
>> others object to a dedicated thread pool for this, so generalising the
>> existing one seems to be a reasonable future direction, whilst
>> avoiding tying this to specific GCs.
>>
>> Thanks,
>> David
>> -----
>>
>>> http://cr.openjdk.java.net/~rkennke/8180932/webrev.02/
>>>
>>> What do you think?
>>>
>>> Roman
>>>
>
More information about the hotspot-gc-dev mailing list