RFR: 8306738: Select num workers for safepoint ParallelCleanupTask

Aleksey Shipilev shade at openjdk.org
Tue Apr 25 12:30:09 UTC 2023


On Tue, 25 Apr 2023 11:46:45 GMT, Thomas Schatzl <tschatzl at openjdk.org> wrote:

>> It is a "known" issue that if a particular task does not select its number of threads properly before executing (or not at all, as with this task), there may be perf issues.
>> 
>> If this is what you want, I would suggest that `active_workers()` as a limit preserves the current behavior best. Otherwise the task should calculate and set its optimal number of threads itself, as it does in this change.
>> 
>> Not sure if setting it to `SAFEPOINT_CLEANUP_NUM_TASKS` is a good idea if some/most of this work is trivial/empty, but it's probably a good first guess.
>> 
>> The only GC that does not seem to set `safepoint_workers` to something non-default is Shenandoah (the STW collectors use the "main" worker threads they also use for evacuation, so this should be mostly maxed out; ZGC explicitly sets them to max_workers at initialization). (Via code inspection, so might be wrong.)
>
> @xmas92: what is the reason for this change? Did you see any situation where an inappropriate (too few) number of threads was used? Just curious.

BTW, as we are micro-optimizing here, we can be quite a bit smarter about this:


class ParallelCleanupTask {

...
  size_t expected_workers() {
    // This worker does the light-weight tasks and guarantees progress
    size_t workers = 1;

    // If there are heavy-weight tasks pending, add more workers for them
    if (SymbolTable::needs_rehashing()) workers++;
    if (StringTable::needs_rehashing()) workers++;
    if (_do_lazy_roots)                 workers++;

    return workers;
  }
...
};


Maybe even only for `_do_lazy_roots`, as other tasks basically just notify various subsystems, and they might as well be driven by a single thread.

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/13616#discussion_r1176436818


More information about the hotspot-runtime-dev mailing list