RFR (M): 8245721: Refactor the TaskTerminator

Thomas Schatzl thomas.schatzl at oracle.com
Fri Aug 14 10:22:23 UTC 2020


Hi Kim,

   sorry for the .1 webrev; it was sent out in a hurry and got messed up.

Here's what was originally intended to be posted:

http://cr.openjdk.java.net/~tschatzl/8245721/webrev.2/ (full)
http://cr.openjdk.java.net/~tschatzl/8245721/webrev.1_to_2/ (diff)

This took some time because of vacations and some more perf testing that 
had to wait a bit while the machine was occupied (no issues afaics).

We also talked about potential improvements/changes, both in this thread 
and in private; I summarized the less crazy ones in 
https://bugs.openjdk.java.net/browse/JDK-8251833

Thanks,
   Thomas

On 27.07.20 11:20, Kim Barrett wrote:
>> On Jul 24, 2020, at 6:41 AM, Thomas Schatzl <thomas.schatzl at oracle.com> wrote:
>> On 08.07.20 17:31, Kim Barrett wrote:
>>> src/hotspot/share/gc/shared/taskTerminator.hpp
>>>   128       spin_context.hard_spin_limit = MIN2(2 * spin_context.hard_spin_limit,
>>>   129                                           (uint) WorkStealingHardSpins);
>>> In the new code, hard_spin_limit is always (re)initialized to
>>> WorkStealingHardSpins, so this does nothing.
>>> In the old code, the starting spin limit was
>>>    WorkStealingHardSpins >> WorkStealingSpinToYieldRatio
>>> and that value was captured in a variable for (hard_spin_start) for
>>> use in reinitializing the limit after each yield iteration.
>>> So this is a change, and I'm guessing not an intentional one.
>>
>> Fixed.
> 
> I think this wasn’t fixed; see below.
> 
>>
>> New webrev:
>> http://cr.openjdk.java.net/~tschatzl/8245721/webrev.1/ (full)
>> http://cr.openjdk.java.net/~tschatzl/8245721/webrev.0_to_1/ (diff)
>>
>> Testing:
>> tier1-5, various benchmarks
>>
>> Thomas
> 
> The webrev.0_to_1 diff doesn't seem to have the original webrev on the left,
> but rather some intermediate stage between webrev and webrev.1. For example,
> webrev.0_to_1 has DelayContext (from the webrev.1 change) rather than
> SpinContext (from the webrev change).
> 
> ------------------------------------------------------------------------------
> src/hotspot/share/gc/shared/taskTerminator.cpp
>   119       delay_context._hard_spin_limit = MIN2(2 * delay_context._hard_spin_limit,
>   120                                           (uint) WorkStealingHardSpins);
>   121       for (uint j = 0; j < delay_context._hard_spin_limit; j++) {
> 
> [pre-existing]
> Why is hard_spin_limit incremented before doing the spin?
> 
> Line 120 is mis-indented.
> 
> ------------------------------------------------------------------------------
> src/hotspot/share/gc/shared/taskTerminator.cpp
>   115       delay_context._hard_spin_limit = WorkStealingHardSpins;
> 
> This is still resetting to WorkStealingHardSpins, rather than scaling that
> down by WorkStealingSpinToYieldRatio, making further executions of line 119
> do nothing.  So *not* fixed.
> 
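For illustration, a minimal, self-contained sketch of the spin/yield escalation
being discussed; the flag names mirror the HotSpot ones, but the values, helper
names and overall structure are stand-ins, not the webrev code:

  #include <algorithm>
  #include <thread>

  // Stand-ins for the HotSpot flags and primitives; values are illustrative only.
  static const unsigned WorkStealingHardSpins = 4096;
  static const unsigned WorkStealingSpinToYieldRatio = 10;
  static void hard_spin_pause() { /* SpinPause() in HotSpot */ }

  // One delay round: while spinning, double the spin limit up to the cap; after
  // a yield, drop back to the scaled-down start value. If the reset after a
  // yield went back to the cap (WorkStealingHardSpins) instead of the scaled
  // start, the doubling in the else-branch would never have any effect.
  static void delay_round(unsigned& hard_spin_count, unsigned& hard_spin_limit) {
    const unsigned hard_spin_start =
        WorkStealingHardSpins >> WorkStealingSpinToYieldRatio;
    if (hard_spin_count > WorkStealingSpinToYieldRatio) {
      std::this_thread::yield();          // periodically yield instead of spinning
      hard_spin_count = 0;
      hard_spin_limit = hard_spin_start;  // reset to scaled-down start, not the cap
    } else {
      hard_spin_limit = std::min(2 * hard_spin_limit, WorkStealingHardSpins);
      hard_spin_count++;
      for (unsigned j = 0; j < hard_spin_limit; j++) {
        hard_spin_pause();
      }
    }
  }

With a reset to the scaled-down start, the spin rounds between yields can grow
back up to the cap again, which is the behaviour of the old code described above.
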
> ------------------------------------------------------------------------------
> src/hotspot/share/gc/shared/taskTerminator.cpp
>   109     // Periodically call yield() instead spinning
> 
> Calls os::naked_yield() rather than yield().  Maybe just say "Periodically
> yield instead of spinning."
> 
> ------------------------------------------------------------------------------
> src/hotspot/share/gc/shared/taskTerminator.cpp
>   104 bool TaskTerminator::do_delay_step(DelayContext& delay_context) {
> 
> The function returns a bool indicating whether the yields-before-sleep limit
> has been reached, but the caller ignores the result
> 
>   167           do_delay_step(delay_context);
> 
> and there is a separate check in the caller for the yields-before-sleep
> limit being reached.
> 
> Related, _yield_count seems to be incremented twice:
>   108     delay_context._yield_count++;
>   162         ++delay_context._yield_count;
> 
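As a purely hypothetical sketch of the suggested shape, with the count bumped in
exactly one place and the caller branching on the return value (all names and
the flag value here are stand-ins, not the webrev code):

  #include <thread>

  // Stand-in for the HotSpot flag; value is illustrative only.
  static const unsigned WorkStealingYieldsBeforeSleep = 5000;

  struct DelayContext {
    unsigned _yield_count = 0;
  };

  // Returns true once the yields-before-sleep limit has been reached, i.e. the
  // caller should stop delaying and go to sleep on the termination monitor.
  static bool do_delay_step(DelayContext& ctx) {
    ctx._yield_count++;          // the only place the count is incremented
    std::this_thread::yield();   // or a hard-spin round, elided here
    return ctx._yield_count >= WorkStealingYieldsBeforeSleep;
  }

  static void delay_before_sleep(DelayContext& ctx) {
    while (!do_delay_step(ctx)) {
      // re-check for work / termination here and return early if found
    }
    // limit reached: fall back to sleeping on the termination monitor
  }
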
> ------------------------------------------------------------------------------
> 
> I'm finding enough puzzling things to wonder if I'm actually looking at the
> code you intended to have reviewed, so I'm stopping for now, pending getting
> that cleared up.
> 



