RFR: 8340547: Starting many threads can delay safepoints [v4]

David Holmes dholmes at openjdk.org
Wed Sep 25 01:45:37 UTC 2024


On Tue, 24 Sep 2024 14:06:17 GMT, Oli Gillespie <ogillespie at openjdk.org> wrote:

>> Mitigate the impact of JVM_StartThread on safepoint synchronization by adding a new ThreadStart_lock, which limits the number of JVM_StartThread invocations competing for Threads_lock at any given time to one.
>> This gives a VM thread trying to initiate a safepoint a much better chance of acquiring Threads_lock when many JVM_StartThread invocations are in flight, at the cost of one extra lock/unlock for every new thread.
>> 
>> Can be disabled with new diagnostic flag `-XX:-UseExtraThreadStartLock`.
>> 
>> Before (ThreadStartTtsp.java is shared in JDK-8340547):
>> 
>> java -Xlog:safepoint ThreadStartTtsp.java | grep -o 'Reaching safepoint: [0-9]* ns'
>> Reaching safepoint: 1291591 ns
>> Reaching safepoint: 59962 ns
>> Reaching safepoint: 1958065 ns
>> Reaching safepoint: 14456666258 ns <-- 14 seconds!
>> ...
>> 
>> 
>> After:
>> 
>> java -Xlog:safepoint ThreadStartTtsp.java | grep -o 'Reaching safepoint: [0-9]* ns'
>> Reaching safepoint: 214269 ns
>> Reaching safepoint: 60253 ns
>> Reaching safepoint: 2040680 ns
>> Reaching safepoint: 3089284 ns
>> Reaching safepoint: 2998303 ns
>> Reaching safepoint: 4433713 ns <-- 4.4ms
>> Reaching safepoint: 3368436 ns
>> Reaching safepoint: 2986519 ns
>> Reaching safepoint: 3269102 ns
>> ...
>> 
>> 
>> 
>> **Alternatives**
>> 
>> I considered some other options for mitigating this. For example, could we reduce the time spent holding the lock in JVM_StartThread? Most of that time is spent managing the threads list for ThreadSMR support: each time a thread is added to the list, the whole list is copied and every entry in the original is freed, which is slow. But I didn't see an easy way to avoid this.
>> I also looked at having the VM thread signal that it is ready to start synchronizing, so that JVM_StartThread could check that signal before trying to grab Threads_lock, but I didn't find anything better than this extra lock.
>
> Oli Gillespie has updated the pull request incrementally with one additional commit since the last revision:
> 
>   Also address Thread::exit

Implementation looks good. Let's see what the benchmarking shows. We can reason that for applications that create lots of threads quickly, this additional throttling can actually improve general throughput. But of course any individual start/exit is slowed down by the extra lock acquisition and release.

Thanks

src/hotspot/share/runtime/globals.hpp line 2003:

> 2001:   product(bool, UseThreadsLockThrottleLock, true, DIAGNOSTIC,               \
> 2002:           "Use an extra lock during Thread start and exit to alleviate"     \
> 2003:           "contention on threads lock.")                                    \

Suggestion:

          "contention on Threads_lock.")                                    \

src/hotspot/share/runtime/mutexLocker.hpp line 65:

> 63: extern Mutex*   RetData_lock;                    // a lock on installation of RetData inside method data
> 64: extern Monitor* VMOperation_lock;                // a lock on queue of vm_operations waiting to execute
> 65: extern Monitor* ThreadsLockThrottle_lock;        // used by Thread start/stop to reduce competition for Threads_lock,

Suggestion:

extern Monitor* ThreadsLockThrottle_lock;        // used by Thread start/exit to reduce competition for Threads_lock,

-------------

PR Review: https://git.openjdk.org/jdk/pull/21111#pullrequestreview-2326792129
PR Review Comment: https://git.openjdk.org/jdk/pull/21111#discussion_r1774287594
PR Review Comment: https://git.openjdk.org/jdk/pull/21111#discussion_r1774288261


More information about the hotspot-dev mailing list