The failure

David Holmes david.holmes at oracle.com
Wed Oct 24 07:46:33 UTC 2018


Thanks Robbin! So you're not allowed to request a VM operation if you 
hold a ThreadsListHandle? I suppose that is no different to not being 
able to request a VM operation whilst holding the Threads_lock.
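
For concreteness, the disallowed shape would be roughly this (a sketch 
using names from this thread, not a verbatim excerpt; VM_ThreadsSuspendJVMTI 
is the op behind the "tsj" mentioned below, but any VM operation would do):

   {
     ThreadsListHandle tlh;       // publishes a hazard pointer, pinning
                                  // the current Threads_list snapshot
     VM_ThreadsSuspendJVMTI tsj;
     VMThread::execute(&tsj);     // parks on VMOperationRequest_lock while
                                  // the snapshot above is still pinned
   }                              // snapshot released only here - too late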

I suspect before ThreadSMR this may have been a case where we weren't 
ensuring a target thread could not terminate, and now with SMR we're 
ensuring that but potentially introducing a deadlock. I say potentially 
because obviously we don't deadlock every time we suspend threads.

Cheers,
David

On 24/10/2018 5:18 PM, Robbin Ehn wrote:
> Hi, truncated:
> 
> On 24/10/2018 02:00, serguei.spitsyn at oracle.com wrote:
>>>> One thing I noticed which Robbin should be able to expand upon is 
>>>> that Thread 101 is terminating and has called 
>>>> ThreadsSMRSupport::smr_delete and is blocked here:
>>>>
>>>>   // Wait for a release_stable_list() call before we check again. No
>>>>   // safepoint check, no timeout, and not as suspend equivalent flag
>>>>   // because this JavaThread is not on the Threads list.
>>>>   ThreadsSMRSupport::delete_lock()->wait(Mutex::_no_safepoint_check_flag,
>>>>                                          0, !Mutex::_as_suspend_equivalent_flag);
>>>>
>>>> As the comment says this thread is no longer on the Threads_list, 
>>>> but the VM_HandshakeAllThreads is not a safepoint operation and does 
>>>> not hold the Threads_lock, so is it possible this thread was 
>>>> captured by the JavaThreadIteratorWithHandle being used by 
>>>> VM_HandshakeAllThreads, before it got removed? If so we'd be hung 
>>>> waiting for it to handshake, as it's not in a "safepoint-safe" or 
>>>> suspend-equivalent state.
> 
> In short:
> # VM Thread
> The VM Thread is in a loop: it takes the Threads_lock, takes a new 
> snapshot of the Threads_list, scans the list, and processes handshakes 
> on behalf of safe threads. It then releases the snapshot and the 
> Threads_lock and checks whether all handshakes are completed.
> 
> # An exiting thread
> An exiting thread removes itself from _THE_ threads list, but must 
> stick around while it is still on any snapshot of alive threads. When 
> it is no longer on any list it will cancel the handshake.
> 
> Since the VM thread takes a new snapshot every iteration during the 
> handshake, any exiting thread can proceed, since it will not be on the 
> new snapshot. It thus cancels its handshake, and the VM thread can exit 
> the loop (if this was the last handshake).
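> 
> In sketch form, the VM thread's side looks roughly like this (a 
> paraphrase of the above, not the actual handshake code; 
> thread_is_safe(), process_handshake() and all_done() are placeholder 
> names):
> 
>    do {
>      MutexLocker ml(Threads_lock);
>      // JavaThreadIteratorWithHandle takes a fresh snapshot, so an
>      // exiting thread simply stops showing up on later passes.
>      for (JavaThreadIteratorWithHandle jtiwh; JavaThread *jt = jtiwh.next(); ) {
>        if (thread_is_safe(jt)) {
>          process_handshake(jt);   // on behalf of the safe thread
>        }
>      }
>      // snapshot and Threads_lock are released here, every pass
>    } while (!all_done());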
> 
> Constraint:
> If any thread grabs a snapshot of the threads list and later tries to 
> take a lock which is 'used' by the VM Thread or inside the handshake, 
> we can deadlock.
> 
> Considering that, look at e.g. JvmtiEnv::SuspendThreadList, which calls 
> VMThread::execute(&tsj); with a ThreadsListHandle alive. This could 
> deadlock AFAICT, since the requesting thread will rest on 
> VMOperationRequest_lock with a Threads list snapshot, but the VM thread 
> cannot finish the handshake until that snapshot is released.
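> 
> Spelled out, the cycle would be (a sketch; the exiting thread is the 
> one parked in ThreadsSMRSupport::smr_delete() quoted above):
> 
>    suspender:      ThreadsListHandle alive -> its snapshot pins the
>                    exiting thread; parks in VMThread::execute() on
>                    VMOperationRequest_lock
>    exiting thread: waits on delete_lock() until no snapshot contains
>                    it -> never cancels its handshake
>    VM thread:      VM_HandshakeAllThreads cannot complete until that
>                    handshake is cancelled -> never dequeues the
>                    suspend operation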
> 
> I suggest the first step is to add something like the patch below and 
> fix the obvious cases first.
> 
> Note, I have not verified that this is the problem you are seeing; I'm 
> saying that this seems to be a real issue. And considering how the 
> stack traces look, it may be this.
> 
> If you want me to go through this, just assign me the bug if there is one.
> 
> /Robbin
> 
> diff -r 622fd3608374 src/hotspot/share/runtime/thread.hpp
> --- a/src/hotspot/share/runtime/thread.hpp    Tue Oct 23 13:27:41 2018 +0200
> +++ b/src/hotspot/share/runtime/thread.hpp    Wed Oct 24 09:13:17 2018 +0200
> @@ -167,2 +167,6 @@
>     }
> + public:
> +  bool have_threads_list();
> + private:
> +
>     // This field is enabled via -XX:+EnableThreadSMRStatistics:
> diff -r 622fd3608374 src/hotspot/share/runtime/thread.inline.hpp
> --- a/src/hotspot/share/runtime/thread.inline.hpp    Tue Oct 23 13:27:41 2018 +0200
> +++ b/src/hotspot/share/runtime/thread.inline.hpp    Wed Oct 24 09:13:17 2018 +0200
> @@ -111,2 +111,6 @@
> 
> +inline bool Thread::have_threads_list() {
> +  return OrderAccess::load_acquire(&_threads_hazard_ptr) != NULL;
> +}
> +
>   inline void Thread::set_threads_hazard_ptr(ThreadsList* new_list) {
> diff -r 622fd3608374 src/hotspot/share/runtime/vmThread.cpp
> --- a/src/hotspot/share/runtime/vmThread.cpp    Tue Oct 23 13:27:41 2018 +0200
> +++ b/src/hotspot/share/runtime/vmThread.cpp    Wed Oct 24 09:13:17 2018 +0200
> @@ -608,2 +608,3 @@
>     if (!t->is_VM_thread()) {
> +    assert(t->have_threads_list(), "Deadlock if we have exiting threads and if vm thread is running a VM op."); // fatal/guarantee
>       SkipGCALot sgcalot(t);    // avoid re-entrant attempts to gc-a-lot
> 
> 
> 
> 
> 
>>>>
>>>> David
>>>> -----
>>>>
>>>> On 24/10/2018 9:18 AM, serguei.spitsyn at oracle.com wrote:
>>>>> Please, skip it - sorry for the noise.
>>>>> It is hard to prove anything with the current dump.
>>>>>
>>>>> Thanks,
>>>>> Serguei
>>>>>
>>>>> On 10/23/18 9:09 AM, serguei.spitsyn at oracle.com wrote:
>>>>>> Hi David and Robbin,
>>>>>>
>>>>>> I have an idea that needs to be checked.
>>>>>> It can be almost the same deadlock scenario that I've already 
>>>>>> explained, but more sophisticated.
>>>>>> I suspect a scenario with the JvmtiThreadState_lock in which the 
>>>>>> flag Monitor::_safepoint_check_always does not help much.
>>>>>> It can be verified by checking what monitors are used by the 
>>>>>> blocked threads.
>>>>>>
>>>>>> Thanks,
>>>>>> Serguei
>>>>>>
>>>>>>
>>>>>> On 10/23/18 07:38, Robbin Ehn wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> On 10/23/18 10:34 AM, David Holmes wrote:
>>>>>>>> Hi Serguei,
>>>>>>>>
>>>>>>>> The VMThread is executing VM_HandshakeAllThreads which is not a 
>>>>>>>> safepoint operation. There's no real way to tell from the stacks 
>>>>>>>> what it's stuck on.
>>>>>>>
>>>>>>> I cannot find a thread that is not considered safepoint safe or 
>>>>>>> is_ext_suspended (thread 146). So the handshake should go 
>>>>>>> through. The handshake will log a warning after a while; is there 
>>>>>>> such a warning from the handshake operation?
>>>>>>>
>>>>>>> There are several threads competing for e.g. the Threads_lock, 
>>>>>>> and threads waiting for GC and several other VM ops; could it 
>>>>>>> just be really slow?
>>>>>>>
>>>>>>> /Robbin
>>>>>>>
>>>>>>>>
>>>>>>>> David
>>>>>>>>
>>>>>>>> On 23/10/2018 5:58 PM, serguei.spitsyn at oracle.com wrote:
>>>>>>>>> Hi David,
>>>>>>>>>
>>>>>>>>> You are right, thanks.
>>>>>>>>> It means, this deadlock needs more analysis.
>>>>>>>>> For completeness, the stack traces are in attachments.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Serguei
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 10/23/18 00:43, David Holmes wrote:
>>>>>>>>>> Hi Serguei,
>>>>>>>>>>
>>>>>>>>>> The JvmtiThreadState_lock is always acquired with safepoint 
>>>>>>>>>> checks enabled, so all JavaThreads blocked trying to acquire 
>>>>>>>>>> it will be _thread_blocked and so safepoint-safe and so won't 
>>>>>>>>>> be holding up the safepoint.
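>>>>>>>>>>
>>>>>>>>>> In sketch form (simplified; only the state transition matters 
>>>>>>>>>> here, not the exact locking code):
>>>>>>>>>>
>>>>>>>>>>   // A safepoint-checked acquire transitions the caller to
>>>>>>>>>>   // _thread_blocked before parking, so while it waits here the
>>>>>>>>>>   // safepoint protocol treats it as already stopped:
>>>>>>>>>>   MutexLocker ml(JvmtiThreadState_lock); // _safepoint_check_always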
>>>>>>>>>>
>>>>>>>>>> David
>>>>>>>>>>
>>>>>>>>>> On 23/10/2018 5:21 PM, serguei.spitsyn at oracle.com wrote:
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> I've added the seviceability-dev mailing list.
>>>>>>>>>>> It can be interesting for the SVC folks. :)
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 10/22/18 22:14, Leonid Mesnik wrote:
>>>>>>>>>>>> Hi
>>>>>>>>>>>>
>>>>>>>>>>>> Seems the last version also crashes with 2 other different 
>>>>>>>>>>>> symptoms:
>>>>>>>>>>>> http://java.se.oracle.com:10065/mdash/jobs/lmesnik-ks8-20181021-0638-7157/results?search=status%3Afailed+AND+-state%3Ainvalid
>>>>>>>>>>>>
>>>>>>>>>>>> Also it might hang, with the stack attached. It seems that 
>>>>>>>>>>>> the test might be blocked because it invokes 2 jvmti 
>>>>>>>>>>>> methods. Can a jvmti agent invoke jvmti methods from 
>>>>>>>>>>>> different threads?
>>>>>>>>>>>
>>>>>>>>>>> Yes, in general.
>>>>>>>>>>> However, you have to be careful when using debugging features.
>>>>>>>>>>> Below, one thread is enabling single stepping while another 
>>>>>>>>>>> thread is being suspended.
>>>>>>>>>>> Both are blocked at a safepoint, which is okay in general but 
>>>>>>>>>>> not okay if they hold any lock.
>>>>>>>>>>> For instance, thread #152 is holding the monitor 
>>>>>>>>>>> JvmtiThreadState_lock.
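>>>>>>>>>>>
>>>>>>>>>>> In other words, the risky shape is (a sketch; the real code 
>>>>>>>>>>> path is in the #152 stack below, target_thread is assumed):
>>>>>>>>>>>
>>>>>>>>>>>   MutexLocker ml(JvmtiThreadState_lock);   // monitor held...
>>>>>>>>>>>   VM_GetCurrentLocation op(target_thread); // ...across a VM op
>>>>>>>>>>>   VMThread::execute(&op);                  // blocks for the
>>>>>>>>>>>                                            // safepoint with ml
>>>>>>>>>>>                                            // still held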
>>>>>>>>>>>
>>>>>>>>>>> Also, I see a couple more threads that are interesting as 
>>>>>>>>>>> well:
>>>>>>>>>>>
>>>>>>>>>>> Thread 159 (Thread 0x2ae40b78f700 (LWP 27962)):
>>>>>>>>>>> #0  0x00002ae3927b5945 in pthread_cond_wait@@GLIBC_2.3.2 () 
>>>>>>>>>>> from /lib64/libpthread.so.0
>>>>>>>>>>> #1  0x00002ae393ba8d63 in os::PlatformEvent::park 
>>>>>>>>>>> (this=this at entry=0x2ae3984c9100) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/posix/os_posix.cpp:1897 
>>>>>>>>>>>
>>>>>>>>>>> #2  0x00002ae393b50920 in ParkCommon (timo=0, 
>>>>>>>>>>> ev=0x2ae3984c9100) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:399 
>>>>>>>>>>>
>>>>>>>>>>> #3  Monitor::ILock (this=0x2ae398024f10, Self=0x2ae3984c7800) 
>>>>>>>>>>> at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:461 
>>>>>>>>>>>
>>>>>>>>>>> #4  0x00002ae393b512c1 in lock 
>>>>>>>>>>> (Self=0x2ae3984c7800, this=0x2ae398024f10) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:910 
>>>>>>>>>>>
>>>>>>>>>>> #5  Monitor::lock (this=this at entry=0x2ae398024f10) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:919 
>>>>>>>>>>>
>>>>>>>>>>> #6  0x00002ae39350510c in MutexLocker (mutex=0x2ae398024f10, 
>>>>>>>>>>> this=<synthetic pointer>) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutexLocker.hpp:182 
>>>>>>>>>>>
>>>>>>>>>>> #7  ciEnv::cache_jvmti_state (this=this at entry=0x2ae40b78eb30) 
>>>>>>>>>>> at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/ci/ciEnv.cpp:229 
>>>>>>>>>>>
>>>>>>>>>>> #8  0x00002ae3935d3294 in 
>>>>>>>>>>> CompileBroker::invoke_compiler_on_method 
>>>>>>>>>>> (task=task at entry=0x2ae48800ff40) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/compiler/compileBroker.cpp:2084 
>>>>>>>>>>>
>>>>>>>>>>> #9  0x00002ae3935d4f48 in CompileBroker::compiler_thread_loop 
>>>>>>>>>>> () at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/compiler/compileBroker.cpp:1798 
>>>>>>>>>>>
>>>>>>>>>>> #10 0x00002ae393d7338a in JavaThread::thread_main_inner 
>>>>>>>>>>> (this=this at entry=0x2ae3984c7800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1795 
>>>>>>>>>>>
>>>>>>>>>>> #11 0x00002ae393d736c6 in JavaThread::run 
>>>>>>>>>>> (this=0x2ae3984c7800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1775 
>>>>>>>>>>>
>>>>>>>>>>> #12 0x00002ae393ba0070 in thread_native_entry 
>>>>>>>>>>> (thread=0x2ae3984c7800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/linux/os_linux.cpp:698 
>>>>>>>>>>>
>>>>>>>>>>> #13 0x00002ae3927b1e25 in start_thread () from 
>>>>>>>>>>> /lib64/libpthread.so.0
>>>>>>>>>>> #14 0x00002ae392cc234d in clone () from /lib64/libc.so.6
>>>>>>>>>>>
>>>>>>>>>>> Thread 158 (Thread 0x2ae40b890700 (LWP 27963)):
>>>>>>>>>>> #0  0x00002ae3927b5945 in pthread_cond_wait@@GLIBC_2.3.2 () 
>>>>>>>>>>> from /lib64/libpthread.so.0
>>>>>>>>>>> #1  0x00002ae393ba8d63 in os::PlatformEvent::park 
>>>>>>>>>>> (this=this at entry=0x2ae3984cbb00) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/posix/os_posix.cpp:1897 
>>>>>>>>>>>
>>>>>>>>>>> #2  0x00002ae393b50920 in ParkCommon (timo=0, 
>>>>>>>>>>> ev=0x2ae3984cbb00) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:399 
>>>>>>>>>>>
>>>>>>>>>>> #3  Monitor::ILock (this=0x2ae398024f10, Self=0x2ae3984ca800) 
>>>>>>>>>>> at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:461 
>>>>>>>>>>>
>>>>>>>>>>> #4  0x00002ae393b512c1 in lock (Self=0x2ae3984ca800, 
>>>>>>>>>>> this=0x2ae398024f10) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:910 
>>>>>>>>>>>
>>>>>>>>>>> #5  Monitor::lock (this=this at entry=0x2ae398024f10) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:919 
>>>>>>>>>>>
>>>>>>>>>>> #6  0x00002ae39350510c in MutexLocker (mutex=0x2ae398024f10, 
>>>>>>>>>>> this=<synthetic pointer>) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutexLocker.hpp:182 
>>>>>>>>>>>
>>>>>>>>>>> #7  ciEnv::cache_jvmti_state (this=this at entry=0x2ae40b88fb30) 
>>>>>>>>>>> at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/ci/ciEnv.cpp:229 
>>>>>>>>>>>
>>>>>>>>>>> #8  0x00002ae3935d3294 in 
>>>>>>>>>>> CompileBroker::invoke_compiler_on_method 
>>>>>>>>>>> (task=task at entry=0x2ae49c00a670) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/compiler/compileBroker.cpp:2084 
>>>>>>>>>>>
>>>>>>>>>>> #9  0x00002ae3935d4f48 in CompileBroker::compiler_thread_loop 
>>>>>>>>>>> () at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/compiler/compileBroker.cpp:1798 
>>>>>>>>>>>
>>>>>>>>>>> #10 0x00002ae393d7338a in JavaThread::thread_main_inner 
>>>>>>>>>>> (this=this at entry=0x2ae3984ca800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1795 
>>>>>>>>>>>
>>>>>>>>>>> #11 0x00002ae393d736c6 in JavaThread::run 
>>>>>>>>>>> (this=0x2ae3984ca800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1775 
>>>>>>>>>>>
>>>>>>>>>>> #12 0x00002ae393ba0070 in thread_native_entry 
>>>>>>>>>>> (thread=0x2ae3984ca800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/linux/os_linux.cpp:698 
>>>>>>>>>>>
>>>>>>>>>>> #13 0x00002ae3927b1e25 in start_thread () from 
>>>>>>>>>>> /lib64/libpthread.so.0
>>>>>>>>>>> #14 0x00002ae392cc234d in clone () from /lib64/libc.so.6
>>>>>>>>>>>
>>>>>>>>>>> Thread 51 (Thread 0x2ae49549b700 (LWP 29678)):
>>>>>>>>>>> #0  0x00002ae3927b5945 in pthread_cond_wait@@GLIBC_2.3.2 () 
>>>>>>>>>>> from /lib64/libpthread.so.0
>>>>>>>>>>> #1  0x00002ae393ba8d63 in os::PlatformEvent::park 
>>>>>>>>>>> (this=this at entry=0x2ae460061c00) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/posix/os_posix.cpp:1897 
>>>>>>>>>>>
>>>>>>>>>>> #2  0x00002ae393b50920 in ParkCommon (timo=0, 
>>>>>>>>>>> ev=0x2ae460061c00) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:399 
>>>>>>>>>>>
>>>>>>>>>>> #3  Monitor::ILock (this=0x2ae398024f10, Self=0x2ae4600c2800) 
>>>>>>>>>>> at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:461 
>>>>>>>>>>>
>>>>>>>>>>> #4  0x00002ae393b512c1 in lock (Self=0x2ae4600c2800, 
>>>>>>>>>>> this=0x2ae398024f10) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:910 
>>>>>>>>>>>
>>>>>>>>>>> #5  Monitor::lock (this=this at entry=0x2ae398024f10) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:919 
>>>>>>>>>>>
>>>>>>>>>>> #6  0x00002ae393999682 in MutexLocker (mutex=0x2ae398024f10, 
>>>>>>>>>>> this=<synthetic pointer>) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutexLocker.hpp:182 
>>>>>>>>>>>
>>>>>>>>>>> #7  thread_started (thread=0x2ae4600c2800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEventController.cpp:668 
>>>>>>>>>>>
>>>>>>>>>>> #8  JvmtiEventController::thread_started 
>>>>>>>>>>> (thread=0x2ae4600c2800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEventController.cpp:1027 
>>>>>>>>>>>
>>>>>>>>>>> #9  0x00002ae39399f3a0 in JvmtiExport::post_thread_start 
>>>>>>>>>>> (thread=thread at entry=0x2ae4600c2800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiExport.cpp:1395 
>>>>>>>>>>>
>>>>>>>>>>> #10 0x00002ae393d737d8 in JavaThread::run 
>>>>>>>>>>> (this=0x2ae4600c2800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1764 
>>>>>>>>>>>
>>>>>>>>>>> #11 0x00002ae393ba0070 in thread_native_entry 
>>>>>>>>>>> (thread=0x2ae4600c2800) at 
>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/linux/os_linux.cpp:698 
>>>>>>>>>>>
>>>>>>>>>>> #12 0x00002ae3927b1e25 in start_thread () from 
>>>>>>>>>>> /lib64/libpthread.so.0
>>>>>>>>>>> #13 0x00002ae392cc234d in clone () from /lib64/libc.so.6
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> These two threads are blocked on the monitor 
>>>>>>>>>>> JvmtiThreadState_lock in the function 
>>>>>>>>>>> ciEnv::cache_jvmti_state().
>>>>>>>>>>> Also, there are many threads (like #51) that are executing 
>>>>>>>>>>> JvmtiExport::post_thread_start and blocked on the same monitor.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Now, the question is: why can this safepoint not start?
>>>>>>>>>>> What thread is blocking it? Or, in reverse, what thread is 
>>>>>>>>>>> this safepoint waiting for?
>>>>>>>>>>>
>>>>>>>>>>> I think this safepoint operation is waiting for all the 
>>>>>>>>>>> threads that are blocked on the JvmtiThreadState_lock.
>>>>>>>>>>>
>>>>>>>>>>> Conclusion:
>>>>>>>>>>>
>>>>>>>>>>> The deadlock is:
>>>>>>>>>>>
>>>>>>>>>>> Thread #152:
>>>>>>>>>>>    - grabbed the monitor JvmtiThreadState_lock
>>>>>>>>>>>    - blocked in the VM_GetCurrentLocation operation in the 
>>>>>>>>>>> function JvmtiEnvThreadState::reset_current_location()
>>>>>>>>>>>
>>>>>>>>>>> Many other threads:
>>>>>>>>>>>    - blocked on the monitor JvmtiThreadState_lock
>>>>>>>>>>>    - cannot reach the blocked-at-a-safepoint state (all 
>>>>>>>>>>> threads have to reach this state for the safepoint to happen)
>>>>>>>>>>>
>>>>>>>>>>> It seems to me, this is a bug which has to be filed.
>>>>>>>>>>>
>>>>>>>>>>> My guess is that this will stop reproducing if you turn off 
>>>>>>>>>>> the single stepping for thread #152.
>>>>>>>>>>> Please let me know about the results.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> Assuming that the crashes look like VM bugs, I think it 
>>>>>>>>>>>> makes sense to integrate the jvmti changes but *not* enable 
>>>>>>>>>>>> the jvmti module by default.
>>>>>>>>>>>
>>>>>>>>>>> This one is a deadlock.
>>>>>>>>>>> However, the root cause is a race condition that can 
>>>>>>>>>>> potentially result in both deadlocks and crashes.
>>>>>>>>>>> So, I'm curious if you observed crashes as well.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> And add two more tests with jvmti enabled, so anyone could 
>>>>>>>>>>>> easily run them to reproduce the crashes. These tests would 
>>>>>>>>>>>> be kept out of CI so as not to introduce any bugs. Does that 
>>>>>>>>>>>> make sense?
>>>>>>>>>>>>
>>>>>>>>>>>> Considering the hang - I think that it might be a product 
>>>>>>>>>>>> bug, since I don't see any locking on my monitors. But I am 
>>>>>>>>>>>> not sure. Is it possible that any code in my jvmti agent 
>>>>>>>>>>>> prevents the VM from getting into a safepoint?
>>>>>>>>>>>> Could we discuss it tomorrow or this week when you have time?
>>>>>>>>>>>
>>>>>>>>>>> Yes, of course.
>>>>>>>>>>> Let's find some time tomorrow.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> Any suggestion on how to diagnose the deadlock would be great.
>>>>>>>>>>>
>>>>>>>>>>> Analysis of stack traces is needed.
>>>>>>>>>>> It is non-trivial in this particular case, as there are so 
>>>>>>>>>>> many threads executing at the same time.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> Part of the stack trace, with my 2 threads only:
>>>>>>>>>>>>
>>>>>>>>>>>> Thread 136 (Thread 0x2ae494100700 (LWP 28023)):
>>>>>>>>>>>> #0  0x00002ae3927b5945 in pthread_cond_wait@@GLIBC_2.3.2 () 
>>>>>>>>>>>> from /lib64/libpthread.so.0
>>>>>>>>>>>> #1  0x00002ae393ba8d63 in os::PlatformEvent::park 
>>>>>>>>>>>> (this=this at entry=0x2ae454005800) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/posix/os_posix.cpp:1897 
>>>>>>>>>>>>
>>>>>>>>>>>> #2  0x00002ae393b50cf8 in ParkCommon (timo=0, 
>>>>>>>>>>>> ev=0x2ae454005800) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:399 
>>>>>>>>>>>>
>>>>>>>>>>>> #3  Monitor::IWait (this=this at entry=0x2ae398023c10, 
>>>>>>>>>>>> Self=Self at entry=0x2ae454004800, timo=timo at entry=0) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:768
>>>>>>>>>>>>
>>>>>>>>>>>> #4  0x00002ae393b51f2e in Monitor::wait 
>>>>>>>>>>>> (this=this at entry=0x2ae398023c10, 
>>>>>>>>>>>> no_safepoint_check=<optimized out>, timeout=timeout at entry=0, 
>>>>>>>>>>>> as_suspend_equivalent=as_suspend_equivalent at entry=false) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:1106
>>>>>>>>>>>>
>>>>>>>>>>>> #5  0x00002ae393de7867 in VMThread::execute 
>>>>>>>>>>>> (op=op at entry=0x2ae4940ffb10) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/vmThread.cpp:657 
>>>>>>>>>>>>
>>>>>>>>>>>> #6  0x00002ae393d6a3bd in JavaThread::java_suspend 
>>>>>>>>>>>> (this=this at entry=0x2ae3985f2000) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:2321 
>>>>>>>>>>>>
>>>>>>>>>>>> #7  0x00002ae3939ad7e1 in JvmtiSuspendControl::suspend 
>>>>>>>>>>>> (java_thread=java_thread at entry=0x2ae3985f2000) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiImpl.cpp:847
>>>>>>>>>>>> #8  0x00002ae3939887ae in JvmtiEnv::SuspendThread 
>>>>>>>>>>>> (this=this at entry=0x2ae39801b270, java_thread=0x2ae3985f2000) 
>>>>>>>>>>>> at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEnv.cpp:955
>>>>>>>>>>>> #9  0x00002ae39393a8c6 in jvmti_SuspendThread 
>>>>>>>>>>>> (env=0x2ae39801b270, thread=0x2ae49929fdf8) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/build/linux-x64/hotspot/variant-server/gensrc/jvmtifiles/jvmtiEnter.cpp:527
>>>>>>>>>>>> #10 0x00002ae394d973ee in agent_sampler 
>>>>>>>>>>>> (jvmti=0x2ae39801b270, env=<optimized out>, p=<optimized 
>>>>>>>>>>>> out>) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/closed/test/hotspot/jtreg/applications/kitchensink/process/stress/modules/libJvmtiStressModule.c:274
>>>>>>>>>>>> #11 0x00002ae3939ab24d in call_start_function 
>>>>>>>>>>>> (this=0x2ae454004800) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiImpl.cpp:85 
>>>>>>>>>>>>
>>>>>>>>>>>> #12 JvmtiAgentThread::start_function_wrapper 
>>>>>>>>>>>> (thread=0x2ae454004800, __the_thread__=<optimized out>) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiImpl.cpp:79 
>>>>>>>>>>>>
>>>>>>>>>>>> #13 0x00002ae393d7338a in JavaThread::thread_main_inner 
>>>>>>>>>>>> (this=this at entry=0x2ae454004800) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1795 
>>>>>>>>>>>>
>>>>>>>>>>>> #14 0x00002ae393d736c6 in JavaThread::run 
>>>>>>>>>>>> (this=0x2ae454004800) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1775 
>>>>>>>>>>>>
>>>>>>>>>>>> #15 0x00002ae393ba0070 in thread_native_entry 
>>>>>>>>>>>> (thread=0x2ae454004800) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/linux/os_linux.cpp:698 
>>>>>>>>>>>>
>>>>>>>>>>>> #16 0x00002ae3927b1e25 in start_thread () from 
>>>>>>>>>>>> /lib64/libpthread.so.0
>>>>>>>>>>>> #17 0x00002ae392cc234d in clone () from /lib64/libc.so.6
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thread 152 (Thread 0x2ae427060700 (LWP 27995)):
>>>>>>>>>>>> #0  0x00002ae3927b5945 in pthread_cond_wait@@GLIBC_2.3.2 () 
>>>>>>>>>>>> from /lib64/libpthread.so.0
>>>>>>>>>>>> #1  0x00002ae393ba8d63 in os::PlatformEvent::park 
>>>>>>>>>>>> (this=this at entry=0x2ae3985e7400) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/posix/os_posix.cpp:1897 
>>>>>>>>>>>>
>>>>>>>>>>>> #2  0x00002ae393b50cf8 in ParkCommon (timo=0, 
>>>>>>>>>>>> ev=0x2ae3985e7400) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:399 
>>>>>>>>>>>>
>>>>>>>>>>>> #3  Monitor::IWait (this=this at entry=0x2ae398023c10, 
>>>>>>>>>>>> Self=Self at entry=0x2ae3985e6000, timo=timo at entry=0) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:768
>>>>>>>>>>>>
>>>>>>>>>>>> #4  0x00002ae393b51f2e in Monitor::wait 
>>>>>>>>>>>> (this=this at entry=0x2ae398023c10, 
>>>>>>>>>>>> no_safepoint_check=<optimized out>, timeout=timeout at entry=0, 
>>>>>>>>>>>> as_suspend_equivalent=as_suspend_equivalent at entry=false) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:1106
>>>>>>>>>>>>
>>>>>>>>>>>> #5  0x00002ae393de7867 in VMThread::execute 
>>>>>>>>>>>> (op=0x2ae42705f500) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/vmThread.cpp:657 
>>>>>>>>>>>>
>>>>>>>>>>>> #6  0x00002ae3939965f3 in 
>>>>>>>>>>>> JvmtiEnvThreadState::reset_current_location 
>>>>>>>>>>>> (this=this at entry=0x2ae6bc000d80, 
>>>>>>>>>>>> event_type=event_type at entry=JVMTI_EVENT_SINGLE_STEP, 
>>>>>>>>>>>> enabled=enabled at entry=true) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEnvThreadState.cpp:312
>>>>>>>>>>>>
>>>>>>>>>>>> #7  0x00002ae393997acf in recompute_env_thread_enabled 
>>>>>>>>>>>> (state=0x2ae6bc000cd0, ets=0x2ae6bc000d80) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEventController.cpp:490
>>>>>>>>>>>> #8  JvmtiEventControllerPrivate::recompute_thread_enabled 
>>>>>>>>>>>> (state=state at entry=0x2ae6bc000cd0) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEventController.cpp:523
>>>>>>>>>>>> #9  0x00002ae393998168 in 
>>>>>>>>>>>> JvmtiEventControllerPrivate::recompute_enabled () at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEventController.cpp:598 
>>>>>>>>>>>>
>>>>>>>>>>>> #10 0x00002ae39399a244 in set_user_enabled (enabled=true, 
>>>>>>>>>>>> event_type=JVMTI_EVENT_SINGLE_STEP, thread=0x0, 
>>>>>>>>>>>> env=0x2ae39801b270) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEventController.cpp:818
>>>>>>>>>>>> #11 JvmtiEventController::set_user_enabled 
>>>>>>>>>>>> (env=0x2ae39801b270, thread=0x0, 
>>>>>>>>>>>> event_type=JVMTI_EVENT_SINGLE_STEP, enabled=<optimized out>) 
>>>>>>>>>>>> at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEventController.cpp:963
>>>>>>>>>>>> #12 0x00002ae393987d2d in JvmtiEnv::SetEventNotificationMode 
>>>>>>>>>>>> (this=this at entry=0x2ae39801b270, 
>>>>>>>>>>>> mode=mode at entry=JVMTI_ENABLE, 
>>>>>>>>>>>> event_type=event_type at entry=JVMTI_EVENT_SINGLE_STEP, 
>>>>>>>>>>>> event_thread at entry=0x0) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEnv.cpp:543
>>>>>>>>>>>>
>>>>>>>>>>>> #13 0x00002ae3939414eb in jvmti_SetEventNotificationMode 
>>>>>>>>>>>> (env=0x2ae39801b270, mode=mode at entry=JVMTI_ENABLE, 
>>>>>>>>>>>> event_type=event_type at entry=JVMTI_EVENT_SINGLE_STEP, 
>>>>>>>>>>>> event_thread=event_thread at entry=0x0) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/build/linux-x64/hotspot/variant-server/gensrc/jvmtifiles/jvmtiEnter.cpp:5389
>>>>>>>>>>>>
>>>>>>>>>>>> #14 0x00002ae394d97989 in enable_events () at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/closed/test/hotspot/jtreg/applications/kitchensink/process/stress/modules/libJvmtiStressModule.c:519 
>>>>>>>>>>>>
>>>>>>>>>>>> #15 0x00002ae394d98070 in 
>>>>>>>>>>>> Java_applications_kitchensink_process_stress_modules_JvmtiStressModule_startIteration 
>>>>>>>>>>>> (env=<optimized out>, this=<optimized out>) at 
>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/closed/test/hotspot/jtreg/applications/kitchensink/process/stress/modules/libJvmtiStressModule.c:697
>>>>>>>>>>>>
>>>>>>>>>>>> #16 0x00002ae3a43ef257 in ?? ()
>>>>>>>>>>>> #17 0x00002ae3a43eede1 in ?? ()
>>>>>>>>>>>> #18 0x00002ae42705f878 in ?? ()
>>>>>>>>>>>> #19 0x00002ae40ad334e0 in ?? ()
>>>>>>>>>>>> #20 0x00002ae42705f8e0 in ?? ()
>>>>>>>>>>>> #21 0x00002ae40ad33c68 in ?? ()
>>>>>>>>>>>> #22 0x0000000000000000 in ?? ()
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Serguei
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Leonid
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> On Oct 9, 2018, at 4:52 PM, serguei.spitsyn at oracle.com wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Leonid,
>>>>>>>>>>>>>
>>>>>>>>>>>>> There is an existing bug:
>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8043571
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Serguei
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 10/9/18 16:11, Leonid Mesnik wrote:
>>>>>>>>>>>>>> Hi
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> While fixing kitchensink I get:
>>>>>>>>>>>>>> assert(_cur_stack_depth == count_frames()) failed: 
>>>>>>>>>>>>>> cur_stack_depth out of sync
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Do you know if it might be a bug in my jvmti agent?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Leonid
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> #
>>>>>>>>>>>>>> # A fatal error has been detected by the Java Runtime 
>>>>>>>>>>>>>> Environment:
>>>>>>>>>>>>>> #
>>>>>>>>>>>>>> #  Internal Error 
>>>>>>>>>>>>>> (/scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiThreadState.cpp:277), 
>>>>>>>>>>>>>> pid=13926, tid=13962
>>>>>>>>>>>>>> #  assert(_cur_stack_depth == count_frames()) failed: 
>>>>>>>>>>>>>> cur_stack_depth out of sync
>>>>>>>>>>>>>> #
>>>>>>>>>>>>>> # JRE version: Java(TM) SE Runtime Environment (12.0) 
>>>>>>>>>>>>>> (fastdebug build 
>>>>>>>>>>>>>> 12-internal+0-2018-10-08-2342517.lmesnik.hs-bigapps)
>>>>>>>>>>>>>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (fastdebug 
>>>>>>>>>>>>>> 12-internal+0-2018-10-08-2342517.lmesnik.hs-bigapps, mixed 
>>>>>>>>>>>>>> mode, tiered, compressed oops, g1 gc, linux-amd64)
>>>>>>>>>>>>>> # Core dump will be written. Default location: Core dumps 
>>>>>>>>>>>>>> may be processed with "/usr/libexec/abrt-hook-ccpp %s %c 
>>>>>>>>>>>>>> %p %u %g %t e %P %I %h" (or dumping to 
>>>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/build/linux-x64/test-support/jtreg_closed_test_hotspot_jtreg_applications_kitchensink_KitchensinkSanity_java/scratch/0/core.13926) 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> #
>>>>>>>>>>>>>> # If you would like to submit a bug report, please visit:
>>>>>>>>>>>>>> # http://bugreport.java.com/bugreport/crash.jsp
>>>>>>>>>>>>>> #
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ---------------  S U M M A R Y ------------
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Command Line: -XX:MaxRAMPercentage=2 
>>>>>>>>>>>>>> -XX:MaxRAMPercentage=50 -XX:+CrashOnOutOfMemoryError 
>>>>>>>>>>>>>> -Djava.net.preferIPv6Addresses=false -XX:-PrintVMOptions 
>>>>>>>>>>>>>> -XX:+DisplayVMOutputToStderr -XX:+UsePerfData 
>>>>>>>>>>>>>> -Xlog:gc*,gc+heap=debug:gc.log:uptime,timemillis,level,tags -XX:+DisableExplicitGC 
>>>>>>>>>>>>>> -XX:+PrintFlagsFinal -XX:+StartAttachListener 
>>>>>>>>>>>>>> -XX:NativeMemoryTracking=detail -XX:+FlightRecorder 
>>>>>>>>>>>>>> --add-exports=java.base/java.lang=ALL-UNNAMED 
>>>>>>>>>>>>>> --add-opens=java.base/java.lang=ALL-UNNAMED 
>>>>>>>>>>>>>> --add-exports=java.xml/com.sun.org.apache.xerces.internal.parsers=ALL-UNNAMED 
>>>>>>>>>>>>>> --add-exports=java.xml/com.sun.org.apache.xerces.internal.util=ALL-UNNAMED 
>>>>>>>>>>>>>> -Djava.io.tmpdir=/scratch/lmesnik/ws/hs-bigapps/build/linux-x64/test-support/jtreg_closed_test_hotspot_jtreg_applications_kitchensink_KitchensinkSanity_java/scratch/0/java.io.tmpdir 
>>>>>>>>>>>>>> -Duser.home=/scratch/lmesnik/ws/hs-bigapps/build/linux-x64/test-support/jtreg_closed_test_hotspot_jtreg_applications_kitchensink_KitchensinkSanity_java/scratch/0/user.home 
>>>>>>>>>>>>>> -agentpath:/scratch/lmesnik/ws/hs-bigapps/build/linux-x64/images/test/hotspot/jtreg/native/libJvmtiStressModule.so 
>>>>>>>>>>>>>> applications.kitchensink.process.stress.Main 
>>>>>>>>>>>>>> /scratch/lmesnik/ws/hs-bigapps/build/linux-x64/test-support/jtreg_closed_test_hotspot_jtreg_applications_kitchensink_KitchensinkSanity_java/scratch/0/kitchensink.final.properties 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Host: scaaa118.us.oracle.com, Intel(R) Xeon(R) CPU 
>>>>>>>>>>>>>> E5-2690 0 @ 2.90GHz, 32 cores, 235G, Oracle Linux Server 
>>>>>>>>>>>>>> release 7.3
>>>>>>>>>>>>>> Time: Tue Oct  9 16:06:07 2018 PDT elapsed time: 31 
>>>>>>>>>>>>>> seconds (0d 0h 0m 31s)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ---------------  T H R E A D  ---------------
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Current thread (0x00002af3dc6ac800):  VMThread "VM Thread" 
>>>>>>>>>>>>>> [stack: 0x00002af44f10a000,0x00002af44f20a000] [id=13962] 
>>>>>>>>>>>>>> _threads_hazard_ptr=0x00002af4ac090eb0, 
>>>>>>>>>>>>>> _nested_threads_hazard_ptr_cnt=0
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Stack: [0x00002af44f10a000,0x00002af44f20a000], 
>>>>>>>>>>>>>>  sp=0x00002af44f208720,  free space=1017k
>>>>>>>>>>>>>> Native frames: (J=compiled Java code, A=aot compiled Java 
>>>>>>>>>>>>>> code, j=interpreted, Vv=VM code, C=native code)
>>>>>>>>>>>>>> V  [libjvm.so+0x18c4923]  VMError::report_and_die(int, 
>>>>>>>>>>>>>> char const*, char const*, __va_list_tag*, Thread*, 
>>>>>>>>>>>>>> unsigned char*, void*, void*, char const*, int, unsigned 
>>>>>>>>>>>>>> long)+0x2c3
>>>>>>>>>>>>>> V  [libjvm.so+0x18c56ef]  VMError::report_and_die(Thread*, 
>>>>>>>>>>>>>> void*, char const*, int, char const*, char const*, 
>>>>>>>>>>>>>> __va_list_tag*)+0x2f
>>>>>>>>>>>>>> V  [libjvm.so+0xb55aa0]  report_vm_error(char const*, int, 
>>>>>>>>>>>>>> char const*, char const*, ...)+0x100
>>>>>>>>>>>>>> V  [libjvm.so+0x11f2cfe] 
>>>>>>>>>>>>>>  JvmtiThreadState::cur_stack_depth()+0x14e
>>>>>>>>>>>>>> V  [libjvm.so+0x11f3257] 
>>>>>>>>>>>>>>  JvmtiThreadState::update_for_pop_top_frame()+0x27
>>>>>>>>>>>>>> V  [libjvm.so+0x119af99] 
>>>>>>>>>>>>>>  VM_UpdateForPopTopFrame::doit()+0xb9
>>>>>>>>>>>>>> V  [libjvm.so+0x1908982]  VM_Operation::evaluate()+0x132
>>>>>>>>>>>>>> V  [libjvm.so+0x19040be] 
>>>>>>>>>>>>>>  VMThread::evaluate_operation(VM_Operation*) [clone 
>>>>>>>>>>>>>> .constprop.51]+0x18e
>>>>>>>>>>>>>> V  [libjvm.so+0x1904960]  VMThread::loop()+0x4c0
>>>>>>>>>>>>>> V  [libjvm.so+0x1904f53]  VMThread::run()+0xd3
>>>>>>>>>>>>>> V  [libjvm.so+0x14e8300]  thread_native_entry(Thread*)+0x100
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> VM_Operation (0x00002af4d8502910): UpdateForPopTopFrame, 
>>>>>>>>>>>>>> mode: safepoint, requested by thread 0x00002af4dc008800
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>
>>>>>>
>>>>>
>>

