RFR: 8304824: NMT should not use ThreadCritical

Thomas Stuefe stuefe at openjdk.org
Wed Sep 11 06:13:06 UTC 2024


On Sat, 7 Sep 2024 17:29:41 GMT, Robert Toyonaga <duke at openjdk.org> wrote:

>>> Thank you Robert.
>>> 
>>> I think that the `MemoryFileTracker`'s locker should probably also be replaced with this semaphore, as in the future we have plans to have a globally shared `NativeCallStackStorage`.
>> 
>> +1
>
>> Thank you Robert.
>> 
>> I think that the `MemoryFileTracker`'s locker should probably also be replaced with this semaphore, as in the future we have plans to have a globally shared `NativeCallStackStorage`.
> 
> Hi @jdksjolen. No problem! Ok I've replaced it.

> > @roberttoyonaga Why don't we use a normal hotspot mutex here?
> 
> Hi @tstuefe. I tried using a regular mutex along with `MutexLocker`, but it seems like the mutex isn't initialized early enough to be used during VM init. After building I get:
> 
> `assert(NMT_lock != nullptr) failed: not initialized!`
> 
> ```
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
> V  [libjvm.so+0x178448c]  VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x948  (nmtCommon.hpp:137)
> V  [libjvm.so+0x1783aed]  (vmError.cpp:1611)
> V  [libjvm.so+0xa8e2d2]  report_vm_status_error(char const*, int, char const*, int, char const*)+0x0  (debug.cpp:193)
> V  [libjvm.so+0x8cb317]  NMTUtil::nmt_lock()+0x69  (nmtCommon.hpp:137)
> V  [libjvm.so+0x13a0459]  MemTracker::record_virtual_memory_reserve(void*, unsigned long, NativeCallStack const&, MEMFLAGS)+0x36  (memTracker.hpp:128)
> V  [libjvm.so+0x139d292]  os::reserve_memory(unsigned long, bool, MEMFLAGS)+0x80  (os.cpp:1877)
> V  [libjvm.so+0xa8fc31]  initialize_assert_poison()+0x1f  (debug.cpp:712)
> V  [libjvm.so+0x16e5720]  Threads::create_vm(JavaVMInitArgs*, bool*)+0x1be  (threads.cpp:489)
> V  [libjvm.so+0xf62555]  JNI_CreateJavaVM_inner(JavaVM_**, void**, void*)+0xbd  (jni.cpp:3582)
> V  [libjvm.so+0xf629e3]  JNI_CreateJavaVM+0x32  (jni.cpp:3673)
> C  [libjli.so+0x8c04]  InitializeJVM+0x14d  (java.c:1592)
> C  [libjli.so+0x5337]  JavaMain+0xe8  (java.c:505)
> C  [libjli.so+0xc61e]  ThreadJavaMain+0x27  (java_md.c:653)
> C  [libc.so.6+0x97507]  start_thread+0x377
> ```

Hi @roberttoyonaga,

sorry for the sluggish responses, I am swamped atm. Thanks a lot for your work here.

I dislike the self-written lock because it duplicates code and because Mutex gives us infrastructure benefits we don't have here, e.g. lock monitoring, deadlock detection, and helpers like `assert_lock_strong`.
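
To illustrate: with a real Mutex we get checks like this essentially for free (untested sketch, the function name is just a placeholder):

```
// Untested sketch, names are placeholders. With a real Mutex, the locking
// contract of an NMT-internal function can be documented and verified:
#include "runtime/mutex.hpp"
#include "runtime/mutexLocker.hpp"

static Mutex* NMT_lock = nullptr;   // would be created in mutex_init()

static void update_nmt_bookkeeping_locked() {
  assert_lock_strong(NMT_lock);     // asserts that the calling thread holds the lock
  // ... bookkeeping that must only run while holding the lock ...
}
```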

You run into this problem during the initialization phase. I expected that; this is where things get interesting - during initialization and in deadlock detection.

I see two better solutions:

1) During initialization, we don't need a lock. We are effectively single-threaded (it may be different threads at different times, since the thread calling CreateJavaVM and the thread of the initial invocation may differ, but only one of them runs at any given time). In any case, before the mutex system is initialized and the Mutexes are created, we don't need to lock.

So you could use Mutex and MutexLocker, but short-circuit the locking while the mutex is still nullptr. This should work and stays lock/unlock balanced. I believe MutexLocker already handles this case under the hood: you can pass it a nullptr mutex and it simply won't lock. So you don't even have to do anything special - just drop the assert (see the sketch below, after point 2).

2) The place where we mmap here is a rare case - probably the only one - where we mmap before mutex initialization. It exists only to poison the assertion page. That call could be moved down a bit, since assert poisoning is not essential for assertions to work; it is just a nice-to-have that adds error info, and that info matters mostly for assertions firing in compiled code.
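
For (1), roughly what I have in mind (untested sketch; `NMT_lock` is the name from your stack trace above, the rest is placeholder):

```
// Untested sketch for (1). Before mutex_init() has run, NMT_lock is still
// nullptr; MutexLocker accepts a nullptr mutex and simply skips locking,
// which is fine because we are effectively single-threaded at that point.
#include "runtime/mutex.hpp"
#include "runtime/mutexLocker.hpp"

static Mutex* NMT_lock = nullptr;   // created later, in mutex_init()

static void record_virtual_memory_op() {
  // Assumes NMT_lock is created as a safepoint-check-never lock.
  MutexLocker ml(NMT_lock, Mutex::_no_safepoint_check_flag);
  // ... NMT bookkeeping; locked once the mutex exists ...
}
```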


I prefer (1), since then locking would work as it does everywhere else in hotspot.

-------------

PR Comment: https://git.openjdk.org/jdk/pull/20852#issuecomment-2342703755

