build openjdk with fsanitizer=thread
Jie He
Jie.He at arm.com
Tue Feb 25 06:19:10 UTC 2020
Hi Dmitry
Yes, so I don't think the first 2 classes of warnings are real data races; they are out of the scope of tsan.
Currently, I'm not sure whether the second thread is a JIT thread.
But it seems the second thread knows that a tsan_read8 happened, at least at the IR level,
as in the following tsan report (I have increased history_size to 4):
WARNING: ThreadSanitizer: data race (pid=9726)
Write of size 8 at 0x7b1800003ab0 by thread T1:
#0 ChunkPool::free(Chunk*) /home/wave/workspace/jdk_master/src/hotspot/share/memory/arena.cpp:93:16 (libjvm.so+0x7e6fe0)
#1 Chunk::operator delete(void*) /home/wave/workspace/jdk_master/src/hotspot/share/memory/arena.cpp:207:54 (libjvm.so+0x7e47f4)
#2 Chunk::chop() /home/wave/workspace/jdk_master/src/hotspot/share/memory/arena.cpp:225:5 (libjvm.so+0x7e4994)
#3 Arena::destruct_contents() /home/wave/workspace/jdk_master/src/hotspot/share/memory/arena.cpp:319:11 (libjvm.so+0x7e5274)
#4 Arena::~Arena() /home/wave/workspace/jdk_master/src/hotspot/share/memory/arena.cpp:283:3 (libjvm.so+0x7e52ea)
#5 ResourceArea::~ResourceArea() /home/wave/workspace/jdk_master/src/hotspot/share/memory/resourceArea.hpp:44:7 (libjvm.so+0xae9d18)
#6 Thread::~Thread() /home/wave/workspace/jdk_master/src/hotspot/share/runtime/thread.cpp:449:3 (libjvm.so+0x1e2214a)
#7 JavaThread::~JavaThread() /home/wave/workspace/jdk_master/src/hotspot/share/runtime/thread.cpp:1903:1 (libjvm.so+0x1e27af4)
#8 JavaThread::~JavaThread() /home/wave/workspace/jdk_master/src/hotspot/share/runtime/thread.cpp:1856:27 (libjvm.so+0x1e27b3c)
#9 ThreadsSMRSupport::smr_delete(JavaThread*) /home/wave/workspace/jdk_master/src/hotspot/share/runtime/threadSMR.cpp:1027:3 (libjvm.so+0x1e47408)
#10 JavaThread::smr_delete() /home/wave/workspace/jdk_master/src/hotspot/share/runtime/thread.cpp:208:5 (libjvm.so+0x1e20e73)
#11 jni_DetachCurrentThread /home/wave/workspace/jdk_master/src/hotspot/share/prims/jni.cpp:4173:11 (libjvm.so+0x137ae7a)
#12 JavaMain /home/wave/workspace/jdk_master/src/java.base/share/native/libjli/java.c:560:5 (libjli.so+0x67e9)
Previous read of size 8 at 0x7b1800003ab0 by thread T14:
[failed to restore the stack]
Location is heap block of size 88 at 0x7b1800003a80 allocated by thread T1:
#0 malloc <null> (java+0x421ee7)
#1 os::malloc(unsigned long, MemoryType, NativeCallStack const&) /home/wave/workspace/jdk_master/src/hotspot/share/runtime/os.cpp:714:18 (libjvm.so+0x19d7e62)
#2 AllocateHeap(unsigned long, MemoryType, NativeCallStack const&, AllocFailStrategy::AllocFailEnum) /home/wave/workspace/jdk_master/src/hotspot/share/memory/allocation.cpp:42:21 (libjvm.so+0x7bfc2d)
#3 AllocateHeap(unsigned long, MemoryType, AllocFailStrategy::AllocFailEnum) /home/wave/workspace/jdk_master/src/hotspot/share/memory/allocation.cpp:52:10 (libjvm.so+0x7bfd34)
#4 CHeapObj<(MemoryType)8>::operator new(unsigned long) /home/wave/workspace/jdk_master/src/hotspot/share/memory/allocation.hpp:193:19 (libjvm.so+0x7e6799)
#5 ChunkPool::initialize() /home/wave/workspace/jdk_master/src/hotspot/share/memory/arena.cpp:135 (libjvm.so+0x7e6799)
#6 chunkpool_init() /home/wave/workspace/jdk_master/src/hotspot/share/memory/arena.cpp:154:3 (libjvm.so+0x7e4441)
#7 vm_init_globals() /home/wave/workspace/jdk_master/src/hotspot/share/runtime/init.cpp:102:3 (libjvm.so+0x11c9b6a)
#8 Threads::create_vm(JavaVMInitArgs*, bool*) /home/wave/workspace/jdk_master/src/hotspot/share/runtime/thread.cpp:3846:3 (libjvm.so+0x1e3108c)
#9 JNI_CreateJavaVM_inner(JavaVM_**, void**, void*) /home/wave/workspace/jdk_master/src/hotspot/share/prims/jni.cpp:3852:12 (libjvm.so+0x1379e74)
#10 JNI_CreateJavaVM /home/wave/workspace/jdk_master/src/hotspot/share/prims/jni.cpp:3935:14 (libjvm.so+0x1379d0f)
#11 InitializeJVM /home/wave/workspace/jdk_master/src/java.base/share/native/libjli/java.c:1538:9 (libjli.so+0x6974)
Thread T1 (tid=9728, running) created by main thread at:
#0 pthread_create <null> (java+0x4233d5)
#1 CallJavaMainInNewThread /home/wave/workspace/jdk_master/src/java.base/unix/native/libjli/java_md_solinux.c:754:9 (libjli.so+0xb53f)
Thread T14 (tid=9742, running) created by thread T1 at:
#0 pthread_create <null> (java+0x4233d5)
#1 os::create_thread(Thread*, os::ThreadType, unsigned long) /home/wave/workspace/jdk_master/src/hotspot/os/linux/os_linux.cpp:926:15 (libjvm.so+0x19e4413)
#2 WatcherThread::WatcherThread() /home/wave/workspace/jdk_master/src/hotspot/share/runtime/thread.cpp:1375:7 (libjvm.so+0x1e25399)
#3 WatcherThread::start() /home/wave/workspace/jdk_master/src/hotspot/share/runtime/thread.cpp:1514:9 (libjvm.so+0x1e2598f)
#4 Threads::create_vm(JavaVMInitArgs*, bool*) /home/wave/workspace/jdk_master/src/hotspot/share/runtime/thread.cpp:4105:7 (libjvm.so+0x1e31bb1)
#5 JNI_CreateJavaVM_inner(JavaVM_**, void**, void*) /home/wave/workspace/jdk_master/src/hotspot/share/prims/jni.cpp:3852:12 (libjvm.so+0x1379e74)
#6 JNI_CreateJavaVM /home/wave/workspace/jdk_master/src/hotspot/share/prims/jni.cpp:3935:14 (libjvm.so+0x1379d0f)
#7 InitializeJVM /home/wave/workspace/jdk_master/src/java.base/share/native/libjli/java.c:1538:9 (libjli.so+0x6974)
SUMMARY: ThreadSanitizer: data race /home/wave/workspace/jdk_master/src/hotspot/share/memory/arena.cpp:93:16 in ChunkPool::free(Chunk*)
And the OpenJDK code is below; you can see there is a ThreadCritical, which is built on pthread_mutex and does the lock/unlock in its ctor/dtor:
// Return a chunk to the pool
void free(Chunk* chunk) {
  assert(chunk->length() + Chunk::aligned_overhead_size() == _size, "bad size");
  ThreadCritical tc;
  _num_used--;
  // Add chunk to list
  chunk->set_next(_first);
92:  _first = chunk;
93:  _num_chunks++;
}
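Just to make the pattern concrete, here is a minimal sketch of what I mean by lock/unlock in the ctor/dtor (my own illustration, not the actual HotSpot ThreadCritical source):

// Sketch only: a guard that locks a process-wide pthread mutex in its
// constructor and unlocks it in its destructor, the way ThreadCritical behaves.
#include <pthread.h>

static pthread_mutex_t tc_mutex = PTHREAD_MUTEX_INITIALIZER;

class ThreadCriticalSketch {
 public:
  ThreadCriticalSketch()  { pthread_mutex_lock(&tc_mutex); }
  ~ThreadCriticalSketch() { pthread_mutex_unlock(&tc_mutex); }
};

// Usage mirrors ChunkPool::free above: the guard's scope is the critical section.
void touch_shared_state() {
  ThreadCriticalSketch tc;
  // ... update shared data such as _first / _num_chunks here ...
}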
-----Original Message-----
From: Dmitry Vyukov <dvyukov at google.com>
Sent: Tuesday, February 25, 2020 1:30 PM
To: Jie He <Jie.He at arm.com>
Cc: tsan-dev at openjdk.java.net; nd <nd at arm.com>; thread-sanitizer <thread-sanitizer at googlegroups.com>
Subject: Re: build openjdk with fsanitizer=thread
On Tue, Feb 25, 2020 at 6:20 AM Jie He <Jie.He at arm.com> wrote:
>
> Hi
>
> I built openjdk with fsanitizer=thread enabled recently, and got a lot of warnings from tsan even for a helloworld case.
>
> Then I investigated more than 15 of the warnings and found they could be divided into 3 classes:
>
>
> 1. Benign races; commonly, there is a comment indicating why the access is safe on MP (multiprocessor) systems.
+thread-sanitizer mailing list
Hi Jie,
The C++ standard still calls this a data race and makes the behavior of the program undefined. Comments don't fix bugs ;)
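To illustrate (a made-up minimal example, not openjdk code), even the classic "benign" flag pattern is a data race under the C++ memory model:

// One thread sets a plain bool, another polls it with a plain load.
// This is a data race and therefore undefined behavior, even if it
// happens to "work" on x86.
#include <thread>

bool done = false;               // no synchronization

void worker() { done = true; }   // unsynchronized write

int main() {
  std::thread t(worker);
  while (!done) { /* spin */ }   // unsynchronized read races with the write
  t.join();
  return 0;
}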
> 2. Runtime atomic implementation; on x86, the atomic load and store are translated to platformload/store.
I assume here that platformload/store are implemented as plain loads and stores. These may need to be changed at least in the tsan build (but maybe in all builds; see the previous point). I am not aware of the openjdk portability requirements, but today the __atomic_load/store_n intrinsics may be a good choice.
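For illustration only (the names are made up; this is not a patch against the actual platformload/store code), a load/store pair written with those builtins would look roughly like:

#include <cstdint>

// Sketch: the memory order would have to match whatever semantics the
// existing platform load/store are expected to provide at each call site.
inline int64_t platform_load_sketch(const int64_t* addr) {
  return __atomic_load_n(addr, __ATOMIC_RELAXED);
}

inline void platform_store_sketch(int64_t* addr, int64_t value) {
  __atomic_store_n(addr, value, __ATOMIC_RELAXED);
}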
> 3. Runtime functions protected by MutexLocker/ThreadCritical, which are ultimately implemented with pthread_mutex.
>
> For 3, I couldn't understand why tsan doesn't recognize that the access is safe and protected by a lock. The TSan documentation says pthread functions are supported.
> So I tried to add annotations (ANNOTATE_RWLOCK_ACQUIRED/RELEASED) to mark the lock, and then got a double-lock warning, so it seems tsan already knows MutexLocker is a lock.
>
> Or is it because one of the conflicting threads lost its stack? In this kind of warning, one of the two threads fails to restore its stack.
> That may mean tsan only knows the thread performed the read/write, but doesn't know the memory operation is protected by a lock.
> Are the threads that couldn't restore their stacks JIT threads/Java threads? Do I need to fix the tsan symbolizer first for this situation?
Yes, tsan understands pthread mutexes natively; no annotations are required.
You may try increasing the history_size flag to get the second stack:
https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
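For example, the flag can be passed through the TSAN_OPTIONS environment variable when launching the JVM (the launcher path here is just a placeholder):

TSAN_OPTIONS="history_size=7" ./bin/java HelloWorld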
"failed to restore stack trace" should not lead to false positives either.
Please post several full tsan reports and links to the corresponding source code.