Parallel GC Thread crash

Stefan Karlsson stefan.karlsson at oracle.com
Tue Feb 4 10:47:32 UTC 2020


Hi Sundar,

The GC crashes when it encounters something bad on the stack:
 > V  [libjvm.so+0xc6bf0b]  OopMapSet::oops_do(frame const*, RegisterMap
 > const*, OopClosure*)+0x2eb
 > V  [libjvm.so+0x765489]  frame::oops_do_internal(OopClosure*,

This is probably not a GC bug. It's more likely caused by the JIT 
compiler. I see in your hotspot-runtime-dev thread that you also get 
crashes in other compiler-related areas.

If you want to rule out the GC, you can run with -XX:+VerifyBeforeGC and 
-XX:+VerifyAfterGC, and see whether the heap verification already asserts 
before the GC has started running.

StefanK

On 2020-02-04 04:38, Sundara Mohan M wrote:
> Hi,
>     I am seeing the following crashes frequently on our servers:
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x00007fca3281d311, pid=103575, tid=108299
> #
> # JRE version: OpenJDK Runtime Environment (13.0.1+9) (build 13.0.1+9)
> # Java VM: OpenJDK 64-Bit Server VM (13.0.1+9, mixed mode, tiered, parallel
> gc, linux-amd64)
> # Problematic frame:
> # V  [libjvm.so+0xcd3311]  PCMarkAndPushClosure::do_oop(oopDesc**)+0x51
> #
> # No core dump will be written. Core dumps have been disabled. To enable
> core dumping, try "ulimit -c unlimited" before starting Java again
> #
> # If you would like to submit a bug report, please visit:
> #   https://github.com/AdoptOpenJDK/openjdk-build/issues
> #
> 
> 
> ---------------  T H R E A D  ---------------
> 
> Current thread (0x00007fca2c051000):  GCTaskThread "ParGC Thread#8" [stack:
> 0x00007fca30277000,0x00007fca30377000] [id=108299]
> 
> Stack: [0x00007fca30277000,0x00007fca30377000],  sp=0x00007fca30374890,
>   free space=1014k
> Native frames: (J=compiled Java code, A=aot compiled Java code,
> j=interpreted, Vv=VM code, C=native code)
> V  [libjvm.so+0xcd3311]  PCMarkAndPushClosure::do_oop(oopDesc**)+0x51
> V  [libjvm.so+0xc6bf0b]  OopMapSet::oops_do(frame const*, RegisterMap
> const*, OopClosure*)+0x2eb
> V  [libjvm.so+0x765489]  frame::oops_do_internal(OopClosure*,
> CodeBlobClosure*, RegisterMap*, bool)+0x99
> V  [libjvm.so+0xf68b17]  JavaThread::oops_do(OopClosure*,
> CodeBlobClosure*)+0x187
> V  [libjvm.so+0xcce2f0]  ThreadRootsMarkingTask::do_it(GCTaskManager*,
> unsigned int)+0xb0
> V  [libjvm.so+0x7f422b]  GCTaskThread::run()+0x1eb
> V  [libjvm.so+0xf707fd]  Thread::call_run()+0x10d
> V  [libjvm.so+0xc875b7]  thread_native_entry(Thread*)+0xe7
> 
> JavaThread 0x00007fb85c004800 (nid = 111387) was being processed
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> v  ~RuntimeStub::_new_array_Java
> J 225122 c2
> ch.qos.logback.classic.spi.ThrowableProxy.<init>(Ljava/lang/Throwable;)V
> (207 bytes) @ 0x00007fca21f1a5d8 [0x00007fca21f17f20+0x00000000000026b8]
> J 62342 c2 webservice.exception.ExceptionLoggingWrapper.execute()V (1004
> bytes) @ 0x00007fca20f0aec8 [0x00007fca20f07f40+0x0000000000002f88]
> J 225129 c2
> webservice.exception.mapper.AbstractExceptionMapper.toResponse(Lbeans/exceptions/mapper/V3ErrorCode;Ljava/lang/Exception;)Ljavax/ws/rs/core/Response;
> (105 bytes) @ 0x00007fca1da512ac [0x00007fca1da51100+0x00000000000001ac]
> J 131643 c2
> webservice.exception.mapper.RequestBlockedExceptionMapper.toResponse(Ljava/lang/Exception;)Ljavax/ws/rs/core/Response;
> (9 bytes) @ 0x00007fca20ce6190 [0x00007fca20ce60c0+0x00000000000000d0]
> J 55114 c2
> webservice.filters.ResponseSerializationWorker.processException()Ljava/io/InputStream;
> (332 bytes) @ 0x00007fca2051fe64 [0x00007fca2051f820+0x0000000000000644]
> J 57859 c2 webservice.filters.ResponseSerializationWorker.execute()Z (272
> bytes) @ 0x00007fca1ef2ed18 [0x00007fca1ef2e140+0x0000000000000bd8]
> J 16114% c2
> com.lafaspot.common.concurrent.internal.WorkerManagerOneThread.call()Lcom/lafaspot/common/concurrent/internal/WorkerManagerState;
> (486 bytes) @ 0x00007fca1ced465c [0x00007fca1ced4200+0x000000000000045c]
> j
>   com.lafaspot.common.concurrent.internal.WorkerManagerOneThread.call()Ljava/lang/Object;+1
> J 11639 c2 java.util.concurrent.FutureTask.run()V java.base at 13.0.1 (123
> bytes) @ 0x00007fca1cd00858 [0x00007fca1cd007c0+0x0000000000000098]
> J 7560 c1
> java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V
> java.base at 13.0.1 (187 bytes) @ 0x00007fca15b23f54
> [0x00007fca15b23160+0x0000000000000df4]
> J 5143 c1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V
> java.base at 13.0.1 (9 bytes) @ 0x00007fca15b39abc
> [0x00007fca15b39a40+0x000000000000007c]
> J 4488 c1 java.lang.Thread.run()V java.base at 13.0.1 (17 bytes) @
> 0x00007fca159fc174 [0x00007fca159fc040+0x0000000000000134]
> v  ~StubRoutines::call_stub
> 
> siginfo: si_signo: 11 (SIGSEGV), si_code: 128 (SI_KERNEL), si_addr:
> 0x0000000000000000
> 
> Register to memory mapping:
> ...
> 
> Can someone shed more light on when this can happen? I am seeing this on
> multiple RHEL6 servers running Java 13.0.1+9.
> 
> There was another thread on hotspot-runtime-dev where David Holmes commented
> on this:
>> siginfo: si_signo: 11 (SIGSEGV), si_code: 128 (SI_KERNEL), si_addr:
> 0x0000000000000000
> 
>> This seems like it may be related to:
>> https://bugs.openjdk.java.net/browse/JDK-8004124
> 
> Just wondering if this is the same issue or something GC-specific.
> 
> 
> 
> TIA
> Sundar
> 


