RFR (S) 8074292: nsk/jdb/kill/kill001: generateOopMap.cpp assert(bb->is_reachable()) failed

coleen.phillimore at oracle.com coleen.phillimore at oracle.com
Tue Apr 14 12:36:13 UTC 2020



On 4/13/20 10:49 PM, David Holmes wrote:
> Hi Coleen,
>
> On 14/04/2020 12:34 am, coleen.phillimore at oracle.com wrote:
>> Summary: Do not install async exceptions at_safepoint for each bytecode.
>
> I'm still not certain that we have to go this far to solve this 
> problem, but it does sound like a relatively simple solution provided 
> there are no unintended consequences.
>
>> See CR for a lot more details.  This change calls a new 
>> InterpreterRuntime::at_safepoint_async_safe() which installs the 
>> async exception in the interpreter at backward branches and returns.  
>> This uses safepoint polling code in the interpreter for each 
>> platform.  These changes (cross) compile on platforms that Oracle 
>> doesn't support but I don't know if they work.
>>
>> I'm not convinced the platform specific changes are necessary, 
>> because calls to the runtime from many bytecodes will install the 
>> async exception, so it's essentially installed "enough" for this 
>> deprecated feature.  I tested the changes with *and* without the 
>> platform specific changes with no failure, which included the jdb, 
>> jdi and jvmti serviceability tests.
>
> I don't understand what you mean here. If the whole basis of this fix 
> is "don't install async exceptions other than at backward branches and 
> returns" then how is that implemented without the changes in the 
> interpreter code?
>
> If this can be fixed just by adjusting the actual monitor code then I 
> would much prefer that. It took me a while to get my head around the 
> dispatch changes in interpreter code and even then I don't see how 
> those changes only impact backward branches and returns ??

You have to read the comments in the bug again.  There *is* special code 
that avoids installing the async exception in the monitorexit code, but 
that is not enough to prevent this bug.  You can also read the old bug 
report linked to this one.

The interpreter code dispatch_next passes "true" if it's a backward 
branch; that's how it can tell.
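
To make that concrete, here is a toy interpreter loop (my own 
illustration, not HotSpot code and not what's in the webrev) of the 
idea that the async poll only runs at backward branches and returns; 
install_async_exception() just stands in for the new 
InterpreterRuntime::at_safepoint_async_safe() entry:

  // Toy illustration (not HotSpot code): async events are delivered
  // only at backward branches and returns, never mid straight-line code.
  #include <atomic>
  #include <cstdio>

  enum Bytecode { NOP, GOTO_BACK, RETURN };

  static std::atomic<bool> async_pending{false};

  // stand-in for the new InterpreterRuntime::at_safepoint_async_safe()
  static void install_async_exception() {
    std::puts("async exception installed");
  }

  static void poll_async() {
    if (async_pending.exchange(false)) install_async_exception();
  }

  static void interpret(const Bytecode* code) {
    for (int bci = 0; ; ++bci) {
      switch (code[bci]) {
        case NOP:                    // plain bytecode: no poll here
          break;
        case GOTO_BACK:              // backward branch: poll
          poll_async();
          bci = -1;                  // ++bci takes us back to 0
          break;
        case RETURN:                 // return: poll
          poll_async();
          return;
      }
    }
  }

  int main() {
    const Bytecode code[] = { NOP, NOP, RETURN };
    async_pending = true;
    interpret(code);  // the pending "exception" is delivered at the return
  }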

My point was that there are enough code paths that install async 
exceptions *other than monitorenter and monitorexit* that maybe it's not 
necessary to install them at backward branches and returns.  I suppose 
someone could construct a test case to show otherwise.
>
>>
>> This change also makes InterpreterRuntime::monitorexit a JRT_LEAF 
>> entry. The code to check for exceptions is outside the runtime 
>> call. I ran the JCK vm and lang tests on this change with no failure.
>>
>> Tested with tier1-6.
>>
>> open webrev at 
>> http://cr.openjdk.java.net/~coleenp/2020/8074292.01/webrev
>> bug link https://bugs.openjdk.java.net/browse/JDK-8074292
>
> ./cpu/x86/interp_masm_x86.cpp
>
> It took me a long time to figure out how the new logic worked compared 
> to the old logic. Even then I'm unclear about the effective recursive 
> dispatch path: dispatch_base(generate_poll=true) -> dispatch_via -> 
> dispatch_base(generate_poll=false) - does it work okay with 
> VerifyActivationFrameSize? It seems a rather convoluted way to 
> effectively just execute:
>
>  858   lea(rscratch1, ExternalAddress((address)table));
>  859   jmp(Address(rscratch1, rbx, Address::times_8));

I could test it with VerifyActivationFrameSize.  Or I could just add the 
two instructions per platform.  I mostly copied the code in 
generate_safept_entry_for.  It might be better to just copy the 
instructions.
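
For anyone not reading macro assembler every day: the two quoted 
instructions just index the dispatch table with the next bytecode and 
jump through it.  A rough C++ analogue (my own sketch, not the 
generated code) would be:

  // Toy analogue of "lea table; jmp [table + bytecode*8]": an indirect
  // jump through a dispatch table indexed by the current bytecode.
  #include <cstdio>

  typedef void (*entry_t)();

  static void do_nop()    { std::puts("nop");    }
  static void do_return() { std::puts("return"); }

  static entry_t dispatch_table[] = { do_nop, do_return };

  static void dispatch(int bytecode) {
    dispatch_table[bytecode]();   // the jmp through the table
  }

  int main() {
    dispatch(0);
    dispatch(1);
  }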

>
> ---
>
> src/hotspot/share/interpreter/interpreterRuntime.cpp
>
> How were you able to drop this code:
>
> 791   if (elem == NULL || h_obj()->is_unlocked()) {
> 792 THROW(vmSymbols::java_lang_IllegalMonitorStateException());
> 793   }
>
> ?
>  and this:

The caller throws the exception for these cases.
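
To sketch the division of responsibility being described (just an 
illustration, not the code in the webrev): the leaf routine only 
reports failure, and the interpreter-side caller decides whether to 
throw IllegalMonitorStateException.

  // Toy sketch of "the caller throws": the leaf routine has no
  // exception machinery, it just returns a status the caller acts on.
  #include <cstdio>

  struct Monitor { bool locked; };

  // leaf-style helper, analogous to a JRT_LEAF entry: nothing thrown here
  static bool monitorexit_leaf(Monitor* m) {
    if (m == nullptr || !m->locked) return false;   // caller must throw
    m->locked = false;
    return true;
  }

  static void interpreter_unlock(Monitor* m) {
    if (!monitorexit_leaf(m)) {
      // the IllegalMonitorStateException that used to be thrown inside
      // InterpreterRuntime::monitorexit is now raised at the call site
      std::puts("throw IllegalMonitorStateException");
    }
  }

  int main() {
    Monitor m{true};
    interpreter_unlock(&m);   // unlocks quietly
    interpreter_unlock(&m);   // already unlocked, so the caller "throws"
  }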

>
> 798 #ifdef ASSERT
> 799 thread->last_frame().interpreter_frame_verify_monitor(elem);
> 800 #endif
>

This seemed redundant.

Coleen
> ?
>
> Thanks,
> David
>
>> Thanks,
>> Coleen
>>


