RFR(m): 8221734: Deoptimize with handshakes

David Holmes david.holmes at oracle.com
Thu May 9 04:55:49 UTC 2019


Hi Robbin,

On 8/05/2019 2:01 am, Robbin Ehn wrote:
> Hi David,
> 
> On 5/7/19 4:49 AM, David Holmes wrote:
>> Hi Robbin,
>>
>> I took a look at this and it is a lot more complex than I had 
>> expected. There are some interconnections that I'm not understanding 
>> here. From existing code why does deopt need to revoke biases? I don't 
>> see how deopt changes anything with respect to monitor ownership. Then 
>> in the new code why do you have to inflate monitors to do deopt? If 
>> this is truly new behaviour and I didn't just miss where this happens 
>> in existing code, then how does this impact the number of 
>> ObjectMonitors in existence and the monitor deflation process?
> 
> When going to the interpreter we expand the stack, which means moving 
> anything on the stack.
> Moving locks around might require an inflation, so we revoke the 
> biases first.
> The revoking code is seemingly from duke, so it's not that new 
> I think :)

Some bits here I'm not familiar enough with. If we've already gone to 
stack-lock then I would have expected the bias to already have been 
revoked. If we're only biased, then there's nothing on the stack, so 
nothing to move and no reason to revoke the bias. Maybe it's just 
considered simpler to revoke and inflate no matter what.
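
To make the ordering concrete, here is a minimal standalone sketch of the 
reasoning above (conceptual only - the names are mine, not HotSpot's):

  // Deopt replaces a compiled frame with larger interpreter frames, so a
  // BasicLock slot living in that frame may move. A stack-locked object's
  // header points at such a slot, so the lock is inflated to an
  // ObjectMonitor (which lives outside the stack) before the frame is
  // rewritten; a biased lock must have its bias revoked before it can be
  // inflated at all.
  enum class LockState { Biased, StackLocked, Inflated };

  LockState prepare_lock_for_deopt(LockState /* current */) {
    // Revoke the bias (if any) and inflate, regardless of the starting state.
    return LockState::Inflated;
  }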

>>
>> More comments below ...
>>
>> On 3/05/2019 8:31 pm, Robbin Ehn wrote:
>>> Hi, please see this update:
>>>
>>> Inc:
>>> http://cr.openjdk.java.net/~rehn/8221734/v2/inc/webrev/index.html
>>> Full:
>>> http://cr.openjdk.java.net/~rehn/8221734/v2/webrev/
>>
>> src/hotspot/share/code/codeCache.cpp
>>
>> This comment block no longer applies given there is no safepoint op now:
>>
>> 1190   // CodeCache can only be updated by a thread_in_VM and they 
>> will all be
>> 1191   // stopped during the safepoint so CodeCache will be safe to 
>> update without
>> 1192   // holding the CodeCache_lock.
>>
>> The same comment block here:
>>
>> 1208   // CodeCache can only be updated by a thread_in_VM and they 
>> will all be
>> 1209   // stopped dring the safepoint so CodeCache will be safe to 
>> update without
>> 1210   // holding the CodeCache_lock.
>>
>> is already incorrect if not actually at a safepoint. It makes the 
>> relationship between the Compile_lock, CodeCache_lock and being at a 
>> safepoint rather confusing.
> 
> Yes, this area is filled with 'weird' comments.
> mark_for_deoptimization takes CodeCache_lock unconditionally.
> I just removed these two.
> 
>>
>> ---
>>
>> src/hotspot/share/code/nmethod.cpp
>>
>> The comment here:
>>
>> 1119   // If _method is already NULL the Method* is about to be unloaded,
>> 1120   // so we don't have to break the cycle. Note that it is 
>> possible to
>> 1121   // have the Method* live here, in case we unload the nmethod 
>> because
>> 1122   // it is pointing to some oop (other than the Method*) being 
>> unloaded.
>>
>> no longer fits the code given we no longer skip the _method==NULL 
>> case. Further, the trailing comment here:
>>
>> 1123   Method::unlink_code(_method, this); // Break a cycle
>>
>> seems unnecessary given we preceded this with 4 lines of commentary 
>> already!
> 
> Updated.
> 
>>
>> Again this comment block:
>>
>> 1205 void nmethod::unlink_from_method() {
>> 1206   // We need to check if both the _code and 
>> _from_compiled_code_entry_point
>> 1207   // refer to this nmethod because there is a race in setting 
>> these two fields
>> 1208   // in Method* as seen in bugid 4947125.
>> 1209   // If the vep() points to the zombie nmethod, the memory for 
>> the nmethod
>> 1210   // could be flushed and the compiler and vtable stubs could 
>> still call
>> 1211   // through it.
>> 1212   Method::unlink_code(method(), this);
>>
>> seems meaningless with the code change you've applied.
> 
> Moved it to Method::unlink_code.
> 
>>
>> ---
>>
>> src/hotspot/share/code/nmethod.hpp
>>
>> !   void unlink_from_method();
>>
>> Now that this doesn't take the acquire_lock parameter, it would be useful to 
>> document what the locking expectations are: must this be called with a 
>> given lock held, or will it always acquire a given lock if needed?
> 
> Added comment.
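
For the record, the kind of thing I had in mind - illustrative wording only, 
and whichever expectation actually holds is the one that should be stated:

  // Unlink this nmethod from its Method. The required lock is acquired
  // internally, so callers must not already hold it.
  void unlink_from_method();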
> 
>>
>> ---
>>
>> src/hotspot/share/oops/method.cpp
>>
>> + void Method::unlink_code(Method *method, CompiledMethod *compare) {
>> + void Method::unlink_code(Method *method) {
>>
>> I'm not sure making these methods static just so the NULL check can be 
>> internalized is the best way to deal with this. Now you can't tell 
>> when NULL is expected and when it is an error. IMHO it is better to 
>> keep these as instance methods with a NULL check at the callsite if 
>> needed (or an assert if NULL is not expected).
> 
> All places were NULL checked before calling the old clear_code(), and in 
> no place was that asserted.

I'd still prefer an external NULL check and a call to an instance 
method, rather than using a static method just to internalize the NULL 
check. That's not a style of API that I like - if you push on it hard 
enough you end up with only public static methods that do a NULL check 
and then dispatch to private instance methods.
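
To illustrate the two styles with a hypothetical Foo/Bar pair (not the 
actual Method/CompiledMethod code):

  class Bar;

  // The style being argued against: a static wrapper that internalizes the
  // NULL check - at the call site you can no longer tell whether NULL is
  // expected or a bug.
  class Foo {
   public:
    static void unlink(Foo* foo, Bar* bar) {
      if (foo == nullptr) return;
      foo->unlink_impl(bar);
    }
   private:
    void unlink_impl(Bar* bar) { /* ... */ }
  };

  // The style preferred above: make the unlink an instance method and put
  // the NULL check (or an assert) at each call site:
  //   if (foo != nullptr) foo->unlink(bar);            // NULL expected, benign
  //   assert(foo != nullptr, "must not be NULL");      // NULL would be a bug
  //   foo->unlink(bar);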

> 
>>
>> ---
>>
>>   src/hotspot/share/runtime/biasedLocking.cpp
>>   src/hotspot/share/runtime/biasedLocking.hpp
>>
>> This seems like it would be better done after we have switched 
>> biased-locking revocation to use handshakes instead of safepoints. 
>> Otherwise we seem to be doing a partial conversion as a side-effect of 
>> this bug and it's far from obvious that it is complete/correct.
> 
> This is not the same: we are in a handshake and we want to revoke the 
> locks we find on our stack. Revoke-with-handshake is when you are 
> revoking another thread's bias; this is a thread-local operation.
> As I said to Patricio, the only thing I changed is that we can't safepoint.
> So it's the same code as before to revoke, just minus the safepointing
> possibilities.

I'll wait to see new webrev.

>>
>> ---
>>
>> src/hotspot/share/runtime/synchronizer.cpp
>>
>> These changes have me worried:
>>
>>     assert(Universe::verify_in_progress() ||
>> !          !SafepointSynchronize::is_at_safepoint(), "invariant");
>>
>> becomes:
>>
>>     assert(Universe::verify_in_progress() ||
>> !          !Universe::heap()->is_gc_active(), "invariant");
>>
>> I don't see an immediate equivalence between not being at a safepoint 
>> and not having a GC active. If I'm not at a safepoint now and the code 
>> following doesn't do a safepoint check then we remain outside of a 
>> safepoint. What is the equivalent reasoning for a GC being active?
> 
> When we use the handshake fallback path - handshaking on a platform that 
> does not support handshakes - the handshake is performed with a safepoint
> (this will go away in JDK 14).
> 
> "calling hashCode() or any of its internal aliases may result in changes
> to an object's markword if we need to assign a hashCode. That is,
> hashCode() can be a heap mutator. Given that, it seems unwise to call
> hashCode while at a safepoint. "
> 
> Since the JavaThread inflates on demand I removed the eager inflation.
> Inflation is thus done by the JavaThread; I will revert this.
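
The quoted hashCode() point is worth spelling out: the identity hash is 
installed lazily in the object header, so the first call is a write to the 
object. A conceptual sketch (not HotSpot's actual code):

  struct ObjectHeader {
    unsigned hash;   // 0 means "no identity hash assigned yet"
    // ... lock/GC bits elided ...
  };

  unsigned identity_hash(ObjectHeader* h) {
    if (h->hash == 0) {
      h->hash = 0x1234abcd;   // stand-in for a freshly generated hash: a heap write
    }
    return h->hash;
  }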
> 
>>
>> !         ResourceMark rm(Self);
>> !         ResourceMark rm;
>>
>> Are you suggesting the current thread is not Self? If that is the case 
>> then there should be numerous asserts earlier on to ensure we can't 
>> follow any code paths that expect that Self is the current thread! But 
>> I'm concerned that we've introduced a new way for a third-party thread 
>> to introduce monitor inflation almost independent of the threads using 
>> the monitor.
> 
> Reverted this because of the above reason.
> (When the VM thread executed the handshake on behalf of the JavaThread it 
> did the inflation, so Self was not Thread::current().)

I'm happy to hear that is reverted. I'm not at all sure that inflation 
by a third-party thread would work correctly - and I very much expect it 
might complicate the async monitor deflation work.
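
For reference, the distinction those two lines encode (assuming HotSpot's 
ResourceMark constructors; a sketch only):

  void example(Thread* Self) {
    ResourceMark rm_self(Self);  // marks and releases Self's resource area
    ResourceMark rm;             // marks and releases Thread::current()'s area
    // Equivalent only when Self == Thread::current(); when the VM thread ran
    // the handshake on behalf of a JavaThread, the two differed.
  }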

Thanks,
David
-----

> I'll send update to RFR mail.
> 
> Thanks, Robbin
> 
>>
>> Thanks,
>> David
>> -----
>>
>>>
>>> # Note
>>> http://cr.openjdk.java.net/~rehn/8221734/v2/inc/webrev/src/hotspot/share/runtime/biasedLocking.cpp.sdiff.html 
>>> line 630
>>> This is a revert to the original; I had accidentally left in a temporary 
>>> test change, as you can see here in the full diff:
>>> http://cr.openjdk.java.net/~rehn/8221734/v2/webrev/src/hotspot/share/runtime/biasedLocking.cpp.sdiff.html 
>>>
>>>
>>> I think I managed to address all the review comments.
>>>
>>> Dean, can you please cast an extra eye on:
>>> http://cr.openjdk.java.net/~rehn/8221734/v2/inc/webrev/src/hotspot/share/oops/method.cpp.sdiff.html 
>>>
>>> This OR should be correct.
>>>
>>> Dan, please do the same for the biased locking changes.
>>>
>>> I left out the merge with the MutexLocker changes, since it was not 
>>> interesting.
>>> There were some conflicts with JVMCI changes, so the incremental webrev 
>>> contains some parts of that merge.
>>>
>>> Passes t1-5 and local testing.
>>> I'll continue with some additional testing.
>>>
>>> Thanks, Robbin
>>>
>>> On 4/25/19 2:05 PM, Robbin Ehn wrote:
>>>> Hi all, please review.
>>>>
>>>> Let's deopt with handshakes.
>>>> Removed the VM op Deoptimize; instead we handshake.
>>>> Locks need to be inflated since we are not in a safepoint.
>>>>
>>>> Goes on top of:
>>>> https://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2019-April/033491.html 
>>>>
>>>>
>>>> Code:
>>>> http://cr.openjdk.java.net/~rehn/8221734/v1/webrev/index.html
>>>> Issue:
>>>> https://bugs.openjdk.java.net/browse/JDK-8221734
>>>>
>>>> Passes t1-7 and multiple t1-5 runs.
>>>>
>>>> A few startup benchmarks see a small speedup.
>>>>
>>>> Thanks, Robbin
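
For readers coming to this thread cold: the quoted RFR replaces the global 
VM_Deoptimize safepoint operation with per-thread handshakes. In rough 
outline (an illustration of the idea only, not the actual patch):

  // Illustration only - not the JDK-8221734 patch itself.
  class DeoptimizeMarkedClosure : public ThreadClosure {
   public:
    void do_thread(Thread* thread) {
      // Walk this JavaThread's stack and arrange for frames whose nmethods
      // were marked for deoptimization to re-enter via the deopt path.
      // This runs while only the target thread is stopped, not at a global
      // safepoint - which is why any stack-locks involved must be inflated.
    }
  };

  // Old shape: VMThread::execute(&deopt_vm_op);   // stop-the-world VM op
  // New shape: Handshake::execute(&closure);      // per-thread handshake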

