RFR(m): 8221734: Deoptimize with handshakes
Robbin Ehn
robbin.ehn at oracle.com
Fri Apr 26 12:50:25 UTC 2019
Hi Dan, thanks for looking at this!
On 4/25/19 10:43 PM, Daniel D. Daugherty wrote:
>> Code:
>> http://cr.openjdk.java.net/~rehn/8221734/v1/webrev/index.html
>
> Robbin, you'll have to merge with Coleen's recent MutexLocker fix (8222518).
Yes, done!
> src/hotspot/share/aot/aotCompiledMethod.cpp
> L163: bool AOTCompiledMethod::make_not_entrant_helper(int new_state) {
> L207: bool AOTCompiledMethod::make_entrant() {
> So the Compiler team is on board with switching from the
> Patching_lock to the CompiledMethod_lock?
Patching_lock, AFAIK originally intended to protect code patching, is now used
for a lot of different unrelated cases. To iterate the compiled methods we need
to hold CodeCache_lock, which is a leaf lock. Sometimes we take CodeCache_lock
while holding Patching_lock, so we can't take Patching_lock while iterating. By
having a new leaf lock for the compiled methods we can update them while
iterating over them.
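The ordering above can be sketched with a toy model (std::mutex stands in for HotSpot's Mutex, and all names here are illustrative assumptions, not the real types): the cache-wide lock guards the collection, and each method carries its own leaf lock, so a method's state can be flipped while the iteration lock is held — exactly what Patching_lock could not provide, since CodeCache_lock is sometimes taken while Patching_lock is already held.

```cpp
#include <mutex>
#include <vector>

// Toy stand-ins: CompiledMethodModel plays the role of a CompiledMethod
// with its new per-instance leaf lock, CodeCacheModel plays CodeCache_lock
// plus the method list it guards.
struct CompiledMethodModel {
  std::mutex state_lock;   // role of CompiledMethod_lock (leaf)
  int state = 0;
};

struct CodeCacheModel {
  std::mutex cache_lock;   // role of CodeCache_lock (outer)
  std::vector<CompiledMethodModel*> methods;

  // Safe: cache_lock -> state_lock is a consistent outer-to-leaf order,
  // so we can mutate each method while still iterating under cache_lock.
  int mark_all(int new_state) {
    std::lock_guard<std::mutex> outer(cache_lock);
    int changed = 0;
    for (CompiledMethodModel* m : methods) {
      std::lock_guard<std::mutex> leaf(m->state_lock);
      if (m->state != new_state) { m->state = new_state; ++changed; }
    }
    return changed;
  }
};
```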
> src/hotspot/share/gc/z/zBarrierSetNMethod.cpp
> No comments (copyright year needs update).
Fixed.
>
> src/hotspot/share/gc/z/zNMethod.cpp
> No comments (copyright year needs update).
Fixed.
>
> src/hotspot/share/oops/markOop.hpp
> L180: bool biased_locker_is(JavaThread* thread) const {
> Is this for a different project?
>
> L181: if (!has_bias_pattern()) {
> L182: return false;
> L183: }
> L184: // If current thread is not the owner it can be unbiased at anytime.
> L185: JavaThread* jt = (JavaThread*) ((intptr_t) (mask_bits(value(),
> ~(biased_lock_mask_in_place | age_mask_in_place | epoch_mask_in_place))));
> So you don't want to use this:
>
> JavaThread* biased_locker() const {
> assert(has_bias_pattern(), "should not call this otherwise");
> return (JavaThread*) ((intptr_t) (mask_bits(value(),
> ~(biased_lock_mask_in_place | age_mask_in_place | epoch_mask_in_place))));
> }
>
> because of the assert().
>
> I think biased_locker() and biased_locker_is() both need to work
> with a copy of the markOop so that it can't change dynamically.
> Something like:
>
> JavaThread* biased_locker() const {
> markOop copy = this;
> assert(copy.has_bias_pattern(), "should not call this otherwise");
> return (JavaThread*) ((intptr_t) (copy.mask_bits(value(),
> ~(biased_lock_mask_in_place | age_mask_in_place | epoch_mask_in_place))));
> }
>
> bool biased_locker_is(JavaThread* thread) const {
> markOop copy = this;
> if (!copy.has_bias_pattern()) {
> return false;
> }
> return copy.biased_locker();
> }
Yes, thanks! I did a slightly different fix, since we already have a copy.
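The "work on a copy" rule can be shown with a simplified, assumed bit layout (the real masks live in markOop.hpp; these constants and types are illustrative only): read the mark word once, then do every test on that snapshot, so a concurrent revocation cannot change the answer between the pattern check and the owner extraction. Note the predicate must also compare the extracted owner to the queried thread, which the sketched version in the quote above leaves out.

```cpp
#include <cstdint>

// Assumed, simplified mark-word layout -- NOT the real HotSpot values.
typedef uintptr_t markWord;
const uintptr_t biased_lock_mask_in_place = 0x7;
const uintptr_t age_mask_in_place         = 0x78;
const uintptr_t epoch_mask_in_place       = 0x300;
const uintptr_t biased_lock_pattern       = 0x5;

struct JavaThread { int dummy; };

static bool has_bias_pattern(markWord m) {
  return (m & biased_lock_mask_in_place) == biased_lock_pattern;
}

// Strip the non-pointer bits to recover the biasing thread.
static JavaThread* biased_locker(markWord m) {
  return (JavaThread*)(m & ~(biased_lock_mask_in_place |
                             age_mask_in_place | epoch_mask_in_place));
}

// The mark word is passed by value: the caller's snapshot cannot change
// under us, even if the object is concurrently unbiased.
static bool biased_locker_is(markWord m, JavaThread* thread) {
  if (!has_bias_pattern(m)) {
    return false;  // another thread may unbias at any time
  }
  return biased_locker(m) == thread;
}
```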
>
> src/hotspot/share/oops/method.cpp
> old L104: clear_code(false /* don't need a lock */); // from_c/from_i get
> set to c2i/i2i
> Is the comment after '//' not useful?
Added it back.
>
> src/hotspot/share/runtime/biasedLocking.cpp
> src/hotspot/share/runtime/biasedLocking.hpp
> Hmmm... More Biased Locking changes. I didn't take a close
> look at these.
Yes, please.
>
> src/hotspot/share/runtime/mutex.hpp
> old L65: special = tty + 1,
> new L65: special = tty + 2,
> Why?
CompiledMethod_lock must rank below CodeCache_lock.
There is no easy way to push lock ranks up, other than hoping that testing
trips the asserts. This way I only need to look at the new lock.
Also, the compiler locks are very coarse-grained; there should be more locks in
here :)
>
> src/hotspot/share/runtime/mutexLocker.cpp
> No comments.
>
> src/hotspot/share/runtime/mutexLocker.hpp
> L34: extern Mutex* Patching_lock; // a lock used to
> guard code patching of compiled code
> L35: extern Mutex* CompiledMethod_lock;
> A comment is traditional here...
>
Fixed.
> src/hotspot/share/runtime/synchronizer.cpp
> old L1317: !SafepointSynchronize::is_at_safepoint(), "invariant");
> new L1317: !Universe::heap()->is_gc_active(), "invariant");
> Why?
If we use the handshake fallback path (obsolete in JDK 14) we execute the
handshake inside a safepoint. Thus when inflating we can be at a safepoint.
>
> L1446: ResourceMark rm;
> L1496: ResourceMark rm;
> Why drop 'Self'? That makes the ResourceMark more expensive..
During the handshake the VM thread may execute the handshake on behalf of the
JavaThread, thus inflating the monitor for that thread, meaning Self is not
Thread::current().
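The trade-off can be sketched with a minimal stand-in (assumed names; not the real ResourceMark): the default constructor pays for a current-thread lookup but is always correct, while the explicit-thread form is cheaper yet only valid if the passed thread really is the executing one — which it is not when the VM thread runs the handshake on a JavaThread's behalf.

```cpp
// Toy stand-ins for Thread::current() and ResourceMark.
struct ThreadModel { int marks = 0; };

static ThreadModel* g_current = nullptr;   // role of Thread::current()
static ThreadModel* current_thread() { return g_current; }

struct ResourceMarkModel {
  ThreadModel* t;
  // Default form: does the current-thread lookup, always targets the
  // thread actually executing -- the safe choice during a handshake.
  ResourceMarkModel() : t(current_thread()) { t->marks++; }
  // Explicit form: skips the lookup, but misattributes the mark if
  // 'thread' is not the executing thread.
  explicit ResourceMarkModel(ThreadModel* thread) : t(thread) { t->marks++; }
  ~ResourceMarkModel() { t->marks--; }
};
```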
> src/hotspot/share/services/dtraceAttacher.cpp
> No comments (copyright year needs update).
Fixed.
>
>
> I don't think I've found anything that's "must fix". Please let me know
> if I should re-review:
>
> src/hotspot/share/runtime/biasedLocking.cpp
> src/hotspot/share/runtime/biasedLocking.hpp
> src/hotspot/share/runtime/deoptimization.cpp
>
> because the Biased Locking changes are critical for this project.
Yes, the inflation part is the trickiest.
More eyes are better!
I have some small changes to biasedLocking.{hpp,cpp} which I'll send through
some tiers of testing.
Please hold off re-examining those until I send out the updated and tested code.
Thanks!
/Robbin
>
> Dan
>
>
>> Issue:
>> https://bugs.openjdk.java.net/browse/JDK-8221734
>>
>> Passes t1-7 and multiple t1-5 runs.
>>
>> A few startup benchmark see a small speedup.
>>
>> Thanks, Robbin
>
More information about the hotspot-compiler-dev mailing list