JNI-performance - Is it really that fast?
Dave Dice
Dave.Dice at Sun.COM
Tue Mar 25 22:18:26 UTC 2008
>
>> Biased locking is enabled 4 seconds after startup.
> Thanks a lot Christian and greetings from Austria!
> With your suggested flag the synchronization overhead with
> BiasedLocking shrinks to about 10-20% on my dual-core machine, down
> from a few hundred percent.
> Do you know whether ReentrantLock could also be optimized to use
> BiasedLocking? In my use case, most likely one thread will
> acquire/release the lock again and again; maybe from time to time
> another thread will acquire it, but rather seldom.
In theory biased locking could be applied to the j.u.c lock family,
but we have no current plans to do so. I've discussed it with Doug
Lea and neither of us feels it's a good fit. The j.u.c locks are a
good choice when "synchronized" doesn't fit the bill, such as when you
might need timed waits, tryLock, hand-over-hand "coupled" locking,
etc. ReentrantLock also tends to be used in situations where the
programmer is sure multiple threads are actively coordinating their
operation, meaning that ReentrantLock would benefit little from biased
locking. For most synchronization -- contended or uncontended --
you're better off with synchronized, as you get the benefits of biased
locking, adaptive spinning, potential lock elision via escape
analysis, and in the future, hardware transactional lock elision
(http://blogs.sun.com/dave/entry/rock_style_transactional_memory_lock).
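For example -- just a rough sketch, with made-up class and method
names -- a timed tryLock is the sort of thing you simply can't express
with synchronized:

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    class TimedLockExample {
        private final ReentrantLock lock = new ReentrantLock();

        // Try to acquire the lock, but give up after 10 ms instead of
        // blocking indefinitely -- "synchronized" can't express this.
        boolean updateIfAvailable(Runnable work) throws InterruptedException {
            if (lock.tryLock(10, TimeUnit.MILLISECONDS)) {
                try {
                    work.run();
                    return true;
                } finally {
                    lock.unlock();
                }
            }
            return false;   // lock was busy; caller can back off or retry
        }
    }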
In your case, if the lock is ever shared -- that is, locked by multiple
threads during its lifetime -- then biased locking probably won't
provide the latency reduction benefit you're after, as the object will
likely become unbiased at some point. I suspect that in your case
sharing will ultimately occur, but only infrequently -- is that correct?
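To make that concrete, here's a rough sketch of the access pattern I
think you're describing -- the names and counts are invented, but the
shape is what matters: one thread acquires the monitor over and over
(cheap while the lock stays biased toward it), and the occasional
acquisition from another thread is exactly what can revoke the bias.

    class Counter {
        private long hits;

        synchronized void record() { hits++; }          // hot path, one thread
        synchronized long snapshot() { return hits; }   // rare, another thread
    }

    class SharingPattern {
        public static void main(String[] args) throws InterruptedException {
            final Counter c = new Counter();

            // Thread A acquires the monitor millions of times.
            Thread a = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < 10000000; i++) {
                        c.record();
                    }
                }
            });

            a.start();
            Thread.sleep(100);

            // The infrequent acquisition from a second thread (here, main).
            System.out.println(c.snapshot());
            a.join();
        }
    }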
Regards
Dave
p.s., even if a JVM were to use aggressive static escape analysis,
there will still be lots of cases in common usage where the JIT can't
conservatively prove non-escape, and in turn it won't be able to elide
the lock. Biased locking works well in those cases. In a sense we can
bet against the object ever being shared, but if we're wrong the JVM
can detect that and recover safely, making it a nice dynamic adjunct
to static analysis. This is important as we see lots of
"precautionary" locking in the standard libraries, such as Hashtable
or Vector. It's often the case that a coarsely-synchronized collection
instance will be accessed by only one thread.
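A rough sketch of what I mean by precautionary locking -- the names
here are made up, but the shape is very common:

    import java.util.Vector;

    class ThreadConfined {
        // A coarsely-synchronized collection that, in practice, is only
        // ever touched by the thread that created it.  Every add() below
        // acquires the Vector's monitor, yet there is never any
        // contention; biased locking (or lock elision, when escape
        // analysis can prove non-escape) makes those acquisitions cheap.
        static int countEvens(int n) {
            Vector<Integer> v = new Vector<Integer>();
            for (int i = 0; i < n; i++) {
                if (i % 2 == 0) {
                    v.add(i);
                }
            }
            return v.size();
        }

        public static void main(String[] args) {
            System.out.println(countEvens(1000));  // prints 500
        }
    }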
>
>
>> Our per-primitive cost still mostly consists of JNI overhead for
>> small primitives (think fillRect(1x1)).
> For my fillRect(1x1) test, the locking of AWT's ReentrantLock was far
> more expensive than the JNI overhead, even with almost no contention.
> That was for a VolatileImage on a dual-core machine; on a single-core
> machine the hit was much smaller.
>
>> In the meantime, the people who believe JNI performance is very good,
>> please continue to speak up, as I'm sure the VM engineers who have
>> worked to improve this path over the years will appreciate the
>> feedback. :-)
> It's really impressive -- congratulations and thanks to the VM
> engineers who made that possible :) ;)
>
> Thanks a lot, kind regards, Clemens