still no fun with invokedynamic

Rémi Forax forax at univ-mlv.fr
Sun Sep 13 06:53:02 PDT 2009


On 13/09/2009 02:38, John Rose wrote:
> On Sep 11, 2009, at 2:36 AM, Jochen Theodorou wrote:
>
>>> You can also try the backport :)
>>> http://code.google.com/p/jvm-language-runtime/source/browse/#svn/trunk/invokedynamic-backport
>>
>> well, I most probably will, since there is nothing else and the VM
>> implementation is hopefully faster or at least of equal speed. As for
>> functionality, I trust your backport very much ;)
>
> The backport is a great option for experimentation, since it does not 
> require a pre-release JVM.  Its performance seems to be comparable to 
> the current MLVM JVM.  Basically what you get is a backend that 
> performs JRuby-like rewrites of JSR 292 bytecodes.  As we work on code 
> quality (JIT optimizer) in the MLVM JVM, I expect that the native JSR 
> 292 implementations will be decisively faster, since the JSR 292 
> version of the code has (potentially) more quasi-static knowledge for 
> the JIT optimizer to exploit.

John, I don't agree with you :)
The backport provides more than the JRuby-like optimisation,
or perhaps not more, but something different.

In fact the backport can be split into two parts. One part is like the 
JRuby optimisations on method calls, i.e. an abstract class
that contains one abstract method per arity. The other part, named the 
optimizer, works more like the indy compiler inlining patch:
it is triggered when a method handle/call site is used often and tries to 
produce, from the method handle adapter tree,
a simple sequence of bytecode that the VM JIT will be able to inline.
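
To give an idea, the arity-based part looks roughly like this (a 
simplified sketch with invented names, not the exact classes of the 
backport):

  // Simplified sketch of the arity-based part: one abstract method per
  // arity, plus a varargs fallback for higher arities.
  public abstract class ArityCallSite {
    public abstract Object invoke0(Object receiver) throws Throwable;
    public abstract Object invoke1(Object receiver, Object arg0) throws Throwable;
    public abstract Object invoke2(Object receiver, Object arg0, Object arg1) throws Throwable;
    // ... and so on up to a fixed maximum arity ...
    public abstract Object invokeN(Object receiver, Object... args) throws Throwable;
  }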

Unlike JRuby, the backport controls neither the signature of the 
invokedynamic bytecode at the call site
nor the signature of the language's functions/methods.
So at the call site the backport may have to do some boxing/unboxing, and
on the callee side it may have to rely on reflection.
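
Concretely, the generic path can be pictured like this (again a 
simplified sketch with invented names, not the actual code of the 
backport):

  import java.lang.reflect.Method;

  // Simplified sketch: the call site receives already boxed arguments
  // and reaches the language-level method through core reflection.
  public class ReflectiveCallSite {
    private final Method target; // the resolved target method of the language

    public ReflectiveCallSite(Method target) {
      this.target = target;
    }

    // Primitive arguments are boxed by the caller (e.g. int -> Integer)
    // and the boxed result is unboxed again on the caller side if needed.
    public Object invoke(Object receiver, Object... boxedArgs) throws Exception {
      return target.invoke(receiver, boxedArgs);
    }
  }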

The backport optimizer has (almost) the same knowledge as the MLVM JIT,
but because it produces raw bytecode, it can't pass that knowledge on to the JIT.
So I agree with John that in one year (perhaps less) the MLVM compiler 
will be faster.
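
For illustration, the code the optimizer spins for a hot call site is 
morally equivalent to something like this (a made-up Java-level example; 
the real output is raw bytecode built from the adapter tree):

  // Made-up Java-level equivalent of the straight-line code emitted for a
  // hot call site: a type guard from the adapter tree, a direct call that
  // the VM JIT can inline, and a fallback when the guard fails.
  public final class HotCallSiteStub {
    public static Object call(Object receiver, Object arg) {
      if (receiver instanceof String) {                         // guard derived from the adapter tree
        return ((String) receiver).concat(String.valueOf(arg)); // direct, inlinable call
      }
      return fallback(receiver, arg);                           // generic slow path
    }

    private static Object fallback(Object receiver, Object arg) {
      // placeholder: a real call site would go back to its generic dispatch path
      throw new UnsupportedOperationException("generic dispatch");
    }
  }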

>
> Also, note that the JVM and the JDK are tightly coupled in the JSR 292 
> implementations.  At least during JDK7 development, you can't mix and 
> match JVMs and JDKs.  The best bet will be to use a JDK7 build 
> updated with the current MLVM stuff, when that becomes available.  But 
> MLVM will always be ahead of the curve.  I wonder if our friends over 
> at EngineYard are making builds from it?  (Hint, hint.)  I know Attila 
> Szegedi also has been making MLVM builds and posting the bits.
>
> The other question, of course, is when the MLVM stuff will get into 
> the JDK7 builds.  That work has been road-blocked by implementation 
> problems in the GC; it's what I've been working on for several weeks 
> (plus vacation).  GC extensions are behind many of the crashes we've 
> been seeing in the MLVM.  As of last night, I have a set of GC changes 
> that pass the JDK7 pre-integration testing, and also begin to support 
> the JRuby tests (the benchmarks, actually).  Pushing these into the 
> JDK7 pipeline will open the way to committing the other pending MLVM 
> changes, so we can get standard builds that are beyond the JavaOne 
> preview functionality.
>
> See you at the Summit!
> -- John

See you there,
Rémi