Still crashity crashing
Charles Oliver Nutter
headius at headius.com
Wed Aug 5 17:40:48 PDT 2009
On Wed, Aug 5, 2009 at 7:30 PM, Rémi Forax <forax at univ-mlv.fr> wrote:
> I have seen some small room for improvement in the JRuby code
> (apart from replacing DynamicMethod with MethodHandle, which is
> what must be done, but which is also, in my opinion, more than a one-week
> job :)
Nah, it's a day at most. Here's the process:
1. When generating DynamicMethods that point at real Java code, also
attach target class, method name, and signature info.
2. Use this info at runtime when preparing the call sites.
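Roughly, I'm picturing something like this (just a sketch, not actual JRuby
code; the class and field names are made up, and I'm using the
java.lang.invoke names):

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    // Illustrative only: the metadata a generated DynamicMethod could carry
    // so the call-site setup can resolve a direct MethodHandle later.
    class GeneratedMethodInfo {
        final Class<?> targetClass;   // the real Java class backing the Ruby method
        final String methodName;      // the Java method's name
        final MethodType signature;   // the Java method's signature

        GeneratedMethodInfo(Class<?> targetClass, String methodName, MethodType signature) {
            this.targetClass = targetClass;
            this.methodName = methodName;
            this.signature = signature;
        }

        // Step 2: at call-site preparation time, turn the stored info into a handle.
        MethodHandle resolve(MethodHandles.Lookup lookup) throws ReflectiveOperationException {
            return lookup.findVirtual(targetClass, methodName, signature);
        }
    }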
And even if we went with a stupid-simple solution, I could switch on
JRuby's reflection-based mode and simply unreflect the Method object
each ReflectedMethod (extends DynamicMethod) would contain. The fact
that we can juggle handle information immediately before installing it
in the call site makes everything pretty easy.
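For the reflection-based route, the unreflect path would look roughly like
this (again a sketch; the wrapper name is a placeholder standing in for
ReflectedMethod):

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.reflect.Method;

    // Placeholder for a ReflectedMethod-style wrapper: the reflected Method
    // it already holds can be unreflected into a MethodHandle for the call site.
    class ReflectedMethodSketch {
        private final Method method;

        ReflectedMethodSketch(Method method) {
            this.method = method;
        }

        MethodHandle toHandle(MethodHandles.Lookup lookup) throws IllegalAccessException {
            return lookup.unreflect(method);
        }
    }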
And here are a couple of questions for you and John:
1. If we wanted to expand JRuby's monomorphic inline caching into a
polymorphic cache in indy mode, could we simply chain together a series of
GWTs (guardWithTest handles) that perform type comparisons? So we'd have
something like:
at call site: GWT1
  test:     is receiver a String?
  target:   call cached String method
  fallback: GWT2
    test:     is receiver an Array?
    target:   call cached Array method
    fallback: GWT3
      ...
      fallback: slow lookup, and perhaps add to or edit the GWT chain
                based on some heuristic
Since the entire sequence would (ideally) still inline, it would be
similar to the regenerated call-site logic the DLR uses... except way
better, because it wouldn't interfere with the eventual targets inlining
into the caller.
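Concretely, the chain I have in mind would be built something like this
(toy sketch only; the receiver classes and targets below are placeholders,
not JRuby's real types):

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    public class GwtChainSketch {
        static final MethodHandles.Lookup LOOKUP = MethodHandles.lookup();

        // Prepend one guarded entry onto an existing chain (or the slow path).
        // The test checks the receiver's class; cachedTarget and fallback must
        // share a type whose first parameter is the receiver.
        static MethodHandle addGuard(Class<?> expected,
                                     MethodHandle cachedTarget,
                                     MethodHandle fallback) throws ReflectiveOperationException {
            MethodHandle test = LOOKUP.findVirtual(Class.class, "isInstance",
                            MethodType.methodType(boolean.class, Object.class))
                    .bindTo(expected);
            return MethodHandles.guardWithTest(test, cachedTarget, fallback);
        }

        public static void main(String[] args) throws Throwable {
            // Toy "cached" targets and a slow path, all of type (Object)String.
            MethodHandle stringTarget = MethodHandles.dropArguments(
                    MethodHandles.constant(String.class, "hit String cache"), 0, Object.class);
            MethodHandle listTarget = MethodHandles.dropArguments(
                    MethodHandles.constant(String.class, "hit List cache"), 0, Object.class);
            MethodHandle slowPath = MethodHandles.dropArguments(
                    MethodHandles.constant(String.class, "slow lookup"), 0, Object.class);

            // Chain: String guard -> List guard -> slow path.
            MethodHandle chain = addGuard(java.util.List.class, listTarget, slowPath);
            chain = addGuard(String.class, stringTarget, chain);

            System.out.println((String) chain.invokeExact((Object) "abc"));
            System.out.println((String) chain.invokeExact((Object) new java.util.ArrayList<Object>()));
            System.out.println((String) chain.invokeExact((Object) Integer.valueOf(42)));
        }
    }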
2. Have you (or anyone else) thought much about how indy might
eventually play with the tiered compiler? I could see the two working
extremely well together. In my head, the first-tier compilation would
happen quickly and know about method handle chains, but would also install
appropriate profiling hooks that can gather information about those
chains. For example, in the GWT case above, it would include branch
profiling to see which of the types in the PIC were getting hit most
often. Then the second-tier compiler could do a better job of
optimizing those paths.
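To make the profiling idea concrete, here's a toy illustration of the kind
of per-entry hit counting a first tier could gather on its own (this is not
real HotSpot or JRuby code, just a way to picture the data):

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;
    import java.util.concurrent.atomic.AtomicLong;

    // Toy illustration: wrap one PIC entry so every hit bumps a counter,
    // approximating the branch-profile data a first tier could collect.
    class CountingEntry {
        static final MethodHandles.Lookup LOOKUP = MethodHandles.lookup();

        static MethodHandle counted(MethodHandle target, AtomicLong hits)
                throws ReflectiveOperationException {
            // ()void handle that increments the counter; the long result is dropped.
            MethodHandle inc = LOOKUP.findVirtual(AtomicLong.class, "incrementAndGet",
                            MethodType.methodType(long.class))
                    .bindTo(hits)
                    .asType(MethodType.methodType(void.class));
            // foldArguments with a void, zero-arg combiner runs inc() before the
            // target, leaving the target's arguments untouched.
            return MethodHandles.foldArguments(target, inc);
        }
    }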
I know the tiered compiler work has slowed over the past few months,
but I've also heard the plan is still to make it a reality. Tiered
compilation combined with indy would certainly be worth more than the
sum of its parts, since dynamic, multi-stage optimization is the name
of the game for fast dynamic languages.
- Charlie