Longjumps considered inexpensive...until they aren't
Charles Oliver Nutter
charles.nutter at sun.com
Sat Jun 21 15:48:46 PDT 2008
Charles Oliver Nutter wrote:
> This is a longer post, but very important for JRuby.
>
> In John Rose's post on using flow-control exceptions for e.g. nonlocal
> returns, he showed that when the throw and catch are close enough
> together (i.e. same JIT compilation unit) HotSpot can turn them into
> jumps, making them run very fast. This seems to be borne out by a simple
> case in JRuby, a return occurring inside Ruby exception handling:
>
> def foo; begin; return 1; ensure; end; end
>
> In order to preserve the stack, JRuby's compiler generates a synthetic
> method any time it needs to do inline exception handling, such as for
> begin/rescue/ensure blocks as above. In order to have returns from
> within those synthetic methods propagate all the way back out through
> the parent method, a ReturnJump exception is generated. Here are
> numbers for the cases without and with the begin/ensure:
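(For context, a flow-control exception along those lines might look
roughly like the sketch below. The class and field names are
illustrative, not the actual JRuby implementation; the key trick is
overriding fillInStackTrace so constructing the exception stays cheap.)

// Illustrative sketch only -- not JRuby's actual ReturnJump class.
// Skipping the stack-trace capture keeps construction cheap, since this
// exception is pure control flow and its trace is never inspected.
public class ReturnJump extends RuntimeException {
    private final Object returnValue;

    public ReturnJump(Object returnValue) {
        this.returnValue = returnValue;
    }

    public Object getReturnValue() {
        return returnValue;
    }

    @Override
    public Throwable fillInStackTrace() {
        return this; // skip the expensive stack walk entirely
    }
}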
BTW, I did find a problem unrelated to HotSpot optimization whose fix
improved performance substantially, and I describe it here for posterity.
The original call site caching logic in JRuby was structured roughly
like this:
- lookup method
- invoke method
- if all goes well, cache method reference
The idea was that if the method failed exceptionally, we didn't want to
cache it. However, that completely ignored the fact that non-local flow
control was also implemented as an exception. So when a non-local return
bubbled back out through the call sites, it caused them to skip the
caching step. This meant that any method on the stack between a
non-local return's throw and its catch would never be cached.
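In code, the problematic ordering looked roughly like this (a
simplified sketch; CachedMethod, the cache field, and lookup() are
stand-ins for the real JRuby call-site machinery):

interface CachedMethod {
    boolean isValidFor(Object self);
    Object invoke(Object self, Object[] args);
}

class NaiveCallSite {
    private CachedMethod cache;

    Object call(Object self, Object[] args) {
        CachedMethod method = (cache != null && cache.isValidFor(self))
                ? cache
                : lookup(self);                     // 1. lookup method

        Object result = method.invoke(self, args);  // 2. invoke method

        // 3. cache only after a normal return -- a non-local return
        // thrown as an exception unwinds right past this line, so every
        // call site between the throw and its catch never caches.
        cache = method;
        return result;
    }

    private CachedMethod lookup(Object self) {
        // stand-in for the real method table lookup
        throw new UnsupportedOperationException("illustrative only");
    }
}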
Fixing this (by always caching the method immediately after lookup)
brought performance to a much more reasonable level. It's still not as
fast as a "soft return" falling through the stack, but it was a real
"d'oh" moment when I found it; a sketch of the corrected ordering
follows below. Something to consider for anyone else implementing
non-local flow control in a call path with specific ordering
requirements.
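Reusing the stand-ins from the sketch above, the corrected ordering
caches the lookup result before invoking, so an exceptional unwind can
no longer bypass the caching step:

class FixedCallSite {
    private CachedMethod cache;

    Object call(Object self, Object[] args) {
        CachedMethod method = (cache != null && cache.isValidFor(self))
                ? cache
                : lookup(self);               // 1. lookup method
        cache = method;                       // 2. cache immediately
        return method.invoke(self, args);     // 3. invoke -- how the call
                                              //    exits no longer matters
    }

    private CachedMethod lookup(Object self) {
        throw new UnsupportedOperationException("illustrative only");
    }
}

Caching before the invoke is safe because the lookup result is correct
regardless of how the call exits; the cached entry only needs to be
invalidated when the method table changes, not when a call unwinds
exceptionally.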
- Charlie