How are callers patched when a method is recompiled?

Tom Rodriguez Thomas.Rodriguez at Sun.COM
Mon Oct 6 22:59:53 PDT 2008


On Oct 6, 2008, at 7:27 PM, Yale Zhang wrote:

> Tom,
>
> Thanks for helping. The method you describe is almost exactly what I
> envisioned in #1, which has the advantages of being lock free, not
> needing to store an explicit list of callers to patch, and allowing
> gradual patching via the detour created. However, I see a potential
> problem. If you overwrite the 1st instruction in a function while a
> thread is in the code, then the function might not execute atomically
> if it has to execute the clobbered instruction again. Is it assumed
> that the patched instruction is a stack frame save instruction that
> never gets executed more than once? Also, I see why the patch can't
> overwrite more than one instruction, due to the possibility of
> executing an invalid instruction. Does HotSpot attempt to always make
> the 1st instruction of a function at least 5 bytes (the patch size
> for x86)?

It does place some constraints on what the first instruction of the
nmethod can be, and we always align the entry point to at least 8
bytes.  There are assertions scattered about that check these
conditions.  Instruction cache memory consistency tends to be weaker
than in the normal memory hierarchy, so we rely on store atomicity and
don't require that other threads immediately see the changes to the
instruction stream.
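
For concreteness, here is a minimal sketch of that kind of entry-point
patch on x86-64.  It is not the actual NativeJump::patch_verified_entry;
the function name, and the assumption that the code cache page is
already writable and that the target is within +/-2GB, are purely
illustrative.  It encodes a 5-byte "jmp rel32" and publishes it with a
single atomic store into the aligned 8-byte word at the entry, which is
where the alignment requirement pays off:

    #include <atomic>
    #include <cstdint>
    #include <cstring>

    void patch_entry_with_jump(unsigned char* entry, unsigned char* target) {
      // rel32 is measured from the end of the 5-byte jump instruction.
      int32_t rel32 = (int32_t)(target - (entry + 5));

      unsigned char code[8];
      std::memcpy(code, entry, sizeof(code));        // keep bytes 5..7 as-is
      code[0] = 0xE9;                                // jmp rel32 opcode
      std::memcpy(&code[1], &rel32, sizeof(rel32));  // 4-byte displacement

      // Because the entry is 8-byte aligned, the whole patch fits in one
      // aligned word.  Publish it with a single store: a racing thread sees
      // either the old first instruction or the complete jump, never a torn
      // mix of the two.
      uint64_t word;
      std::memcpy(&word, code, sizeof(word));
      reinterpret_cast<std::atomic<uint64_t>*>(entry)
          ->store(word, std::memory_order_release);
    }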


> Overall, while this method is efficient, I think it creates an ugly
> implementation due to ISA specifics. Does anybody know how generally
> applicable this method is across various ISAs? I see that it's used
> for x86_64, x86, and SPARC.

There are also ia64, ppc and arm ports of hotspot in existence that
use the same basic mechanism.  x86 uses a branch but sparc uses a
trapping instruction along with extra logic in the signal handler.  So
there are lots of ways to approach the entry point patching.  Any ISA
with fixed-size instructions is relatively easy; it's only x86 that
makes the patching part somewhat tricky.
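
As a rough illustration of the trap-based flavor (this is not HotSpot's
SPARC code; handle_wrong_method_stub and the Linux/x86-64 signal
plumbing below are assumptions made for the sketch), the entry can be
overwritten with a trapping instruction and a signal handler can
redirect the thread by rewriting the saved program counter:

    #include <signal.h>
    #include <ucontext.h>

    // Placeholder for the re-resolution stub, so the sketch links.
    extern "C" void handle_wrong_method_stub() {
      // A real stub would re-resolve the call and dispatch to new code.
    }

    static void trap_handler(int, siginfo_t*, void* raw_ctx) {
      ucontext_t* ctx = static_cast<ucontext_t*>(raw_ctx);
      // Resume execution in the stub instead of at the patched entry point.
      ctx->uc_mcontext.gregs[REG_RIP] =
          reinterpret_cast<greg_t>(handle_wrong_method_stub);
    }

    void install_trap_handler() {
      struct sigaction sa = {};
      sigemptyset(&sa.sa_mask);
      sa.sa_flags = SA_SIGINFO;
      sa.sa_sigaction = trap_handler;
      sigaction(SIGILL, &sa, nullptr);   // e.g. a ud2 at the entry raises SIGILL
    }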

tom

>
> On Mon, Oct 6, 2008 at 4:58 PM, Tom Rodriguez <Thomas.Rodriguez at sun.com 
> > wrote:
> The basic machinery relies on making methods not entrant and
> reclaiming them once we're sure no inline cache could still contain
> a reference to them.  We overwrite the first instructions of the
> generated code with an instruction sequence that gets us into the
> handle_wrong_method stub.  Look at NativeJump::patch_verified_entry
> and SharedRuntime::handle_wrong_method.  This forces any callers to
> re-resolve the call site and either find the new generated code or
> fall back to the interpreter.  NMethodSweeper takes care of cleaning
> all the inline caches that might still contain references to the
> nmethod by cleaning a portion of the code cache at every safepoint.
> Once we're sure that no inline cache could contain a reference to
> the not-entrant nmethod, we mark it for reclamation.  There are some
> tricky bits in the runtime where code is operating on an nmethod and
> we want to be sure it isn't swept out from underneath us; these use
> the nmethodLocker, which is basically a reference counter that delays
> the freeing of the nmethod.
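
A minimal sketch of that nmethodLocker idea, with illustrative names
rather than HotSpot's actual declarations: a scoped guard bumps a
per-nmethod count, and the sweeper only frees a not-entrant nmethod
once it is marked for reclamation and the count has dropped to zero.

    #include <atomic>

    struct NMethodSketch {
      std::atomic<int> lock_count{0};
      bool marked_for_reclamation = false;
    };

    class NMethodLockerSketch {
      NMethodSketch* _nm;
     public:
      explicit NMethodLockerSketch(NMethodSketch* nm) : _nm(nm) {
        _nm->lock_count.fetch_add(1);      // runtime is touching this nmethod
      }
      ~NMethodLockerSketch() {
        _nm->lock_count.fetch_sub(1);      // safe to sweep again
      }
    };

    // Called by the sweeper: freeing is delayed while any locker is live.
    bool safe_to_free(const NMethodSketch& nm) {
      return nm.marked_for_reclamation && nm.lock_count.load() == 0;
    }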
>
> We're not attempting to be aggressive in how quickly we reclaim the
> storage from the code cache, and there are some problems with the
> current code if the rate of invalidation gets too high, but those are
> just implementation details.  There are a lot of ways our current
> implementation could be improved, but it performs adequately in most
> ways, so other than a few bug fixes it largely hasn't changed for a
> long time.  The main flaw we'll have to address in the future is
> that if the rate of invalidation and/or the number of nmethods is
> high, it performs way too much work at each safepoint.  It processes
> a quarter of the code cache each time, and it probably needs a metric
> that is tied to how much work it performs instead.
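
A minimal sketch of the pacing described above (illustrative only, not
the real NMethodSweeper): each pass walks a fixed quarter of the code
cache from a persistent cursor, regardless of how much cleaning is
actually pending.

    #include <cstddef>
    #include <vector>

    struct BlobSketch { bool is_not_entrant = false; };

    void sweep_one_quarter(std::vector<BlobSketch>& code_cache,
                           std::size_t& cursor) {
      if (code_cache.empty()) return;
      std::size_t chunk = (code_cache.size() + 3) / 4;  // a quarter, rounded up
      for (std::size_t i = 0; i < chunk; ++i) {
        BlobSketch& blob = code_cache[cursor];
        if (blob.is_not_entrant) {
          // clean inline caches that still point here; free once it's safe
        }
        cursor = (cursor + 1) % code_cache.size();
      }
    }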
>
> tom
>
>
> On Oct 5, 2008, at 1:36 PM, Yale Zhang wrote:
>
> Hi. I'm trying to build a dynamic optimization framework for LLVM
> and have been looking at HotSpot for ideas on how to patch the
> callers when a method is recompiled. I've spent over an hour looking
> at functions like new_nmethod and register_method, and I can't
> figure it out.
>
> I've been thinking about the following approaches:
>
> 1. lazy relinking - keep the old code and patch callers only when
> they refer to the old code. I guess you can do this by patching the
> old code so that, when it's entered, it finds the call site and
> patches it. Then there's the question of when to throw away the old
> fragment. You could use reference counting or garbage collection.
>
> 2. relink immediately - upon recompiling, all callers are patched.  
> This would be expensive because it would entail either going through  
> every function looking for such a call (more processing) or  
> maintaining a list of callers for each function (more storage).
>
> So, how does HotSpot do it?
>
>



