Possibility of code cache unloading

Eric Caspole eric.caspole at amd.com
Mon Oct 12 09:53:41 PDT 2009


We discovered that some app servers do not have really well-behaved class  
loader schemes, so in a long-running process with multiple redeployments  
of the same web app, the old classes and their compiled code never get  
unloaded, and the first thing to run out of space in this case is the  
code cache. When that happens the compiler shuts itself off and does not  
turn itself back on, regardless of what happens in later GC/sweep cycles.
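(For what it's worth, the fill level is already visible from the outside  
through the standard memory pool MXBeans. A minimal sketch, assuming the  
pool is reported under the name "Code Cache" as it is in current HotSpot  
builds:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class CodeCacheWatch {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // HotSpot reports the code cache as a non-heap pool named "Code Cache";
            // its max size corresponds to -XX:ReservedCodeCacheSize.
            if (pool.getName().contains("Code Cache")) {
                MemoryUsage u = pool.getUsage();
                long max = u.getMax();
                double pct = max > 0 ? 100.0 * u.getUsed() / max : 0.0;
                System.out.printf("%s: %d of %d bytes used (%.1f%%)%n",
                        pool.getName(), u.getUsed(), max, pct);
            }
        }
    }
}

That only shows occupancy, though; it does not tell an operator that the  
compiler has shut itself off, which is the real problem.)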

We have been wondering what it would take to do some kind of code cache  
unloading, so that long-running applications like this do not end up with  
the later redeployments running interpreter-only. Many users do not even  
know when this has happened, and may or may not notice a gradual,  
mysterious slowdown until they just restart the process.

In various discussions, ideas have popped up ranging from marking some  
number of existing nmethods non-entrant, so they get unloaded from the  
existing code cache, to more elaborate reallocation schemes for the whole  
code cache. If necessary, a short-term slowdown while the hot methods get  
recompiled seems better than restarting the process. Perhaps there could  
also be some JMX notification for this situation.
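A dedicated notification would need new support in the VM, but the  
existing memory pool usage-threshold notifications get part of the way  
there today. A rough sketch, again assuming the pool is named "Code Cache"  
and supports usage thresholds:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;
import javax.management.openmbean.CompositeData;

public class CodeCacheAlert {
    public static void install() {
        // Ask to be told when the code cache crosses 90% of its reserved size.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Code Cache")
                    && pool.isUsageThresholdSupported()) {
                pool.setUsageThreshold((long) (pool.getUsage().getMax() * 0.9));
            }
        }
        // The platform MemoryMXBean emits the threshold-exceeded notifications.
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener(new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                    MemoryNotificationInfo info =
                            MemoryNotificationInfo.from((CompositeData) n.getUserData());
                    System.err.println("Pool \"" + info.getPoolName()
                            + "\" crossed its usage threshold");
                }
            }
        }, null, null);
    }
}

That only says "the cache is nearly full"; a notification meaning "the  
compiler has been disabled" or "a flush just happened" would be new.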

One idea that came out of our testing was that of a current working set  
of hot methods. We saw that if enough space could be regained in the code  
cache, the program's normal operation would require recompiles of only  
the current hot method set, which is hopefully a lot smaller than the  
whole code cache. Then the application would quickly resume normal  
operation, and only after one or more further web app redeployments, as  
mentioned above, would the code cache require another flush, hopefully  
days or weeks later.

Lastly, it is probably desirable to have a fallback plan of giving up and  
shutting off the compiler if the flush cycles happen too often, for  
example because the hot method working set is too close to the whole code  
cache size; that way application performance would be no worse than it is  
today. A rough sketch of the kind of policy we mean follows.
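This is purely illustrative, not HotSpot code; all names and thresholds  
are made up:

/**
 * Sketch of the fallback policy described above: allow code cache
 * flushes, but give up and disable the compiler for good if flushes
 * start happening too frequently.
 */
class FlushPolicy {
    // Hypothetical tuning knobs, analogous in spirit to -XX product flags.
    private final long minMillisBetweenFlushes;
    private final int maxRapidFlushes;

    private long lastFlushMillis = 0;
    private int rapidFlushCount = 0;
    private boolean compilerDisabled = false;

    FlushPolicy(long minMillisBetweenFlushes, int maxRapidFlushes) {
        this.minMillisBetweenFlushes = minMillisBetweenFlushes;
        this.maxRapidFlushes = maxRapidFlushes;
    }

    /** Called when the code cache is nearly full and a flush is requested. */
    boolean shouldFlush(long nowMillis) {
        if (compilerDisabled) {
            return false;                   // already gave up
        }
        if (nowMillis - lastFlushMillis < minMillisBetweenFlushes) {
            // Flushes are coming too fast: the hot working set is probably
            // close to the whole code cache, so flushing only causes churn.
            if (++rapidFlushCount >= maxRapidFlushes) {
                compilerDisabled = true;    // fall back to today's behavior
                return false;
            }
        } else {
            rapidFlushCount = 0;            // spacing is healthy again
        }
        lastFlushMillis = nowMillis;
        return true;
    }

    boolean isCompilerDisabled() {
        return compilerDisabled;
    }
}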

We'd like to hear whether anyone else has a strong opinion or a great  
idea on this topic, and which corner cases we have not thought of. I  
remember a wisecrack about this topic at the JVM Languages Summit a few  
weeks ago, so it seems someone out there is thinking about it.

Thanks,
Eric



