RFC: deoptimization & stack bang (8032410: compiler/uncommontrap/TestStackBangRbp.java times out on Solaris-Sparc V9)

Christian Thalinger christian.thalinger at oracle.com
Thu Feb 20 20:28:10 PST 2014


On Feb 20, 2014, at 10:02 AM, Roland Westrelin <roland.westrelin at oracle.com> wrote:

> I’m looking for comments on this rather than a review. I think the test deadlocks because, when the stack bang in the deopt or uncommon trap blobs triggers an exception, we throw the exception right away even if the deoptee still has monitors locked.
> 
> One solution prototyped on sparc is:
> 
> http://cr.openjdk.java.net/~roland/8032410/webrev.00/
> 
> Rather than propagate the exception from the signal handler, we return to the deopt/uncommon trap blobs and unlock the monitors that the thread has locked in the deoptee. This would need more platform-dependent code to support other platforms.
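> 
> In other words (a toy model with invented names, not the actual webrev code), on the exceptional path the blob drops every monitor the deoptee held before throwing, instead of throwing straight out of the signal handler with the locks still held:
> 
>     #include <mutex>
>     #include <stdexcept>
>     #include <vector>
> 
>     struct Deoptee {
>       std::vector<std::mutex*> owned_monitors;  // locked in the compiled frame
>     };
> 
>     // Toy stand-in for the deopt/uncommon trap blob.
>     void deopt_blob(Deoptee& d, bool bang_faulted) {
>       if (bang_faulted) {
>         // Return into the blob instead of throwing from the signal
>         // handler: unlock the deoptee's monitors first, then throw.
>         for (std::mutex* m : d.owned_monitors) m->unlock();
>         throw std::runtime_error("StackOverflowError");
>       }
>       // Normal path: the monitors stay owned and are re-locked in the
>       // interpreter frames that replace the compiled frame.
>     }
> 
>     int main() {
>       std::mutex m;
>       m.lock();
>       Deoptee d{{&m}};
>       try { deopt_blob(d, true); } catch (...) { /* m is free here */ }
>       return 0;
>     }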
> 
> Rather than do that (and that’s where I’m looking for comments), why couldn’t the compiler compute the maximum size of the interpreter frames for the nmethod it’s compiling and generate a stack bang for that size rather than for the size of the compiled frame? Then we wouldn’t have to worry about banging the stack in the deopt/uncommon trap blobs; the bug above with the locked monitors couldn’t occur, and we wouldn’t see the bugs with stack overflows during deoptimization that we’ve had recently. Is there a reason I’m missing why this would be a bad idea?

That’s actually a good idea.  Right now I can’t see a problem with it.  We might throw StackOverflowErrors sooner than we do today but that’s only because of an optimization we do (namely compiling the code).  If we ran interpreted we would overflow the stack anyway.
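
A rough sketch of the computation, with invented names and constants (this is not actual HotSpot code): the compiler would bang for the worst case over the compiled frame size and the total size of the interpreter frames deoptimization would have to create for the method plus everything inlined into it:

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct InlinedMethod {
      size_t max_locals;  // interpreter local slots
      size_t max_stack;   // interpreter expression stack slots
    };

    // Stand-in for the platform-dependent interpreter frame size
    // computation; the constants are made up.
    size_t interpreter_frame_size(const InlinedMethod& m) {
      const size_t fixed_part = 80;  // assumed fixed frame overhead, bytes
      const size_t word_size = 8;    // 64-bit words
      return fixed_part + (m.max_locals + m.max_stack) * word_size;
    }

    // Bang for the worst case: the compiled frame, or the whole stack of
    // interpreter frames needed to deopt an inlining chain.
    size_t stack_bang_size(size_t compiled_frame_size,
                           const std::vector<InlinedMethod>& chain) {
      size_t interpreted = 0;
      for (const InlinedMethod& m : chain)
        interpreted += interpreter_frame_size(m);
      return std::max(compiled_frame_size, interpreted);
    }

    int main() {
      // Root method plus two methods inlined into it.
      std::vector<InlinedMethod> chain = {{8, 6}, {4, 3}, {2, 2}};
      printf("bang %zu bytes\n", stack_bang_size(128, chain));
      return 0;
    }

In the real compiler the maximum would presumably have to be taken over all deoptimization points in the nmethod, since different points can sit under different inlining chains.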

> 
> Roland.
> 


