Loading AOT alongside bytecode / caching JIT optimisations

Christian Thalinger cthalinger at twitter.com
Tue Oct 2 13:55:09 UTC 2018



> On Oct 2, 2018, at 4:22 AM, elias vasylenko <eliasvasylenko at gmail.com> wrote:
> 
> Hello,
> 
> As I understand it, Graal AOT has two main advertised advantages, which
> appear to be orthogonal: smaller distributions and faster startup. The
> trade-offs made to achieve these goals are: no dynamic loading of code, no
> dynamic recompilation, and limited reflection.
> 
> These trade-offs make perfect sense for certain short-lifecycle,
> minimal-dependency use cases, e.g. cloud/microservices/CLI. But for larger
> applications with more traditional long-running workloads, we may start to
> miss the capabilities we gave up.
> 
> So what if we sacrificed our ability to distribute smaller binaries and
> focused on the singular goal of faster startup? Could we eliminate the need
> for some of these trade-offs?
> 
> For example, if we stored our (partially?) AOT-compiled code *alongside*
> our original bytecode, could we theoretically load it into the standard
> GraalVM instead of SubstrateVM and enjoy fast cold starts without giving up
> all the bells and whistles?
> 
> Alternatively, I recall that some time ago there was a JEP which explored
> caching JIT optimisations between runs in HotSpot, but I don't know what
> came of it.

It’s been in OpenJDK since 9:

http://openjdk.java.net/jeps/295
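
For reference, a rough sketch of the JEP 295 workflow (assuming a JDK 9+ build that ships jaotc; the HelloWorld class and library names are purely illustrative):

    # Compile the class ahead of time into a shared library.
    # --compile-for-tiered keeps profiling hooks so the JIT can still
    # recompile hot methods at runtime.
    javac HelloWorld.java
    jaotc --compile-for-tiered --output libHelloWorld.so HelloWorld.class

    # Run with the original bytecode on the class path and the AOT
    # library loaded alongside it; -XX:+PrintAOT reports which methods
    # were picked up from the library.
    java -XX:AOTLibrary=./libHelloWorld.so -XX:+PrintAOT HelloWorld

This is essentially the "AOT alongside bytecode" scenario you describe: HotSpot loads the Graal-compiled code next to the regular class files, classes are still loaded and verified as usual, tiered JIT compilation remains available, and SubstrateVM is not involved at all.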

> Have any of these things been revisited/explored with Graal?
> 
> Thanks for the help, I hope these questions make sense!


