Loading AOT alongside bytecode / caching JIT optimisations

elias vasylenko eliasvasylenko at gmail.com
Tue Oct 2 08:22:57 UTC 2018


Hello,

As I understand it, Graal AOT has two main advertised advantages, which
appear to be orthogonal: smaller distributions and faster startup. The
trade-offs made to achieve these goals are: no dynamic loading of code, no
dynamic recompilation, and limited reflection.
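To illustrate the "limited reflection" trade-off: with SubstrateVM's closed-world assumption, reflective accesses have to be declared to the image builder ahead of time, e.g. via a reflect-config.json passed to native-image. A minimal sketch (the class name com.example.Plugin is a made-up placeholder, not from any real project):

```json
[
  {
    "name": "com.example.Plugin",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]
```

Anything not declared this way is simply absent from the image, which is exactly the kind of restriction a long-lived, plugin-heavy application would feel.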

These trade-offs make perfect sense for certain short-lifecycle,
minimal-dependency use cases, e.g. cloud functions, microservices, and CLI
tools. But for larger applications with more traditional long-running
workloads we may start to miss the features they give up.

So what if we sacrificed our ability to distribute smaller binaries and
focused on the singular goal of faster startup? Could we then eliminate the
need for some of these trade-offs?

For example, if we stored our (partially?) AOT-compiled code *alongside*
our original bytecode, could we theoretically load it into the standard
GraalVM instead of SubstrateVM and enjoy fast cold starts without giving up
all the bells and whistles?
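Incidentally, this is roughly the shape of HotSpot's experimental jaotc tool (which itself uses Graal as the code generator): bytecode is AOT-compiled into a shared library that the standard JVM loads alongside the class files, and hot methods can still be recompiled by the JIT at runtime. A sketch, assuming a JDK that ships jaotc and a HelloWorld.class in the current directory:

```shell
# Compile existing bytecode ahead of time into a native shared library
jaotc --output libHelloWorld.so HelloWorld.class

# Run on the standard JVM, loading the AOT code alongside the bytecode;
# the JIT can still recompile hot methods at higher optimisation tiers
java -XX:AOTLibrary=./libHelloWorld.so HelloWorld
```

So at least in principle the "AOT alongside bytecode" combination doesn't require giving up dynamic loading or recompilation.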

Alternatively, I recall that some time ago there was a JEP which explored
caching JIT optimisations between runs in HotSpot, but I don't know what
came of it. Have any of these ideas been revisited/explored with Graal?

Thanks for the help, I hope these questions make sense!


More information about the graal-dev mailing list