Loading AOT alongside bytecode / caching JIT optimisations

elias vasylenko eliasvasylenko at gmail.com
Tue Oct 2 15:16:48 UTC 2018


Oh well that is surprising, thank you! All the documentation I've seen has
been about the full closed-world native image AOT and I've missed all
discussion of this entirely. I suppose it is still a preview feature
without much documentation so I hope you'll forgive me for missing it ;)

Thanks for pointing me in the right direction, glad to hear this is here.

On Tue, 2 Oct 2018 at 14:55 Christian Thalinger <cthalinger at twitter.com>
wrote:

> On Oct 2, 2018, at 4:22 AM, elias vasylenko <eliasvasylenko at gmail.com>
> wrote:
>
> Hello,
>
> As I understand it, Graal AOT has two main advertised advantages, which
> appear to be orthogonal: smaller distributions and faster startup. The
> trade-offs made to achieve these goals are: no dynamic loading of code, no
> dynamic recompilation, and limited reflection.
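>
> (To illustrate the reflection limitation with a purely hypothetical
> example: code like the following runs unmodified on HotSpot, but under a
> closed-world native image the reflective target would have to be declared
> in a reflection configuration at build time, because the class name is
> only known at run time.)
>
>     import java.lang.reflect.Method;
>
>     public class Plugin {
>         public static void main(String[] args) throws Exception {
>             // The class to load is chosen at run time, so a closed-world
>             // analysis cannot discover it ahead of time.
>             Class<?> c = Class.forName(args[0]);
>             Method m = c.getMethod("run");
>             m.invoke(c.getDeclaredConstructor().newInstance());
>         }
>     }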
>
> These trade-offs make perfect sense for certain short-lifecycle,
> minimal-dependency use cases, e.g. cloud/microservices/CLI. But for larger
> applications with more traditional long-running workloads we may start to
> miss the features we've given up.
>
> So what if we sacrificed our ability to distribute smaller binaries and
> focused on the singular goal of faster startup? Could we eliminate the need
> for some of these trade-offs?
>
> For example, if we stored our (partially?) AOT-compiled code *alongside*
> our original bytecode, could we theoretically load it into the standard
> GraalVM instead of SubstrateVM and enjoy fast cold starts without giving up
> all the bells and whistles?
>
> Alternatively, I recall that some time ago there was a JEP which explored
> caching JIT optimisations between runs in HotSpot, but I don't know what
> came of it.
>
>
> It’s been in OpenJDK since 9:
>
> http://openjdk.java.net/jeps/295
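>
> Roughly, the workflow from that JEP looks like this (the class and library
> names are just an illustrative sketch):
>
>     // HelloWorld.java
>     public class HelloWorld {
>         public static void main(String[] args) {
>             System.out.println("Hello, AOT");
>         }
>     }
>
>     // Compile, AOT-compile, then run with the AOT library loaded
>     // alongside the regular bytecode:
>     //   javac HelloWorld.java
>     //   jaotc --output libHelloWorld.so HelloWorld.class
>     //   java -XX:AOTLibrary=./libHelloWorld.so HelloWorld
>
> The AOT-compiled library is loaded by a normal JVM next to the original
> class files, and the usual tiered JIT can still recompile hot methods.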
>
> Have any of these things been revisited/explored with Graal?
>
> Thanks for the help, I hope these questions make sense!
>
>
>

