[External] : Re: Late cleanup of stack objects
Ron Pressler
ron.pressler at oracle.com
Fri Nov 11 15:42:01 UTC 2022
(P.S.
Thinking about it, I was wrong about the extreme case of zero heap vs. unbounded heap, but it can still be a significant difference)
> On 11 Nov 2022, at 14:56, Ron Pressler <ron.pressler at oracle.com> wrote:
>
> But that’s just how HotSpot works, and virtual threads didn’t change that. There are other memory-related optimisations in the JIT, too. For example, every `new Foo`, when running in the interpreter, will allocate memory in the heap. But when compiled, the object could be allocated on the stack (scalar-replaced) or even not at all (which means there can be cases where JITted code consumes zero heap memory, while the same code, when interpreted, consumes an unbounded amount, i.e. will OOME at any heap size). So the memory profile of Java applications running on OpenJDK can be very different depending on JIT behaviour. This might become even more pronounced with Valhalla.
>
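(A minimal sketch of the kind of allocation the JIT can remove; the Point class and the demo method below are hypothetical, not from the original mail. The object never escapes the method, so after inlining the constructor the compiler can scalar-replace it, whereas the interpreter allocates it on the heap on every iteration.)

    // Hypothetical example: a small value-like class whose instances
    // never escape the method that creates them.
    final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    class ScalarReplacementDemo {
        static long sumOfSquares(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                // Interpreted: every iteration allocates a Point on the heap.
                // JIT-compiled: escape analysis sees the object never escapes,
                // so it can be scalar-replaced and never allocated at all.
                Point p = new Point(i, i + 1);
                sum += (long) p.x * p.y;
            }
            return sum;
        }
    }
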
> In this case, the behaviour has nothing to do with virtual threads or with “capture.” The optimising compiler simply analyses the method, sees that the bigBuffer variable is unused after the call to slowIO, and optimises the local away (and with the local gone, the GC will not find the buffer, so the buffer will be collected).
>
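(A sketch of the pattern under discussion, reconstructed from the names mentioned above; the fill helper, the buffer size, and the sleeping slowIO are placeholders.)

    class LateCleanupDemo {
        // Hypothetical stand-ins for the real work:
        static void fill(byte[] buf) { java.util.Arrays.fill(buf, (byte) 1); }
        static void slowIO() throws InterruptedException { Thread.sleep(10_000); }

        static void work() throws InterruptedException {
            byte[] bigBuffer = new byte[64 * 1024 * 1024]; // large temporary
            fill(bigBuffer);                               // last use of bigBuffer
            // Interpreted (or when debugging), the local keeps the array reachable
            // for the lifetime of the frame. Once JIT-compiled, the compiler sees
            // that bigBuffer is unused after this point, drops the local, and the
            // GC can reclaim the array while slowIO() is still blocked.
            slowIO();
        }
    }
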
> If the interpreter were to perform this or similar optimisations by analysing the method before interpreting it, it wouldn’t be doing its job as the mechanism that starts execution quickly, or as the mechanism used when debugging. Even when some optimisations are possible in the interpreter, it’s rarely worth investing much time in them, because only a small fraction of the program’s overall execution happens in the interpreter.
>
> It does, however, mean that microbenchmarks need to be done carefully.
>
> — Ron
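
(One common way to keep such a microbenchmark honest is to make the result escape, for example via JMH's Blackhole. A minimal sketch, assuming JMH is on the classpath; the benchmark class and method names are hypothetical.)

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.infra.Blackhole;

    public class AllocationBench {

        @Benchmark
        public void deadAllocation() {
            // The JIT can prove this object is unused and eliminate the
            // allocation entirely, so this measures nothing useful.
            byte[] buf = new byte[1024];
        }

        @Benchmark
        public void liveAllocation(Blackhole bh) {
            // Consuming the result keeps the allocation live, which is
            // usually what the benchmark actually intends to measure.
            byte[] buf = new byte[1024];
            bh.consume(buf);
        }
    }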