[PATCH] Reduce Chance Of Mistakenly Early Backing Memory Cleanup

Paul Sandoz paul.sandoz at oracle.com
Wed Feb 7 16:31:38 UTC 2018



> On Feb 7, 2018, at 7:22 AM, Vladimir Ivanov <vladimir.x.ivanov at oracle.com> wrote:
> 
> Peter,
> 
>>> Objects.requireNonNull() shows zero overhead here.
>>> 
>>> I guess the main question is whether the Objects.requireNonNull(this) behavior in the former test is a result of chance and current HotSpot behavior, or whether it is somehow guaranteed by the spec.
>> I haven't looked into what actually happens in JIT-compilers on your benchmark, but I'm surprised it works at all.
> 
> So, here's why Objects.requireNonNull() keeps the receiver alive in your test case.
> 
> JIT-compilers in HotSpot aggressively prune dead locals [1], but they do that based on method bytecode analysis [2] (and not on the optimized IR). So any usage of a local extends its live range, even if that usage is eliminated in generated code. It means an oop in that local lives past its last usage in generated code, and every safepoint in generated code up to the last bytecode-level usage will enumerate the local it is held in.
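
For readers following along, here is a minimal sketch of the pattern under discussion. The names below (NativeBuffer, peek, free) are purely illustrative and not anything in the JDK; the point is only that the trailing Objects.requireNonNull(this) is a bytecode-level use of 'this', which is what the liveness analysis described above keys off:

    import java.lang.ref.Cleaner;
    import java.util.Objects;

    // Illustrative only: a holder whose cleanup must not run while a read
    // of its backing resource is still in flight.
    class NativeBuffer {
        private static final Cleaner CLEANER = Cleaner.create();

        private final long address;          // stands in for a native allocation

        NativeBuffer(long address) {
            this.address = address;
            CLEANER.register(this, () -> free(address));
        }

        int read() {
            long a = address;                // last real use of 'this' in generated code
            int v = peek(a);                 // cleanup must not run before this completes
            Objects.requireNonNull(this);    // bytecode-level use: extends the live range of 'this'
            return v;
        }

        private static int peek(long a) { return 0; }       // placeholder for a native read
        private static void free(long a) { /* placeholder for a native free */ }
    }
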
> 
> If GC-only safepoints were supported, then JITs could still prune unused locals from oop maps, but HotSpot doesn't have them: all safepoints in generated code keep full JVM state, so it's always possible to deoptimize at any one of them (and then run into code which is eliminated in generated code).
> 
> If there are no safepoints till the end of the method, then nothing will keep the object alive. But there's no way for GC to collect it, since GCs rely on safepoints to mark thread stack. (That's why I mentioned GC-only safepoints earlier.)
> 
> As a conclusion: removing @DontInline on Reference.reachabilityFence() should eliminate most of the overhead (no call anymore, though an additional spill may be needed) and still keep it working. It's not guaranteed by the JVMS, but it should at least work on the HotSpot JVM (in its current state).
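
For comparison, the documented reachabilityFence idiom that this change would make cheaper looks like the following (GuardedRead and readGuarded are illustrative names, reusing the NativeBuffer sketch above):

    import java.lang.ref.Reference;

    class GuardedRead {
        static int readGuarded(NativeBuffer buffer) {
            try {
                return buffer.read();
            } finally {
                // Keeps 'buffer' strongly reachable until read() has returned,
                // regardless of what the JIT does with its last real use.
                Reference.reachabilityFence(buffer);
            }
        }
    }
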
> 
> So, nice discovery, Peter! :-)

Yes, Peter, thank you for pushing on this. We would of course need to be careful and revisit if anything changes.

Vladimir, just to be sure: I presume your analysis applies to both C1 and C2? What about compilers such as Graal?

Ben, I still think additional performance analysis is valuable (such performance tests are also useful for another reason: consolidating unsafe accesses using the double addressing mode, thereby removing another difference between heap and direct buffers; see the sketch below).
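
To make the double-addressing point concrete: Unsafe.getInt(Object base, long offset) takes a base object plus offset for heap buffers and null plus an absolute address for direct buffers, so one access routine can serve both. A rough sketch only, not the proposed patch; DualAccess and its methods are illustrative names:

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    class DualAccess {
        private static final Unsafe U;
        static {
            try {
                Field f = Unsafe.class.getDeclaredField("theUnsafe");
                f.setAccessible(true);
                U = (Unsafe) f.get(null);
            } catch (ReflectiveOperationException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        // Single access routine: heap buffers pass the backing array as 'base',
        // direct buffers pass null and fold the native address into 'offset'.
        static int getInt(Object base, long offset) {
            return U.getInt(base, offset);
        }

        static int fromHeap(byte[] array, int index) {
            return getInt(array, Unsafe.ARRAY_BYTE_BASE_OFFSET + (long) index * Integer.BYTES);
        }

        static int fromDirect(long address, int index) {
            return getInt(null, address + (long) index * Integer.BYTES);
        }
    }
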

Paul.

> Want to file an RFE & fix it?
> 
> Best regards,
> Vladimir Ivanov
> 
> [1] http://hg.openjdk.java.net/jdk/hs/file/45b6aae769cc/src/hotspot/share/opto/graphKit.cpp#l736
> 
> [2] http://hg.openjdk.java.net/jdk/hs/file/45b6aae769cc/src/hotspot/share/compiler/methodLiveness.cpp#l37
> 
>> An explicit null check on the receiver is an easy target for elimination and should be effectively a no-op in generated code. (And that's what you observe with the benchmark!) Once the check is gone, nothing keeps the receiver alive anymore (past the last usage).
>> So, I'd say such behavior is a matter of chance in your case and can't be relied on in general. And it's definitely not something guaranteed by the JVMS.
>> Best regards,
>> Vladimir Ivanov


