MaxBCEAEstimateSize and inlining clarification
Vitaly Davidovich
vitalyd at gmail.com
Thu Sep 8 13:07:17 UTC 2016
Hi Vladimir,
On Thu, Sep 8, 2016 at 8:38 AM, Vladimir Ivanov <vladimir.x.ivanov at oracle.com> wrote:
> Vitaly,
>
>> The default max size is 150, nearly half the size of FreqInlineSize. Is
>> EA eligibility performed on a method before it's inlined then? I can't
>> imagine that 150 is the limit after inlining. If it's before inlining,
>> how exactly does this work after the method is inlined since the inlined
>> call graph may have quite a bit of code and thus EA may take a while? My
>> understanding is EA is run after inlining to maximize its effectiveness.
>> Or is the MaxBCEAEstimateLevel used as pseudo inlining for the analysis?
>>
>
> Yes, it's sort of "pseudo inlining". EA happens after inlining is over
> (both parse & post-parse phases). For calls with known target, EA performs
> static analysis to compute escape info for arguments. It happens for
> methods smaller than MaxBCEAEstimateSize. MaxBCEAEstimateLevel limits the
> inlining depth during analysis.
By "known target", does that take profiling into account, or does the
target have to be statically known? But basically, it sounds like this is
what Roland said -- any method not inlined for whatever reason (not hot
enough, too big, etc.) is also inspected for EA purposes, but subject to
the MaxBCEAEstimateSize and MaxBCEAEstimateLevel limits.
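Just so I'm picturing the same shape, here's a toy sketch (hypothetical
code, not from the real application) of the case I have in mind: the
callee isn't inlined, but its target is statically known and its bytecode
is under MaxBCEAEstimateSize, so the analysis should still be able to
prove the argument doesn't escape and scalar-replace the allocation:

// Hypothetical sketch -- class and method names are made up.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

final class Distances {
    // Small callee with a statically known target; suppose it is not
    // inlined (e.g. not hot enough), so escape info for 'p' has to come
    // from the bytecode-level estimate instead.
    static double lengthSquared(Point p) {
        return (double) p.x * p.x + (double) p.y * p.y;
    }

    static double sum(int n) {
        double total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1);  // candidate for elimination
            total += lengthSquared(p);      // 'p' must be proven non-escaping
        }
        return total;
    }
}

Is that roughly the scenario where MaxBCEAEstimateSize/Level come into
play?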
>
>
>> I'm seeing some code that iterates over a ConcurrentHashMap's entrySet
>> that allocates tens of GB of CHM$MapEntry objects even though they don't
>> escape. I'm also seeing some other places where EA ought to be kicking
>> in but isn't. So I'd like to understand the nuances of it a bit better.
>>
>
> I wish -XX:+PrintEscapeAnalysis & -XX:+PrintEliminateAllocations were
> available in product binaries, but they aren't, unfortunately.
Yes, that would be great! Is there a good reason they couldn't be turned
into product flags for, say, Java 9?
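For concreteness, the CHM case I mentioned above boils down to roughly
this shape (simplified sketch; the real code and identifiers are
different):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch, not the actual application code.
final class Totals {
    static long sumValues(ConcurrentHashMap<String, Long> map) {
        long sum = 0;
        // Each call to the iterator's next() materializes a CHM$MapEntry;
        // none of the entries escape this method, yet the allocations
        // don't appear to be eliminated.
        for (Map.Entry<String, Long> e : map.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }
}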
> You can build an "optimized" JVM though. It's close to product binaries
> w.r.t. speed, but it also provides most of the diagnostic logic (e.g.,
> all nonproduct flags are available).
> If autoboxing is involved, you can try -XX:+AggressiveUnboxing.
>
So I see this is behind UnlockExperimentalVMOptions (I'm on 8u92). Some of
the instances I'm seeing do, indeed, involve autoboxing. Is this feature
stable? What additional optimizations does it enable? Or, put another way,
why is it experimental? :)
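In case it helps, one of the autoboxing cases boils down to something like
this toy sketch (not the real code):

// Simplified sketch, not the actual application code.
final class BoxedSum {
    static long sum(int n) {
        Long total = 0L;                 // boxed accumulator
        for (int i = 0; i < n; i++) {
            // Each iteration unboxes 'total', adds, and re-boxes the
            // result; the intermediate Long objects never escape.
            total += i;
        }
        return total;                    // unboxed once on return
    }
}

My hope is that AggressiveUnboxing (or plain EA) would let C2 keep the
accumulator as a primitive here, but I'd like to understand what the
experimental status implies before enabling it in production.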
>
> Best regards,
> Vladimir Ivanov
>
Thanks Vladimir, very helpful.