RFR: 8234863: Increase default value of MaxInlineLevel
John Rose
john.r.rose at oracle.com
Thu Dec 12 22:20:57 UTC 2019
On Dec 12, 2019, at 6:41 AM, Vladimir Ivanov <vladimir.x.ivanov at oracle.com> wrote:
>
> On the heuristic itself, it looks like it can be safely generalized to any methods which just call another method irrespective of how many arguments they pass (and in what order).
Yes, I like that heuristic; it boils down to a method body
containing just one non-trivial method invocation, plus
“other cheap stuff”.
I’d want to consider making sure that the “other cheap stuff”
which is disregarded includes method invocations that we know
are commonly used in adapters (including those of lambda forms),
such as (a small illustration follows the list):
- type-adjusting calls: Class.cast, W.valueOf, W.XValue, etc. (incl. internals)
- maybe queries such as Class.isInstance
- resolved non-invocations like getfield, getstatic, ldc
- other stuff that we know “bottoms out” quickly
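
For illustration, here is a hand-written stand-in (not an actual lambda
form, and not HotSpot code) showing the shape of body I have in mind:
one non-trivial invocation plus only cheap adaptation work around it.

    import java.util.function.Function;

    final class Adapter {
        static final Function<Object, Object> TARGET = x -> x;  // stand-in target

        // One non-trivial invocation (TARGET.apply), surrounded only by
        // "cheap stuff": a getstatic of TARGET and two Class.cast calls.
        static Integer adapt(Object arg) {
            Object result = TARGET.apply(Integer.class.cast(arg));
            return Integer.class.cast(result);
        }
    }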
The point about “bottoming out” is that a tree with one main
branch and no side branches, or shallow twiggy side branches
only, does not expand more than linearly into IR after inlining.
The point about focusing on type adaptation stuff is that such
stuff tends to fold away when stacked on top of itself. There are
only so many type adaptations you can perform repeatedly on
the same value before you reach a fixed point, and the JIT is good
at computing such fixed points. The result is that such single-branch
trees may be expected to fold into sub-linear IR, compared to the
size of the original branch, and that’s the win we are after.
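
As a concrete (again hypothetical) illustration of the folding, stack a
few adaptations on one value; after inlining, the whole chain may be
expected to collapse back to the original value:

    // Stacked type adaptations on the same int; once inlined, the JIT can
    // typically fold the box / cast / unbox chain down to the identity.
    static int adaptTwice(int x) {
        Integer boxed = Integer.valueOf(x);             // box
        Number widened = Number.class.cast(boxed);      // type-adjusting cast
        return ((Integer) widened).intValue();          // unbox: back to x
    }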
I suppose the “cheap stuff” calls might be charged to a side counter,
which, if it gets huge (>100), stops the inline from going deeper.
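
To make the accounting concrete, here is a rough sketch of that idea;
the names and structure are hypothetical, not actual HotSpot code.

    // Hypothetical accounting only: cheap calls are charged to a side
    // counter instead of consuming ordinary inline depth.
    final class InlineBudget {
        static final int CHEAP_CALL_LIMIT = 100;  // the "huge" threshold above
        int cheapCalls;                           // side counter for cheap stuff
        int depth;                                // ordinary inline depth

        // Returns whether this call site may be inlined one level deeper.
        boolean allowDeeper(boolean calleeIsCheap, int maxInlineLevel) {
            if (calleeIsCheap) {                  // Class.cast, valueOf, etc.
                cheapCalls++;
                return cheapCalls <= CHEAP_CALL_LIMIT;
            }
            depth++;
            return depth <= maxInlineLevel;
        }
    }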
Let the prototyping begin.
My $0.02.
— John
P.S. I’m glad we are discussing this. For the record, we have historically
approached changes to the inlining heuristics with great caution, because
beneficial changes, even if small, can cause rare regressions. In an
ecosystem as large as ours, even rare regressions can be a significant
cost. But now, as we are getting used to the new discipline of updating
on a regular 6-month cadence, I think we have a much healthier process
in which to detect and fix regressions that may stem from changed
heuristics.
P.P.S. Do we have a greater risk of regression because there is something
broken about our heuristics? Well, there is an art to building heuristics
that stay stable when your system’s dynamics include (as ours do) a nearly
infinite set of non-linear feedback loops. Our heuristics, being simple,
are reasonably stable. Any new heuristics, like the ones I proposed above,
should be examined for stability as well as effectiveness. In the end, a
perfect heuristic would have to closely emulate the future dynamics of
the system being optimized, and that (as everyone knows) is almost certainly
no cheaper than just running the system un-optimized. In general, there
will always be workloads that defeat any particular heuristic, even if that
heuristic produces good (or null) results 99% of the time.