RFR(S): 8239072: subtype check macro node causes node budget to be exhausted
Vladimir Ivanov
vladimir.x.ivanov at oracle.com
Wed Mar 11 11:18:14 UTC 2020
> At that point, I felt this was all becoming too complicated. That's when
> I came up with the current patch which is simple, robust, grows live
> nodes count incrementally and has no drawback that I can think of. So I
> think it's the right fix for this particular issue.
>
> I agree it only solves that particular issue where a lot of subtype
> checks optimize down to a simple pattern and that there's a bigger issue
> likely made worse by the addition of more macro nodes. But it's also
> true that when c2 operates close to the live nodes limit, all bets are
> off, the entire compilation process is not robust anymore and you'd
> better not be unlucky.
>
> I don't feel comfortable adjusting heuristics if there's no way to
> evaluate the result. And with that patch in, AFAICT, we don't have test
> cases that trigger the problem.
Yes, I agree that adjusting heuristics is not an option right now.
What I have in mind for now, and propose to consider as an alternative, is
along the following lines [1].
I'd like to point out that the problematic check was adjusted
(relatively) recently (in JDK 9 [2]) and the per-node estimate was bumped
more than 4x (from 75 to 300), which is quite a drastic change. So I suspect
what we see for subtype checks is just the first sign, and there will be
subsequent reports unless we fix it.
Best regards,
Vladimir Ivanov
[1]
diff --git a/src/hotspot/share/opto/macro.cpp
b/src/hotspot/share/opto/macro.cpp
--- a/src/hotspot/share/opto/macro.cpp
+++ b/src/hotspot/share/opto/macro.cpp
@@ -2658,12 +2658,6 @@
// Last attempt to eliminate macro nodes.
eliminate_macro_nodes();
- // Make sure expansion will not cause node limit to be exceeded.
- // Worst case is a macro node gets expanded into about 200 nodes.
- // Allow 50% more for optimization.
- if (C->check_node_count(C->macro_count() * 300, "out of nodes before macro expansion" ) )
- return true;
-
// Eliminate Opaque and LoopLimit nodes. Do it after all loop optimizations.
bool progress = true;
while (progress) {
@@ -2719,6 +2713,14 @@
}
}
+ // Make sure expansion will not cause node limit to be exceeded.
+ // Worst case is a macro node gets expanded into about 200 nodes.
+ // Allow 50% more for optimization.
+ int expansion_budget_estimate = (C->max_node_limit() - C->live_nodes()) / 300;
+ if (expansion_budget_estimate < 1) {
+ return true; // no room for expansion left
+ }
+
// expand arraycopy "macro" nodes first
// For ReduceBulkZeroing, we must first process all arraycopy nodes
// before the allocate nodes are expanded.
@@ -2750,6 +2752,19 @@
break;
}
if (C->failing()) return true;
+
+ if (--expansion_budget_estimate < 1) {
+ _igvn.set_delay_transform(false);
+ _igvn.optimize();
+ if (C->failing()) return true;
+
+ // Recompute expansion budget after IGVN is over.
+ expansion_budget_estimate = (C->max_node_limit() - C->live_nodes()) / 300;
+ if (expansion_budget_estimate < 1) {
+ return true; // no room for expansion left
+ }
+ _igvn.set_delay_transform(true);
+ }
}
// All nodes except Allocate nodes are expanded now. There could be
@@ -2784,6 +2799,19 @@
}
assert(C->macro_count() < macro_count, "must have deleted a node from macro list");
if (C->failing()) return true;
+
+ if (--expansion_budget_estimate < 1) {
+ _igvn.set_delay_transform(false);
+ _igvn.optimize();
+ if (C->failing()) return true;
+
+ // Recompute expansion budget after IGVN is over.
+ expansion_budget_estimate = (C->max_node_limit() - C->live_nodes()) / 300;
+ if (expansion_budget_estimate < 1) {
+ return true; // no room for expansion left
+ }
+ _igvn.set_delay_transform(true);
+ }
}
_igvn.set_delay_transform(false);
[2] https://bugs.openjdk.java.net/browse/JDK-8146478