RFR (XS) 8239868: Shenandoah: ditch C2 node limit adjustments
Roman Kennke
rkennke at redhat.com
Mon Feb 24 16:22:50 UTC 2020
> RFE:
> https://bugs.openjdk.java.net/browse/JDK-8239868
>
> We have a block in the Shenandoah arguments code that adjusts MaxNodeLimit and friends (it
> predates the inclusion of Shenandoah into mainline):
> https://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-August/006983.html
>
> At the time, it was prompted by the observation that methods with lots of barriers really needed
> this limit bumped. Today, with the simplified LRB scheme, simpler LRBs due to SFX, etc., we no
> longer need it.
>
> The change above used ShenandoahCompileCheck, which made it into upstream code as the generic
> AbortVMOnCompilationFailure. With that, I was able to verify that dropping the block does not
> yield compilation failures due to an exceeded node budget on hotspot_gc_shenandoah, specjvm2008,
> or specjbb2015. Performance numbers are also unaffected (as expected).
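>
> For the record, a minimal way to reproduce that check by hand (illustrative command line; the
> actual test harness wires this up differently) is to make compilation bailouts fatal, so that a
> node-budget bailout aborts the VM instead of quietly leaving the method uncompiled:
>
>   java -XX:+UseShenandoahGC \
>        -XX:+UnlockDiagnosticVMOptions -XX:+AbortVMOnCompilationFailure \
>        <workload>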
>
> Therefore, the adjustment can be removed:
>
> diff -r 5c5dcd036a76 src/hotspot/share/gc/shenandoah/shenandoahArguments.cpp
> --- a/src/hotspot/share/gc/shenandoah/shenandoahArguments.cpp Mon Feb 24 11:01:51 2020 +0100
> +++ b/src/hotspot/share/gc/shenandoah/shenandoahArguments.cpp Mon Feb 24 17:09:58 2020 +0100
> @@ -193,13 +193,4 @@
> }
>
> - // Shenandoah needs more C2 nodes to compile some methods with lots of barriers.
> - // NodeLimitFudgeFactor needs to stay the same relative to MaxNodeLimit.
> -#ifdef COMPILER2
> - if (FLAG_IS_DEFAULT(MaxNodeLimit)) {
> - FLAG_SET_DEFAULT(MaxNodeLimit, MaxNodeLimit * 3);
> - FLAG_SET_DEFAULT(NodeLimitFudgeFactor, NodeLimitFudgeFactor * 3);
> - }
> -#endif
> -
> // Make sure safepoint deadlocks are failing predictably. This sets up VM to report
> // fatal error after 10 seconds of wait for safepoint syncronization (not the VM
>
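> Should some workload still blow the node budget after this, the old headroom can be restored
> manually from the command line; a sketch, assuming the stock defaults of MaxNodeLimit=80000 and
> NodeLimitFudgeFactor=2000 (3x gives the values the removed block used to set):
>
>   java -XX:+UseShenandoahGC -XX:MaxNodeLimit=240000 -XX:NodeLimitFudgeFactor=6000 <workload>
>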
> Testing: hotspot_gc_shenandoah; benchmarks; +AbortVMOnCompilationFailure testing
Ok.
Thank you!
Roman