RFR: Enable C2 loop strip mining by default

Per Liden per.liden at oracle.com
Mon Dec 18 14:39:09 UTC 2017


Hi,

In ZGC we're following what G1 is doing here. G1 used to do what 
Shenandoah does, but Roland changed[1] that. As far as I understand, the 
motivation was that using -XX:+UseCountedLoopSafepoints by itself should 
actually disable strip mining, and instead provide the same behavior as 
we had before strip mining existed.

[1] http://hg.openjdk.java.net/jdk/hs/rev/4d28288c9f9e

cheers,
Per

On 2017-12-16 01:47, Krystal Mok wrote:
> (Not a Reviewer) but Aleksey's version for Shenandoah makes more sense 
> to me.
> 
> Thanks,
> Kris
> 
> On Fri, Dec 15, 2017 at 1:24 PM, Aleksey Shipilev <shade at redhat.com> wrote:
> 
>     On 12/15/2017 01:38 PM, Per Liden wrote:
>     > Patch to enable loop strip mining by default when using ZGC. I also noticed that the file had an
>     > incorrect header, so I fixed that too.
>     >
>     > http://cr.openjdk.java.net/~pliden/zgc/c2_loop_strip_mining_by_default/webrev.0/
> 
>     Yup. It worked very well for Shenandoah.
> 
>     But, the relevant code block from Shenandoah code is:
> 
>     #ifdef COMPILER2
>       // Shenandoah cares more about pause times, rather than raw throughput.
>       if (FLAG_IS_DEFAULT(UseCountedLoopSafepoints)) {
>         FLAG_SET_DEFAULT(UseCountedLoopSafepoints, true);
>       }
>       if (UseCountedLoopSafepoints && FLAG_IS_DEFAULT(LoopStripMiningIter)) {
>         FLAG_SET_DEFAULT(LoopStripMiningIter, 1000);
>       }
>     #ifdef ASSERT
> 
>     ...which is slightly different from what you are suggesting for ZGC.
>     Don't you want to enable LoopStripMiningIter when the user explicitly
>     sets -XX:+UseCountedLoopSafepoints (which, I guess, covers most users
>     concerned with TTSP-related latency)?
> 
>     Thanks,
>     -Aleksey
> 


More information about the zgc-dev mailing list