RFR(L): 8186027: C2: loop strip mining
Roland Westrelin
rwestrel at redhat.com
Thu Nov 23 14:18:28 UTC 2017
Hi Vladimir,
> I am running testing again. But if this repeats, the presence of the
> Sparse.small regression suggests to me that maybe we should keep this
> optimization off by default - keep UseCountedLoopSafepoints false.
>
> We may switch it on later with additional changes which address regressions.
>
> What do you think?
If the inner loop runs for a small number of iterations and the compiler
can't prove that statically, I don't see a way to remove the overhead of
loop strip mining entirely. So I'm not optimistic the regression can be
fixed.
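To make the overhead concrete, here is a rough sketch, in plain C-like
code rather than C2's IR, of what the transformation does to a counted
loop; the strip length of 1000 is an arbitrary value picked for
illustration:

  // Today, with -XX:+UseCountedLoopSafepoints: a safepoint poll is
  // emitted on every iteration of the counted loop.
  int sum_every_poll(const int* a, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
      sum += a[i];              // + a safepoint poll each iteration
    }
    return sum;
  }

  // With loop strip mining, conceptually: the inner loop runs a strip
  // of iterations without polls; the outer loop polls once per strip.
  int sum_strip_mined(const int* a, int n) {
    const int strip = 1000;     // illustrative strip length only
    int sum = 0;
    for (int i = 0; i < n; ) {
      int limit = (n - i < strip) ? n : i + strip;
      for (; i < limit; i++) {  // inner loop: no safepoint polls
        sum += a[i];
      }
      // safepoint poll here, once per strip
    }
    return sum;
  }

When n is small, the outer loop's extra limit computation and control
flow are paid on every execution but never amortized, which is
presumably what a benchmark like Sparse.small runs into.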
If loop strip mining defaults to false, would there be any regular
testing on your side?
It seems to me that it would make sense to enable loop strip mining
depending on what GC is used: it makes little sense for Parallel GC, but
we'll want it enabled for Shenandoah, for instance. Where does G1 fit? I
can't really say and I don't have a strong opinion. But as I understand
it, G1 was made the default under the assumption that users would be ok
trading throughput for better latency. Maybe that same reasoning applies
to loop strip mining?
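Concretely, and just as a sketch of the idea rather than a worked-out
patch, the ergonomics could derive the default from the selected
collector while still letting users override it explicitly on the
command line:

  // Hypothetical ergonomics sketch in the style of arguments.cpp;
  // not actual HotSpot code.
  if (FLAG_IS_DEFAULT(UseCountedLoopSafepoints)) {
    // Latency-oriented collectors opt in; throughput-oriented ones
    // (e.g. Parallel) keep it off.
    bool latency_oriented = UseG1GC; // and Shenandoah, where available
    FLAG_SET_DEFAULT(UseCountedLoopSafepoints, latency_oriented);
  }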
Roland.