C2: memory edge of write barriers shouldn't be unconditionally cleared after optimizations
Roman Kennke
rkennke at redhat.com
Tue Sep 13 13:40:28 UTC 2016
Good catch!
Ok!
Roman
On Tuesday, 13.09.2016 at 14:31 +0200, Roland Westrelin wrote:
> http://cr.openjdk.java.net/~roland/shenandoah/wb-mem-edge/webrev.00/
>
> Consider this method:
>
> static int m(A a, A b, A c) {
>     a.f = 0x42;
>     b.f = 42;
>     return c.f;
> }
>
> Before final_graph_reshape, the memory graph is:
>
> WB(a)<-WB(b)<-RB(c)
>
> after final_graph_reshape, it's:
>
> WB(b)<-RB(c)
> WB(a)
>
> So valid schedulings of the barriers include WB(b), RB(c), WB(a),
> and calling m(a, b, a) could then cause incorrect execution. The
> compiler doesn't currently pick that faulty schedule, but it's
> dangerous to assume it never does or will. We can actually trick
> the compiler by leveraging implicit null checks:
>
> static int m(A a, A b, A c) {
>     if (b == null) {
>     }
>     if (c == null) {
>     }
>     if (a == null) {
>     }
>     a.f = 0x42;
>     b.f = 42;
>     return c.f;
> }
>
> Never-taken branches act as null checks. When the compiler tries to
> find implicit null checks, it doesn't move the null check down to
> where a memory operation is; it hoists a memory operation up to
> right before the null check. So in the case above the barriers
> could be executed in the order WB(b), RB(c), WB(a). (Actually
> compiling that method doesn't produce that schedule or broken code,
> but that is pure luck: the compiler just happens to process the
> null checks in an order that doesn't trigger bad code.)
>
> Having a loop optimization that moves loop-independent write barriers
> out of loops should limit the performance impact of this change.
>
> Roland.
>
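The hazard Roland describes can be sketched as a small stand-alone simulation of forwarding-pointer barriers. Everything here (the `Obj` class, `writeBarrier`, `readBarrier`, the `evacuating` flag) is a hypothetical model for illustration, not actual Shenandoah barrier code: the write barrier evacuates an object before a store, and the read barrier follows the forwarding pointer. In the faulty schedule WB(b), RB(c), WB(a), the read resolves `c` while `a` is still in from-space, so with `c == a` the load misses the store:

```java
class BarrierOrderDemo {
    // Hypothetical heap object with a forwarding pointer (Brooks-style).
    static class Obj {
        int f;
        Obj forwardee; // non-null once copied to to-space
        Obj(int f) { this.f = f; }
    }

    static boolean evacuating = true; // simulated concurrent evacuation phase

    // Write barrier: evacuate if needed, then return the to-space copy.
    static Obj writeBarrier(Obj o) {
        if (evacuating && o.forwardee == null) {
            o.forwardee = new Obj(o.f); // copy to to-space
        }
        return o.forwardee != null ? o.forwardee : o;
    }

    // Read barrier: follow the forwarding pointer if one exists.
    static Obj readBarrier(Obj o) {
        return o.forwardee != null ? o.forwardee : o;
    }

    // Correct schedule: WB(a), store; WB(b), store; RB(c), load.
    static int correct(Obj a, Obj b, Obj c) {
        writeBarrier(a).f = 0x42;
        writeBarrier(b).f = 42;
        return readBarrier(c).f;
    }

    // Faulty schedule WB(b), RB(c), WB(a): c is resolved before a is
    // evacuated, so the store to a's to-space copy is invisible to tc.
    static int faulty(Obj a, Obj b, Obj c) {
        Obj tb = writeBarrier(b);
        Obj tc = readBarrier(c); // still points into from-space
        Obj ta = writeBarrier(a);
        ta.f = 0x42;
        tb.f = 42;
        return tc.f;
    }

    public static void main(String[] args) {
        Obj a1 = new Obj(0), b1 = new Obj(0);
        System.out.println(correct(a1, b1, a1)); // prints 66 (0x42)
        Obj a2 = new Obj(0), b2 = new Obj(0);
        System.out.println(faulty(a2, b2, a2));  // prints 0: stale read
    }
}
```

With `m(a, b, a)`, the correct schedule returns the freshly stored 0x42, while the faulty one returns the stale from-space value, which is why the memory edge between the barriers must survive optimizations.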