RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles
Andrew Haley
aph at redhat.com
Mon Jul 4 15:16:42 UTC 2016
[CORRECTED. Please ignore previous version.]
I'm writing the AArch64 intrinsics for the weak entry points, and
the generated code for volatile compare&swap looks wrong to me.
I'm seeing this:
  membar_release
  cmpxchg
  membar_acquire
... but this is not enough for volatile. It is enough for
Acquire/Release, but for Volatile it needs a trailing membar_volatile
to make the store visible to observers in other threads. Looking at
LibraryCallKit::inline_unsafe_access, it seems to me that the same
code is generated for both Release and Volatile:
// Add the trailing membar surrounding the access
insert_mem_bar(Op_MemBarCPUOrder);
switch (access_kind) {
  case Relaxed:
  case Release:
    break; // do nothing
  case Acquire:
  case Volatile:
    insert_mem_bar(Op_MemBarAcquire);
    // !support_IRIW_for_not_multiple_copy_atomic_cpu handled in platform code
    break;
  default:
    ShouldNotReachHere();
}
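To see why acquire/release alone is not enough: without the StoreLoad
ordering that a trailing MemBarVolatile provides, a Dekker-style pattern
can observe both stores as missing. A rough Java illustration of the
hazard (my own sketch, not taken from the webrev):

class Dekker {
  // Imagine a and b are written by volatile-mode CAS or stores.
  volatile int a, b;
  int r1, r2;

  void thread1() { a = 1; r1 = b; }   // store a, then load b
  void thread2() { b = 1; r2 = a; }   // store b, then load a

  // With sequentially consistent (volatile) semantics, ending up with
  // r1 == 0 && r2 == 0 after both threads run is forbidden; with only
  // acquire/release ordering the store->load pairs may reorder and
  // both loads can miss the other thread's store.
}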
But the two cases shouldn't be the same: a volatile store should be
followed by a MemBarVolatile, like so:
switch (access_kind) {
  case Relaxed:
  case Release:
    break; // do nothing
  case Acquire:
    insert_mem_bar(Op_MemBarAcquire);
    break;
  case Volatile:
    insert_mem_bar(Op_MemBarVolatile);
    break;
  default:
    ShouldNotReachHere();
}
... and this does indeed generate the correct code.
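For reference, the access kinds in that switch are what the VarHandle
access modes come in as; a minimal Java sketch (the mode-to-kind mapping
in the comments is my reading of the intrinsics, not verified against
the webrev):

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class CasKinds {
  static final VarHandle X;
  static {
    try {
      X = MethodHandles.lookup().findVarHandle(CasKinds.class, "x", int.class);
    } catch (ReflectiveOperationException e) {
      throw new ExceptionInInitializerError(e);
    }
  }
  volatile int x;

  void demo() {
    X.compareAndSet(this, 0, 1);              // Volatile: needs the trailing full fence
    X.compareAndExchangeRelease(this, 1, 2);  // Release: leading release barrier only
    X.compareAndExchangeAcquire(this, 2, 3);  // Acquire: trailing acquire barrier only
    X.weakCompareAndSet(this, 3, 4);          // one of the weak entry points
  }
}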
There is a problem, though: this is going to generate suboptimal code
for x86, where the locked cmpxchg is already a full fence, so the
trailing MemBarVolatile lowers to a redundant locked add:
  lock cmpxchg
  lock addl $0x0,-0x40(%rsp)
So I suppose we're going to have to kludge around this in the AArch64
back end.
Andrew.