[aarch64-port-dev ] Re-enable membars in back-end
Andrew Haley
aph at redhat.com
Fri Feb 21 09:58:48 PST 2014
Hitherto we have been eliding acquire and release barriers because we
don't need them for volatile accesses. However, acquire and release
barriers are not only used for volatile accesses: they are also used
elsewhere in the VM in places where they are necessary.
I want to continue to use ldar and stlr for volatile accesses, so I've
#ifdef'd out the memory barriers used for volatile fields in the
common C2 code. This is rather ugly.
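For illustration only (a hand-written sketch, not part of the patch;
register names are arbitrary): on AArch64 the required ordering can come
either from the load/store instruction itself, which is what ldar/stlr
give us for volatiles, or from an explicit dmb, which is what the
re-enabled membars fall back on:

    ldar  w0, [x1]        // load-acquire: later accesses cannot float above it
    stlr  w2, [x1]        // store-release: earlier accesses cannot float below it

    ldr   w0, [x1]
    dmb   ishld           // explicit barrier: orders prior loads against later loads/stores

    dmb   ish             // explicit full barrier before a plain store
    str   w2, [x1]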
As a result of this change, we now get rather too many barriers in
the code generated for things like Unsafe.getAndAddInt(). It's
correct, though.
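Roughly what that looks like (again a hand-written sketch rather than real
compiler output, just to show where the extra barriers come from): the
acquire/release membars emitted around the intrinsic in common code are no
longer elided, so an atomic add ends up bracketed by dmbs even though an
exclusive load/store pair of the kind sketched here already carries
acquire/release semantics:

    dmb    ish              // from the release membar, no longer elided
  retry:
    ldaxr  w2, [x0]         // load-exclusive with acquire
    add    w3, w2, w1
    stlxr  w4, w3, [x0]     // store-exclusive with release
    cbnz   w4, retry
    dmb    ish              // from the acquire membar -- redundant but harmless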
We can improve Unsafe.XXX later, but I want to get this correctness
fix in now.
Andrew.
# HG changeset patch
# User aph
# Date 1392995089 0
# Fri Feb 21 15:04:49 2014 +0000
# Node ID 33be5d2e3580024a075b3ede2e949060e9b9a102
# Parent 9fb1040177d04e702d8e0d683d344989aaf61d46
Re-enable membars in back-end. Remove membars for volatile field accesses
in front-end.
diff -r 9fb1040177d0 -r 33be5d2e3580 src/cpu/aarch64/vm/aarch64.ad
--- a/src/cpu/aarch64/vm/aarch64.ad Tue Feb 18 16:41:40 2014 +0000
+++ b/src/cpu/aarch64/vm/aarch64.ad Fri Feb 21 15:04:49 2014 +0000
@@ -5875,7 +5875,8 @@
format %{ "MEMBAR-acquire\t# ???" %}
ins_encode %{
- __ block_comment("membar_acquire (elided)");
+ __ block_comment("membar_acquire");
+ __ membar(Assembler::Membar_mask_bits(Assembler::LoadLoad|Assembler::LoadStore));
%}
ins_pipe(pipe_class_memory);
@@ -5888,7 +5889,8 @@
format %{ "MEMBAR-release" %}
ins_encode %{
- __ block_comment("membar-release (elided)");
+ __ block_comment("membar-release");
+ __ membar(Assembler::AnyAny);
%}
ins_pipe(pipe_class_memory);
%}
diff -r 9fb1040177d0 -r 33be5d2e3580 src/cpu/aarch64/vm/c1_LIRAssembler_aarch64.cpp
--- a/src/cpu/aarch64/vm/c1_LIRAssembler_aarch64.cpp Tue Feb 18 16:41:40 2014 +0000
+++ b/src/cpu/aarch64/vm/c1_LIRAssembler_aarch64.cpp Fri Feb 21 15:04:49 2014 +0000
@@ -2924,10 +2924,12 @@
void LIR_Assembler::membar_acquire() {
__ block_comment("membar_acquire");
+ __ membar(Assembler::Membar_mask_bits(Assembler::LoadLoad|Assembler::LoadStore));
}
void LIR_Assembler::membar_release() {
__ block_comment("membar_release");
+ __ membar(Assembler::AnyAny);
}
void LIR_Assembler::membar_loadload() { Unimplemented(); }
diff -r 9fb1040177d0 -r 33be5d2e3580 src/share/vm/c1/c1_LIRGenerator.cpp
--- a/src/share/vm/c1/c1_LIRGenerator.cpp Tue Feb 18 16:41:40 2014 +0000
+++ b/src/share/vm/c1/c1_LIRGenerator.cpp Fri Feb 21 15:04:49 2014 +0000
@@ -1737,9 +1737,11 @@
address = generate_address(object.result(), x->offset(), field_type);
}
+#ifndef AARCH64
if (is_volatile && os::is_MP()) {
__ membar_release();
}
+#endif
if (is_oop) {
// Do the pre-write barrier, if any.
@@ -1830,9 +1832,11 @@
__ load(address, reg, info, patch_code);
}
+#ifndef AARCH64
if (is_volatile && os::is_MP()) {
__ membar_acquire();
}
+#endif
}
diff -r 9fb1040177d0 -r 33be5d2e3580 src/share/vm/opto/parse3.cpp
--- a/src/share/vm/opto/parse3.cpp Tue Feb 18 16:41:40 2014 +0000
+++ b/src/share/vm/opto/parse3.cpp Fri Feb 21 15:04:49 2014 +0000
@@ -262,6 +262,7 @@
set_bci(iter().cur_bci()); // put it back
}
+#ifndef AARCH64
// If reference is volatile, prevent following memory ops from
// floating up past the volatile read. Also prevents commoning
// another volatile read.
@@ -269,6 +270,7 @@
// Memory barrier includes bogus read of value to force load BEFORE membar
insert_mem_bar(Op_MemBarAcquire, ld);
}
+#endif
}
void Parse::do_put_xxx(Node* obj, ciField* field, bool is_field) {
@@ -276,7 +278,9 @@
// If reference is volatile, prevent following memory ops from
// floating down past the volatile write. Also prevents commoning
// another volatile read.
+#ifndef AARCH64
if (is_vol) insert_mem_bar(Op_MemBarRelease);
+#endif
// Compute address and memory type.
int offset = field->offset_in_bytes();