[aarch64-port-dev ] RFC: 64 bit literal oops
Nick Gasson (Arm Technology China)
Nick.Gasson at arm.com
Fri Feb 1 02:50:25 UTC 2019
Hi Stuart
Just a few minor comments from looking through your patch.
src/hotspot/cpu/aarch64/c1_CodeStubs_aarch64.cpp
+// SRDM: Use64BitLiteralAddresses isn't initialized for the static _patch_info_offset to be uses.
+int PatchingStub::patch_info_offset() { return -NativeGeneralJump::get_instruction_size(); }
"to be uses" -> "to be used"?
src/hotspot/cpu/aarch64/globals_aarch64.hpp
+ experimental(bool, Use64BitLiteralAddresses, false, \
+ "Use 64 bit in literal addresses instead of 48 bits.")
"Use 64 *bits* in ..." or "Use 64 bit literal addresses instead of 48 bit"?
src/hotspot/cpu/aarch64/jvmciCodeInstaller_aarch64.cpp
+ // is therefore disables Use64BitLiteralAddresses until support is enabled.
s/is/it/
+ // mov, movk, movk, [movk] <- SRDM This fails with Use64BitLiteralAddresses
Can we make this assert(!Use64BitLiteralAddresses, "...")?
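For example (just a sketch, I haven't checked the surrounding context in the code installer):

    // mov, movk, movk, [movk]
    assert(!Use64BitLiteralAddresses, "Use64BitLiteralAddresses not yet supported by JVMCI");

That way an unsupported combination fails fast instead of silently mis-patching the sequence.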
src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
+
+ if (Use64BitLiteralAddresses) {
+   assert(nativeInstruction_at(insn_addr+12)->is_movk(), "wrong insns in patch");
+   Instruction_aarch64::patch(insn_addr+12, 20, 5, (dest >>= 16) & 0xffff);
+   instructions = 4;
+ }
Does it make sense to add `else assert(dest == 0, ">48 bit literal
address without Use64BitLiteralAddresses");' here?
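i.e. something like the following (only a sketch of the shape I mean; the exact condition depends on how far dest has already been shifted by the preceding movz/movk patches, so it may well need tweaking):

    if (Use64BitLiteralAddresses) {
      assert(nativeInstruction_at(insn_addr+12)->is_movk(), "wrong insns in patch");
      Instruction_aarch64::patch(insn_addr+12, 20, 5, (dest >>= 16) & 0xffff);
      instructions = 4;
    } else {
      // Sketch: catch a >48 bit address being patched without the flag,
      // rather than silently truncating it.
      assert(dest == 0, ">48 bit literal address without Use64BitLiteralAddresses");
    }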
src/hotspot/cpu/aarch64/nativeInst_aarch64.cpp
+ if (!Use64BitLiteralAddresses) {
+   // movz, movk, movk.
+   return 3 * 4;
+ } else {
+   return 4 * 4;
+ }
It's clearer if the "* 4" is written as "* NativeInstruction::instruction_size".
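i.e. something like this (sketch only, keeping your structure):

    if (!Use64BitLiteralAddresses) {
      // movz, movk, movk.
      return 3 * NativeInstruction::instruction_size;
    } else {
      // movz, movk, movk, movk.
      return 4 * NativeInstruction::instruction_size;
    }

and similarly for the return in the header below.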
src/hotspot/cpu/aarch64/nativeInst_aarch64.hpp
+ return 4 * 4;
Same here.
src/hotspot/share/aot/aotLoader.cpp
+ if (Use64BitLiteralAddresses && UseAOT) {
+ warning("64-bit Literal Addresses disabled due to UseAOT.");
+ FLAG_SET_CMDLINE(bool, UseAOT, false);
+ }
Shouldn't it be FLAG_SET_CMDLINE(bool, Use64BitLiteralAddresses, false)
to match the warning? Probably don't need to capitalize "Literal Addresses".
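i.e. roughly (a sketch, assuming the intent is to keep AOT working and drop the new flag):

    if (Use64BitLiteralAddresses && UseAOT) {
      warning("64-bit literal addresses disabled due to UseAOT.");
      FLAG_SET_CMDLINE(bool, Use64BitLiteralAddresses, false);
    }

Or keep disabling UseAOT and reword the warning instead, whichever matches your intent.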
Thanks,
Nick
On 31/01/2019 21:55, Stuart Monteith wrote:
> I've redone the patch such that whether or not it is an oop,
> addresses are emitted and patched as 64-bit (movz + 3 x movk) when
> -XX:+Use64BitLiteralAddresses is passed.
>
> I've annotated some more sections to clarify what instructions are
> being patched or sized, as well as some areas where it hasn't changed
> - it was worthwhile to do this.
> The Aarch64_specific_constants::instruction_size enum has been changed
> to a method. This has encroached on shared code. Arguably this could
> do with even more refactoring in nativeInst_aarch64.(cpp|hpp) and the
> users thereof if it is to be consistent.
>
> Regarding JVM CI and Graal - ZGC doesn't support them yet. Graal will
> need to be changed to recognise the option and conditionally emit
> 64-bit addresses (at least in
> AArch64MacroAssembler::movNativeAddress(Register, long)).
> I've added some more logic to disable Use64BitLiteralAddresses when
> UseAOT or JVMCI are enabled.
>
> The patch is:
> http://cr.openjdk.java.net/~smonteith/oop64/webrev-20190130/
>
> I've opened a bug in JBS: https://bugs.openjdk.java.net/browse/JDK-8216491
>
> I've been trying to track down the source of some of the constants for
> the sizes of code. In one instance we have:
> http://cr.openjdk.java.net/~smonteith/oop64/webrev-20190130/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.hpp.udiff.html
>
> ... there are apparently 12 instructions for each call stub. The
> emit_static_call_stub() method only uses 8 instructions, but with
> Use64BitLiteralAddresses enabled the warning "compilation bailout:
> exception handler overflow" is generated.
> I've changed the code to emit 14 instructions instead, but I've yet to
> find the call stub that requires 12 instructions.
>
> I'll follow up with the ZGC patch soon - there was a change upstream
> to lea that needs to be accommodated.
>
> BR,
> Stuart
>
>
>
>
>
> On Mon, 3 Dec 2018 at 18:40, Andrew Haley <aph at redhat.com> wrote:
>>
>> On 12/3/18 4:53 PM, Stuart Monteith wrote:
>>
>>> There is another possible option, which is to just avoid 48-bit
>>> literals, and just use 64-bit unconditionally everywhere. It would
>>> mean that we could be easily consistent and 52-bit VA would be easier
>>> to implement on top of it. Again, I seek people's opinions.
>>
>> Could well be. I am starting to wonder if it is worth the effort.
>> Maybe the best thing to do is have a global 48- to 64-bit switch
>> and use it everywhere we want a pointer literal.
>>
>> Please don't put code of any complexity into header files. Put it
>> into .c files instead.
>>
>> The logic in MacroAssembler::movoop is already a mess, and
>> adding an isOop flag to pass to it doesn't help. I think it would
>> make more sense to have explicit 48- and 64-bit mov methods, and
>> let MacroAssembler::mov(Register, Address) control which one is
>> called.
>>
>> This hunk makes no sense. The comment directly contradicts
>> the code:
>>
>> enum Aarch64_specific_constants {
>>   instruction_size = 3 * 4, // movz, movk, movk, [movk]. See movptr().
>>   instruction_offset = 0,
>>   displacement_offset = 0,
>> };
>>
>> It makes no sense to have instruction_size be a constant here.
>> I guess it never did.
>>
>> --
>> Andrew Haley
>> Java Platform Lead Engineer
>> Red Hat UK Ltd. <https://www.redhat.com>
>> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671