RFR: JDK-8294947: Use 64bit atomics in patch_verified_entry on x86_64
Dmitry Samersoff
dsamersoff at openjdk.org
Fri Nov 11 12:13:10 UTC 2022
On Wed, 9 Nov 2022 17:19:55 GMT, Vladimir Kozlov <kvn at openjdk.org> wrote:
>> In NativeJump::patch_verified_entry() we atomically patch the first 4 bytes, then atomically patch the 5th byte, then atomically patch the first 4 bytes again. From the CMC (cross-modifying code) point of view it is better to patch all 8 bytes atomically at once.
>>
>> The patch was tested with the HotSpot jtreg tests in bare-metal and virtualized environments.
>
> src/hotspot/cpu/x86/nativeInst_x86.cpp line 514:
>
>> 512: // complete jump instruction (to be inserted) is in code_buffer;
>> 513: #ifdef AMD64
>> 514: unsigned char code_buffer[8];
>
> Should we align this buffer too (to 8/jlong)?
@vnkozlov
The C++ compiler optimizes that code down to a few register operations and eliminates the local variable _code_buffer_ entirely, so we need not care about its alignment.
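For illustration, here is a minimal stand-alone sketch of the single 8-byte store (hypothetical names, not the actual HotSpot code, which goes through its own Atomic helpers; the patched entry address is assumed to be 8-byte aligned):

    #include <cstdint>
    #include <cstring>

    // Overwrite the first 8 bytes at 'entry' with one 64-bit store, so a
    // thread concurrently executing the code never observes a torn mix of
    // old and new instruction bytes. Assumes 'entry' is 8-byte aligned
    // and the code page is writable.
    static void patch_entry_atomically(unsigned char* entry,
                                       const unsigned char code_buffer[8]) {
      uint64_t new_bytes;
      std::memcpy(&new_bytes, code_buffer, sizeof(new_bytes));
      // A naturally aligned 8-byte store is atomic on x86_64; the memcpy
      // above is folded into register moves by an optimizing compiler.
      __atomic_store_n(reinterpret_cast<uint64_t*>(entry), new_bytes,
                       __ATOMIC_RELAXED);
    }

Because the optimizer assembles new_bytes entirely in registers, the stack alignment of code_buffer itself never reaches the store, which is the point made above.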
-------------
PR: https://git.openjdk.org/jdk/pull/11059