RFR: JDK-8294947: Use 64bit atomics in patch_verified_entry on x86_64
Vladimir Kozlov
kvn at openjdk.org
Wed Nov 9 17:42:41 UTC 2022
On Wed, 9 Nov 2022 12:41:59 GMT, Dmitry Samersoff <dsamersoff at openjdk.org> wrote:
> In void NativeJump::patch_verified_entry() we atomically patch the first 4 bytes, then atomically patch the 5th byte, and then atomically patch the first 4 bytes again. From the CMC (cross-modifying code) point of view it is better to patch all 8 bytes atomically in a single store.
>
> The patch was tested with hotspot jtreg tests in bare-metal and virtualized environments.
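For context, a minimal sketch of the idea (the helper name and the GCC/Clang __atomic_store_n builtin are my assumptions, not the actual patch): build the 5-byte jmp rel32 plus the 3 bytes that already follow it into an 8-byte-aligned buffer, then publish all 8 bytes with one 64-bit store, which is atomic on x86_64 when the destination is 8-byte aligned.

    #include <cstdint>
    #include <cstring>

    // Illustrative only: patch 'entry' (assumed 8-byte aligned) so that it
    // jumps to 'dest', replacing the first 5 bytes and keeping bytes 5..7.
    void patch_entry_with_jump(unsigned char* entry, unsigned char* dest) {
      alignas(8) unsigned char code_buffer[8];

      // jmp rel32: opcode 0xE9 followed by a 32-bit displacement relative
      // to the end of the 5-byte instruction (entry + 5).
      int32_t disp = (int32_t)(dest - (entry + 5));
      code_buffer[0] = 0xE9;
      std::memcpy(&code_buffer[1], &disp, sizeof(disp));

      // Preserve the three bytes that follow the patched jump.
      std::memcpy(&code_buffer[5], entry + 5, 3);

      // Publish all 8 bytes at once. An aligned 8-byte store is atomic on
      // x86_64, so other CPUs never observe a half-written jump.
      uint64_t word;
      std::memcpy(&word, code_buffer, sizeof(word));
      __atomic_store_n(reinterpret_cast<uint64_t*>(entry), word,
                       __ATOMIC_RELEASE);
    }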
src/hotspot/cpu/x86/nativeInst_x86.cpp line 514:
> 512: // complete jump instruction (to be inserted) is in code_buffer;
> 513: #ifdef AMD64
> 514: unsigned char code_buffer[8];
Should we align this buffer too (to 8 bytes / jlong)?
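If the answer is yes, a sketch of what it could look like (illustrative, not a proposed diff):

    // Sketch: jlong/8-byte alignment so the later 8-byte read out of the
    // buffer is naturally aligned.
    alignas(8) unsigned char code_buffer[8];

Alternatively the storage could be declared as a jlong/uint64_t and filled through an unsigned char* view, which gives the same alignment guarantee.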
src/hotspot/cpu/x86/nativeInst_x86.cpp line 532:
> 530:
> 531: #else
> 532: unsigned char code_buffer[5];
Should this be aligned?
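On this 32-bit path the stores appear to be at most 4 bytes wide, so the alignment question here would look something like this (illustrative only):

    // Sketch: 4-byte alignment would make the 32-bit read out of the buffer
    // (the *(int32_t*)code_buffer copy below) a naturally aligned load.
    alignas(4) unsigned char code_buffer[5];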
src/hotspot/cpu/x86/nativeInst_x86.cpp line 562:
> 560:
> 561: // Patch bytes 0-3 (from jump instruction)
> 562: *(int32_t*)verified_entry = *(int32_t *)code_buffer;
Are this store and the one at line 552 atomic?
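To sketch the concern (the helper name and the GCC/Clang __atomic_store_n builtin are assumptions, not the actual code): the plain pointer-cast assignment relies on the compiler emitting a single 32-bit mov and on the destination being 4-byte aligned, whereas an explicit atomic store makes that intent visible:

    #include <cstdint>
    #include <cstring>

    // Illustrative only: copy the first 4 bytes of the new jump with an
    // explicitly atomic 32-bit store instead of a raw pointer-cast store.
    void patch_first_word(unsigned char* verified_entry,
                          const unsigned char* code_buffer) {
      int32_t word;
      std::memcpy(&word, code_buffer, sizeof(word));
      // Relies on verified_entry being 4-byte aligned; on x86 an aligned
      // 32-bit store is a single atomic mov.
      __atomic_store_n(reinterpret_cast<int32_t*>(verified_entry), word,
                       __ATOMIC_RELEASE);
    }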
-------------
PR: https://git.openjdk.org/jdk/pull/11059