RFR: 8351334: [ubsan] memoryReserver.cpp:552:60: runtime error: applying non-zero offset 1073741824 to null pointer [v7]
Axel Boldt-Christmas
aboldtch at openjdk.org
Thu Oct 2 08:50:54 UTC 2025
On Thu, 18 Sep 2025 15:37:38 GMT, Afshin Zafari <azafari at openjdk.org> wrote:
>> The minimum acceptable value was 0 where using it as address was problematic according to UBSAN.
>> The acceptable value is changed to 64K.
>>
>> Tests:
>> linux-x64 tier1
>
> Afshin Zafari has updated the pull request incrementally with one additional commit since the last revision:
>
> fixed MAX2 template parameter
The implementation looks correct now, and the extra asserts when calling `try_reserve_range` are a good improvement.
I am still unsure what this flag is supposed to mean: is it just a hint we allow the user to give, or is it a requirement? And does it apply to all heaps, or only to heaps with +UseCompressedOops, as it is currently used? (Depending on the answers here, we have bugs.)
Then there is the whole process around constraining a JVM advanced option. See my comment in the code. I would like this PR to also be approved by someone with a better understanding of this process.
src/hotspot/share/gc/shared/jvmFlagConstraintsGC.cpp line 292:
> 290: "Sum of HeapBaseMinAddress (%zu) and MaxHeapSize (%zu) results in an overflow (%zu)\n",
> 291: value , MaxHeapSize, value + MaxHeapSize);
> 292: return JVMFlag::VIOLATES_CONSTRAINT;
I do not feel like I understand enough of our process when it comes to constraining the JVM's advanced options, even when the constraint only rejects nonsensical inputs.
The reason I am hesitant is that I do not understand this flag at all, given that we have this code which ignores the value completely:
https://github.com/afshin-zafari/jdk/blob/3dfa9765b1d6dcd27152bbd2d242209c7748c964/src/hotspot/share/memory/memoryReserver.cpp#L639-L644
So if we did not have the overflow bug in the code, using a too-large value would reach that point and put the heap at an arbitrary address lower than HeapBaseMinAddress.
Also, HeapBaseMinAddress is only used for CompressedOops, so I am not sure we should constrain the value when CompressedOops are not in use. However, this goes back to the whole question of how we treat changing constraints on VM options. Should there always be a CSR?
Example of observed change in behaviour:
## Before
[ linux-x64-debug]$ ./images/jdk/bin/java -XX:HeapBaseMinAddress=18446708889337462784 -Xmx64T --version
java 26-internal 2026-03-17
Java(TM) SE Runtime Environment (fastdebug build 26-internal-2025-09-26-0639506...)
Java HotSpot(TM) 64-Bit Server VM (fastdebug build 26-internal-2025-09-26-0639506..., mixed mode, sharing)
## After
[ linux-x64-debug]$ ./images/jdk/bin/java -XX:HeapBaseMinAddress=18446708889337462784 -Xmx64T --version
Sum of HeapBaseMinAddress (18446708889337462784) and MaxHeapSize (70368744177664) results in an overflow (35184372088832)
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
-------------
Marked as reviewed by aboldtch (Reviewer).
PR Review: https://git.openjdk.org/jdk/pull/26955#pullrequestreview-3293345884
PR Review Comment: https://git.openjdk.org/jdk/pull/26955#discussion_r2397792042